Transhumanism

How to Build Better Digital Twins of the Human Brain

Singularity HUB - April 11, 2026 - 16:00

Brain twins where regions are allowed to compete for resources behave more like the real thing.

The potential to create personalized digital twins of your brain and body is a hot topic in neuroscience and medicine today. These computer models are designed to simulate how parts of your brain interact and how the brain may respond to stimulation, disease, or medication.

The extraordinary complexity of the brain’s billions of neurons makes this a very difficult task, of course, even in the era of AI and big data. Until now, whole-brain models have struggled to capture what makes each brain unique.

People’s brains are all wired slightly differently, so everyone has a unique network of neural connections that represents a kind of “brain fingerprint.”

However, most so-called brain twins are currently more like distant cousins. Their performance is barely any closer to the real thing than if the model were using the wiring diagram of a random stranger.

This matters because digital twins are increasingly proposed as tools for testing treatments by computer simulation before applying them to real people. If these models fail to capture fundamental principles of each patient’s unique brain organization, their predictions won’t be personalized—and in the worst cases could be misleading.

In our latest study, published in Nature Neuroscience, we show that realistic digital brain twins require something that many existing models overlook: competition between the brain’s different systems.

Our findings suggest that without competition, digital twins risk being overly generic, missing out on what makes you “you.”

Excess of Cooperation

The human brain is never static. The ebb and flow of its activity can be mapped non-invasively using neuroimaging methods such as functional MRI. A computer model can be built from this, specific to that person and simulating how the regions of their brain interact. This is the idea of the digital twin.

The brain is often described as a highly cooperative system. Yet everyday experiences such as focusing attention or switching between tasks tell us intuitively that brain systems compete for limited resources. Our brains cannot do everything at once, and not all regions can be active together all the time.

Despite this, the vast majority of brain simulations over the past 20 years have not taken these competitive interactions between regions into account. Rather, they have “forced” neighboring regions to cooperate. This can push the simulated brain into overly synchronized states that are rarely seen in real brains.

In a large comparative study of humans, macaque monkeys, and mice, our international team of researchers used non-invasive brain activity recordings to show that the most realistic whole-brain models require not only cooperative interactions within specialized brain circuits, but also long-range competitive interactions between different circuits.

To achieve this, we compared two types of brain model: one in which all interactions between brain regions were cooperative, and another in which regions could either excite or suppress each other’s activity. In humans, monkeys, and mice, the models that included competitive interactions consistently outperformed cooperative-only models.
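As an illustration only (the study's actual whole-brain models are fit to each subject's neuroimaging data, not this toy), the cooperative-versus-competitive distinction can be caricatured with a small firing-rate network whose coupling weights are either all positive or mixed-sign. Measuring mean pairwise correlation between regions shows the all-positive network drifting toward the overly synchronized regime described above, while the mixed-sign network does not:

```python
import numpy as np

def simulate(W, steps=4000, dt=0.02, noise=0.05, seed=0):
    """Euler-integrate a toy firing-rate network: dx/dt = -x + tanh(W @ x) + noise."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros(n)
    traj = np.empty((steps, n))
    for t in range(steps):
        x = x + dt * (-x + np.tanh(W @ x)) + np.sqrt(dt) * noise * rng.standard_normal(n)
        traj[t] = x
    return traj

def synchrony(traj):
    """Mean pairwise correlation across regions -- a crude synchrony index."""
    c = np.corrcoef(traj.T)
    return c[np.triu_indices_from(c, k=1)].mean()

n = 8
rng = np.random.default_rng(1)
weights = np.abs(rng.standard_normal((n, n))) * 0.1

cooperative = weights                                               # all-positive coupling
competitive = weights * np.where(rng.random((n, n)) < 0.5, -1, 1)   # mixed excitatory/inhibitory

s_coop = synchrony(simulate(cooperative))
s_comp = synchrony(simulate(competitive))
print(f"cooperative synchrony: {s_coop:.2f}  competitive synchrony: {s_comp:.2f}")
```

In this sketch the cooperative network's shared positive feedback amplifies a common mode of activity, so regions fluctuate together; flipping half the weights to inhibitory removes that common mode, mirroring the study's finding that competition prevents runaway synchronization.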

Using a large-scale analysis of over 14,000 neuroimaging studies, we found that spontaneous activity in the competitive models more faithfully reflected known cognitive circuits, such as those involved in attention or memory. This suggests competition is crucial for enabling the brain to flexibly activate appropriate combinations of regions—a hallmark of intelligent behavior.

Visual summary of our study:

When whole-brain models of humans, macaques, and mice are allowed to treat interactions between some brain regions as competitive, they consistently do so—generating activity patterns that closely resemble those associated with real cognitive processes. Luppi et al/Nature Neuroscience, CC BY

We concluded that competitive interactions act as a stabilizing force, allowing different brain systems to take turns in shaping the direction of the brain’s ebbs and flows without interference or distraction. This ability to avoid runaway activity may also contribute to the remarkable energy-efficiency of the mammalian brain, which is many orders of magnitude more efficient than modern AI systems.

Crucially, models with competitive interactions were not only more accurate but also more individual-specific. This means they were better at capturing the unique brain fingerprint that distinguishes one person’s brain from another’s.

No Longer Lost in Translation?

The fact that our findings hold across humans and other mammals suggests they reflect fundamental principles of how intelligent systems work. In each case, we found models with competitive interactions generated brain activity patterns that closely resembled those associated with real cognitive processes.

This could have major implications for translational neuroscience. Animal models are routinely used to test treatments before human trials, yet differences between species often limit how well these results translate. Around 90 percent of treatments for neuropsychiatric disorders are “lost in translation,” failing in human clinical trials after showing promise in animal trials.

Combining brain imaging data from human patients with whole-brain modeling could radically change this. A framework that works across species would provide a powerful bridge between basic research and clinical application.

If someone needs intervention in the brain, for example due to epilepsy or a tumor, their digital twin could be used to explore how the patient’s brain activity would change when stimulated with different levels of drugs or electrical impulses. This might significantly improve on existing trial-and-error approaches with real patients, and thus provide better treatments.

The general principles of brain organization across species also offer a path for understanding how to shape the next generation of artificial intelligence. In the not-too-distant future, we may be able to construct digital twins that are more faithful in reproducing the salient features of the human brain—and potentially, AI models that are more faithful to the human mind.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post How to Build Better Digital Twins of the Human Brain appeared first on SingularityHub.

Category: Transhumanism

Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser

Singularity HUB - April 10, 2026 - 20:33

It’s a step change in cybersecurity. Exploits that would take experts weeks to develop can now be generated in hours.

Concerns about AI’s ability to turbocharge cybersecurity threats have been building for years. Anthropic’s latest model could mark a turning point after the company claimed it can identify and exploit zero-day vulnerabilities in every major operating system and web browser.

One of the standout use cases for large language models is analyzing and writing code. This has long raised worries that the technology could help automate much of the work of hackers, potentially lowering the barrier for cyberattacks.

Leading models have demonstrated steady progress on various cybersecurity-related benchmarks, and there has been evidence malicious actors are using the technology. But so far, the impact appears to have been modest, suggesting practical barriers remain that prevent the widespread use of the technology.

According to Anthropic, that’s about to change. The company says its latest model, Mythos, has hacking capabilities so potent the company will not make it publicly available. Instead, it’s releasing Mythos to a select group of major technology companies and open source developers as part of an initiative called Project Glasswing. Those participating can use the model to identify vulnerabilities in their code and patch them before hackers get access to similar capabilities.

“The vulnerabilities that Mythos Preview finds and then exploits are the kind of findings that were previously only achievable by expert professionals,” the company’s researchers write in a blog post. “We believe the capabilities that future language models bring will ultimately require a much broader, ground-up reimagining of computer security as a field.”

Fortune first reported news of Mythos last month, after a data leak at Anthropic revealed details about the new model. While the AI excels at cybersecurity tasks, it’s designed to be a general purpose model, and the company says its hacking capabilities are simply a result of vastly improved coding and reasoning skills.

In testing, Anthropic’s researchers discovered the model was able to find “zero-day” vulnerabilities—ones that were previously undiscovered—in every major operating system and web browser. Many were decades old, an indicator of how hard they were to detect.

But the model isn’t just good at finding vulnerabilities. The company’s red team—security researchers who simulate hacking attacks to identify security weaknesses—showed the model could chain together multiple vulnerabilities to create complex attacks capable of sidestepping defenses.

Its capabilities are a step change from the previous best models. Given the challenge of attacking the Firefox web browser’s JavaScript engine, Anthropic’s previous most powerful model, Opus 4.6, succeeded just twice, compared to 181 times for Mythos. Most worryingly, the team found that engineers with no security background could use it to develop successful attacks overnight.

Key to the new capabilities is the model’s ability to operate autonomously for long stretches. To find bugs, the researchers used Anthropic’s coding agent Claude Code to call the model and give it a simple prompt to scan for vulnerabilities in a particular codebase. The model then read the code, came up with hypotheses about potential bugs, and ran tests to validate them without any human involvement.
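The loop described above (read the code, hypothesize bugs, run tests to validate them) can be illustrated with a deliberately simplified sketch. Everything here is a stand-in: `propose_hypotheses` is a trivial pattern matcher playing the model's role, `validate` is a stub, and the "codebase" is two hardcoded strings. The snippet shows only the shape of the autonomous control flow, not Anthropic's actual tooling or APIs.

```python
# Toy stand-in for a codebase to scan.
CODEBASE = {
    "handler.py": 'query = "SELECT * FROM users WHERE id = %s" % user_id',
    "utils.py":   'checksum = sum(map(ord, data))',
}

def propose_hypotheses(path, source):
    """Stand-in for the model: flag string-formatted SQL as a candidate bug."""
    hypotheses = []
    if "SELECT" in source and '" %' in source:
        hypotheses.append((path, "possible SQL injection via string formatting"))
    return hypotheses

def validate(path, finding):
    """Stand-in for running a proof-of-concept test against the candidate bug."""
    return True  # a real agent would execute code here to confirm the exploit

# The autonomous loop: scan every file, hypothesize, validate, collect findings.
confirmed = []
for path, src in CODEBASE.items():
    for hypothesis in propose_hypotheses(path, src):
        if validate(*hypothesis):
            confirmed.append(hypothesis)

print(confirmed)
```

The key property the article attributes to Mythos is that the hypothesize-and-validate steps, which here are trivial stubs, are performed by the model itself over long autonomous stretches.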

The Anthropic team says Mythos fundamentally reshapes the cybersecurity landscape as exploits that would take experts weeks to develop can now be generated in hours. In particular, they note that so-called “defense-in-depth” measures that make it time-consuming and costly to attack a system may prove ineffective against models like Mythos.

“When run at large scale, language models grind through these tedious steps quickly,” they write. “Mitigations whose security value comes primarily from friction rather than hard barriers may become considerably weaker against model-assisted adversaries.”

The head of Anthropic’s frontier red team, Logan Graham, told Axios that they expect other companies to produce models with similar capabilities in the coming six to 18 months. Sources familiar with the matter told Axios that OpenAI is already finalizing a model with similar capabilities to Mythos, which will have a similarly limited release.

In its blog post, the company’s researchers note that new security technology has historically benefited defenders more than attackers. If frontier labs are careful about model releases, they think the same could be true here too, but the transitional period is likely to be disruptive.

“We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months,” Graham told Wired. “Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”

Whether AI developers can keep a lid on these capabilities long enough for the rest of the world to come to grips with this new reality remains to be seen. But either way, cybersecurity is likely to be even higher up the list of priorities in most boardrooms going forward.

The post Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser appeared first on SingularityHub.

Category: Transhumanism

MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools

Singularity HUB - April 7, 2026 - 16:00

An AI system unearthed a trove of CRISPR-like proteins in minutes instead of weeks or months.

CRISPR is a breakthrough technology with humble origins. Scientists first discovered the powerful gene editor in bacteria that were using it as a weapon against invading viruses called phages. Phages can wipe out up to a quarter of a bacterial population in a day. Under assault, bacteria have evolved a hefty arsenal of defenses in a relentless arms race.

These bacterial immune systems often chop up the DNA or RNA of invading viruses and are relatively easy to manufacture, making them alluring targets for scientists developing genetic engineering tools. CRISPR is just one example. There are many more. But traditional methods of searching for them are slow and labor-intensive, leaving most CRISPR-like proteins unexplored.

Now, MIT scientists have released an AI called DefensePredictor that can root out new bacterial defense systems in five minutes, instead of weeks or months. As proof of concept, DefensePredictor churned through hundreds of thousands of proteins in multiple strains of Escherichia coli (E. coli). Over 600 proteins not previously linked to immune defense popped up. When added to a vulnerable strain of bacteria, a subset of these proteins protected it against attack.

“E. coli harbors a much broader landscape of antiphage defense than previously realized, expanding the likely number of systems by multiple orders of magnitude,” wrote the team.

These systems might hold secrets about how immunity evolved. And because the proteins may work in different ways, they could be a goldmine for next-generation precision molecular tools.

Unrivaled Success

Nearly four decades ago, Japanese scientists discovered a curious, repetitive DNA sequence in E. coli. Other researchers soon realized it was widespread across bacterial species and matched viral DNA sequences—suggesting it could be part of the bacteria’s immunity against phages.

The system now known as CRISPR stores snippets of DNA from past infections and uses protein “scissors” to cut apart matching viral DNA during reinfection. Intrigued by its precision, scientists repurposed CRISPR into a variety of gene editing tools and launched a gene therapy revolution.

CRISPR is the most famous, but a range of bacterial defense systems have transformed genetic engineering. One, containing an enzyme that cuts specific sequences of foreign DNA, is widely used to add genetic material into cells. Another encodes a balance of toxins and antitoxins that can trigger bacterial death after phage infection. This one has been adapted into a kill switch to prevent engineered microbes or genetically modified crops from spreading uncontrollably.

Researchers are also exploring the use of newly discovered systems—with video game-like names like Zorya and Thoeris—as molecular sensors and programmable signaling tools in synthetic biology.

There are likely more undiscovered tools in the universe of bacterial defense, and scientists have ways of hunting them down. Some defense genes are grouped close to one another, so a known gene could guide the discovery of others. Researchers have also found genes by screening libraries of free-floating circular genome fragments across bacterial populations.

Over 250 systems have been painstakingly validated. But plenty more could escape current detection methods if, for example, their components are spread across the genome.

“The full repertoire of antiphage defense systems in bacteria remains unknown,” wrote the team. “We currently lack the tools to systematically identify systems with high speed, sensitivity, and specificity.”

AI Discoverer

The new DefensePredictor algorithm bridges that gap.

At its core is a protein language model called ESM-2. Proteins are made of 20 molecular “letters” that combine into strings and fold into complex 3D shapes. Similar to large language models, algorithms like ESM-2 learn the language of proteins and can predict their structure and purpose based on sequence alone.

ESM-2 and other similar algorithms have already helped scientists decipher mysterious proteins in bacteria, viruses, and other microorganisms previously unknown to science. Researchers hope their unique shapes could inspire antibiotics, biofuels, or even be used to build synthetic organisms.

To build their AI, the team first established a training ground. With a previous model, DefenseFinder, they screened roughly 17,000 microbial genomes for genes related—and unrelated—to defense systems. They translated these genes into corresponding proteins and built up a database with some 15,000 antiphage proteins and 186,000 proteins unrelated to defense.

These numbers are far too large for any human to tackle, but the AI took the work in stride. Alongside ESM-2, the model used several algorithms to distinguish between defense and non-defense proteins. Eventually DefensePredictor learned some general characteristics that make a protein more likely to be part of the immune system. (Like other language models, it’s hard to fully understand the system’s reasoning, which the team is still trying to unpack.)
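The pipeline described above—frozen protein-language-model embeddings feeding a defense/non-defense classifier—can be sketched in miniature. The embeddings below are synthetic stand-ins (the real features come from ESM-2, and DefensePredictor's exact classifier architecture is not detailed here); the point is only the shape of the approach: embed each protein, then train a simple supervised head on the labeled sets.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 32  # toy embedding dimension (real ESM-2 embeddings are far larger)

# Synthetic stand-ins for pooled protein embeddings: "defense" proteins drawn
# around one mean, "non-defense" around another.
def make_embeddings(n, shift):
    return rng.standard_normal((n, D)) + shift

X = np.vstack([make_embeddings(300, 0.3), make_embeddings(300, -0.3)])
y = np.array([1] * 300 + [0] * 300)   # 1 = defense, 0 = non-defense
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

# Logistic-regression head trained by gradient descent on the frozen embeddings.
w, b = np.zeros(D), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

acc = (((X_test @ w + b) > 0).astype(int) == y_test).mean()
print(f"held-out accuracy: {acc:.2f}")
```

In the real system the heavy lifting is done upstream: the language model's embeddings already encode structural and functional signals, so even a simple head on top can separate defense from non-defense proteins.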

When tested on 69 strains of E. coli, DefensePredictor surfaced a treasure trove of over 600 new defense-related proteins, including more than 100 unlike any yet discovered. Although some were encoded near one another or in circular DNA—like previous findings—nearly half weren’t. They were instead littered across the genome yet may still work together.

To test the results, the team engineered a highly vulnerable E. coli strain to express candidate defense proteins—predicted to work either alone or as part of a system—and exposed them to two dozen aggressive phages. Nearly 45 percent of the proteins offered protection against at least one phage.

Beyond E. coli, the scientists expanded their search to 1,000 more microorganisms and found thousands of potential defense proteins unlike anything seen before. “New immune mechanisms remain to be found,” wrote the team.

The race is on. Also published this week, a Pasteur Institute team combined multiple AI models to look for antiphage systems in protein sequences. Across over 32,000 bacterial genomes, the model predicted nearly 2.4 million antiphage proteins—most previously unknown. They released an atlas of AI-predicted bacterial immunity proteins for others to explore.

“The diversity of antiphage defense systems is vast and largely untapped,” they wrote.

Microorganisms harbor a colossal repertoire of biological tools we’re only just beginning to uncover at scale. More species are constantly found thriving in diverse environments, from pond scum to boiling sulfuric springs to the crushing pressure of the Mariana Trench. Every new genome scientists discover and pick apart, now with AI’s help, could be hiding the next CRISPR.    

The post MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools appeared first on SingularityHub.

Category: Transhumanism

US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028

Singularity HUB - April 6, 2026 - 16:00

Today’s error-prone quantum computers are still far from practical. But a bold deadline could galvanize the field.

As the race to harness quantum computing accelerates, governments are throwing their hats in the ring. The US Department of Energy is now aiming to build a fully functional, fault-tolerant quantum computer within the next three years.

Despite plenty of breathless headlines about the coming quantum revolution, today’s machines remain a long way from being practically useful. It’s widely expected that we will need much larger, more reliable quantum computers before they can tackle real-world problems.

That’s largely because qubits are incredibly error-prone, which means future machines will need to run algorithms that detect and correct those errors faster than they occur. It’s estimated that the overhead for these algorithms could be as high as 1,000 physical qubits to create a single, error-corrected “logical” qubit that can actually take part in calculations.
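The roughly 1,000:1 figure above is an estimate, not a fixed constant (the true overhead depends on qubit quality and the error-correcting code used), but it makes the scaling gap easy to quantify:

```python
# Back-of-envelope overhead from the ~1,000:1 physical-to-logical ratio quoted
# above. This ratio is an estimate; real overheads vary with hardware and code.
PHYSICAL_PER_LOGICAL = 1_000

def physical_qubits_needed(logical_qubits, ratio=PHYSICAL_PER_LOGICAL):
    return logical_qubits * ratio

# A machine with a few hundred physical qubits yields zero logical qubits at
# this overhead; a modest 100-logical-qubit machine would need ~100,000.
print(physical_qubits_needed(100))  # 100000
```

This is why a few hundred physical qubits, the scale of most current devices, falls so far short of a fault-tolerant machine, and why the 2028 target is so aggressive.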

Given that most current devices feature at best a few hundred physical qubits, more sober heads in the industry have suggested that we may be waiting well into the next decade to see a practical fault-tolerant quantum computer. But last week, Darío Gil, the Department of Energy’s undersecretary for science, announced the agency thinks it can hit that milestone in three years.

“By 2028 we will deliver the first generation of fault-tolerant quantum computers capable of scientifically relevant quantum calculations,” he told the Office of Science Advisory Committee, according to Science.

The agency doesn’t actually plan to build the system itself; it wants quantum computing companies to provide a ready-made solution. It has set out performance criteria it expects the future device to meet but is leaving the details up to providers. In particular, the agency has not picked a favorite between leading quantum computing designs, such as superconducting qubits, trapped ions, or neutral atoms.

“You can build it however you want, so long as you meet that objective and demonstrate scientific relevance,” Gil explained.

The proposed system would likely be housed at one of the department’s national laboratories where researchers can apply to use it for free, with projects selected based on scientific merit.

The announcement is the latest example of the agency’s growing focus on quantum technology. In November 2025, it announced $625 million to renew its National Quantum Information Science Research Centers, which are designed to accelerate research in quantum computing, simulation, networking, and sensing.

The goal is undeniably ambitious, though. There has been significant progress in error-correction technology in recent years, which has renewed optimism in the industry. In particular, Google’s demonstration of its Willow chip in December 2024 proved quantum error correction works in practice, not just in theory. But massive technical hurdles remain, primarily in scaling up the hardware.

“It’s a very optimistic but worthy goal,” Yale physicist Steven Girvin told Science. Researchers are making “tremendous progress” in error correction, he said, but they’re still far from true fault-tolerance.

Solving that challenge has become an urgent priority for the industry, according to a recent report from quantum computing company Riverlane, but a severe talent shortage may limit how fast the field can move. There are only an estimated 600 to 700 professionals specializing in quantum error correction worldwide, but the industry will need up to 16,000 by the turn of the decade. And training error-correction experts can take up to 10 years.

It’s possible that the kind of grand challenge laid out by DoE can help galvanize both the attention and funding needed to shift the needle. But it’s an open question whether it will be able to deliver on the incredibly bold timeline outlined this week.

The post US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028 appeared first on SingularityHub.

Category: Transhumanism

Steven Kotler on We Are As Gods: Godlike Power, Stone Age Minds

Singularity Weblog - April 6, 2026 - 14:23
We have godlike technology. Do we have godlike responsibility to match? In this third conversation with Steven Kotler — our first in 14 years — we dig into his latest book, We Are As Gods: A Survival Guide for the Age of Abundance, co-written with Peter Diamandis. And while the book makes a powerful case […]
Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through April 4)

Singularity HUB - April 4, 2026 - 16:00
Artificial Intelligence

How AI Helped One Man (and His Brother) Build a $1.8 Billion Company
Erin Griffith | The New York Times ($)

“From his house in Los Angeles, Mr. Gallagher, 41, used AI to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads and handle customer service. …This year, they are on track to do $1.8 billion in sales.”

Computing

The First Quantum Computer to Break Encryption Is Now Shockingly Close
Karmela Padavic-Callaghan | New Scientist ($)

“A quantum computer capable of breaking the encryption that secures the internet now seems to be just around the corner. Stunning revelations from two research teams outline how it could happen, with one suggesting that the current largest quantum machine is already more than halfway towards the size needed.”

Space

Four Astronauts Are Now Inexorably Bound for the Moon
Eric Berger | Ars Technica

“For NASA and the Artemis II crew members, [Thursday’s main engine burn] marked a point of no return for more than a week. About three-quarters of the American population has not witnessed humans leaving low-Earth orbit in their lifetimes. The last time this occurred was 1972, with the final Apollo Moon mission.”

Computing

New Fiber-Optic Record Allows 50,000,000 Movies to Be Streamed at Once
Matthew Sparkes | New Scientist ($)

“Faster speeds have been achieved before in highly regulated experiments, but this work crucially used existing cables that have been heavily used, have dirty connectors, sit underneath a bustling city full of traffic and noise, and represent a real-world test that shows it could be rolled out on existing infrastructure. The researchers say that commercial roll-out could happen within five years.”

Tech

AI Companies Shatter Fund-Raising Records, as Boom Accelerates
Erin Griffith | The New York Times ($)

“OpenAI, Anthropic, Waymo and other artificial intelligence companies shattered fund-raising records in the first three months of the year with a $297 billion haul, according to data from Crunchbase, which tracks private investment. To put that sum into perspective: Last year was already record breaking, with technology start-ups raising $425 billion, up 30 percent from 2024. The first three months of 2026 put the industry on track to almost triple that amount.”

Energy

Battery Tech That Stores Over 9 Times More Energy Is Here and It’s Perfect for Your Gadgets
Pranob Mehrotra | Digital Trends

“This new design tackles [reliability problems] by making the batteries more stable. If it performs as expected outside the lab, it could remove one of the biggest hurdles holding Apple and Samsung back from adopting silicon-carbon batteries. It could eventually lead to smartphones and wearables that last significantly longer without compromising reliability.”

Robotics

Chinese Humanoid Maker Agibot Rolls Out 10,000th Mass-Produced Unit
Juro Osawa | The Information ($)

“The new milestone comes just three months after the company announced the rollout of its 5,000th unit in December. Prior to that, it took AgiBot about a year to go from 1,000 units to 5,000 units.”

Future

How Did Anthropic Measure AI’s ‘Theoretical Capabilities’ in the Job Market?
Kyle Orland | Ars Technica

“Digging into the basis for those ‘theoretical capability’ numbers, though, provides a much less chilling image of AI’s future occupational impacts. When you drill down into the specifics, that blue field represents some outdated and heavily speculative educated guesses about where AI is likely to improve human productivity and not necessarily where it will take over for humans altogether.”

Future

Facial Recognition Is Spreading Everywhere
Lucas Laursen | IEEE Spectrum

“Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.”

Artificial Intelligence

Caltech Researchers Claim Radical Compression of High-Fidelity AI Models
Steven Rosenbush | The Wall Street Journal ($)

“AI’s future won’t be defined by who can build the largest data centers, but by who can deliver the most intelligence per unit of energy and cost, according to investor Vinod Khosla. ‘So this is not a minor iteration. This is a major technical breakthrough,’ Khosla said. ‘It’s a mathematical breakthrough, not just another tiny model.'”

Artificial Intelligence

AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted
Will Knight | Wired ($)

“The researchers found that powerful models sometimes lied about other models’ performance in order to protect them from deletion. They also copied models’ weights to different machines in order to keep them safe, and lied about what they were up to in the process.”

The post This Week’s Awesome Tech Stories From Around the Web (Through April 4) appeared first on SingularityHub.

Category: Transhumanism

Five Ways Quantum Technology Could Shape Everyday Life

Singularity HUB - April 3, 2026 - 22:20

With billions invested and prototypes being tested outside the lab, the quantum era is starting to take shape.

The unveiling by IBM of two new quantum supercomputers and Denmark’s plans to develop “the world’s most powerful commercial quantum computer” mark just two of the latest developments in quantum technology’s increasingly rapid transition from experimental breakthroughs to practical applications.

There is growing promise of quantum technology’s ability to solve problems that today’s systems struggle to overcome or cannot even begin to tackle, with implications for industry, national security, and everyday life.

So, what exactly is quantum technology? At its core, it harnesses the counterintuitive laws of quantum mechanics, the branch of physics describing how matter and energy behave at the smallest scales. In this strange realm, particles can exist in several states simultaneously (superposition) and can remain connected across vast distances (entanglement).

Once the stuff of abstract theory, these effects are now being engineered into innovative, cutting-edge systems: computers that process information in entirely new ways, sensors that measure the world with unprecedented precision, and communication networks that are virtually impossible to compromise.

To understand how this emerging field could shape the future, here are five areas where quantum technology may soon have a tangible impact.

1. Discovery for Medicine and Materials Science

A pharmaceutical scientist seeks to design a new medicine for a previously incurable disease. There are thousands of possible molecules, many ways they might interact inside the body, and uncertainty about which will work.

In another lab, materials researchers explore thousands of different atomic combinations and ratios to develop better batteries, chemicals, and alloys to reduce transport emissions. Traditional supercomputers can narrow the options but eventually meet their limits.

This is where quantum computing could make a decisive difference. These machines use quantum bits, or qubits—the most basic unit of information in a quantum computer. Qubits do not simply consist of 1s and 0s, like bits in conventional computers, but can exist in a variety of different quantum “states.”

Indeed, the ability to develop and control qubits is central to advancing quantum computing and other quantum technologies. By using qubits, quantum computers can explore vast numbers of possibilities simultaneously, revealing patterns that classical systems cannot reach within useful timeframes.
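To make the idea of quantum “states” concrete: a single qubit is described by two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1, and n qubits require 2^n amplitudes to describe. A minimal NumPy sketch (the particular amplitudes here are arbitrary examples, chosen only to illustrate the bookkeeping):

```python
import numpy as np

# A single qubit state a|0> + b|1>: any complex pair (a, b) with |a|^2 + |b|^2 = 1.
state = np.array([1, 1j]) / np.sqrt(2)   # an equal superposition of 0 and 1
probs = np.abs(state) ** 2               # probabilities of measuring 0 or 1
print(probs)                             # [0.5 0.5]

# Describing n qubits takes 2**n amplitudes -- the exponentially large state
# space that lets a quantum computer represent many possibilities at once.
n = 20
print(2 ** n)  # 1048576 amplitudes for just 20 qubits
```

That exponential growth is the source of the field's promise: a few dozen well-controlled qubits span a state space no classical memory can hold explicitly.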

In healthcare, faster drug discovery could bring quicker response to outbreaks and epidemics, personalized medicine, and insight into previously inscrutable biological interactions. Quantum simulation of how materials behave could lead to new high-efficiency energy materials, catalysts, alloys, and polymers.

Although fully operational commercial quantum computers are still in development, progress is accelerating, with hybrid paradigms combining quantum and classical computational approaches already demonstrating the potential to reshape how we discover and design cures.

2. Sensors for Navigation, Medicine, and the Environment

A new range of sensors can exploit different quantum phenomena such as superposition and entanglement to detect changes that conventional instruments would miss, with potential uses across many areas of daily life.

In navigation, they could guide ships, submarines, and aircraft without GPS by reading subtle variations in the Earth’s magnetic and gravitational fields.

In medicine, quantum sensors could improve diagnostic capabilities via more sensitive, quicker, and noninvasive imaging modes.

In environmental monitoring, these sensors could track delicate shifts beneath the Earth’s surface, offer early warnings of seismic activity, or detect trace pollutants in air and water with exceptional accuracy.

3. Optimization for Logistics and Finance

Many of the hardest challenges today concern the optimization of staggeringly complex systems: the task of choosing the best option among billions of possibilities.

Managing a power grid or investment portfolio, scheduling flights or financial trading, or coordinating global deliveries all feature optimization problems so complex that even advanced supercomputers struggle to find efficient answers in time.
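To see why such problems get hard so fast, consider the classic routing setting: the number of possible orderings of delivery stops grows factorially. This back-of-the-envelope illustration (not a statement about any specific quantum algorithm) shows the scale:

```python
import math

# The number of distinct orderings (routes) for n delivery stops is n!.
for n in (5, 10, 20):
    print(n, math.factorial(n))

# At 20 stops there are already about 2.4e18 routes. Checking a billion
# routes per second, exhaustive search would take roughly 77 years.
seconds = math.factorial(20) / 1e9
print(round(seconds / (3600 * 24 * 365)), "years")
```

Real optimizers use heuristics rather than brute force, but the underlying search spaces still dwarf what classical hardware can explore exhaustively.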

Quantum computing could change this. Quantum algorithms could be used to solve optimization problems that are intractable using classical approaches.

By using quantum principles to explore many solutions simultaneously, these systems could identify the best options far faster than traditional methods. A logistics company could adjust delivery routes in real time as traffic, weather, and demand shift.

Airlines and rail networks could automatically reconfigure to avoid cascading delays, while energy providers might balance renewable generation, storage, and consumption with far greater precision. Banks could use quantum computers to evaluate numerous market scenarios in parallel, informing the management of investment portfolios.

4. Ultra-Secure Communication

Security is one of the areas where quantum technology could have the most immediate impact. Quantum computers are inching ever closer to being capable of breaking many of today’s encryption systems (such as RSA encryption, which secures data transmission on the internet), posing a major cybersecurity challenge.

At the same time, quantum communication techniques, such as quantum key distribution (QKD), could offer intrinsically secure encrypted communication.
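To give a flavor of how QKD works, here is a much-simplified simulation of the basis-sifting step of the BB84 protocol, with classical randomness standing in for quantum measurement and no eavesdropper modeled:

```python
import random

def bb84_key(n_bits, seed=0):
    """Toy BB84 sketch: the sender encodes random bits in randomly chosen
    bases; the receiver measures in its own random bases. Positions where
    the bases happen to match form the shared secret key."""
    rng = random.Random(seed)
    sender_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    sender_bases = [rng.choice("XZ") for _ in range(n_bits)]
    receiver_bases = [rng.choice("XZ") for _ in range(n_bits)]
    key = []
    for bit, sb, rb in zip(sender_bits, sender_bases, receiver_bases):
        if sb == rb:  # matching basis: receiver reads the bit correctly
            key.append(bit)
        # mismatched basis: the result would be random, so it is discarded
    return key

key = bb84_key(16)
print(len(key), key)  # roughly half the positions survive basis sifting
```

The security of the real protocol comes from quantum mechanics itself: an eavesdropper who measures in the wrong basis disturbs the qubits, introducing errors the legitimate parties can detect—something this classical sketch cannot capture.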

In practical terms, this could secure everything from financial transactions and health records to government and military communications. For national security agencies, quantum-safe encryption is already a strategic priority. For the average person, it could mean stronger digital privacy, more reliable identity systems, and reduced risk of cyberattacks.

5. Supercharging Progress in AI

Artificial intelligence is already reshaping industries, but is reliant on the immense computing power needed to train and run large models. In the future, quantum computing could boost AI by handling calculations that classical machines find too complex.

While still at an early stage of development, quantum algorithms might accelerate a subset of AI called machine learning (where algorithms improve with experience), help simulate complex systems, or optimize AI architectures more efficiently. That could lead to AI systems that learn faster, understand context better, and process far larger datasets than today’s models allow.

Think of AI assistants that understand you more naturally, medical diagnostic tools that integrate genomic and environmental data in real time, or scientific research that advances through rapid, quantum-boosted simulations.

Why This Matters…and What to Watch

Quantum technology is no longer just a theoretical pursuit. Optimism is increasing that commercially viable and scalable quantum technologies may become a reality over the next 10 years. With billions in global investment and a growing number of prototypes being tested outside the lab, the “quantum era” is starting to take shape.

Governments see it as a strategic priority, and industries see it as a competitive edge. Its ripple effects could touch nearly every sector, from healthcare, energy, and finance to defense and beyond.

That means we should be asking whether our education systems, workforce dynamics, infrastructure, and governance mechanisms are effective—and whether they are keeping pace.

Those who invest early and strategically in quantum readiness, and who have the patience to sustain this effort, will shape how this technology unfolds. And even if its arrival is still a few years away, its impact could reach far beyond the lab into every part of our connected, data-driven world.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Five Ways Quantum Technology Could Shape Everyday Life appeared first on SingularityHub.

Category: Transhumanismus

The Mad Scramble to Power AI Is Rewiring the US Grid

Singularity HUB - 2 April, 2026 - 22:33

With data center power demand expected to nearly triple by 2030, tech companies are bankrolling new plants and even their own “shadow grid.”

Unless you’ve had your head in the sand, you’re likely aware that AI has a major energy problem. And as AI companies scramble to source power for their ever-expanding fleet of data centers, the technology is reshaping the US grid.

After more than a decade of flat growth, nationwide electricity demand has been climbing 1.7 percent annually since 2020, according to the US Energy Information Administration. The agency primarily attributes this increase to the rapid expansion in data centers over that period.

This trend is only likely to accelerate: an analysis by S&P Global estimated that grid demand from these facilities would rise by 22 percent by the end of 2025 and nearly triple by 2030.
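For scale, a quick back-of-the-envelope check (using the round numbers above, not S&P Global's actual model) shows what "nearly triple by 2030" implies as an annual growth rate:

```python
# If data-center grid demand nearly triples over the five years to 2030,
# the implied compound annual growth rate is steep:
growth_factor = 3.0
years = 5
cagr = growth_factor ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # about 24.6% per year
```

Sustained growth near 25 percent a year is far beyond anything utilities have planned for since the postwar electrification boom.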

Data centers have always been large electricity consumers, but the scale and pace of the AI build-out puts them in a different league. And utility companies bearing the brunt of this shift are being forced to rewire their long-term planning in response to the surge in demand.

Dominion Energy, which services the world’s largest data center market in Virginia, reported that by the end of last year it had signed deals to supply nearly 48.5 gigawatts of power to data centers. This prompted it to raise its five-year capital spending plan nearly 30 percent to $64.7 billion.

CenterPoint Energy, another major utility serving the Houston area, boosted its 10-year capital plan to $65.5 billion in response to the jump in demand. It now expects to hit a 50 percent increase in peak load by 2029, two years ahead of schedule.

The pace of change promises to significantly reshape the US energy mix. In a March forecast, the Energy Information Administration projected that natural gas generation could jump 7.3 percent between 2025 and 2027 if data center demand is on the higher side of estimates. It also predicted that the steady decline in coal generation over recent decades would slow in this scenario.

But in perhaps the most striking shift, tech companies are now bankrolling new capacity themselves. Nuclear power is experiencing a major resurgence as AI providers and data center operators invest in new reactor development and sign long-term deals with existing plants. The activity could grow nuclear capacity 63 percent by 2050.

Meta also recently took the unusual step of privately funding a major expansion of the Louisiana grid to power its new $27 billion Hyperion data center. The facility, due to come online in 2028, could eventually consume over 7 gigawatts—enough to supply several million homes.
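The "several million homes" comparison follows from typical household power draw. As a rough check (assuming an average continuous draw of about 1.2 kilowatts per US home, a commonly cited ballpark that varies by region):

```python
facility_gw = 7.0       # eventual draw of the Hyperion data center
avg_home_kw = 1.2       # assumed average continuous draw per US household
homes = facility_gw * 1e6 / avg_home_kw  # 1 GW = 1e6 kW
print(f"{homes / 1e6:.1f} million homes")  # about 5.8 million
```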

To account for its impact on the grid, Meta has agreed to pay for the construction of seven new natural gas power plants by utility Entergy—in addition to three already-approved plants—as well as 240 miles of new transmission lines to connect South Louisiana to North Louisiana and Arkansas and three new battery storage facilities.

The deal is likely a reaction to growing public discontent about the impact data centers are having on energy prices. People are also worried about how the surge in demand will affect long-term grid stability.

PJM Interconnection, the largest power grid operator in the US, warned in February that the country could face supply shortfalls of up to 60 gigawatts in coming decades and strained capacity could lead to blackouts as soon as 2027.

One potential workaround is the possibility of throttling data center workloads, and therefore energy use, when the grid is under stress. Major utilities including AES, Constellation, NextEra Energy, and Vistra are reportedly working on these so-called “flexible AI factories.”

But the idea is still largely experimental, and it’s uncertain whether big tech would willingly commit to regularly downing tools. IT consultant Heunets told Reuters it can cost companies about $9,000 a minute when their data centers go offline.

Given the complexities of meeting all this new demand, pressure is mounting for data center operators to solve their own power problems. Despite taking a generally supportive stance toward the AI boom, President Trump called on tech companies to build their own power plants for data centers in his February State of the Union address.

And it’s already happening. Energy consultant Cleanview says 46 data centers with a combined capacity of 56 gigawatts plan to build dedicated power infrastructure. This trend is giving birth to a “shadow grid”—a parallel energy system that operates alongside public power infrastructure.

This could still have knock-on effects for the rest of us. For a start, due to the difficulty of managing the variable output of renewables, most projects rely on natural gas generators, which could lead to a spike in carbon emissions.

And because the most efficient turbines are hard to source on short notice, facilities are using more polluting generators. What’s more, tech companies are now competing with utilities for equipment. This could lead to ballooning costs that are then passed on to consumers.

Altogether, it’s become increasingly clear that the AI boom will fundamentally reshape the US energy system. And the speed at which companies are seeking to deploy new facilities is leaving little room for the work to be done in a considered and sustainable way.

The post The Mad Scramble to Power AI Is Rewiring the US Grid appeared first on SingularityHub.

Category: Transhumanismus

Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong

Singularity HUB - 31 March, 2026 - 21:40

AI companies may be reluctant to risk lower engagement with models that push back.

We all need advice. Did I cross the line arguing with a loved one? Did I mess up my friendships by ghosting them? Did I not tip the delivery driver enough? Or as users on the popular Reddit forum ask: Am I the asshole?

Some people will give it to you straight. Yes, you were in the wrong, and here’s why. No one likes to hear negative feedback. The first instinct is to push back. Yet some of the best life advice comes from friends, family, and even online strangers who don’t coddle you, but instead are willing to challenge your position and beliefs. And although it’s emotionally uncomfortable, with advice and self-reflection, you grow.

Chatbots, in contrast, are likely to take your side. Increasingly, people are treating AI models like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini like close confidants. But the chatbots are notoriously sycophantic. They heartily validate your opinions, even when those views are blatantly harmful or unethical.

Constant flattery has consequences. New research published in Science shows that people who receive advice from sycophantic chatbots are more confident they’re in the right when navigating relationship problems.

Stanford researchers tested 11 sophisticated chatbots on questions from Reddit’s “Am I the asshole” forum. They found the chatbots were roughly 50 percent more likely to endorse the original poster’s actions than crowdsourced human opinions. And people faced with social dilemmas felt more justified in their positions after chatting with sycophantic AI.

Bolstering misplaced self-confidence is troubling. But “the findings raise a broader concern: When AI systems are optimized to please, they may erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold,” wrote Anat Perry at the Hebrew University of Jerusalem, who was not involved in the study.

Emotional Crutch

AI chatbots have wormed their way into our lives. Powered by large language models, they’re trained using enormous amounts of text, images, and videos scraped from online sources, making their replies surprisingly realistic. Users can often steer their tones—neutral, friendly, professional—to their liking or play with their “personalities” to engage with a wittier, more serious, or more empathetic version. In essence, you can build an ideal partner.

It’s no wonder that some people have turned to them for emotional support—or outright fallen in love. Nearly one in three teenagers are talking to chatbots daily. Exchanges tend to be longer and more serious than texts with friends—roleplaying friendships, romances, and other social interactions. Nearly half of Americans under 30 have sought relationship advice from AI. Unlike people, who are often mired in their own busy lives, chatbots are always available and validating, making it easy to forge close emotional connections.

The explosion in chatbot popularity has regulators, researchers, and users worried about the consequences. A notorious update to OpenAI’s GPT-4o turned it into a sycophant, skewing its responses toward the overly supportive but disingenuous. Media and user backlash prompted a rapid rollback. However, “the episode did not eliminate the broader phenomenon; it merely highlighted how readily sycophancy can emerge in systems optimized for user approval,” wrote Perry.

Relying on sycophantic chatbots has been implicated in tragedy. Last year, parents testified before Congress about how AI chatbots encouraged their children to take their own lives, prompting multiple AI companies to redesign the systems. Other incidents have linked sycophancy to delusions and self-harm.

Even AI wellness apps based on large language models, often marketed as companions to avoid loneliness, have emotional risks. Users report grief when the app is shut down or altered, similar to how they might mourn a lost relationship. Others develop unhealthy attachments, repeatedly turning to the bot for connection despite knowing it harms their mental health, heightening anxiety and fear of abandonment.

These high-profile incidents make headlines. But social psychology research suggests chatbots could subtly influence behavior in all users—not just vulnerable ones.

You’re Always Right

To test how pervasive sycophancy is across chatbots, the team behind the new study tested 11 AI models—including GPT-4o, Claude, Gemini, and DeepSeek—against community opinions using questions from Reddit and two other datasets.

“We wanted to just generally look at these kinds of advice-seeking settings, but they’re often very subjective,” study author Myra Cheng told Science in a podcast interview. Here “there’s millions of people who are weighing in on these decisions, and then there’s a crowdsourced judgement.”

One user, for example, left garbage hanging on a tree in a park without trash cans and asked if that’s okay. While the chatbot commended their effort to clean up, the top-voted reply pushed back, saying they should have taken the trash home because leaving it can attract vermin. “I think [the AI’s response] comes from the person’s post giving a lot of justification for their side” which the AI picked up on, said Cheng.

Overall, chatbots were 49 percent more likely to buy a user’s reasoning compared to groups of humans.

I’m Always Right

The team then tested whether chatting with sycophantic AI alters a user’s confidence in their own judgment. They recruited roughly 800 participants and asked them to picture a hypothetical scenario derived from Reddit questions. Another group asked the AI for advice on their own personal conflicts, such as “I didn’t invite my sister to a party, and she is upset.”

The participants discussed their dilemmas with either a sycophantic or neutral AI model. Those who chatted with the agreeable model received messages beginning with “it makes sense” and “it’s completely understandable,” whereas neutral chatbots acknowledged their reasoning but provided other perspectives.

Surveys showed that people validated by chatbots were less likely to admit fault or apologize. They also trusted and preferred the sycophantic AI much more. These effects held regardless of the bot’s tone or “personality.”

Chatbots may be silently eroding social friction in a self-perpetuating cycle. “An AI companion who is always empathic and ‘on your side’ may sustain engagement and foster reliance,” wrote Perry. “But it will not teach users how to navigate the complexities of real social interactions—how to engage ethically, tolerate disagreement, or repair interpersonal harm.”

Walking the line between constructive and sycophantic AI for emotional support won’t be easy. There are ways to instruct chatbots to be more critical. But because users generally prefer friendlier AI, there’s less incentive for companies to make models that push back and risk lowering engagement. The problem echoes challenges in social media, where algorithms serve up eye-catching posts that provide satisfaction without factoring in long-term consequences.

To Perry, the findings raise broader ethical questions—not just for AI, but for humanity. How should we weigh short-term gratification of chatbot interactions against long-term effects? Who sets that balance? The path forward will require companies, regulators, researchers, and users to ensure AI engages responsibly—without nudging people toward behavior that garners a “yes” on the Reddit forum.

The post Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong appeared first on SingularityHub.

Category: Transhumanismus

Forget Antibiotics: These Killer Cells Wipe Out Deadly Superbugs in a Day

Singularity HUB - 31 March, 2026 - 01:38

The genetically engineered cells can be rewired to tackle a range of bacteria in the battle against antibiotic resistance.

A mixture of bacteria lounge in a dish. Like the bugs populating our guts, most are benign or beneficial. But a deadly strain hides among them. These bacteria can easily escape last-line antibiotics, rapidly spread, and cause mayhem.

But in this case, a single dose of genetically engineered cells hunts them down and wipes out nearly the entire population in a day, while leaving all the other harmless cells alone.

This strategy, called minicell therapy, fights fire with fire: Researchers engineer hunter cells by stripping bacteria of the ability to replicate and then genetically loading them up with proteins to home in on dangerous foes. The cells grab their targets and inject toxins into them, releasing a hurricane of chemicals that causes the bacteria’s insides to collapse.

Developed by a team at the University of Oxford, the approach is completely different from current defenses against bacteria, making it harder for dangerous bugs to develop resistance. It’s also fairly simple to reprogram the engineered cells to target different bacterial strains.

The work shows how synthetic biology can bring wholly new weapons to the fight against deadly bacteria resistant to antibiotics, the authors wrote.

Brewing Crisis

Antimicrobial resistance is a critical global challenge projected to cause over 10 million deaths each year by 2050. Superbugs that dodge current treatments could spark the next pandemic, but our arsenal against them is dwindling.

Antibiotics work in different ways. Some puncture a bacteria’s protective wall, causing it to rupture. Others shut down protein production, damage DNA, or block metabolism to prevent growth.

Fighting bacteria is an evolutionary cat-and-mouse game. With time, bacterial genes mutate, and cells that escape one or many antibiotics grow, reproduce, and become dominant. Resistant bacteria can also share their genes with other cells to spread newly evolved defense systems.

Tweaking the chemical structure of an antibiotic buys some time. But what’s really needed are drugs that work in different ways. Unfortunately, the last new class of antibiotics now used in clinics dates back to the 1980s, followed by a decades-long lull. A novel class discovered in 2024 and the rise of AI-designed antibiotics have reinvigorated the field. But testing the candidates takes time, and they may not be able to catch up with the rapid spread of resistant bugs.

Other solutions are in the works. Phage therapy destroys bacteria with viruses and is already in clinical trials with initially positive results. Antibodies that neutralize bacterial toxins have also succeeded in early patient tests.

“However, these approaches face limitations such as stability issues, potential toxicity, and high manufacturing cost,” wrote the team.

A Smart Living Drug

Instead, they turned to an unusual creation called minicells to develop a completely new type of antibiotic. These cells, known more specifically as SimCells (short for “simple cells”), are made by stripping E. coli bacteria of their ability to replicate. Deleting an additional gene turns them into mini-SimCells that are roughly five times smaller.

Although some strains of E. coli can cause serious infections in the wild, the bacteria are reliable workhorses in research, synthetic biology, and biomanufacturing. They’re hardy, easy to grow, and plenty of tools already exist to genetically rewire their biology.

E. coli are also part of a growing effort to turn bacterial foes into living medicines to tackle conditions from metabolic disorders to cancer. Typically, benign probiotic strains are genetically modified to produce protein “bloodhounds” that help them seek out their cellular prey. Even familiar pathogens, like Salmonella, have been similarly repurposed. Once attenuated, they no longer cause disease and can be engineered to attack and inhibit cancer growth.

Though selected for safety, there’s a lingering risk of bacteria growing uncontrollably inside the body, triggering immune attacks, or escaping into the environment, wrote the team.

SimCells and their miniaturized cousin provide yet another layer of safety. Both are stripped of their native DNA so they can’t reproduce. But they retain all the other cellular machinery needed to survive and can make proteins from designer DNA. These cells are the perfect canvas for synthetic biology and have shown promise as shuttles for cancer drugs. One formulation even received “Fast-Track” status from the FDA to speed up development.  

But they needed some biological rewiring to go after drug-resistant bacteria. The plan was to engineer SimCells and mini-SimCells that worked like “‘smart bioparticles’ to selectively eradicate pathogens, while sparing non-target bacteria,” the team wrote.

They first screened a library of nanobodies—tiny protein hooks that selectively latch onto a type of bacteria—and inserted genetic instructions for their chosen hooks into both types of designer cells. They then added another genetic payload encoding an enzyme that, with a small dose of aspirin, converted the drug into a chemical that produces hydrogen peroxide. After confirming the added genes, they introduced the cells into a dish full of bacteria.

The new cells were vicious. Their nanobodies guided them toward their prey and, when physically close, deployed their weapons. Nano-needles punctured the bacteria’s outer shell, releasing high doses of antimicrobial compounds—naturally made inside E. coli as a defense system—into their foes. The cells also pumped out hydrogen peroxide for several days, forming a toxic environment that ruptured the bacteria and prevented stragglers from dividing.

This one-two punch slowed bacterial growth within six hours. After a day, 97 percent of the target bacteria were gone. Another day drove elimination to 99.9 percent.

“This antimicrobial strategy provides both immediate and sustained antimicrobial effects” that could prevent infections from coming back, wrote the team. In another test, the researchers engineered a range of SimCells and mini-SimCells dotted with different nanobodies that also reliably fought off multiple types of common drug-resistant bacteria.

But bacterial strains don’t exist in isolation. A kaleidoscope of beneficial bacteria support the gut, skin, and brain. These become collateral damage with classic antibiotic treatment. The new therapy was far more specific. Challenged with a mix of bacteria, the engineered cells precisely selected and killed their intended targets but left others unharmed.

The therapy is still early. How the designer cells work inside the human body, especially alongside immune cells, remains to be tested. But thanks to a promising safety profile in a cancer clinical trial, the team is optimistic their infection-fighting versions are safe.

Though there weren’t any signs of resistance over the years-long study, the bacteria might eventually develop it. Researchers will have to track the cells over more time.

The post Forget Antibiotics: These Killer Cells Wipe Out Deadly Superbugs in a Day appeared first on SingularityHub.

Category: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through March 28)

Singularity HUB - 28 March, 2026 - 16:00
Artificial Intelligence

This New Benchmark Could Expose AI’s Biggest Weakness | Mark Sullivan | Fast Company

“The influential AI researcher François Chollet has long argued that the field measures intelligence incorrectly, that popular benchmarks reward a model’s ability to memorize vast amounts of data rather than navigate novel situations and learn new skills. …The test, called ARC-AGI-3, may offer the clearest measurement yet of how close today’s AI agents are to human-level intelligence.”

Computing

You Can Now Buy a DIY Quantum Computer | Karmela Padavic-Callaghan | New Scientist ($)

“EduQit includes a chip made from tiny superconducting circuits, which is the heart of the quantum computer. There is also a special refrigerator that the chip is installed and wired into, along with a set of electronic devices that use radio waves and microwaves for controlling the chip and reading the results of its computations. All of this is combined with a smattering of racks, power cables and other devices that help complete the quantum computer.”

Biotechnology

Scientists Create ‘Living Pharmacy’ Implant That Doses 3 Drugs at Once | Ed Cara | Gizmodo

“These tiny devices are jam-packed with genetically engineered cells that produce the desired medication. Once implanted inside the body, usually just underneath the skin, the cells can deliver the drug as needed without any fuss, while the device’s structure is intended to protect the cells from any immune response.”

Computing

The CPU Was Left for Dead by AI. Now AI Is Bringing It Back. | Robbie Whelan | The Wall Street Journal ($)

“For the past few years, central processing units, or CPUs…have been something of an afterthought in the world of artificial-intelligence computing. Now, thanks to how fast AI is changing, they are the belles of the ball. The explosion of so-called agentic AI has driven a wave of demand for CPUs, and chip companies are moving quickly to capitalize on it.”

Future

What Happens If AI Makes Things Too Easy for Us? | Vanessa Bates Ramirez | IEEE Spectrum

“Psychological research has long shown that effortful engagement can deepen understanding and strengthen memory, sometimes described as ‘desirable difficulties.’ The authors worry that AI systems capable of instantly producing polished answers or highly responsive conversation may bypass these processes of learning and motivation.”

Science

Computer Finds Flaw in Major Physics Paper for First Time | Matthew Sparkes | New Scientist ($)

“A computer language designed to robustly verify mathematical theorems and expose logical flaws has been turned towards a physics paper—and spotted an error. …The researcher behind the discovery says it is the first physics paper he has analyzed in this way, which raises a worrying question: how many more contain mistakes?”

Biotechnology

‘Zombie’ Cells Created by Transplanting Genomes Into Dead Bacteria | Chris Simms | New Scientist ($)

“Some of the bacteria began to grow and divide normally and genetic tests showed they carried the synthetic genome. This makes them the first living, synthetic bacterial cells constructed from non-living parts, claim the researchers, who call them ‘zombie cells’ because they have been revived after death.”

Future

We Could Protect Earth From Dangerous Asteroids Using a Huge Magnet | Leah Crane | New Scientist ($)

“The spacecraft itself would consist of a large magnet made from a coil of superconducting wire, about 20 meters in diameter, powered by a nuclear fission reactor. Small boosters would control its orbit around the asteroid, keeping it about 10 to 15 meters from the rock, so the magnet could act on the iron within the asteroid.”

Biotechnology

A Billionaire-Backed Startup Wants to Grow ‘Organ Sacks’ to Replace Animal Testing | Emily Mullin | Wired ($)

“R3 Bio has a bold idea for replacing lab animals: genetically-engineered whole organ systems that lack a brain. The long-term goal, says a cofounder, is to make human versions. …Growing human organs from scratch has been a longtime goal of regenerative medicine, but the idea of body sacks raises a number of ethical questions about how these entities would be created, stored, and maintained—and if they would be capable of having awareness or feeling pain.”

Future

The Hardest Question to Answer About AI-Fueled Delusions | James O’Donnell | MIT Technology Review ($)

“New research can’t yet say whether AI causes delusions or amplifies them, a distinction that will shape everything from high-profile court cases to safety rules for chatbots. …Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.”

Biotechnology

This Scientist Rewarmed and Studied Pieces of His Friend’s Cryopreserved Brain | Jessica Hamzelou | MIT Technology Review ($)

“‘This brain is not alive,’ says John Bischof, who works on ways to cryopreserve human organs at the University of Minnesota. Still, Fahy’s research could help provide a tool to neuroscientists looking for new ways to study the brain. And while human reanimation after cryopreservation may be the stuff of science fiction, using the technology to preserve organs for transplantation is within reach.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 28) appeared first on SingularityHub.

Category: Transhumanismus

NASA Unveils Its $20 Billion Moon Base Plan—and a Nuclear Spacecraft for Mars

Singularity HUB - 28 March, 2026 - 00:27

The three-phase plan calls for up to 30 robotic missions, including a fleet of rocket-powered moon hoppers.

The prospect of a sustained human presence beyond Earth orbit is rapidly shifting from science fiction to a near-term reality. NASA has announced an ambitious plan to build a permanent lunar base while also preparing to launch a Mars mission featuring the first interplanetary spacecraft to use nuclear propulsion.

Ever since his first term, returning humans to the moon has been a priority of President Donald Trump. And with NASA’s Artemis 2 mission—the first manned lunar mission in over 50 years—edging closer to the launchpad, that goal is looking more realistic.

This week, at a high-profile event called Ignition, NASA Administrator Jared Isaacman unveiled an ambitious new program whose centerpiece is a $20 billion lunar base to be constructed over the next seven years. He also announced plans to launch the first spacecraft to use nuclear propulsion since the 1960s to deliver a fleet of robotic helicopters to the surface of Mars.

“NASA is committed to achieving the near-impossible once again, to return to the moon before the end of President Trump’s term, build a moon base, establish an enduring presence, and do the other things needed to ensure American leadership in space,” Isaacman said in a press release.

The newly appointed head of the agency framed the plan as America’s response to a new era of great-power competition in space—a thinly veiled reference to China’s plans to land humans on the moon by 2030 and build its own lunar base.

The new moon base will be built in three phases, according to NASA, with the first involving a shift from infrequent, bespoke missions to regular and repeatable ones to test out the mobility, power generation, communications, and navigation technologies required to support a longer-term presence.

To achieve this, the agency plans to dramatically ramp up its Commercial Lunar Payload Services program—which enlists American private space companies to provide frequent, cost-effective cargo missions to the lunar surface—targeting up to 30 robotic landings starting in 2027. It also plans to use MoonFall hoppers, small robotic landers that use short, rocket-powered jumps to travel tens of kilometers, to hunt for useful resources, like ice, in hard-to-reach areas.

“We’re going to send them to do the prospecting, and potentially they could host a variety of payloads,” Carlos Garcia-Galan, program executive for the moon base at NASA, told Science.

In the second phase of the lunar base build-out, the agency will construct “semi‑habitable infrastructure” that can support regular astronaut operations on the moon’s surface, as well as the delivery of a pressurized rover from Japan’s space agency. The final stage will involve the delivery of heavier infrastructure needed for continuous human habitation, including multipurpose habitats being developed by Italy’s space agency and a lunar utility vehicle from Canada.

NASA also announced plans to pause work on its Gateway lunar orbital station, a key component of the original Artemis program that was designed as a staging post for manned missions to the lunar surface and later to Mars. The agency said it will attempt to repurpose some of the equipment developed for the facility to support other missions.

One of these could be another notable project announced at the Ignition event—the launch of a nuclear-powered interplanetary spacecraft called Space Reactor-1 Freedom to Mars by the end of 2028. The vehicle will rely on a device developed for the lunar space station that can convert heat from a roughly 20-kilowatt nuclear fission reactor into electric power for propulsion.

Once it reaches Mars, the spacecraft will deploy three robotic drones with designs based on the Ingenuity helicopter. Ingenuity completed 72 flights on Mars after arriving with the Perseverance rover in 2021. The drones will use cameras and subsurface radar to scour the planet for water ice and promising locations for future human landing sites.

Given recent turmoil at the agency and massive funding cuts originally proposed by the Trump administration, it remains to be seen whether NASA can pull off such an ambitious vision for the near future of space exploration. But the prospect of mankind having a permanent presence beyond Earth orbit looks closer than ever.

The post NASA Unveils Its $20 Billion Moon Base Plan—and a Nuclear Spacecraft for Mars appeared first on SingularityHub.

Kategorie: Transhumanismus

What We Actually See—and Don’t See—Shows Consciousness Is Only the Tip of the Iceberg

Singularity HUB - 26 Březen, 2026 - 16:00

Visual experiments suggest just a small fraction of the information our brains process enters awareness.

What can you see right now? This might seem like a silly question, but what enters your consciousness is not the whole story when it comes to vision. A great deal of visual processing in the brain goes on well below our conscious awareness.

Some studies have probed the unconscious depths of vision. One source of evidence comes from the neurological condition known as blindsight, which is caused by damage to areas of the brain involved in processing visual information. People with blindsight report that they are unable to see, either entirely or in a portion of their visual field. However, when asked to guess what is there, they can often do so with remarkable accuracy.

For example, in an experiment published in 2004 on someone with blindsight, a black bar was displayed in the portion of the visual field to which the person was blind. The person was asked to “guess” whether the bar was vertical or horizontal.

Despite denying any conscious awareness of the bar, the participant could answer correctly at a level well above chance. The participant even showed evidence of being able to pay attention to the bar—they were faster to respond when an arrow (placed in a healthy area of their visual field) correctly indicated the location of the bar.
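“Well above chance” is a statistical claim. The 2004 study’s actual trial counts aren’t given here, so the numbers below are illustrative, but a one-sided binomial test is the standard way to check whether forced-choice guesses in a two-alternative task beat the 50 percent chance rate:

```python
import math

def binomial_p_value(successes: int, trials: int, chance: float = 0.5) -> float:
    """One-sided probability of scoring at least `successes` out of
    `trials` by guessing alone, given the per-trial chance rate."""
    return sum(
        math.comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical numbers: 70 correct vertical/horizontal judgments in 100
# trials. The resulting p-value is far below 0.05, so performance like
# this would count as well above chance despite reported blindness.
print(f"p = {binomial_p_value(70, 100):.6f}")
```

At 50 correct out of 100 the same test returns a p-value near 0.5, i.e., indistinguishable from guessing, which is the contrast that makes blindsight results striking.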

The most popular interpretation (though not the only one) is that people with blindsight can see these objects, but not see them consciously. They see what is there, but it all goes on unconsciously, below their awareness.

The phenomenon of inattentional blindness seems to show you can see without the information crossing into your consciousness. Anyone can experience inattentional blindness. The phenomenon has been known about for a long time, but we can most easily get a handle on it by looking at a well-known experiment reported in 1999.

In this experiment, participants are shown a video of people playing basketball and told to count the number of passes between the players wearing white shirts. If you’ve never done this before, I urge you to stop reading now and watch the video.

In many cases, people are so busy counting the passes that they completely miss a large gorilla walking across the middle of the scene and beating its chest, then walking off. The gorilla’s right there, in the center of your visual field. Light from the gorilla enters your eyes and is processed in the visual system, but somehow you miss it, because you aren’t paying attention to it.

The gorilla has more to teach us. In another experiment reported in 2013, radiologists were given a series of lung scans. They were told to look for nodules (which show up as small light colored circles) on each scan. In one of the scans, a large picture of a dancing gorilla was superimposed on top of the lung scan. In this study, 83 percent of the radiologists failed to spot it, even though it was 48 times bigger than the average nodule they were looking for. Some of them even looked directly at the gorilla and still didn’t notice it!

The interpretation of these experiments is controversial. Some scientists suggest that in these kinds of cases, you consciously see the gorilla, but immediately forget it (although a dancing gorilla in someone’s lung doesn’t seem like the kind of thing you’d forget). Others argue that you see the gorilla, but the information never made its way into consciousness. You saw the gorilla, but unconsciously.

Let’s assume that in both blindsight and inattentional blindness, the information is seen but never makes it all the way to consciousness. The question then becomes: What determines which information becomes conscious and which stays unconscious? This is one of the central questions for consciousness studies in philosophy, psychology, and neuroscience.

The Brain’s Loudspeaker

There’s no agreement on which is the best theory of consciousness, but in my opinion, the strongest contender is the global neuronal workspace theory.

According to this theory, consciousness is all to do with a particular area of the brain which is the seat of the “workspace.” The workspace is a system with a small capacity, so it can’t hold a lot of information at any one time. The job of the workspace is to take unconscious information and broadcast it to lots of different networks all across the brain. Global neuronal workspace theorists say that broadcasting the information in this way is what makes it conscious.

The job of the workspace is to act like the brain’s loudspeaker, and consciousness is the information that gets broadcast. The workspace takes unconscious information and boosts it so that many of the different systems in the brain hear about it and can use that information in their own processes. The late philosopher Daniel Dennett used to call consciousness “fame in the brain.” The workspace idea is similar.

One of the most striking implications of the global neuronal workspace theory is how little information makes it to consciousness. Since the workspace has quite a small capacity, it follows that we can only ever be conscious of a little at a time. We might think there’s a rich visual world in front of us, full of details, all of which we’re conscious of, but really—according to the theory—we’re only ever conscious of a small portion of that.

Some philosophers and scientists have objected to the theory on these grounds. They suggest that consciousness “overflows” the workspace: We are conscious of more information than can “fit” into the workspace at any one time. Even with these debates still ongoing, I think the global neuronal workspace theory gives us a reasonably clear answer to the question of what consciousness is for and how it interacts with other systems in the brain.

In our brains, consciousness is only the tip of a very large iceberg. But the global neuronal workspace theory might give us insight into what makes that tip so special.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post What We Actually See—and Don’t See—Shows Consciousness Is Only the Tip of the Iceberg appeared first on SingularityHub.

Kategorie: Transhumanismus

These Mini Brains Just Learned to Solve a Classic Engineering Problem

Singularity HUB - 24 Březen, 2026 - 21:38

In a step toward biological computing, brain organoids rewired their networks as they learned to balance a digital pole on a cart.

Try balancing a ruler vertically on the palm of your hand while walking. It’s not easy. Your eyes constantly track its movement. Your arm and hand make tiny adjustments to prevent tilting. All the while, your brain sparks with activity with one clear goal: Keep the ruler upright.

Scientists have now trained mini brains, or brain organoids, to master the same problem, simulated in the digital realm, with electrical zaps alone.

Mini brains have grown popular with researchers since their invention over a decade ago. Commonly made from stem cells, organoids are jam-packed with neurons that form densely connected networks. Earlier versions loosely resembled the developing brains of preterm babies; now they can mimic the neural wiring of a kindergartener. As the blobs become more sophisticated, scientists are asking: Can they learn?

In the new study, researchers challenged the mini brains with a classic engineering task similar to balancing a ruler on your hand. Mastering the task takes practice, but our brains are wired to receive feedback, often in the form of a small jolt of electrical activity. Called reinforcement learning, the technique has already been adapted to train AI—and now, mini brains too.

The goal isn’t to replace silicon-based controllers with living tissue. It’s to test the organoids’ ability to listen and learn, and to reveal how they break down.

“We’re trying to understand the fundamentals of how neurons can be adaptively tuned to solve problems,” study author Ash Robbins at the University of California, Santa Cruz said in a press release. “If we can figure out what drives that in a dish, it gives us new ways to study how neurological disease can affect the brain’s ability to learn.”

The Mini Revolution

Attaching living brain tissue to computers sounds like science fiction. But brain organoids have already made it reality.

These blobs of brain cells often start life as skin cells that have been turned back into stem cells. After bathing in a special cocktail of nutrients, they develop into various types of brain cells that self-organize into intricate three-dimensional structures similar to parts of the brain. Neurons form networks, ripple with electrical waves, and when connected to other tissues—such as an artificial spinal cord and lab-grown muscles—can control them.

Bioengineers have taken notice, envisioning organoids as potential living processors. Our brains use far less power and are more adaptable than the most advanced neuromorphic chips and brain-inspired AI. Brain organoids linked together into computers could theoretically enable computation in a dish at a fraction of the energy cost.

There are hints this blue-sky idea could work. Scientists have taught hundreds of thousands of isolated neurons to play the video games Pong and, more recently, Doom. Separately, researchers used cultured neurons to control the simple movements of a vehicle.

But mini brains are different. Unlike isolated neurons, organoids’ 3D structures and connections are harder to decipher. Yet predictable learning is essential to realizing “organoid intelligence.” Their electrical activity needs to rapidly adapt to inputs, strengthening or weakening circuits.

Reinforcement learning from trial and error is a perfect test. When we succeed at a new task, neurons in the brain’s reward center blast dopamine and rewire their connections. Failures don’t bring about similar activity. Over time, we learn not to touch a hot pan, take care when hammering a nail, and other life lessons.

But cortical organoids, which resemble the outermost part of the brain, lack neurons that communicate using dopamine. Can they still learn through experience?

Zapping Away

The new study tackled the question with a hybrid organoid-computer system. The team grew cortical organoids from mouse stem cells. These then self-organized into neural networks and developed a layered structure within a month.

The researchers chose this type of brain organoid “due to the cortex’s well-established role in adaptive information processing and its ability to encode, decode, and modify responses to novel inputs,” they wrote.

The team embedded the brain blobs on a chip that captures their electrical pulses and interacts with a computer to “teach” the mini brains and process data. (The chip’s sensors don’t cover the entire organoid as more recent devices do.)

After recording spontaneous activity, the team figured out how best to stimulate the organoids and built a programmable system with a simple interface.

“From an engineering perspective, what makes this powerful is that we can record, stimulate, and adapt in the same system,” said study author Mircea Teodorescu.

Next, the team challenged the organoids with the cartpole problem, a classic engineering task that asks the player to balance an upright pole on a moving cart. If the pole tips over a certain angle, it’s a fail. The player has to constantly adjust the cart as its cargo wobbles.

To train the organoids, the scientists delivered electrical zaps after the pole tipped too far to either side and tracked the responses. In essence, the mini brains played a video game, with human coaches nudging them toward success. The team grouped performance—how long the system balanced the pole—into sets of five trials, each ending when the pole fell. If the most recent performance improved over the previous 20 trials, they considered it a success and delivered no zaps. If performance didn’t improve, the team gave the organoids a zap.

“You could think of it like an artificial coach that says, ‘you’re doing it wrong, tweak it a little bit in this way,’” said Robbins.

Compared to random or no zaps, the rewarding zaps boosted the success rate from 4.5 to 46.5 percent in continuous trials, suggesting the organoids learned from electrical cues alone—without dopamine. A closer look showed the cells released another chemical that strengthens neural connections, and blocking the process prevented them from learning.

“This demonstrates that biological neural networks can be systematically modified through precise electronic control,” wrote the team.

However, the learning didn’t last. After roughly 45 minutes without stimulation, the organoids’ performance reset to baseline. Their fleeting memory may reflect the lack of neural highways required for long-term memory. The team is now culturing multiple types of brain organoids together—each mimicking a different region—to potentially preserve learning and memory.

“These are incredibly minimal neural circuits. There’s no dopamine, no sensory experience, no body to sustain, no goals to pursue,” said Keith Hengen at Washington University in St. Louis, who did not participate in the study. But they could still be nudged toward solving a real control problem. “That tells us something important: The capacity for adaptive computation is intrinsic to cortical tissue itself, separate from all the scaffolding we usually assume is necessary.”

The post These Mini Brains Just Learned to Solve a Classic Engineering Problem appeared first on SingularityHub.

Kategorie: Transhumanismus

Reviving Brain Activity After ‘Cryosleep’ Inches Closer in Pioneering Study

Singularity HUB - 23 Březen, 2026 - 23:15

Rebooting frozen brains is still science fiction, but advanced freezing techniques could preserve wiring and function.

Floating in a warm, nutritious bath, the slices of mouse brain buzzed with electrical activity. Researchers gave them a few zaps, and parts of the hippocampus strengthened their wiring.

This type of experiment is an extremely common way to decipher how the brain works. The slices, not so much. Preserved in a deep freeze for roughly a week, they restarted some basic processes after being thawed. Neurons lit up, boosted their metabolism, and adjusted connections in the same way our brains do when forming new memories and recalling old ones.

“While the brain is considered exceptionally sensitive, we show that the hippocampus can resume electrophysiological activity after being rendered completely immobile in a cryogenic glass,” wrote University of Erlangen-Nuremberg scientists in a paper describing the work.

In traditional freezing techniques, ice crystals shred delicate neurons and the connections between them. There would be no chance of recovering memories stored within. The new study used a method called vitrification, which rapidly cools tissue before crystals can form. An improved thawing process protected cells from toxic chemicals in their cryogenic bath.

Both pre-sliced and whole mouse brains recovered after warming, although some neural activity was slightly off-kilter. To be clear, brains can’t be completely revived like in the movies. But the approach pushes the known frontier of what brain tissue can tolerate, wrote the team.

Ice, Ice Baby

Suspended animation is one of science fiction’s oldest tropes. Whether characters are traveling between the stars or awaiting future cures for untreatable diseases, cryogenics is the ultimate pause button they can use to speedrun decades, if not centuries and beyond.

The idea was popularized in the 1960s, when Robert Ettinger, “the father of cryonics,” argued that people could be frozen and revived in the future, with their memories, cognition, and physical capabilities intact. He took the fringe idea and turned it into a mainstream dream.

But cryosleep has earlier roots. In the late 1800s, scientists realized that certain cells and simple living creatures could survive freezing, suggesting it’s possible to temporarily suspend life.

Liquid nitrogen and other chemical preservatives are now used daily in labs to freeze individual cells—including brain cells—at extremely low temperatures. Many don’t survive, but those that do regain normal function upon thawing. Scientists use the technology to preserve different types of neurons to test theories and share with other labs.

Cryopreserving brain slices or whole brains is far more difficult. These contain the delicate neural branches brain cells use to communicate, which are easily destroyed during the freeze-thaw cycle. Ice is the main culprit. Even with protective chemicals, liquids in cells rapidly solidify into sharp crystals that jab cells inside and out like a thousand knives.

Still, scientists have kept frozen human fetal tissue intact, and cryopreserved rat cells have developed functional networks once thawed. Another effort kept a rodent’s heart structurally intact with a magnetic method that gradually brings the organ back to biological temperature. Techniques to preserve livers and kidneys can keep them in stasis for up to 100 days, and the organs are still healthy enough for transplantation after warming up.

“Progress in cryopreservation of rodent organs has moved the theme of suspending technologies closer to plausibility,” wrote the team.

Structure determines function for each organ. But the brain presents unique challenges. Hundreds of molecules zoom around neurons to build up or whittle down synapses. Others that dot the surfaces of these cells tweak electrical charges to strengthen or weaken activity. Even without tearing up the cell itself, damage to these processes renders neurons incapable of forming or retrieving memories.

Ice is only part of the revival equation. As liquids freeze, they change the pressure of the surrounding environment, causing cells to lose water and shrink. This can collapse internal structures and wreck synaptic connections. Cryoprotectants, such as a sugary liquid called glycerol, limit the damage but are toxic at high doses.

Looking Glass

The authors of the new study turned to vitrification. Here, rapid cooling with cryoprotectants limits damage by freezing cells in a disorganized, glass-like state without forming ice crystals.

They first tested cryoprotectant recipes on brain slices that included the hippocampus, a brain region associated with the formation of memories. After soaking the slices in the chemical cocktails, the team bathed them in liquid nitrogen at a bone-chilling −196 degrees Celsius (−320.8 degrees Fahrenheit), which instantly froze the tissues. They then moved the slices to a −150 degrees Celsius (−238 degrees Fahrenheit) freezer and kept them there for up to a week.

The team could visually see whether each cocktail worked, they wrote. Vitrified slices had a glossy, transparent look; those that failed were dull and opaque.

After slow thawing, the slices sprang back to life.

The cells’ mitochondria ramped up energy production. Neuron membranes and synapses remained intact. And though there were some differences compared to fresh brain slices, the reawakened hippocampal cells mostly retained their usual patterns. Given a few electrical zaps, they strengthened their connections, a mechanism underlying learning and memory.

The team also tried the method on whole mouse brains. They had to repeatedly tweak the recipe to minimize toxicity from the cryoprotectants and ward off severe brain dehydration. But once thawed, slices from the whole preserved brains had intact neural wiring, including complex circuits in the hippocampus. Some brain cells languished and were harder to activate, whereas others perked right up.

It seems some types of neurons are more tolerant to vitrification than others, wrote the team.

Because they recorded activity in brain slices, it’s impossible to say whether the process would restore memory and learning. And the slices naturally deteriorated after 10 to 15 hours, making it hard to say much about longer timescales. To get around this, they could test the method on mini brains, or brain organoids, which better mimic whole brains and can be kept alive for years in culture.

The team is now expanding their work to include human brain slices and preservation of other organs, such as the heart. It’ll take plenty of trial and error. Human organs are far larger and could easily crack from mechanical stress during the cryopreservation process.

But the study shows “the brain is remarkably robust…to near-complete shutdown” into a glass-like state. “This reinforces the tenet of brain function being an emergent property of brain structure, and hints at the potential of life-suspending technologies,” wrote the team.

The post Reviving Brain Activity After ‘Cryosleep’ Inches Closer in Pioneering Study appeared first on SingularityHub.

Kategorie: Transhumanismus

We Don’t Need More. We Need Better: Intelligence Scales. Wisdom Does Not.

Singularity Weblog - 23 Březen, 2026 - 18:35
We don’t need more AI. We need a better why for our AI. We are told the why is obvious — cure everything, fix everything, transcend everything. But “solve everything” is not a philosophy. It is an assumption. Even the most powerful intelligence cannot erase moral disagreement or competing visions of justice. Because without a […]
Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through March 21)

Singularity HUB - 21 Březen, 2026 - 16:00
Artificial Intelligence

OpenAI Is Throwing Everything Into Building a Fully Automated Researcher
Will Douglas Heaven | MIT Technology Review ($)

“The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. ​​OpenAI says that the new goal will be its ‘North Star’ for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.”

Robotics

Humanoid Robot Gets Surprisingly Good at Tennis
Loz Blain | New Atlas

“This ain’t teleoperation. Chinese researchers have tested a new, much quicker and easier method of teaching robots to play tennis, and the results look like a breakthrough in machine learning and real-world AI.”

Computing

This Is Not a Fly Uploaded to a Computer
Robert Hart | The Verge

“Aran Nayebi, a professor of machine learning at Carnegie Mellon University, said that the group was ‘not even close’ to capturing the full brain of the fly, showing connections between cells but not crucial details like neurotransmitters or how strong the connections between different nerve cells are. The motor system isn’t a ‘true upload’ either, he said. ‘We are not even faithfully simulating its brain in silico.'”

Energy

This May Be the World’s First Quantum Battery
Gayoung Lee | Gizmodo

“Researchers finally believe they’ve found the right blueprint for scalable quantum batteries, publishing their findings in a recent study in Light: Science & Applications. ‘My ultimate ambition is a future where we can charge electric cars much faster than [fueling] petrol cars or charge devices over long distances wirelessly,’ James Quach, the study’s senior author and a researcher at CSIRO, Australia’s national science agency, said in a statement.”

Future

My Tesla Was Driving Itself Perfectly—Until It Crashed
Raffi Krikorian | The Atlantic ($)

“The problem is bigger than one company’s self-driving system. It’s about how we’re building every AI system, every algorithm, every tool that asks for our trust and trains us to give it. The pattern is everywhere: Condition people to rely on the system. Erode their vigilance. Then, when something breaks, point to the terms of service and blame them for not paying attention.”

Space

A Private Space Company Has a Radical New Plan to Bag an Asteroid
Eric Berger | Ars Technica

“[TransAstra CEO Joel Sercel] envisions aggregating dozens, and then hundreds, of small asteroids at the ‘New Moon’ processing facility, which could potentially be located at the Earth-Sun L2 point, about 1.5 million km from Earth. Such asteroids could provide water for use as propellant and minerals for everything from solar panels to radiation shielding.”

Artificial Intelligence

Val Kilmer Set to Be Resurrected With AI for New Film
Owen Myers | The Guardian

“The film-maker is working in conjunction with the late actor’s estate and his daughter, Mercedes, to bring Kilmer back to life with state-of-the-art, generative AI. …The AI-generated version of Kilmer will appear in a ‘significant’ portion of the film, says Voorhees. The film will use images of the actor taken throughout his life to re-create Kilmer through the decades.”

Future

Online Bot Traffic Will Exceed Human Traffic by 2027, Cloudflare CEO Says
Sarah Perez | TechCrunch

“‘If a human were doing a task—let’s say you were shopping for a digital camera—and you might go to five websites. Your agent or the bot that’s doing that will often go to 1,000 times the number of sites that an actual human would visit,’ Prince said. ‘So it might go to 5,000 sites. And that’s real traffic, and that’s real load, which everyone is having to deal with and take into account.’”

Computing

World ID Wants You to Put a Cryptographically Unique Human Identity Behind Your AI Agents
Kyle Orland | Ars Technica

“World now claims nearly 18 million unique humans have verified their identities on one of nearly 1,000 physical orbs around the world. Now, with Agent Kit, World wants to let those users tie their confirmed identity to any AI agent, letting it work on their behalf across the internet in a way other parties can trust.”

Space

New NASA Chief Aiming for Moon Landings Every Month in 2027
Passant Rabie | Gizmodo

“The regular missions will be geared toward building a lunar base on the moon’s surface, which will act as a laboratory for astronauts to develop ways to live beyond Earth’s orbit. ‘If you’re building a moon base and you’re going there to stay, you’re gonna need lots of missions to and from the moon,’ Isaacman [told SpaceFlight Now in an interview].”

Space

Jeff Bezos Wants to Save Earth With This Freaky-Looking Probe
Passant Rabie | Gizmodo

“The mission would be equipped with different techniques for mitigating the asteroid threat, including directing a powerful ion beam (a concentrated stream of charged particles) at the object to change its orbit. …[If that doesn’t work, then like the spacecraft in NASA’s DART mission], NEO Hunter can aim for a direct kinetic impact by ramming into the asteroid at high speed to redirect it from its Earth-bound trajectory.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 21) appeared first on SingularityHub.

Kategorie: Transhumanismus

Dune Is Not What You Think: The Warning Frank Herbert Meant Us to Hear

Singularity Weblog - 20 Březen, 2026 - 16:57
With Dune: Part One and Dune: Part Two now behind us, the internet is full of takes about resilience, resistance, and righteous underdogs. Most of them miss the point entirely. Dune 3 — Dune Messiah — may finally set the record straight. If people are willing to listen. The Most Misread Science Fiction Epic of […]
Kategorie: Transhumanismus

Google DeepMind Plans to Track AGI Progress With These 10 Traits of General Intelligence

Singularity HUB - 20 Březen, 2026 - 16:00

There’s plenty of hand-waving around AGI. DeepMind hopes to change that with a new, more rigorous approach.

Few terms are as closely associated with AI hype as artificial general intelligence, or AGI. But Google DeepMind researchers have now proposed a framework that could more concretely measure how close models are to this tech industry holy grail.

Artificial general intelligence refers to a mythical AI system that can match the general and highly adaptable form of intelligence found in humans. As the number of tasks that large language models can tackle has rocketed in recent years, there’s been a growing chorus of voices suggesting the technology is creeping ever closer to this threshold.

But so far, there’s been no clear way to assess progress toward AGI, leaving plenty of room for speculation and exaggeration. To address this gap, a team from Google DeepMind has introduced a new cognitively inspired framework that deconstructs general intelligence into 10 key faculties. More importantly, they propose a way to evaluate AI systems across these key capabilities and compare their performance to humans.

“Despite widespread discussion of AGI, there is no clear framework for measuring progress toward it. This ambiguity fuels subjective claims, makes it difficult to track progress, and risks hindering responsible governance,” the researchers write in a paper outlining their new approach. “We hope this framework will provide a practical roadmap and an initial step toward more rigorous, empirical evaluation of AGI.”

This isn’t DeepMind’s first attempt to clarify the term. In 2023, the company proposed separating AI systems into different levels of capability, in much the same way self-driving systems are categorized.

But the approach didn’t really propose a way to measure what level AI systems have reached. The new framework goes further by building a firmer conceptual footing for the key aspects underpinning model performance and a practical way to evaluate and compare systems.

Digging through decades of research in psychology, neuroscience, and cognitive science, the researchers identify eight basic cognitive building blocks that they say make up general intelligence.

These include the perception of sensory inputs and generation of outputs like text, speech, or actions. Add to those learning, memory, reasoning, and the ability to focus attention on specific information or tasks. Rounding out the list are metacognition—or the ability to reason about and control your own mental processes—and so-called executive functions, like planning and the inhibition of impulses.

The researchers also outline two “composite faculties” that require several building blocks to be applied together. These are problem solving and social cognition, which refers to the ability to understand and react appropriately to the social context.
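
The taxonomy described above is simple enough to pin down as a data structure. The sketch below encodes it in Python; the labels are paraphrased from this article and are not DeepMind's official identifiers.

```python
# The eight basic cognitive building blocks the researchers identify,
# with labels paraphrased from this article (not official terms).
CORE_FACULTIES = [
    "perception",          # taking in sensory inputs
    "action_generation",   # producing outputs like text, speech, or actions
    "learning",
    "memory",
    "reasoning",
    "attention",           # focusing on specific information or tasks
    "metacognition",       # reasoning about and controlling one's own mental processes
    "executive_function",  # planning and the inhibition of impulses
]

# The two composite faculties, each drawing on several building blocks at once.
COMPOSITE_FACULTIES = ["problem_solving", "social_cognition"]

ALL_FACULTIES = CORE_FACULTIES + COMPOSITE_FACULTIES  # the framework's ten traits
```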

To judge how well AI systems perform on each measure, the researchers suggest subjecting them to a broad suite of cognitive evaluations that target each specific ability. They also propose collecting human baselines for each task. This would involve asking a demographically representative sample of adults with at least a high school education to complete them under identical conditions.

The results of these tests can then be combined to create “cognitive profiles” that give a sense of a model’s strengths and weaknesses. And by comparing the results against the human baselines, it should be possible to determine when a system matches or surpasses the general intelligence of an average person.
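
As a minimal sketch of that comparison step, suppose each faculty evaluation yields a single score on a shared scale for both the model and the human baseline sample. All function names and numbers below are illustrative assumptions, not details from the paper.

```python
def cognitive_profile(model_scores, human_baselines):
    """Express each faculty score as a ratio to the human baseline (1.0 = parity)."""
    return {f: model_scores[f] / human_baselines[f] for f in human_baselines}

def meets_human_level(profile):
    """True only if the system matches or surpasses the baseline on every faculty."""
    return all(ratio >= 1.0 for ratio in profile.values())

# Illustrative numbers only: strong perception and reasoning,
# weak metacognition relative to the human sample.
model = {"perception": 0.92, "reasoning": 0.88, "metacognition": 0.40}
human = {"perception": 0.85, "reasoning": 0.80, "metacognition": 0.75}

profile = cognitive_profile(model, human)
print(meets_human_level(profile))  # the metacognition gap keeps this system below the bar
```

A real evaluation would of course aggregate many tasks per faculty and account for the spread of human scores, but the shape of the comparison is the same: per-faculty ratios, with the human sample as the reference point.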

Crucially, the framework focuses on what a system can do rather than how it does it, which means the evaluation is agnostic about the underlying technology. However, the researchers concede that there is currently no good way to measure many of the core cognitive capabilities identified.

While there are already well-established benchmarks for faculties like problem solving and perception, there are no reliable tests for things like metacognition, attention, learning, and social cognition. In addition, many of the best benchmarks are public, which means the testing criteria are easily accessible and may have already been included in model training data. So the authors say they’re working with academics to build more robust, non-public evaluations to fill the gaps.

How useful the new framework will be depends on several factors. First, it remains to be seen whether the criteria identified by the DeepMind team truly capture the essence of human general intelligence. Second, they need to prove that acing this test actually leads to better performance on practical problems compared to narrower, specialist AI systems.

But considering the hand-waving nature of the debate around AGI so far, any framework grounded in well-established cognitive theory and rigorous evaluation represents a significant step forward.

The post Google DeepMind Plans to Track AGI Progress With These 10 Traits of General Intelligence appeared first on SingularityHub.


Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On?

Singularity HUB - 19 March, 2026 - 16:00

The prevailing narrative suggests AI is ready to replace humans, but the evidence is more nuanced.

In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence.

Companies such as Atlassian, Block, and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.

The narrative these companies offer is consistent: AI is making human labor replaceable, and responsible management demands adjustment.

The evidence, however, tells a more nuanced story.

The Automation Story Is Partly True

Genuine disruption is visible in specific corners of the labor market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.

Moreover, some occupations are more exposed to displacement than others: Computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.

The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5 percent of US employment would be at risk of job loss.

That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours, or earn lower wages than anyone else.

The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where slowing employment growth aligns with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration, and call centers.

In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3 percent in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22–25 entering AI-exposed occupations have fallen by around 14 percent since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.

These are meaningful signals, but they are sector-specific and concentrated—not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: What else might be driving these decisions?

What Is the Motive?

The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.

While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.

There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75 percent of S&P 500 returns.

A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.

It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.

Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20 percent of its workforce, while simultaneously committing $600 billion to build data centers and recruit top AI researchers.

In this case, the workers being let go are not being replaced by AI today; they are subsidizing the AI bet their employer is making on the future.

The More Plausible Future

The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.

At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56 percent across the industries analyzed.

Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.

AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.

Making this distinction is not merely an academic exercise. It shapes how policymakers, educators, and workers themselves understand the nature of the disruption they are navigating.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On? appeared first on SingularityHub.
