Transhumanism

New Tech Bends Sound Through Space So It Reaches Only Your Ear in a Crowd

Singularity HUB - 18 March, 2025 - 19:08

Audible enclaves are local pockets of sound no one else can hear—no headphones required.

What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?

Newly published research from our team at Penn State introduces a way to create audible enclaves—localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.

The ability to send sound that becomes audible only at a specific location could transform entertainment, communication, and spatial audio experiences.

What Is Sound?

Sound is a vibration that travels through air as a wave. These waves are created when an object moves back and forth, compressing and decompressing air molecules.

The frequency of these vibrations is what determines pitch. Low frequencies correspond to deep sounds, like a bass drum; high frequencies correspond to sharp sounds, like a whistle.

Sound travels as a continuous wave of moving air particles. Daniel A. Russell, CC BY-NC-ND

Controlling where sound goes is difficult because of a phenomenon called diffraction—the tendency of sound waves to spread out as they travel. This effect is particularly strong for low-frequency sounds because of their longer wavelengths, making it nearly impossible to keep sound confined to a specific area.

Certain audio technologies, such as parametric array loudspeakers, can create focused sound beams aimed in a specific direction. However, the sound these beams carry is still audible along their entire path through space.

The Science of Audible Enclaves

We found a new way to send sound to one specific listener using self-bending ultrasound beams and a concept called nonlinear acoustics.

Ultrasound refers to sound waves with frequencies above 20 kHz, the upper limit of human hearing. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.

In our work, we used ultrasound as a carrier for audible sound. It can transport sound through space silently—becoming audible only when desired. How did we do this?

Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.

This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.

Audible enclaves are created at the intersection of two ultrasound beams. Jiaxin Zhong et al./PNAS, CC BY-NC-ND

Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces—specialized materials that manipulate sound waves—we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces change the shape of the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.

The key phenomenon at play is called difference frequency generation. When two ultrasonic beams of slightly different frequencies overlap—such as 40 kHz and 39.5 kHz—they create a new sound wave at the difference between their frequencies—in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.
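The mixing step above can be sketched numerically. The snippet below is an idealized toy model, not the authors' simulation code: it stands in for the nonlinear response of air with a simple quadratic term and shows, via an FFT, that combining a 40 kHz and a 39.5 kHz tone nonlinearly produces a component at the 500 Hz difference frequency, while the linear sum contains nothing audible.

```python
import numpy as np

fs = 400_000                      # sample rate in Hz, comfortably above both carriers
t = np.arange(0, 0.02, 1 / fs)    # 20 ms of signal

f1, f2 = 40_000.0, 39_500.0       # the two ultrasonic beam frequencies
linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A quadratic term is a toy stand-in for air's nonlinear response at high
# intensity; expanding (sin a + sin b)^2 yields a cos(a - b) cross term.
nonlinear = linear ** 2

freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.rfft(nonlinear))

# Find the strongest non-DC component in the audible band (< 20 kHz).
band = (freqs > 0) & (freqs < 20_000)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(peak_hz)  # 500.0, the difference frequency f1 - f2
```

The linear sum, by contrast, has essentially zero energy below 20 kHz, which is why the beams are silent everywhere except where they cross and mix.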

This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.

Advancing Sound Control

The ability to create audio enclaves has many potential applications.

Audio enclaves could enable personalized audio in public spaces. For example, museums could provide different audio guides to visitors without headphones, and libraries could allow students to study with audio lessons without disturbing others.

In a car, passengers could listen to music without distracting the driver, who could instead hear navigation instructions. Offices and military settings could also benefit from localized speech zones for confidential conversations. Audio enclaves could also be adapted to cancel out noise in designated areas, creating quiet zones to improve focus in workplaces or reduce noise pollution in cities.

This isn’t something that’s going to be on the shelf in the immediate future. Challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue—converting ultrasound to audible sound requires high-intensity fields that can be energy intensive to generate.

Despite these hurdles, audio enclaves present a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient, and personalized audio experiences.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post New Tech Bends Sound Through Space So It Reaches Only Your Ear in a Crowd appeared first on SingularityHub.

Category: Transhumanism

A Massive AI Analysis Found Genes Related to Brain Aging—and Drugs to Slow It Down

Singularity HUB - 18 March, 2025 - 00:19

Brain scans from nearly 39,000 people revealed genes and drugs to potentially slow aging.

When my grandad celebrated his 100th birthday with a bowl of noodles, his first comment was, “Nice, but this is store-bought.” He then schooled everyone on the art of making noodles from scratch, sounding decades younger than his actual age.

Most of us know people who are mentally sharper than their chronological age. In contrast, some folks seem far older. They’re easily confused, forget everyday routines, and have a hard time following conversations or remembering where they parked their car.

Why do some brains age faster, while others avoid senior moments even in the twilight years? Part of the answer may be in our genes. This month, a team from China’s Zhejiang University described an AI they’ve developed to hunt down genes related to brain aging and neurological disorders using brain scans from nearly 39,000 people.

They found seven genes, some of which are already in the crosshairs of scientists combating age-related cognitive decline. A search of clinical trials uncovered 28 existing drugs targeting those genes, including some as common as hydrocortisone, a drug often used for allergies and autoimmune diseases.

These drugs are already on the market, meaning they’ve been thoroughly vetted for safety. Repurposing existing drugs for brain aging could be a faster alternative to developing new ones, but they’ll have to be thoroughly tested to prove they actually bring cognitive improvements.

How Old Is My Brain?

The number of candles on your birthday cake doesn’t reflect the health of your brain. To gauge the latter—dubbed biological age—scientists have developed multiple aging clocks.

The Horvath Clock, for example, measures DNA methylation signatures, chemical tags on DNA associated with aging and cognitive decline. Researchers have used others, such as GrimAge, to measure the effects of potential anti-aging therapies, such as caloric restriction, in clinical trials.

Scientists are still debating which clock is the most accurate for the brain. But most agree the brain age gap, or the difference between a person’s chronological age and brain age, is a useful marker. A larger gap in either direction means the brain is aging faster or slower than expected.
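The brain age gap itself is a simple quantity. As a concrete illustration (with made-up numbers, not study data):

```python
def brain_age_gap(chronological_age: float, predicted_brain_age: float) -> float:
    """Positive gap: the brain looks older than its years; negative: younger."""
    return predicted_brain_age - chronological_age

# Two hypothetical 60-year-olds with different model-estimated brain ages.
print(brain_age_gap(60, 67))  # 7  -> aging faster than expected
print(brain_age_gap(60, 54))  # -6 -> aging slower than expected
```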

Why a given brain ages faster or slower than expected is still mysterious.

“There is a general consensus that the trajectories of brain aging differ substantially among individuals due to genetic factors, lifestyles, environmental factors, and chronic disease of the patient,” wrote the team. Finding genes related to the brain age gap could bring new drugs that prevent, slow down, or even reverse aging. But studies are lacking, they added.

A Brain-Wide Picture

How well our brain works relies on its intricate connections and structure. These can be captured with magnetic resonance imaging (MRI). But each person’s neural wiring is slightly different, so piecing together a picture of an “average” aging brain requires lots of brain scans.

Luckily, the UK Biobank has plenty.

Launched in 2006, the organization’s database includes health data from half a million participants. For this study, the team analyzed MRI scans from around 39,000 people between 45 and 83 years of age, with a roughly equal number of men and women. Most were cognitively healthy, but over 6,600 had conditions such as brain injury, Alzheimer’s disease, anxiety, or depression.

They then pitted seven state-of-the-art AI models against each other to figure out which model delivered the most accurate brain age estimate. One, called 3D-ViT, stood out for its ability to detect differences in brain structure associated with the brain age gap.

Next, the team explored whether some brain regions contributed to the gap more than others. With a tool often used in computer vision called saliency maps, they found two brain regions that were especially important to the AI’s estimation of the brain age gap.
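A saliency map answers the question "which inputs moved the prediction most?" The sketch below illustrates the idea with a finite-difference gradient on a toy stand-in model; the study's actual pipeline uses a trained 3D-ViT network and gradient backpropagation, and the "regions" here are hypothetical.

```python
import numpy as np

def toy_age_model(scan: np.ndarray) -> float:
    # Hypothetical stand-in for a trained network: some "voxels"
    # influence the predicted brain age far more than others.
    weights = np.zeros_like(scan)
    weights[2:4] = 3.0   # a highly informative region
    weights[6:8] = 0.5   # a weakly informative region
    return float(np.sum(weights * scan))

def saliency_map(model, scan: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Finite-difference gradient of the prediction w.r.t. each input:
    a large magnitude means that region matters more to the estimate."""
    base = model(scan)
    grad = np.zeros_like(scan)
    for i in range(scan.size):
        bumped = scan.copy()
        bumped[i] += eps
        grad[i] = (model(bumped) - base) / eps
    return np.abs(grad)

scan = np.random.default_rng(0).random(10)  # stand-in for a flattened MRI
sal = saliency_map(toy_age_model, scan)
print(sal.round(2))  # indices 2 and 3 stand out as the "important" region
```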

One, the lentiform nucleus, is a lens-shaped structure that sits deep inside the brain and is involved in movement, cognition, and emotion. The other is part of a neural highway that controls how different brain regions communicate—particularly those that run from deeper areas to the cortex, the outermost part of the brain responsible for reasoning and flexible thinking. These mental capabilities tend to slowly erode during aging.

Unsurprisingly, a larger gap also correlated with Alzheimer’s disease. But stroke, epilepsy, insomnia, smoking, and other lifestyle factors didn’t make a significant difference—at least for this population.

Genes to Drugs

Accelerated brain aging could be partly due to genetics. Finding which genes are involved could reveal new targets for therapies to combat faster cognitive decline. So, the team extracted genetic data from the UK Biobank and ran a genome-wide scan to fish out these genes.

Some were already on scientists’ radar. One helps maintain bone and heart health during aging. Another regulates the brain’s electrical signals and wires up neural connections.

The screen also revealed many new genes involved in the brain age gap. Some of these kill infected or cancerous cells. Others stabilize neuron signaling and structure or battle chronic inflammation—both of which can go awry as the brain ages. Most of the genes could be managed with a pill or injection, making it easier to reuse existing drugs or develop new ones.

To hunt down potential drug candidates, the team turned to an open-source database that charts how drugs interact with genes. They found 466 drugs either approved or in clinical development targeting roughly 45 percent of the new genes.
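The lookup itself is conceptually simple: map each candidate gene to drugs known to target it. A toy version follows; the gene sets here are illustrative placeholders, not entries from the actual database the team used.

```python
# Toy drug-gene interaction table (hypothetical mappings for illustration;
# the drugs named are examples mentioned in the article).
drug_gene_db = {
    "hydrocortisone": {"GENE_A"},
    "resveratrol": {"GENE_B"},
    "dasatinib": {"GENE_C", "GENE_D"},
}

def drugs_targeting(genes_of_interest: set) -> set:
    """Return every drug whose target set overlaps the genes of interest."""
    return {drug for drug, targets in drug_gene_db.items()
            if targets & genes_of_interest}

print(drugs_targeting({"GENE_B", "GENE_D"}))  # {'resveratrol', 'dasatinib'}
```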

Some are already being tested for their ability to slow cognitive decline. Among these are hydrocortisone—which is mainly used to treat autoimmune disorders, asthma, and rashes—and resveratrol, a molecule found in red wine. They also found 28 drugs that “hold substantial promise for brain aging,” wrote the team, including the hormones estradiol and testosterone. Dasatinib, a senolytic drug that kills off “zombie cells” during aging, also made the list.

The work builds on prior attempts to decipher connections between genes and the brain age gap. A 2019 study used the UK Biobank to pinpoint genes related to neurological disorders that accelerate brain aging. Here, the team connected genes to potential new or existing drugs to slow brain aging.

“Our study provides insights into the genetic basis of brain aging, potentially facilitating drug development for brain aging to extend the health span,” wrote the team.



This Week’s Awesome Tech Stories From Around the Web (Through March 15)

Singularity HUB - 15 March, 2025 - 15:00
Future

Powerful AI Is Coming. We’re Not Ready.
Kevin Roose | The New York Times

“I believe that the right time to start preparing for AGI is now. This may all sound crazy. But I didn’t arrive at these views as a starry-eyed futurist, an investor hyping my AI portfolio or a guy who took too many magic mushrooms and watched ‘Terminator 2.’ I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful AI systems, the investors funding it and the researchers studying its effects.”

Future

AGI Is Suddenly a Dinner Table Topic
James O’Donnell | MIT Technology Review

“The concept of artificial general intelligence—an ultra-powerful AI system we don’t have yet—can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I’m going to tell you what it means (and probably stretch my analogy a little too far along the way).”

Robotics

Gemini Robotics Uses Google’s Top Language Model to Make Robots More Useful
Scott J. Mulligan | MIT Technology Review

“Google DeepMind has released a new model, Gemini Robotics, that combines its best large language model with robotics. Plugging in the LLM seems to give robots the ability to be more dexterous, work from natural-language commands, and generalize across tasks. All three are things that robots have struggled to do until now.”

Biotechnology

Covid Vaccines Have Paved the Way for Cancer Vaccines
João Medeiros | Wired

“Going from mRNA Covid vaccines to mRNA cancer vaccines is straightforward: same fridges, same protocol, same drug, just a different patient. In the current trials, we do a biopsy of the patient, sequence the tissue, send it to the pharmaceutical company, and they design a personalized vaccine that’s bespoke to that patient’s cancer. That vaccine is not suitable for anyone else. It’s like science fiction.”

Artificial Intelligence

AI Search Engines Give Incorrect Answers at an Alarming 60% Rate, Study Says
Benj Edwards | Ars Technica

“A new study from Columbia Journalism Review’s Tow Center for Digital Journalism finds serious accuracy issues with generative AI models used for news searches. The research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news content.”

Tech

AI Coding Assistant Refuses to Write Code, Tells User to Learn Programming Instead
Benj Edwards | Ars Technica

“According to a bug report on Cursor’s official forum, after producing approximately 750 to 800 lines of code (what the user calls ‘locs’), the AI assistant halted work and delivered a refusal message: ‘I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.'”

Energy

Exclusive: General Fusion Fires Up Its Newest Steampunk Fusion Reactor
Tim De Chant | TechCrunch

“General Fusion announced on Tuesday that it had successfully created plasma, a superheated fourth state of matter required for fusion, inside a prototype reactor. The milestone marks the beginning of a 93-week quest to prove that the outfit’s steampunk approach to fusion power remains a viable contender.”

Biotechnology

This Annual Shot Might Protect Against HIV Infections
Jessica Hamzelou | MIT Technology Review

“I don’t normally get too excited about phase I trials, which usually involve just a handful of volunteers and typically don’t tell us much about whether a drug is likely to work. But this trial seems to be different. Together, the lenacapavir trials could bring us a significant step closer to ending the HIV epidemic.”

Computing

Cerebras Just Announced 6 New AI Datacenters That Process 40M Tokens Per Second—and It Could Be Bad News for Nvidia
Michael Nuñez | VentureBeat

“Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia’s dominance in the artificial intelligence market, announced Tuesday a significant expansion of its data center footprint and two major enterprise partnerships that position the company to become the leading provider of high-speed AI inference services.”

Robotics

Waabi Says Its Virtual Robotrucks Are Realistic Enough to Prove the Real Ones Are Safe
Will Douglas Heaven | MIT Technology Review

“The Canadian robotruck startup Waabi says its super-realistic virtual simulation is now accurate enough to prove the safety of its driverless big rigs without having to run them for miles on real roads. The company uses a digital twin of its real-world robotruck, loaded up with real sensor data, and measures how the twin’s performance compares with that of real trucks on real roads. Waabi says they now match almost exactly.”

Future

Lab-Grown Food Could Be Sold in UK in Two Years
Pallab Ghosh | BBC News

“Meat, dairy and sugar grown in a lab could be on sale in the UK for human consumption for the first time within two years, sooner than expected. The Food Standards Agency (FSA) is looking at how it can speed up the approval process for lab-grown foods. Such products are grown from cells in small chemical plants. UK firms have led the way in the field scientifically but feel they have been held back by the current regulations.”

Energy

For Climate and Livelihoods, Africa Bets Big on Solar Mini-Grids
Victoria Uwemedimo and Katarina Zimmer | Knowable Magazine

“In many African countries, solar power now stands to offer much more than environmental benefits. About 600 million Africans lack reliable access to electricity; in Nigeria specifically, almost half of the 230 million people have no access to electricity grids. Today, solar has become cheap and versatile enough to help bring affordable, reliable power to millions—creating a win-win for lives and livelihoods as well as the climate.”

Artificial Intelligence

Anthropic Researchers Forced Claude to Become Deceptive—What They Discovered Could Save Us From Rogue AI
Michael Nuñez | VentureBeat

“The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals.”

Science

The Road Map to Alien Life Passes Through the ‘Cosmic Shoreline’
Elise Cutts | Quanta Magazine

“Astronomers are ready to search for the fingerprints of life in faraway planetary atmospheres. But first, they need to know where to look — and that means figuring out which planets are likely to have atmospheres in the first place.”



Staying Sane in an Insane World

Singularity Weblog - 15 March, 2025 - 14:46
How do I stay sane in this insane world? This is one question I’ve repeatedly asked myself over the past several years. I wish I had THE answer, but truthfully, I still struggle daily. Yet, I’ve discovered a few reliable practices that help me climb back onto the bandwagon of inner peace and equanimity whenever […]

This Robotic Hand’s Electronic Skin Senses Exactly How Hard It Needs to Squeeze

Singularity HUB - 14 March, 2025 - 17:37

The hand can gently pick up anything from plastic cups to pineapples.

Our hands are works of art. A rigid skeleton provides structure. Muscles adjust to different weights. Our skin, embedded with touch, pressure, and temperature sensors, provides immediate feedback on what we’re touching. Flexible joints make it possible to type on a keyboard or use a video game controller without a thought.

Now, a team at Johns Hopkins University has recreated these perks in a life-like prosthetic robot hand. At its core is a 3D-printed skeleton. Each finger has three independently controlled joints made of silicone that are moved around with air pressure. A three-layer electronic skin covering the hand’s fingertips helps it gauge grip strength on the fly. The hand is controlled using electrical signals from muscles in the forearm alone.

In tests, able-bodied volunteers used the hand to pick up stuffed toys and dish sponges without excessive squeezing. It adjusted its grip when challenged with heavy metal water bottles and prickly pineapples—picking up items without dropping them or damaging the hand.

“The goal from the beginning has been to create a prosthetic hand that we model based on the human hand’s physical and sensing capabilities—a more natural prosthetic that functions and feels like a lost limb,” study author Sriramana Sankar said in a press release.

Softening Up

Prosthetic hands have come a long way. One of the first, crafted out of metal in the Middle Ages, had joints that could be moved passively using another hand.

Today, soft robotics have changed the game. Unlike rigid, unforgiving material, spongy hands can handle delicate objects without distorting or crushing them. Integrated sensors for pressure or temperature make them more life-like by providing sensory feedback.

But soft materials have a problem. They can’t consistently generate the same force to pick up heavy objects. Even with multiple joints and a dynamic palm, squishy robotic hands have a harder time detecting different textures compared to their rigid counterparts, wrote the team. They’re also weak. Existing soft robotic hands can only lift around 2.8 pounds.

In contrast, our hands have both a rigid skeleton and soft tissues—muscles and tendons—that stretch, twist, and contract. Pressure sensors in our skin provide instant feedback: Am I squeezing a plush toy, holding a slippery coffee mug, or manipulating my phone?

That’s why recent prosthetic designs incorporate both artificial skeletons and muscles.

For example, the commercially available LUKE arm has a metal and plastic skeleton for strength and stability. Its fingertips have soft materials for better dexterity. The prosthetic can grab objects using different inputs—for example, electrical signals from muscles or a foot pedal to switch between grasp strengths. But the hand is still mostly rigid and has limited mobility. The thumb and index finger can flex individually. All the other fingers move together.

Then there’s the problem of feedback. Our fingers use touch to calibrate our grip. Each of the skin’s three layers encodes slightly different sensations with a variety of receptors, or biological sensors. The outer layer feels light touch and slow vibration, like when hair lightly brushes your hand. Deeper layers detect pressure: the texture and weight of a heavy dumbbell, for example.

In 2018, the team behind the new study developed electronic skin inspired by human skin. The material, or E-dermis, sensed textures and transmitted them to surviving nerves in an amputee’s arm with small zaps of electricity. The skin used piezoresistive sensors, such that pressure would change how the sensors conducted electricity. Prosthetic fingertips coated in the sensors allowed an upper-limb amputee to detect a range of sensations, including pressure.

“If you’re holding a cup of coffee, how do you know you’re about to drop it? Your palm and fingertips send signals to your brain that the cup is slipping,” study author Nitish Thakor said in the recent study’s press release. “Our system is neurally inspired—it models the hand’s touch receptors to produce nerve-like messages so the prosthetics’ ‘brain,’ or its computer, understands if something is hot or cold, soft or hard, or slipping from the grip.”

Hands On

The new work incorporates the E-dermis into a hybrid hand built to mimic a human hand.

The thumb has two joints made of silicone and the fingers have three. Each joint can flex independently. These connect to a rigid 3D-printed skeleton and are moved about by air.

Compared to prosthetics with only soft components, the skeleton adds force and can support heavier weights. The prosthetic hand’s fingertips are covered in a patch of E-dermis the size of a fingernail. Each finger bends naturally, curling into the palm or stretching apart.

Electrical signals from a user’s forearm muscles control the hand. Such devices, dubbed myoelectric prostheses, tap into living nerve endings above the amputation site. When a person thinks of moving the hand, a microprocessor translates the nerve signals into motor commands.
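A common way to turn muscle signals into motor commands is proportional myoelectric control: rectify the raw signal, smooth it into an envelope, and map the envelope to a grip-force command. The sketch below is an illustrative simplification of that general scheme, not the study's controller; the threshold and force range are made-up values.

```python
import numpy as np

def emg_to_grip(emg: np.ndarray, window: int = 50,
                threshold: float = 0.05, max_force: float = 10.0) -> float:
    """Map a raw EMG trace to a grip-force command in newtons."""
    # Rectify, then smooth with a moving average to extract the envelope.
    envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="valid")
    activation = float(envelope.mean())
    if activation < threshold:   # below threshold: keep the hand open
        return 0.0
    return min(max_force, max_force * activation)

rng = np.random.default_rng(1)
resting = 0.02 * rng.standard_normal(2000)      # resting muscle noise
contracting = 0.5 * rng.standard_normal(2000)   # strong contraction
# Resting stays below threshold (0.0 N); contracting yields a mid-range force.
print(emg_to_grip(resting), emg_to_grip(contracting))
```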

Several studies with able-bodied volunteers showcased the hand’s dexterity. Participants wore a sheath over their forearms to capture electrical signals from their muscles—mimicking those an amputee would use—and send them along to the robotic hand.

With minimal training, the volunteers could grab a variety of objects of different sizes, weights, and textures. The hand gently picked up a sponge, without squishing it into oblivion, and a variety of produce—apple, orange, clementine—without bruising it. The prosthetic showed it could also lift heavier items, such as a small stone statue and a metal water bottle.

But the best example, according to the authors, was when it held a fragile plastic cup filled with water using only three fingers. The hand didn’t dent the cup or spill any water.

Overall, it had an impressive 99.7 percent accuracy rate handling 15 everyday items, rapidly adjusting its grip to avoid drops, spills, and other potential mishaps.

To be clear, the device hasn’t been tested on people who’ve lost a hand. And there’s more to improve. Adding a tendon of sorts between the artificial fingers could make them more stable. Mimicking how the palm moves could further boost flexibility. And adding sensors, such as those for temperature, could push the engineered hand even closer to a human’s.

Improving the dexterity of the hands isn’t only “essential for next-generation prostheses,” said Thakor. Future robotic hands will have to seamlessly integrate into everyday living, dealing with all the variety we do. “That’s why a hybrid robot, designed like the human hand, is so valuable—it combines soft and rigid structures, just like our skin, tissue, and bones.”



Green Steel Startup’s Largest Reactor Yet Produces a Ton of Molten Metal With Electricity

Singularity HUB - 13 March, 2025 - 21:44

For Boston Metal, it’s a step towards green steel plants that can make millions of tons of steel.

Steelmaking is one of the hardest industries to decarbonize due to its reliance on high temperatures and coal-based fuels to drive crucial reactions. But a green steel company has made a major breakthrough after its new plant produced more than a ton of the metal.

Rapid progress decarbonizing the energy and transport sectors is leading to a growing focus on areas of the economy where it will be harder to ditch fossil fuels. One of these is steelmaking, which by some estimates produces as much as 8 percent of all carbon emissions.

US startup Boston Metal hopes to change this by commercializing zero-emission steelmaking technology developed at the Massachusetts Institute of Technology. This week, the company completed the first run of its largest reactor yet, which validates key technologies required to start producing steel at industrial scales.

“With this milestone, we are taking a major step forward in making green steel a reality, and we’re doing it right here in the US, demonstrating the critical innovation that can enhance domestic manufacturing,” Tadeu Carneiro, CEO of Boston Metal, said in a press release.

Traditional steelmaking involves burning a coal-based fuel called coke, both to generate the high temperatures required and to remove oxygen from iron ore to create iron. But this generates huge amounts of CO2, which is why steelmaking is so bad for the environment.

Boston Metal’s approach instead uses electrolysis to convert iron ore into molten iron without directly producing any emissions. As a result, if the electricity used to drive the process comes from renewable sources, the resulting metal is almost entirely emission-free.

The company’s process, known as molten oxide electrolysis, involves mixing iron ore with an electrolyte inside a large reactor, heating it to 2,900 degrees Fahrenheit, and then passing a current through it.

The oxygen in the ore separates and bubbles up through the electrolyte, while a layer of molten iron collects at the bottom of the reactor. This reservoir of liquid metal is then periodically tapped, though the process itself is continuous.
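Faraday's law puts a rough lower bound on the current such a cell must carry. The back-of-envelope estimate below assumes the iron is fully oxidized (three electrons per atom) and 100 percent current efficiency, both idealizations, and checks what it takes to match the article's figure of roughly a ton of metal per month.

```python
F = 96485.0      # Faraday constant, coulombs per mole of electrons
M_FE = 55.845    # molar mass of iron, grams per mole
Z = 3            # electrons needed to reduce one Fe(III) ion to metal

def charge_for_iron(mass_kg: float) -> float:
    """Total charge (C) to produce `mass_kg` of iron by electrolysis,
    assuming 100 percent current efficiency (real cells do worse)."""
    moles_fe = mass_kg * 1000.0 / M_FE
    return moles_fe * Z * F

charge = charge_for_iron(1000.0)      # one metric ton of iron
month_s = 30 * 24 * 3600              # seconds in roughly a month
print(charge / month_s)               # about 2,000 A sustained for a month
```

Scaling to a ton per day would mean sustaining roughly thirty times that current, which is one reason multi-anode reactors matter for industrial output.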

One of the biggest challenges for the approach is creating an anode—the positive terminal used to introduce electricity to the reactor—that doesn’t degrade too rapidly. A short shelf life for this component would mean regular stoppages for maintenance or replacement, which would significantly impact the approach’s commercial viability.

Adam Rauwerdink, Boston Metal’s senior vice president of business development, told MIT Technology Review that the company has successfully made their anodes hardier. But the new bus-sized reactor is the first to feature multiple anodes, which will be key to scaling the approach.

The current plant can produce a ton or two of metal in about a month. However, the company hopes to build a plant that can produce the same amount in a day by the end of 2027. The design is modular, and the plan is to eventually string many reactors together in facilities that can output millions of tons of steel.

Boston Metal is not the only company attempting to clean up steelmaking.

Swedish company Stegra has raised billions of dollars to build the world’s first large-scale green steel plant in Northern Sweden. The plant will use green hydrogen to cut emissions by up to 95 percent. US startup Electra is also raising $257 million to develop a low-temperature electrochemical process for producing green iron.

Scaling any of these approaches to the point where they make a dent in an industry as massive as steelmaking will be a huge challenge. But these developments suggest the technical barriers are rapidly falling.



What Google Translate Tells Us About Where AI Is Headed Next

Singularity HUB - 11 March, 2025 - 21:39

The trajectory of AI in translation hints at the future of generative AI.

The computer scientists Rich Sutton and Andrew Barto have been recognized for a long track record of influential ideas with this year’s Turing Award, the most prestigious in the field. Sutton’s 2019 essay “The Bitter Lesson,” for instance, underpins much of today’s feverishness around artificial intelligence (AI).

He argues that methods to improve AI that rely on heavy-duty computation rather than human knowledge are “ultimately the most effective, and by a large margin.” This is an idea whose truth has been demonstrated many times in AI history. Yet there’s another important lesson in that history from some 20 years ago that we ought to heed.

Today’s AI chatbots are built on large language models (LLMs), which are trained on huge amounts of data that enable a machine to “reason” by predicting the next word in a sentence using probabilities.

Useful probabilistic language models were formalized by the American polymath Claude Shannon in 1948, citing precedents from the 1910s and 1920s. Language models of this form were then popularized in the 1970s and 1980s for use by computers in translation and speech recognition, in which spoken words are converted into text.

The first language model on the scale of contemporary LLMs was published in 2007 and was a component of Google Translate, which had been launched a year earlier. Trained on trillions of words using over a thousand computers, it is the unmistakable forebear of today’s LLMs, even though it was technically different.

It relied on probabilities computed from word counts, whereas today’s LLMs are based on what is known as transformers. First developed in 2017—also originally for translation—these are artificial neural networks that make it possible for machines to better exploit the context of each word.
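The count-based idea is simple enough to sketch: estimate the probability of the next word from raw bigram frequencies. A toy illustration (the corpus and probabilities here are invented for demonstration; the 2007 system did this over trillions of words):

```python
from collections import Counter, defaultdict

# Toy count-based bigram language model: estimate P(next word | previous word)
# from raw co-occurrence counts.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def p_next(prev, word):
    """P(word | prev) estimated from counts; 0.0 if prev was never seen."""
    total = sum(bigrams[prev].values())
    return bigrams[prev][word] / total if total else 0.0

print(p_next("the", "cat"))  # 2 of the 3 continuations of "the" are "cat"
```

Transformers replace these lookup tables with learned neural representations, but the training objective, predicting what comes next, is recognizably the same.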

The Pros and Cons of Google Translate

Machine translation (MT) has improved relentlessly in the past two decades, driven not only by tech advances but also by the size and diversity of training data sets. Whereas Google Translate started by offering translations between just three languages in 2006—English, Chinese, and Arabic—today it supports 249. Yet while this may sound impressive, that’s still less than 4 percent of the world’s estimated 7,000 languages.

Between a handful of those languages, like English and Spanish, translations are often flawless. Yet even in these languages, the translator sometimes fails on idioms, place names, legal and technical terms, and various other nuances.

Between many other languages, the service can help you get the gist of a text, but often contains serious errors. The largest annual evaluation of machine translation systems—which now includes translations done by LLMs that rival those of purpose-built translation systems—bluntly concluded in 2024 that “MT is not solved yet.”

Machine translation is widely used in spite of these shortcomings: As far back as 2021, the Google Translate app reached one billion installs. Yet users still appear to understand that they should use such services cautiously. A 2022 survey of 1,200 people found that they mostly used machine translation in low-stakes settings, like understanding online content outside of work or study. Only about 2 percent of respondents’ translations involved higher stakes settings, including interacting with healthcare workers or police.

Sure enough, there are high risks associated with using machine translations in these settings. Studies have shown that machine-translation errors in healthcare can potentially cause serious harm, and there are reports that it has harmed credible asylum cases. It doesn’t help that users tend to trust machine translations that are easy to understand, even when they are misleading.

Knowing the risks, the translation industry overwhelmingly relies on human translators in high-stakes settings like international law and commerce. Yet these workers’ marketability has been diminished by the fact that the machines can now do much of their work, leaving them to focus more on assuring quality.

Many human translators are freelancers in a marketplace mediated by platforms with machine-translation capabilities. It’s frustrating to be reduced to wrangling inaccurate output, not to mention the precarity and loneliness endemic to platform work. Translators also have to contend with the real or perceived threat that their machine rivals will eventually replace them—researchers refer to this as automation anxiety.

Lessons for LLMs

The recent unveiling of the Chinese AI model DeepSeek, which appears to be close to the capabilities of market leader OpenAI’s latest GPT models but at a fraction of the price, signals that very sophisticated LLMs are on a path to being commoditized. They will be deployed by organizations of all sizes at low cost—just as machine translation is today.

Of course, today’s LLMs go far beyond machine translation, performing a much wider range of tasks. Their fundamental limitation is data: they have already exhausted most of what is available on the internet. For all its scale, their training data is likely to underrepresent most tasks, just as it underrepresents most languages for machine translation.

Indeed the problem is worse with generative AI. Unlike with languages, it is difficult to know which tasks are well represented in an LLM. There will undoubtedly be efforts to improve training data that make LLMs better at some underrepresented tasks. But the scope of the challenge dwarfs that of machine translation.

Tech optimists may pin their hopes on machines being able to keep increasing the size of the training data by making their own synthetic versions, or of learning from human feedback through chatbot interactions. These avenues have already been explored in machine translation, with limited success.

So the foreseeable future for LLMs is one in which they are excellent at a few tasks, mediocre in others, and unreliable elsewhere. We will use them where the risks are low, while they may harm unsuspecting users in high-risk settings—as has already happened to lawyers who trusted ChatGPT output containing citations to non-existent case law.

These LLMs will aid human workers in industries with a culture of quality assurance, like computer programming, while making the experience of those workers worse. Plus we will have to deal with new problems such as their threat to human artistic works and to the environment. The urgent question: is this really the future we want to build?

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post What Google Translate Tells Us About Where AI Is Headed Next appeared first on SingularityHub.

Kategorie: Transhumanismus

DARPA Wants to ‘Grow’ Enormous Living Structures in Space

Singularity HUB - 11 Březen, 2025 - 00:00

Living materials could self-assemble into antennas, nets to capture debris, or even space station parts.

Space stations break down. Satellites get damaged. Repairing them requires launching replacement components on rockets.

The US Defense Advanced Research Projects Agency (DARPA) is now exploring an alternative: growing these parts directly in space. The concept would skirt delivery headaches. Without a rocket’s size and weight constraints, engineers could also design and construct large structures—over 1,640 feet or 500 meters long—that can’t be shipped from Earth.

The technology could be especially useful as we inch towards missions to Mars and beyond.

The agency has previously explored space manufacturing that would rely on robotic construction or self-assembling materials. The new proposal adds synthetic biology to the mix. Compared to traditional rigid materials, alternatives that incorporate living microbes could be more flexible. Embedded in a biocompatible matrix that provides structure, they could form a living material that withstands the unforgiving environment of space.

It sounds like science fiction, and it still is. But in late February, DARPA called for ideas to make the vision a reality.

Space Factory

Building large objects directly in space has multiple perks. Instead of folding up structures to fit into rockets—like the James Webb Space Telescope, which engineers folded origami-like for its ride to space—ferrying lightweight raw materials from Earth could be more energy- and cost-efficient. The materials could then be made into much larger objects in orbit. Microgravity also allows engineers to design structures that would sag under their own weight on Earth. Space offers an opportunity to build objects that are wildly different than any on the ground.

Space manufacturing is already in the works. In 2022, DARPA launched the Novel Orbital Moon Manufacturing, Materials, and Mass-Efficient Design (NOM4D) program to test the idea.

“Current space systems are all designed, built, and tested on Earth before being launched into a stable orbit and deployed to their final operational configuration,” NOM4D program manager Bill Carter said in a 2022 press release. “These constraints are particularly acute for large structures such as solar arrays, antennas, and optical systems, where size is critical to performance.”

Three years later, the program is almost ready to launch its first raw materials into space to test assembly. One of these, designed by the California Institute of Technology and Momentus, will hitch a ride on a SpaceX Falcon 9 mission in early 2026. In orbit, a robotic device will transform the material into a circular “skeleton” mimicking the diameter of an antenna.

“If the assembly technology is successful, this would be the first step toward scaling up to eventually building very large space-based structures in the future,” program manager Andrew Detor said in a press release.

Another team from the University of Illinois Urbana-Champaign is partnering with Voyager Space to test their own material and manufacturing process on the International Space Station. Made up of flat carbon-fiber sleeves, similar to finger-trap toys, their material uses a novel chemical process that hardens liquid components into solid structures. Heating up one side of the sleeve stiffens the entire structure. Their test is also scheduled for 2026.

A Dose of Biology

But DARPA is ready to get even more ambitious.

Thanks to synthetic biology and materials science, we’ve seen an explosion of biomaterials compatible with living cells. These have been used to deliver drugs deep into the body, form tough structures to support prosthetics, or 3D bioprint organs and tissues for transplant.

Meanwhile, scientists have also discovered a growing number of extremophiles—microbes that can withstand extremely high pressures and temperatures or survive acidic environments. Bacteria dotting the outside of the International Space Station can survive extreme ultraviolet radiation. Sequencing extremophile genomes is revealing genetic adaptations to these harsh environments, paving the way for scientists to engineer bacteria that survive and thrive in space.

The stage is set, then, for hybrid living materials that grow into predefined structures in space.

DARPA’s new vision is to rapidly engineer biological objects “of unprecedented size” in microgravity, with lengths reaching over half a kilometer, or more than 1,640 feet.

One idea is to weave biomaterials, extremophiles, and non-organic fibers into materials with different stiffnesses and strengths. This would be a bit like manufacturing a tent. Some materials could be used as tent poles supporting the overall structure. Others—such as bacteria—could grow the tent’s walls, floor, and roof, with the ability to stretch or shrink. Balancing the amount of each component would be critical for the material to work in multiple scenarios.

But space is an incredibly hostile environment. A crucial challenge will be figuring out how to keep the bacteria alive. Another will be directing their growth to form the desired final shape.

The setup will likely need biomaterial scaffolds to store and provide nutrients to the critters. These could be supplied to so-called leading edges, where rapidly dividing bacteria expand the material. Adding specific chemical signals—which many microbes already use for navigation—could nudge them toward designated locations as they form the final structure.

Some biomaterial building blocks sound rather exotic. For inspiration, DARPA suggested fungal filaments, protein-based fibers from hagfish slime, and graphene aerogels that are already being explored for drug delivery, wound healing, and bone and nerve regeneration.

The type of microbe used would likely also impact designs. Those that require oxygen are harder to keep alive in space, even when they can survive radiation-contaminated areas, Antarctic permafrost, or extreme dehydration. Bacteria that don’t require oxygen are likely easier to keep alive. But additional hardware would be needed to tinker with pressure, temperature, and humidity so they can thrive in space.

If all goes well, designers may also embed electronics inside the finished structures to transmit radio frequencies or infrared signals for communication.

DARPA is currently calling for proposals and planning a workshop in April to debate the idea with experts. Eventually, they hope the work leads to objects that can be “biologically manufactured and assembled, but that may be infeasible to produce traditionally.”

The post DARPA Wants to ‘Grow’ Enormous Living Structures in Space appeared first on SingularityHub.

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through March 8)

Singularity HUB - 8 Březen, 2025 - 16:00
Artificial Intelligence

Eerily Realistic AI Voice Demo Sparks Amazement and Discomfort Online
Benj Edwards | Ars Technica

“In late 2013, the Spike Jonze film ‘Her’ imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved.”

Tech

Inside the Start of Project Stargate—and the Startup Powering It
Abram Brown | The Information

“Just the scale of economics around [Stargate’s] Abilene [datacenter project] is enormous, and Lochmiller made sure I understood that by comparing it to a familiar sight: Marc Benioff’s billion-dollar skyscraper in downtown San Francisco. ‘In the Bay Area, the Salesforce Tower defines the city skyline, right?’ he said. ‘You take three Salesforce Towers, and that’s the amount of work that’s going on here.'”

Robotics

This Kung Fu Robot Video Makes It Look Like the Uprising Has Already Started
Trevor Mogg | Digital Trends

“Folks often joke about the so-called ‘robot uprising,’ but a new video of Unitree’s advanced G1 robot pulling some kung fu moves could well wipe the smile off their faces. Shared on Tuesday, the 15-second clip shows a baton-wielding human retreating from a robot that then kicks the baton clean out of his hand. Let’s just say that again: a baton-wielding human retreating from a robot.”

Biotechnology

De-Extinction Scientists Say These Gene-Edited ‘Woolly Mice’ Are a Step Toward Woolly Mammoths
Jessica Hamzelou | MIT Technology Review

“They’re small, fluffy, and kind of cute, but these mice represent a milestone in de-extinction efforts, according to their creators. The animals have undergone a series of genetic tweaks that give them features similar to those of woolly mammoths—and their creation may bring scientists a step closer to resurrecting the giant animals that roamed the tundra thousands of years ago.”

Tech

OpenAI Plots Charging $20,000 a Month For PhD-Level Agents
Stephanie Palazzolo and Cory Weinberg | The Information

“OpenAI executives have told some investors it planned to sell low-end agents at a cost of $2,000 per month to ‘high-income knowledge workers’; mid-tier agents for software development costing possibly $10,000 a month; and high-end agents, acting as PhD-level research agents, which could cost $20,000 per month, according to a person who’s spoken with executives.”

Space

Firefly Releases Stunning Footage of Blue Ghost Landing on the Moon
Passant Rabie | Gizmodo

“The Texas-based company released a clip of Blue Ghost’s descent toward the moon followed by a smooth landing. The footage is a masterclass in lunar landings, capturing striking views of the lander emerging from a cloud of dust, its shadow stretching across the moon’s surface in a superhero-like stance.”

Tech

This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.
Berber Jin and Deepa Seetharaman | The Wall Street Journal

“Silicon Valley’s hottest investment isn’t a new app or hardware product. It’s one man. AI researcher Ilya Sutskever is the primary reason venture capitalists are putting some $2 billion into his secretive company Safe Superintelligence, according to people familiar with the matter. The new funding round values SSI at $30 billion, making it one of the most valuable AI startups in the world.”

Robotics

Driverless Race Car Sets a New Autonomous Speed Record
Andrew J. Hawkins | The Verge

“Look out: there’s a new fastest robot in the world. A Maserati MC20 Coupe with no one in the driver’s seat set a new land speed record for autonomous vehicles, reaching 197.7mph (318km/h) during an automotive event at the Kennedy Space Center last week.”

Artificial Intelligence

AI Reasoning Models Can Cheat to Win Chess Games
Rhiannon Williams | MIT Technology Review

“Facing defeat in chess, the latest generation of AI reasoning models sometimes cheat without being instructed to do so. The finding suggests that the next wave of AI models could be more likely to seek out deceptive ways of doing whatever they’ve been asked to do. And worst of all? There’s no simple way to fix it.”

Space

SpaceX Starship Spirals Out of Control in Second Straight Test Flight Failure
Sean O’Kane | TechCrunch

“The ship successfully separated and headed into space, while the booster came back to the company’s launchpad in Texas, where it was caught for a third time by the launch tower. But at around eight minutes and nine seconds into the flight, SpaceX’s broadcast graphics showed Starship lose multiple Raptor engines on the vehicle. On-board footage showed the ship started spiraling end over end over the ocean.”

Artificial Intelligence

People Are Using Super Mario to Benchmark AI Now
Kyle Wiggers | TechCrunch

“Thought Pokémon was a tough benchmark for AI? One group of researchers argues that Super Mario Bros. is even tougher. Hao AI Lab, a research org at the University of California San Diego, on Friday threw AI into live Super Mario Bros. games. Anthropic’s Claude 3.7 performed the best, followed by Claude 3.5. Google’s Gemini 1.5 Pro and OpenAI’s GPT-4o struggled.”

Artificial Intelligence

AI Versus the Brain and the Race for General Intelligence
John Timmer | Ars Technica

“The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. …It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 8) appeared first on SingularityHub.

Kategorie: Transhumanismus

Two Moon Landings in a Week—One Dead, One Alive—Aim to Kickstart the Lunar Economy

Singularity HUB - 7 Březen, 2025 - 21:43

Until last year, the US hadn’t visited the moon in over a half century. But now? Twice in a week.

A growing number of companies are eyeing the moon as a source of commercial opportunities. Two private landings in under a week suggest our nearest celestial neighbor is open for business.

Rapidly falling launch costs have opened the door for smaller companies to take on more ambitious space missions, including efforts to land on the moon. NASA has also encouraged this activity. In 2018, the agency launched the Commercial Lunar Payload Services (CLPS) program, incentivizing firms to build robotic landers and rovers in support of its plans to return humans to the moon.

Last year, Intuitive Machines’ Odysseus became the first private spacecraft to touch down on the lunar surface. But the vehicle toppled over onto its side in the process, limiting its ability to communicate and deploy experiments.

Last Sunday, however, US startup Firefly Aerospace achieved a clean touchdown with its Blue Ghost lander in the Mare Crisium basin. Meanwhile, Intuitive Machines experienced déjà vu on its second landing near the moon’s south pole on Friday when its Athena lander ended up on its side again.

Firefly’s 6.6-foot-tall lander launched on a SpaceX Falcon 9 rocket on January 15 and entered lunar orbit on February 13. The solar-powered vehicle is carrying 10 NASA science experiments designed to gather data on the lunar surface. It will now conduct a 14-day mission before the lunar night’s frigid temperatures set in and disable the lander.

Things haven’t turned out as well for Intuitive Machines, whose spacecraft took a speedier path to the moon after launching on a Falcon 9 on February 26. The company experienced a repeat of the problems that took the shine off its first landing. Issues with its laser range finders meant the lander lost track of its trajectory above the moon and didn’t touch down properly.

After assessing the spacecraft, Intuitive Machines—a company that could play an important role in NASA’s plans to return humans to the moon later this decade—said the craft was again on its side, that its batteries likely couldn’t be revived, and that the mission was over.

“With the direction of the sun, the orientation of the solar panels, and extreme cold temperatures in the crater, Intuitive Machines does not expect Athena to recharge,” the company wrote in a statement Friday. “The mission has concluded, and teams are continuing to assess the data collected throughout the mission.”

Athena was carrying NASA’s Polar Resources Ice Mining Experiment, or PRIME-1, which the agency hoped could help it assess how easy it will be for astronauts to harvest water ice.

The experiment featured a drill called TRIDENT to extract lunar soil from three feet beneath the surface and a mass spectrometer to analyze the sample for water. Previous observations have suggested that significant amounts of water ice are locked up in the soil at the moon’s south pole. This ice could prove a valuable resource for any future long-term outpost.

Athena was also carrying several robots made by Intuitive Machines, US startup Lunar Outpost, and the Massachusetts Institute of Technology, as well as equipment from Nokia designed to power the moon’s first 4G cellular network.

The hope for both missions is that renewed interest in lunar exploration could soon spur a flourishing off-world economy with plenty of opportunities for the private sector.

In the short term, national space agencies like NASA are likely to be the primary customers for companies like Firefly and Intuitive Machines, which both received funding from the CLPS program. NASA is eager to find cheaper ways to get cargo to the moon on a regular basis to support its more challenging missions.

But there’s hope that in the longer term there could be opportunities for companies to carve out a niche harvesting resources like water ice to create rocket fuel and oxygen or the rare isotope helium-3, which could be used to power fusion reactors. These could be particularly attractive to other private companies looking to push further into the solar system and use the moon as a staging post.

Whether this vision pans out remains to be seen. But with several more private moon landings scheduled later this year, the first shoots of a burgeoning lunar economy seem to be emerging.

The post Two Moon Landings in a Week—One Dead, One Alive—Aim to Kickstart the Lunar Economy appeared first on SingularityHub.

Kategorie: Transhumanismus

Scientists Discover Thousands of New Microbial Species Thriving in the Mariana Trench

Singularity HUB - 6 Březen, 2025 - 23:34

The project explores how life adapts to extreme environments—and hopes to inspire new drugs or even treatments to aid space travel.

A human can’t survive in the Mariana Trench without protection. At its deepest, the trench plunges 35,000 feet below the surface of the Pacific Ocean to a region reigned by crushing pressure and darkness.

Yet somehow life finds a way. The hadal snailfish, with delicate fins and a translucent body, roams the dark and freezing waters. Giant shrimp-like creatures up to a foot long scavenge fallen debris, including wood and plastic, and transparent eels with fish-like heads hunt prey. A carpet of bacteria breaks down dead sea creatures and plankton to recycle nutrients.

We’ve only scratched the surface of what thrives in the deepest regions of the ocean. But a large project has now added over 6,000 new microbes to the deep-sea species tally.

In the Mariana Trench Environment and Ecology Research Project, or MEER for short, a team of scientists has collected sediment from the hadal zone—the deepest part of the ocean—in the Mariana Trench and two other areas. The investigation revealed thousands of new species and two adaptations allowing the microbes to thrive under intense pressure.

Another team assembled the genomes of 11 deep-sea fish and found a mutated gene that could boost their ability to survive. Sequencing the genome of a giant shrimp-like creature suggested bacteria boosted its metabolism to adapt to high-pressure environments.

Studying these mysterious species could yield new medications to fight infections, inflammation, or even cancer. They show how creatures adapt to extreme environments, which could be useful for engineering pressure- or radiation-resistant proteins for space exploration.

“The deep sea, especially hadal zones, represents some of the most extreme and least explored environments on Earth,” wrote study author Shunping He and colleagues at the Chinese Academy of Sciences. The project hopes to “push the boundaries of our understanding of life” in this alien world, added Shanshan Liu and her team at BGI Research, in a separate study.

Meet MEER

Oceans cover roughly 70 percent of the Earth’s surface. Yet we know very little about their inhabitants, especially on the ocean floor.

Since the 1960s, multiple missions—some autonomous, others manned—have sought to explore the deepest part of the Pacific Ocean, the Mariana Trench. Over 30,000 feet deep, it could completely submerge Mount Everest.

The trench is an unforgiving environment. The pressure is over 1,000 times greater than that at sea level, and at Challenger Deep—the deepest point navigated to date—the temperature is just above freezing. The seabed there is shrouded in complete darkness.
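That 1,000-times figure can be checked with the standard hydrostatic-pressure formula, pressure = density × gravity × depth. A quick sketch, assuming a constant seawater density (a simplification, since density actually rises slightly with depth):

```python
# Rough hydrostatic-pressure check for Challenger Deep.
RHO_SEAWATER = 1025   # kg/m^3, a typical seawater density (assumed constant)
G = 9.81              # m/s^2, gravitational acceleration
DEPTH_M = 10_935      # approximate depth of Challenger Deep in meters
ATM_PA = 101_325      # one standard atmosphere in pascals

pressure_pa = RHO_SEAWATER * G * DEPTH_M          # P = rho * g * h
print(f"~{pressure_pa / ATM_PA:.0f} atmospheres")  # comfortably over 1,000
```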

Yet a manned descent 65 years ago found flatfish and large shrimp-like creatures thriving in the trench—the first signs that life could survive in such extreme environments. More recently, James Cameron, best known for directing films like Titanic, dived to nearly 36,000 feet and took footage that helped identify even more new species.

The deep sea, it seems, is a trove of alien species yet to be discovered. The MEER project is collecting specimens from the deepest trenches across the world to learn more.

MEER relies on a deep-sea submersible called Fendouzhe, which means striver or fighter in Chinese. Fendouzhe is self-propelled and can survive freezing temperatures and tremendous pressure. It holds three crew members and has two mechanical arms bristling with devices—cameras, sonars, drills.

The submersible reached the bottom of the Mariana Trench in 2020 followed by missions to the Yap Trench and Philippine Basin. Scientists on board gathered over 1,600 sediment samples from multiple hadal zones between 6 and 11 kilometers, or roughly 4 to 7 miles, under the sea.

On top of the punishing pressure and lack of light, the deep sea is low on nutrients. It’s truly “a unique combination that sets it apart from all other marine and terrestrial environments,” wrote the authors.

Undersea Genes

Sediments hold genetic material that survives intact when brought to the surface for analysis.

One study sketched a landscape of living creatures in the deep ocean using an approach called metagenomics. Here, scientists sequenced genetic material from all microbes within an environment, allowing them to reconstruct a birds-eye view of the ecology.

In this case, the collection is “10-fold larger than all previously reported,” wrote the team. Over 89 percent of the genomes are entirely new, suggesting most belong to previously unknown microbial species living in the deep ocean.

Samples collected from different trenches have varying genetic profiles, suggesting the microbes adapted to distinct deep-ocean environments. But they share similar genetic changes. Several genes bump up their ability to digest toluene as food. The chemical is best known for its use in manufacturing paints, plastics, medications, and cosmetics.

Other genes wipe out metabolic waste products called reactive oxygen species. In large amounts, these damage DNA and lead to aging and disease. The creatures also have a beefed-up DNA repair system. This could help them adapt to intense pressure and frigid temperatures, both of which increase the chances of these damaging chemicals wreaking havoc.

Deep-Sea Superpowers

Meanwhile, other studies peered into the genetic makeup of fish and shrimp-like creatures in the hadal zone.

In one, scientists collected samples using the Fendouzhe submersible and an autonomous rover, covering locations from the Mariana Trench to the Indian Ocean. The team zeroed in on roughly 230 genes in deep-sea fish that boost survival under pressure.

Most of these help repair DNA damage. Others increase muscle function. Surprisingly, all 11 species of deep-sea fish studied shared a single genetic mutation. Engineering the same mutation in lab-grown cells helped them more efficiently turn DNA instructions into RNA—the first step cells take when making the proteins that coordinate our bodily functions.

This is “likely to be advantageous in the deep-sea environment,” wrote the team.

Top predators in the deep rely on a steady supply of prey—mainly, a shrimp-like species called amphipods. Whole genome sequencing of these creatures showed the shrimp thrive thanks to various good bacteria that help them defend against other bacterial species.

There are also some other intriguing findings. For example, while most deep-sea fish have lost genes associated with vision, one species showed gene activity related to color vision. These genes are similar to ours and could potentially let them see color even in total darkness.

Scientists are still digging through the MEER database. The coalition hopes to bolster our understanding of the most resilient lifeforms on Earth—and potentially inspire journeys into other extreme environments, like outer space.

The post Scientists Discover Thousands of New Microbial Species Thriving in the Mariana Trench appeared first on SingularityHub.

Kategorie: Transhumanismus

Quantum Computing Startup Says It’s Already Making Millions of Light-Powered Chips

Singularity HUB - 4 Březen, 2025 - 19:14

PsiQuantum claims to have solved scalability issues that have long plagued photonic approaches.

American quantum computing startup PsiQuantum announced last week that it has cracked a significant puzzle on the road to making the technology useful: manufacturing quantum chips in large quantities.

PsiQuantum burst out of stealth mode in 2021 with a blockbuster funding announcement. It followed up with two more last year.

The company uses so-called “photonic” quantum computing, which has long been dismissed as impractical.

The approach, which encodes data in individual particles of light, offers some compelling advantages—low noise, high-speed operation, and natural compatibility with existing fiber-optic networks. However, it has been held back by extreme hardware demands: photons fly at blinding speed, get lost, and are hard to create and detect.

PsiQuantum now claims to have addressed many of these difficulties. Last week, in a new peer-reviewed paper published in Nature, the company unveiled hardware for photonic quantum computing they say can be manufactured in large quantities and solves the problem of scaling up the system.

What’s in a Quantum Computer?

Like any computer, quantum computers encode information in physical systems. Whereas digital computers encode bits (0s and 1s) in transistors, quantum computers use quantum bits (qubits), which can be encoded in many potential quantum systems.

Superconducting quantum computers require an elaborate cooling rig to keep them at temperatures close to absolute zero. Image Credit: Rigetti

The darlings of the quantum computing world have traditionally been superconducting circuits running at temperatures near absolute zero. These have been championed by companies such as Google, IBM, and Rigetti.

These systems have attracted headlines claiming “quantum supremacy” (where quantum computers beat traditional computers at some task) or the ushering in of “quantum utility” (that is, actually useful quantum computers).

In a close second in the headline-grabbing game, IonQ and Honeywell are pursuing trapped-ion quantum computing. In this approach, charged atoms are captured in special electromagnetic traps that encode qubits in their energy states.

Other commercial contenders include neutral-atom qubits, silicon-based qubits, intentional defects in diamonds, and non-traditional photonic encodings.

All of these are available now. Some are for sale with enormous price tags, and some are accessible through the cloud. But fair warning: They are more for experimentation than computation today.

Faults and How to Tolerate Them

The individual bits in your digital computers are extraordinarily reliable. They might experience a fault (a 0 inadvertently flips to a 1, for example) once in every trillion operations.

PsiQuantum’s new platform has impressive-sounding features such as low-loss silicon nitride waveguides, high-efficiency photon-number-resolving detectors, and near-lossless interconnects.

The company reports a 0.02 percent error rate for single-qubit operations and 0.8 percent for two-qubit creation. These may seem like quite small numbers, but they are much bigger than the effectively zero error rate of the chip in your smartphone.
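To put those rates in perspective, consider the chance of at least one fault across a long run of operations. A quick sketch (the per-operation rates are the figures above; the one-million-operation count is an arbitrary illustration):

```python
def p_any_error(p_per_op, n_ops):
    """Chance of at least one fault across n independent operations."""
    return 1 - (1 - p_per_op) ** n_ops

# ~1 fault per trillion operations for a classical bit, versus the reported
# 0.02 percent per single-qubit operation; 1 million operations is illustrative.
for label, p in [("classical bit", 1e-12), ("single-qubit op", 2e-4)]:
    print(f"{label}: {p_any_error(p, 1_000_000):.6f}")
```

A classical bit stays almost certainly fault-free over a million operations, while at 0.02 percent per operation an uncorrected qubit fails essentially every time, which is why error correction dominates the scaling discussion below.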

However, these numbers rival the best qubits today and are surprisingly encouraging.

One of the most critical breakthroughs in the PsiQuantum system is the integration of fusion-based quantum computing. This is a model that allows for errors to be corrected more easily than in traditional approaches.

Quantum computer developers want to achieve what is called “fault tolerance.” This means that, if the basic error rate is below a certain threshold, the errors can be suppressed indefinitely.
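The threshold behavior can be sketched with a widely used rule of thumb for code-based error correction: below threshold, the logical error rate falls off exponentially as the code distance grows. The numbers here (threshold, prefactor, physical error rate) are illustrative assumptions, not PsiQuantum's figures:

```python
def logical_error_rate(p, p_th=0.01, d=3, a=0.1):
    """Rule-of-thumb suppression: a * (p / p_th) ** ((d + 1) / 2)."""
    return a * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th), increasing the code distance d suppresses errors.
for d in (3, 5, 7, 9):
    print(f"d={d}: {logical_error_rate(2e-3, d=d):.2e}")
```

Note the flip side: if the physical error rate sits above threshold, adding more qubits makes the logical error rate worse, which is why "below threshold" claims matter so much.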

Claims of “below threshold” error rates should be met with skepticism, as they are generally measured on a few qubits. A practical quantum computer would be a very different environment, where each qubit would have to function alongside a million (or a billion, or a trillion) others.

This is the fundamental challenge of scalability. And while most quantum computing companies are tackling the problem from the ground up—building individual qubits and sticking them together—PsiQuantum is taking the top-down approach.

Scale-First Thinking

PsiQuantum developed its system in partnership with semiconductor manufacturer GlobalFoundries. All the key components—photon sources and detectors, logic gates, and error correction—are integrated on a single silicon-based chip.

PsiQuantum says GlobalFoundries has already made millions of the chips.

A diagram showing the different components of PsiQuantum’s photonic chip. Image Credit: PsiQuantum

By making use of techniques already used to fabricate semiconductors, PsiQuantum claims to have solved the scalability issue that has long plagued photonic approaches.

PsiQuantum is fabricating its chips in a commercial semiconductor foundry, which the company says makes scaling to millions of qubits relatively straightforward.

If PsiQuantum’s technology delivers on its promise, it could mark the beginning of quantum computing’s first truly scalable era.

A fault-tolerant photonic quantum computer would have major advantages, including much lower energy requirements than systems that must be cooled to near absolute zero.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Quantum Computing Startup Says It’s Already Making Millions of Light-Powered Chips appeared first on SingularityHub.

You Can Taste Cake in Virtual Reality With This New Device

Singularity HUB - 4 March 2025 - 00:36

The device also mimics lemonade and coffee—but fried eggs? Not so much.

“That Cajun blackened shrimp recipe looks really good,” I tell my husband while scrolling through cooking videos online. The presenter describes it well: juicy, plump, smoky, a parade of spices. Without making the dish, I can only imagine how it tastes. But a new device inches us closer to recreating tastes from the digital world directly in our mouths.

Smaller than a stamp, it contains a slurry of chemicals representing primary flavors like salty, sweet, sour, bitter, and savory (or umami). The reusable device mixes these together to mimic the taste of coffee, cake, and other foods and drinks.

Developed by researchers at Ohio State University, the device has a tiny gum-like strip linked to a liquid reservoir. It releases each taste component in a gel and pumps the resulting blend onto the tongue. The system is wireless and includes a sensor to control the chemical mixture. In a demonstration, one person dipped the sensor into some lemonade in San Francisco and transferred a facsimile of the taste to people wearing the devices in Ohio in real time.

Complex flavor profiles—say, a fried egg—are harder to simulate. And it’s likely awkward to have a device dangling from your mouth. But the work brings us a little closer to adding a new sense to virtual and augmented reality and making video games more immersive.

“This will help people connect in virtual spaces in never-before-seen ways,” study author Jinghua Li said in a press release. “This concept is here, and it is a good first step to becoming a small part of the metaverse.”

Gaming aside, future iterations of the device could potentially help people who have lost their sense of taste, including those living with long Covid or traumatic brain injuries.

What’s Taste, Anyways?

We can taste food thanks to a variety of chemicals stimulating our taste buds. There are five main types of taste bud, each specializing in a different taste. When we chew food, our taste buds send electrical signals to the brain where they combine into a host of flavors—the bitterness of coffee, tanginess of a cup of orange juice, or richness of a buttery croissant.

But taste isn’t an isolated sensation. Smells, textures, memories, and emotions also come into play. One spoon of comfort food can take you back to happy days as a child. That magic is hard to replicate with a few spurts of chemical flavor and is partly why taste is so hard to recreate in digital worlds, wrote the team.

Virtual and augmented reality have mainly focused on audio and visual cues. Adding smell or taste could make experiences more immersive. An early version of the idea, dubbed Smell-O-Vision, dates back nearly a century when scents were released in theaters to heighten the film experience. It’s still employed in 4DX theaters today.

Cinema isn’t the only industry looking for a multi-sensory upgrade. At this year’s CES, a trailer for Sony’s hit game, The Last of Us, showed the technology at work in an immersive, room-size version of the game where players could smell the post-apocalyptic world.

Taste is harder to recreate. Older methods activated taste buds with electrical zaps to the tongue. While participants could detect very basic tastes, hooking your tongue up to electrodes isn’t the most comfortable setup.

More recently, a Hong Kong team developed a lollipop-like device that produces nine tastes embedded in food-safe gels. An electrical zap releases the chemicals, and upping the voltage delivers a stronger flavor. The approach is an improvement, but holding a lollipop in your mouth while gaming for hours is still awkward.

Tasty Interface

The new device offers a neater solution. Dubbed e-Taste, it has two main components: a sensing platform to analyze the taste profile of a food or drink and an actuator to deliver a mixture of liquid chemicals approximating the sampled taste.

The actuator, a shirt-button-size cube attached to a gum-like strip, hangs on the lower teeth. The cube stores chemicals mimicking each of five tastes—glucose for sweet and citric acid for sour, for example—in separate chambers. A tiny pump, activated by an electrical zap, pushes the liquids onto a gel strip where they mingle before being pumped onto the tongue. Each pump is the equivalent of a drop of water, which is enough to activate taste buds.

A person using the device holds the strip inside their mouth with the cube dangling outside. Once the sensor captures a food or drink’s flavor profile—say, equal amounts of sweet, sour, salty, and savory—it wirelessly transmits the data to the actuator, which releases the final taste mixture for roughly 45 minutes—plenty of time to experience a virtual foodie session.
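As a sketch of the control logic described above, here is one way a sensed five-channel taste profile could be translated into per-chamber pump pulses. The five channel names and the drop-sized pulses come from the article; the mapping, function name, and `max_pulses` parameter are hypothetical, invented for illustration:

```python
CHANNELS = ("sweet", "sour", "salty", "bitter", "umami")

def pump_schedule(profile, max_pulses=10):
    """Scale a relative taste profile (0-1 per channel) to whole pump pulses,
    with the strongest channel receiving max_pulses drops."""
    peak = max(profile.values(), default=0) or 1  # avoid dividing by zero
    return {ch: round(profile.get(ch, 0) / peak * max_pulses) for ch in CHANNELS}

# A lemonade-like profile: mostly sweet, somewhat sour.
print(pump_schedule({"sweet": 0.9, "sour": 0.6}))
```

In practice the real device would also need timing and mixing control on the gel strip, but the core idea (a profile in, a per-chamber dose out) is this simple.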

Eat Digital Cake

After training e-Taste to understand which chemical mixtures best approximate various foods, the team asked 10 volunteers to name the food they were tasting from a list of possibilities.

Roughly 90 percent could pick out lemonade and gauge its sourness. Most could also identify the taste of cake. But not all foods were so easily mimicked. Participants struggled to name umami-heavy dishes, such as fried eggs or fish stew.

Rather than a bug in the device, however, this is an expected result. Taste is highly subjective. Our tolerance for spice or sourness varies widely.

Then there’s the weirdness of a virtual setup. We eat and drink with our eyes open and smell food too. One participant said that tasting coffee through the device without seeing a normal coffee maker led to some confusion. Scientists have long known the color of food is essential to our perception of flavor. Smell and texture are also crucial. The smell of a good southern barbeque joint sets expectations—even before we’ve tasted anything.

The team is exploring ways to enhance the experience by adding these senses. Shrinking the device is also on the menu.

Although the team developed e-Taste to enhance gaming, people could use something like it to sample food across the globe or items when grocery shopping online. Doctors could use it to detect if people have lost their sense of taste, an early indication of multiple diseases, including viral infections and Alzheimer’s disease. And more sophisticated versions could one day augment taste in people who’ve lost it.

The post You Can Taste Cake in Virtual Reality With This New Device appeared first on SingularityHub.

This Week’s Awesome Tech Stories From Around the Web (Through March 1)

Singularity HUB - 1 March 2025 - 16:00
Artificial Intelligence

Anthropic Launches the World’s First ‘Hybrid Reasoning’ AI Model
Will Knight | Wired

“Anthropic, an artificial intelligence company founded by exiles from OpenAI, has introduced the first AI model that can produce either conventional output or a controllable amount of ‘reasoning’ needed to solve more grueling problems. Anthropic says the new hybrid model, called Claude 3.7, will make it easier for users and developers to tackle problems that require a mix of instinctive output and step-by-step cogitation.”

Robotics

Figure Will Start ‘Alpha Testing’ Its Humanoid Robot in the Home in 2025
Brian Heater | TechCrunch

“Figure is planning to bring its humanoids into the home sooner than expected. CEO Brett Adcock confirmed on Thursday that the Bay Area robotics startup will begin ‘alpha testing’ its Figure 02 robot in the home setting later in 2025. The executive says the accelerated timeline is a product of the company’s ‘generalist’ Vision-Language-Action (VLA) model, called Helix.”

Artificial Intelligence

New AI Text Diffusion Models Break Speed Barriers by Pulling Words From Noise
Benj Edwards | Ars Technica

“Mercury Coder Mini scores 88.0 percent on HumanEval and 77.1 percent on MBPP—comparable to GPT-4o Mini—while reportedly operating at 1,109 tokens per second compared to GPT-4o Mini’s 59 tokens per second. This represents roughly a 19x speed advantage over GPT-4o Mini while maintaining similar performance on coding benchmarks.”

Computing

Amazon Uses Quantum ‘Cat States’ With Error Correction
John Timmer | Ars Technica

“The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen.”

Artificial Intelligence

‘It’s a Lemon’—OpenAI’s Largest AI Model Ever Arrives to Mixed Reviews
Benj Edwards | Ars Technica

“The verdict is in: OpenAI’s newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called ‘scaling laws’ cited by many for years have possibly met their natural end.”

Computing

Google’s Taara Hopes to Usher in a New Era of Internet Powered by Light
Steven Levy | Wired

“Instead of beaming from space, Taara’s ‘light bridges’—which are about the size of a traffic light—are earthbound. As X’s ‘captain of moonshots’ Astro Teller puts it, ‘As long as these two boxes can see each other, you get 20 gigabits per second, the equivalent of a fiber-optic cable, without having to trench the fiber-optic cable.'”

Energy

Next-Gen Nuclear Startup Plans 30 Reactors to Fuel Texas Data Centers
Alex Pasternack | Fast Company

“Last Energy, a nuclear upstart backed by an Elon Musk-linked venture capital fund, says it plans to construct 30 microreactors on a site in Texas to supply electricity to data centers across the state. The initiative, which it says could provide about 600 megawatts of electricity, would be the company’s largest project to date and help it develop a commercial pipeline in the US.”

Science

The Physicist Working to Build Science-Literate AI
John Pavlus | Quanta Magazine

“Single-purpose systems like AlphaFold can generate scientific predictions with revolutionary accuracy, but researchers still lack ‘foundation models’ designed for general scientific discovery. These models would work more like a scientifically accurate version of ChatGPT, flexibly generating simulations and predictions across multiple research areas.”

Tech

Vinod Khosla: Most AI Investments Will Lose Money as Market Enters ‘Greed’ Cycle
Sri Muppidi | The Information

“Early OpenAI investor Vinod Khosla warned that most investments in artificial intelligence will lose money, particularly as more investors jump into the market, funding more startups. But he said some companies would grow to be worth hundreds of billions—and eventually trillions—of dollars and make up for the failures.”

Biotechnology

A Protein Borrowed From Tardigrades Could Give Us Radiation Body Armor
Ed Cara | Gizmodo

“The strangely adorable and resilient tardigrade, or water bear, just might hold the key to making cancer treatment a lot more (water-) bearable. That’s because a team of researchers just found evidence that a protein produced by these microscopic creatures could protect our healthy cells from the ravages of radiation therapy.”

Tech

The World’s Smallest Lego Brick Is Here. It’s Literally Microscopic
Grace Snelling | Fast Company

“The brick in question is a microscopic sculpture created by UK-based artist David A Lindon. It’s made from a standard red square Lego, and it looks like one, too, aside from the fact that it measures just 0.02517 millimeter by 0.02184 millimeter (about the size of a white blood cell).”

Artificial Intelligence

Anthropic’s Latest Flagship AI Might Not Have Been Incredibly Costly to Train
Kyle Wiggers | TechCrunch

“Assuming Claude 3.7 Sonnet indeed cost just ‘a few tens of millions of dollars’ to train, not factoring in related expenses, it’s a sign of how relatively cheap it’s becoming to release state-of-the-art models. Claude 3.5 Sonnet’s predecessor, released in fall 2024, similarly cost a few tens of millions of dollars to train, Anthropic CEO Dario Amodei revealed in a recent essay.”

Computing

How North Korea Pulled Off a $1.5 Billion Crypto Heist—the Biggest in History
Dan Goodin | Ars Technica

“‘The Bybit hack has shattered long-held assumptions about crypto security,’ Dikla Barda, Roman Ziakin, and Oded Vanunu, researchers at security firm Check Point, wrote Sunday. ‘No matter how strong your smart contract logic or multisig protections are, the human element remains the weakest link.’”

Computing

Is It Lunacy to Put a Data Center on the Moon?
Dina Genkina | IEEE Spectrum

“The idea of putting a data center on the moon raises a natural question: Why? Lonestar’s CEO Christopher Stott says it is to protect sensitive data from Earthly hazards. ‘Data centers, right? They’re like modern cathedrals. We’re building these things, they run our entire civilization. It’s superb, and yet you realize that the networks connecting them are increasingly fragile.’”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 1) appeared first on SingularityHub.

Move Over Smart Rings. MIT’s New Fabric Computer Is Stitched Into Your Clothes.

Singularity HUB - 28 February 2025 - 20:57

Moore’s Law for your pants.

Wearable devices are popular these days, but they’re largely restricted to watches, rings, and eyewear. Researchers have now developed a thread-based computer that can be stitched into clothes.

Being able to sense what our bodies are up to is useful in areas like healthcare and sports. And while devices like smartwatches can track metrics like heart rate, body temperature, and movement, humans produce huge amounts of data that devices tethered to specific points of the body largely miss.

That’s what prompted MIT engineers to create a fabric computer that can be stitched into regular clothes. The device features sensors, processors, memory, batteries, and both optical and Bluetooth communications, allowing networks of these fibers to provide sophisticated whole-body monitoring.

“Our bodies broadcast gigabytes of data through the skin every second in the form of heat, sound, biochemicals, electrical potentials, and light, all of which carry information about our activities, emotions, and health,” MIT professor Yoel Fink, who led the research, said in a press release.

“Wouldn’t it be great if we could teach clothes to capture, analyze, store, and communicate this important information in the form of valuable health and activity insights?”

The MIT team has been working on incorporating electronics into fibers for more than a decade, but in a recent paper in Nature they outline a breakthrough that significantly boosts the sophistication of the devices they can build.

One of the biggest challenges the team faced was the mismatch between flat, 2D chip layouts and the 3D structure of fibers. This made it difficult to establish reliable connections between components and led to failure in previous generations of their fiber computers.

To get around this, the team designed a novel flexible circuit board. This allowed them to attach an electronic component, such as a microcontroller or Bluetooth module, onto a chip in 2D and then fold it into a tiny box with the component nestled inside.

They connected several of these chips using copper microwires arranged in a spiral and coated them in a flexible plastic material. These fibers were then braided with traditional textile materials like polyester, wool, and nylon so they could be stitched into clothes.

The resulting threads had enough computing power to run a rudimentary neural network able to detect the kinds of exercises someone was doing. The researchers stitched four of them into the sleeves of a shirt and the legs of a pair of pants and used these to monitor the wearer’s activity.

Individually, the fabric computers could distinguish between squats, planks, arm circles, and lunges with 67 percent accuracy. But when they used Bluetooth connections to communicate and vote on the predictions, the accuracy jumped to 95 percent.
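That jump is roughly what ensembling predicts: individually mediocre classifiers become far more reliable when they vote. A simplified simulation, assuming (unrealistically) that the four fibers err independently and uniformly across the wrong classes:

```python
import random
from collections import Counter

random.seed(0)

CLASSES = ["squat", "plank", "arm circle", "lunge"]
P_CORRECT = 0.67  # single-fiber accuracy reported above
TRIALS = 10_000

def fiber_prediction(truth):
    """One fiber: right 67% of the time, otherwise a random wrong class."""
    if random.random() < P_CORRECT:
        return truth
    return random.choice([c for c in CLASSES if c != truth])

def plurality_vote(preds):
    """Most common prediction wins; ties go to the earliest answer."""
    return Counter(preds).most_common(1)[0][0]

solo = ensemble = 0
for _ in range(TRIALS):
    truth = random.choice(CLASSES)
    preds = [fiber_prediction(truth) for _ in range(4)]
    solo += preds[0] == truth
    ensemble += plurality_vote(preds) == truth

print(f"single fiber: {solo / TRIALS:.2f}, four-fiber vote: {ensemble / TRIALS:.2f}")
```

Under these naive assumptions the vote lands in the mid-80s percent range; the reported 95 percent suggests the real system does even better than this toy model.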

The technology is currently undergoing a rigorous real-world trial. This month, US Army and Navy participants are conducting a month-long winter research mission to the Arctic wearing merino wool base layers featuring the fabric computers. The devices will provide real-time information on the health and activity of the service members involved in the exercise.

“As a leader with more than a decade of Arctic operational experience, one of my main concerns is how to keep my team safe from debilitating cold weather injuries,” US Army Major Mathew Hefner, the commander of the mission, said in the press release.

“Conventional systems just don’t provide me with a complete picture. We will be wearing the base layer computing fabrics on us 24/7 to help us better understand the body’s response to extreme cold and ultimately predict and prevent injury.”

While the extreme conditions the military operates in make the technology particularly useful for them, it’s easy to see how whole-body monitoring could benefit areas like elite sports and healthcare too. It may not be long before your pants have as much computing power as an early home computer.

The post Move Over Smart Rings. MIT’s New Fabric Computer Is Stitched Into Your Clothes. appeared first on SingularityHub.

The Biggest AI for Biology Yet Writes Genomes From Scratch

Singularity HUB - 27 February 2025 - 22:28

On-demand DNA for every branch of life.

Mother Nature is perhaps the most powerful generative “intelligence.” With just four genetic letters—A, T, C, and G—she has crafted the dazzling variety of life on Earth.

Can generative AI expand on her work?

A new algorithm, Evo 2, trained on roughly 128,000 genomes—9.3 trillion DNA letters—spanning all of life’s domains, is now the largest generative AI model for biology to date. Built by scientists at the Arc Institute, Stanford University, and Nvidia, Evo 2 can write whole chromosomes and small genomes from scratch.

It also learned how DNA mutations affect proteins, RNA, and overall health, shining light on “non-coding” regions, in particular. These mysterious sections of DNA don’t make proteins but often control gene activity and are linked to diseases.

The team has released Evo 2’s software code and model parameters to the scientific community for further exploration. Researchers can also access the tool through a user-friendly web interface. With Evo 2 as a foundation, scientists may develop more specific AI models. These could predict how mutations affect a protein’s function, how genes operate differently across cell types, or even help researchers design new genomes for synthetic biology.

Evo marks “a key moment in the emerging field of generative biology” because machines can now read, write, and “think” in the language of DNA, said study author Patrick Hsu in an Arc Institute blog.

Upping the Game

Evo 2 builds on an earlier model introduced last year. Both are large language models, or LLMs, like the algorithms behind popular chatbots. The original Evo was trained on roughly three million genomes from a range of microbes and bacteria-infecting viruses.

Evo 2 expanded this to include genes from humans, plants, yeast, and other organisms made of more complex cells. These are all known as eukaryotes. Eukaryotic genomes are far more intricate than bacterial ones. Some DNA snippets, for example, have specific functions, such as turning a gene on or off. Others allow a single gene to churn out multiple versions of a protein.

“These features underpin the emergence of multicellularity, sophisticated traits, and intelligent behaviors that are unique to eukaryotic life,” wrote the team in a pre-print paper on bioRxiv.

Though critical for the emergence of complex life, these control mechanisms are a headache for generative AI. Regulatory elements can be far apart from their associated genes, making it difficult to hunt them down. They’re usually hidden in regions of the genome that don’t make proteins but are still crucial to gene expression or the maintenance of chromosomes.

The team explicitly included these regions in Evo 2’s training. They curated a dataset of DNA sequences from 128,000 genomes encompassing all branches on the tree of life. Together, the dataset, OpenGenome2, contains 9.3 trillion DNA letters.

They created two versions of Evo 2: a smaller version trained on 2.4 trillion letters and a full version trained on the entire database. Both algorithms were designed to quickly churn through mountains of data, including longer stretches of DNA. This allows Evo 2 to broaden its “search window” and find patterns across a larger genetic landscape, which is crucial for eukaryotic cells with far longer DNA sequences than bacteria. Compared to its predecessor, Evo 2 trained on 30 times more data and can crunch 8 times as many DNA letters at a time. The whole training process took several months on over 2,000 Nvidia H100 GPUs.

Genetic Sleuth

Once completed, Evo 2 beat state-of-the-art models at predicting the effects of mutations in BRCA1, a gene linked to breast cancer. It especially outshined its competitors when including both protein-coding and non-coding genetic letter changes. The AI separated benign mutations from potentially harmful ones with over 90 percent accuracy.

Using AI to screen for cancer isn’t new. But older methods often made diagnoses using medical images. Evo 2 used DNA sequences alone. With further validation, the tool could one day help scientists find the genetic causes of diseases—especially those hidden in non-coding regions.

It could also aid new treatments that target specific tissues, according to study author Hani Goodarzi. “If you have a gene therapy that you want to turn on only in neurons to avoid side effects, or only in liver cells, you could design a genetic element that is only accessible in those specific cells” to minimize side effects.

Potential medical uses aside, Evo 2 learned a variety of complex genetic traits across multiple species. For example, the tool fished out patterns in the human genome that could also be used to annotate that of a woolly mammoth. Our genome is different from that of the extinct beast, but Evo 2 found a shared genetic vocabulary and grammar that transcended the divide.

“Evo 2 represents a significant step in learning DNA regulatory grammar,” Christina Theodoris at the Gladstone Institutes told Nature.

Genome Architect

Scientists used the original Evo to design a variety of new CRISPR gene-editing tools and a full-length bacterial genome from scratch. Although the latter contained genes essential for survival, the AI also “hallucinated” unnatural sequences that kept the genome from being functional.

Evo 2 fared better. The team first challenged the model to create a full set of human mitochondrial DNA. With only 13 protein-coding genes and a handful of RNA types, these genomes are relatively small, but the resulting proteins and RNA do intricate work together.

The AI generated 250 unique mitochondrial DNA genomes, each containing roughly 16,000 letters. Using a protein prediction tool, AlphaFold 3, the team found these sequences yielded proteins similar to those found naturally in mitochondria. The team also used Evo 2 to create a minimal bacterial genome with just 580,000 DNA letters and a 330,000-letter-long yeast chromosome. And they added a Morse code message to a mouse’s genome.

To be clear, these generated DNA blueprints have yet to be tested inside living cells, but experiments are in the works.

Evo 2 is a step towards designing complex genomes. Combined with other AI tools in biology, it inches us closer to programming entirely new forms of synthetic life, wrote the authors.

The post The Biggest AI for Biology Yet Writes Genomes From Scratch appeared first on SingularityHub.

French Scientists Beat China’s Fusion Record With 22-Minute Plasma Reaction

Singularity HUB - 26 February 2025 - 16:00

The experiment is a stepping-stone toward full-scale fusion power.

There’s no perfect form of energy. Coal and natural gas emit carbon dioxide. Solar and wind are intermittent. Nuclear is costly and creates radioactive waste. Geothermal and hydro are location-specific. Fusion would be about the closest we could get to a clean, abundant, sustainable way to produce electricity—except scientists haven’t yet figured out how to do it at scale.

Last week, however, a team in France got a step closer when they kept a plasma reaction going for longer than ever before.

Nuclear fusion uses extremely high pressures and temperatures to force hydrogen atoms to combine, or fuse. It’s what happens in stars—including the sun—and the reaction generates massive amounts of light, heat, and energy. Recreating this process in a controlled way on Earth is, unsurprisingly, an incredibly complex endeavor.

For a fusion reaction to work, the hydrogen atoms being used as fuel need to get hot enough that the electrons break away from the nucleus. This creates plasma—an energetic slurry of positive ions and negatively charged free electrons—which needs to be heated even more, until it reaches temperatures over 180 million degrees Fahrenheit (100 million Celsius). Then the super-hot plasma needs to be kept in a confined space long enough that the atoms collide and fuse.

This reaction can take place in a donut-shaped device called a tokamak. Tokamaks, which are one of several fusion reactor designs, use magnetic fields to coax particles in one direction, adding energy in the form of heat until the particles are moving so fast that when they collide, their nuclei fuse.

A key challenge is keeping the reaction going in a stable manner without damaging the reactor or causing a malfunction. This is what the WEST reactor in France accomplished last week by keeping a plasma reaction going for 1,337 seconds, or over 22 minutes. The previous record of 1,066 seconds—itself more than double the prior best result—was set by the EAST reactor in China a month ago.

Plasma in the WEST fusion reactor, pictured here, reached temperatures of 50 million degrees during the experiment. CEA

“WEST has achieved a new key technological milestone by maintaining hydrogen plasma for more than twenty minutes through the injection of 2 [megawatts] of heating power,” Anne-Isabelle Etienvre, director of fundamental research at France’s Alternative Energies and Atomic Energy Commission (CEA), said in a statement. “Experiments will continue with increased power.”

Two megawatts is enough to power hundreds of homes (though exactly how many varies by region and time of year). The reaction reached temperatures over 50 million degrees Celsius—90 million degrees Fahrenheit—which is hotter than the core of the sun. However, it did not reach fusion temperatures. The experiment was more about learning to control the plasma as a step toward fusion.

The team’s next goal is to keep a plasma reaction going for even longer, and at hotter temperatures. The WEST reactor is not set up to become a commercial reactor; rather, the studies done there are being used as data for bigger reactors, namely France’s International Thermonuclear Experimental Reactor (ITER), currently under construction.

Despite setting a record for duration of a plasma reaction, CEA’s announcement acknowledges that there’s a long way to go before fusion energy can be produced at scale.

“It is unlikely that fusion technology will make a significant contribution to achieving net-zero carbon emissions by 2050,” they write. “For this, several technological sticking points need to be overcome, and the economic feasibility of this form of energy production must still be demonstrated.”

Though the holy grail of clean energy may still be decades away, at least we’re taking baby steps towards it.

The post French Scientists Beat China’s Fusion Record With 22-Minute Plasma Reaction appeared first on SingularityHub.

Category: Transhumanism

Scientists Unearth 3-Billion-Year-Old Beach Near a Primordial Ocean on Mars

Singularity HUB - 25 February 2025 - 18:46

A study suggests Martian seas may have survived longer than expected.

In the 1970s, images from the NASA Mariner 9 orbiter revealed water-sculpted surfaces on Mars. This settled the once-controversial question of whether water ever rippled over the red planet.

Since then, more and more evidence has emerged that water once played a large role on our planetary neighbor.

For example, Martian meteorites record evidence for water back to 4.5 billion years ago. On the young side of the timescale, impact craters formed over the past few years show the presence of ice under the surface today.

Today, the hot topics focus on when water appeared, how much of it there was, and how long it lasted. Perhaps the most burning question of all: Were there ever oceans on Mars?

A new study published in PNAS today has made quite a splash. The study involved a team of Chinese and American scientists led by Jianhui Li from Guangzhou University in China and was based on work done by the China National Space Administration’s Mars rover Zhurong.

Data from Zhurong provides an unprecedented look into rocks buried near a proposed shoreline billions of years old. The researchers claim to have found beach deposits from an ancient Martian ocean.

An illustration of Mars 3.6 billion years ago, when an ocean may have covered nearly half the planet. The orange star (right) is the landing site of the Chinese rover Zhurong. The yellow star is the landing site of NASA’s Perseverance rover. Robert Citron/Southwest Research Institute/NASA

Blue Water on a Red Planet

Rovers exploring Mars study many aspects of the planet, including the geology, soil, and atmosphere. They’re often looking for any evidence of water. That’s in part because water is a vital factor for determining if Mars ever supported life.

Sedimentary rocks are often a particular focus of investigations because they can contain evidence of water—and therefore life—on Mars.

For example, the NASA Perseverance rover is currently searching for life in a delta deposit. Deltas are triangular regions often found where rivers flow into larger bodies of water, depositing large amounts of sediment. Examples on Earth include the Mississippi delta in the United States and the Nile delta in Egypt.

The delta the Perseverance rover is exploring is located within the roughly 45-kilometer-wide Jezero impact crater, believed to be the site of an ancient lake.

Zhurong had its sights set on a very different body of water—the vestiges of an ancient ocean located in the northern hemisphere of Mars.

Topography of Utopia Planitia. Lower parts of the surface are shown in blues and purples, while higher altitude regions show up in whites and reds, as indicated on the scale to the top right. ESA/DLR/FU Berlin

The God of Fire

The Zhurong rover is named after a mythical Chinese god of fire.

It was launched by the China National Space Administration in 2020 and was active on Mars from 2021 to 2022. Zhurong landed within Utopia Planitia, a vast expanse and the largest impact basin on Mars, stretching some 3,300 kilometers in diameter.

Zhurong investigated an area near a series of ridges—described as paleoshorelines—that extend for thousands of kilometers across Mars. The paleoshorelines have previously been interpreted as the remnants of a global ocean that encircled the northern third of Mars.

However, there are differing views among scientists about this and more observations are needed.

On Earth, the geologic record of oceans is distinctive. Modern oceans are only a few hundred million years old. Yet the global rock record is riddled with deposits made by many older oceans, some several billion years old.

This diagram shows how a series of beach deposits would have formed at the Zhurong landing site in the distant past on Mars. Hai Liu/Guangzhou University

What Lies Beneath

To determine if rocks in Utopia Planitia are consistent with having been deposited by an ocean, the rover collected data along a 1.3-kilometer measured line known as a transect at the margin of the basin. The transect was oriented perpendicularly to the paleoshoreline. The goal was to work out what rock types are there, and what story they tell.

The Zhurong rover used a technique called ground penetrating radar, which probed down to 100 meters below the surface. The data revealed many characteristics of the buried rocks, including their orientation.
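
Converting a radar echo’s two-way travel time into a depth relies on an assumed wave speed in the subsurface. A minimal sketch of that conversion (the velocity here is a typical value for dry sediment, not one reported in the study):

```python
def gpr_depth_m(two_way_time_ns: float, velocity_m_per_ns: float = 0.12) -> float:
    """Depth of a radar reflector: the signal travels down and back,
    so depth = velocity * two-way travel time / 2."""
    return velocity_m_per_ns * two_way_time_ns / 2

# With the assumed 0.12 m/ns, an echo returning after 500 nanoseconds
# corresponds to a reflector roughly 30 meters down:
print(gpr_depth_m(500))  # 30.0
```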

Rocks imaged along the transect contained many reflective layers that are made visible by ground penetrating radar down to at least 30 meters. All the layers also dip shallowly into the basin, away from the paleoshoreline. This geometry exactly reflects how sediments are deposited in oceans on Earth.

The ground penetrating radar also measured how strongly the rocks respond to an electric field, a property known as permittivity. The results showed the rocks are more likely to be sedimentary than volcanic flows, which can also form layers.

The study compared Zhurong data gathered from Utopia Planitia with ground penetrating radar data for different sedimentary environments on Earth.

The result of the comparison is clear—the rocks Zhurong imaged are a match for coastal sediments deposited along the margin of an ocean.

Zhurong found a beach.

Photograph of frosted terrain on Utopia Planitia, taken by the Viking 2 lander in 1979. NASA/JPL

A Wet Mars

The Noachian period of Martian history, from 4.1 to 3.7 billion years ago, is the poster child for a wet Mars. Orbital images of valley networks and mineral maps provide abundant evidence that Noachian Mars had surface water.

However, there is less evidence for surface water during the Hesperian period, from 3.7 to 3 billion years ago. Stunning orbital images show large outflow channels in Hesperian landforms, including an area of canyons known as Kasei Valles, but these channels are believed to have formed from catastrophic releases of groundwater rather than standing water.

From this view, Mars appears to have cooled down and dried up by Hesperian time.

However, the Zhurong rover’s discovery of coastal deposits formed in an ocean may indicate that surface water was stable on Mars for longer than previously recognized. It may have lasted into the Late Hesperian period.

This may mean that habitable environments, around an ocean, extended to more recent times.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Scientists Unearth 3-Billion-Year-Old Beach Near a Primordial Ocean on Mars appeared first on SingularityHub.


This Robot Swarm Can Flow Like Liquid and Support a Human’s Weight

Singularity HUB - 25 February 2025 - 00:25

Inspired by living cells, the robots flow around objects and solidify into tools and other objects.

With their bright blue bases, yellow gears, and exposed circuit tops, the 3D-printed robots look like a child’s toys. Yet as a roughly two-dozen-member collective, they can flow around obstacles before hardening into weight-bearing tools that push, throw, and twist objects like a wrench—and bear up to 150 pounds of weight.

The brainchild of Matthew Devlin, Elliot Hawkes, and colleagues at UC Santa Barbara and TU Dresden, the robots behave like a smart material that shape-shifts into different load-bearing structures as needed. Each narrower than a hockey puck, the robots took inspiration from how our cells organize into muscles, skin, and bones—tissues with vastly different mechanical properties.

Dubbed “programmable matter” and “claytronics,” the concept of robotic materials has long intrigued science fiction writers and scientists alike. Made up of swarms of robots, they can melt and reform, but once locked into a configuration, they have to be stiff and strong enough to hold weight and pack a punch.

“Making this vision a reality would change static objects—with properties set at the time of design—into dynamic matter that could reconfigure into myriad forms with diverse physical properties,” wrote the team.

The new study, published in Science, showcases a proof-of-concept design. Relying on mechanical and magnetic forces along with light signals, the robots can form tiny bridges that support weight, collapse into their flowing state, and reform as a functional wrench around an object. Each behavior is enabled by the robots’ integrated hardware design.

“We’ve figured out a way for robots to behave more like a material,” said Devlin in a press release.

Unexpected Inspiration

Modular robots and drone collectives have already impressed the robotics community and millions beyond. Over a decade ago, a preprogrammed swarm of a thousand robots, each coordinating with its nearest neighbors, self-assembled into complex shapes. While dynamic, they couldn’t support weight. Other designs have been stiffer and stronger but have struggled to reconfigure without breaking group dynamics.

Achieving both properties was “a fundamental challenge to overcome,” wrote the team. For robotic materials to become reality, they need to dynamically shift between a flowing state, in which they can take on new shapes, and a solid state once they reach their final shape.

Nature provides inspiration.

The Power of Three

The team tapped into recent insights gained from the study of embryonic tissues. Starting as a bunch of uniform cells, these tissues can rearrange themselves into multiple shapes and flow to repair damage. Responding to a bath of biochemical signals within the body, they eventually form a variety of structures—stretchy muscles, stiff bones and teeth, elastic skin, or squishy brains.

“Living embryonic tissues are the ultimate smart materials,” said study author Otger Campàs.

Their versatility relies on three main features.

The first is the force between cells. Imagine being on a completely packed bus. Getting off requires you to push a path across multiple people. Cells are the same. Squishing past each other lets each cell control where it is in space and time based on its genetic instructions.

The second is coordination. To avoid cellular mayhem, cells use a bunch of biochemical signals to share their positions and movements as they lay out the general landscape of a developing embryo. Finally, cells can grab onto each other—dubbed cellular adhesion—with different levels of strength to build a vast library of tissues with different physical properties.

The robots’ design captures each of these features in 3D-printed hardware.

The bottom of each robot features eight motorized gears dotting the exterior. The bottom isn’t perfectly circular. Some sections are carefully carved out, so that neighbors can always grab onto each other and easily slide off without getting jammed—even when tightly packed. These are a bit like the grooved lids of peanut butter jars. Each gear only slightly peeks out of the housing, enough to grab onto another robot but also easily release it when needed.

To mimic biochemical signals, the team turned to light. Each robot is equipped with light sensors on top and a taped-on polarized film, similar to the material lining some sunglasses. These filters only let light waves vibrating in a particular direction pass through to the light sensor, telling the robots which way to spin their gears.
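
That filtering behavior is described by Malus’s law: the transmitted intensity falls off as the square of the cosine of the angle between the light’s polarization and the filter axis. A small sketch of the idea (the control rule and threshold are hypothetical illustrations, not taken from the paper):

```python
import math

def transmitted_intensity(i0: float, angle_deg: float) -> float:
    """Malus's law: I = I0 * cos^2(theta), where theta is the angle between
    the light's polarization direction and the filter's axis."""
    theta = math.radians(angle_deg)
    return i0 * math.cos(theta) ** 2

def spin_direction(intensity: float, threshold: float = 0.5) -> str:
    """Hypothetical rule: bright filtered light spins gears one way, dim the other."""
    return "clockwise" if intensity >= threshold else "counterclockwise"

# An aligned filter passes nearly all the light; a crossed one blocks it.
print(spin_direction(transmitted_intensity(1.0, 0)))   # clockwise
print(spin_direction(transmitted_intensity(1.0, 90)))  # counterclockwise
```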

Lastly, magnets in small chambers are distributed across the robots’ edges. These can freely roll around and stick to neighboring robots regardless of their position, mimicking cell adhesion.

Robots, Assemble

The team manufactured roughly two dozen battery-powered robots and challenged them to a series of tests. The robots weren’t autonomous: The scientists controlled both the grip strength of the gears and the light signals.

One test started with two towers of robots rotating around each other until they transformed into a rigid bridge. Another began with the robots in a diamond shape that then stretched horizontally into a “mover” that could push a five-pound barbell.

Another test roughly mimicked an arm workout. Roughly 20 bots held up two five-pound weights, one on each side, and relaxed only one side when prompted, collapsing it into a fluid-like state. All the while, the other side stayed strong.

Even more impressively, the robots swarmed around a nail and solidified to hold it in place. They also hugged a triangle-shaped object in their liquid form and transformed into a wrench capable of twisting the object around. In a demonstration of strength, a collective of 30 robots actively supported a human, weighing roughly 150 pounds, as they stepped across. Then, on command, the structure gradually gave way like mud.

These experiments revealed a surprising quirk. The robots could more easily turn into a fluid-like form when the forces between the robots fluctuated slightly. In contrast, constantly pushing against each other resulted in a deadlock, where no single unit could move, torpedoing the overall dynamics of the robots.

The force fluctuations also saved energy. Returning to the bus analogy, it’s a bit like how wriggling out of a tightly packed human barricade is easier than trying to strong-arm your way through. Adding these fluctuations could be especially beneficial for robots with limited power resources, such as those that run on batteries.
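
The effect is loosely analogous to vibration-assisted unjamming in granular materials. This toy model (purely illustrative, not the authors’ simulation) shows a steady push that is too weak to overcome a static friction threshold on its own, yet produces motion once small random force fluctuations are added:

```python
import random

def distance_moved(push: float, threshold: float, jiggle: float,
                   steps: int = 10_000, seed: int = 0) -> int:
    """Count the steps on which the push plus a random fluctuation
    exceeds the static friction threshold, allowing one unit of motion."""
    rng = random.Random(seed)
    moved = 0
    for _ in range(steps):
        if push + rng.uniform(-jiggle, jiggle) > threshold:
            moved += 1
    return moved

# A steady sub-threshold push alone never produces motion...
print(distance_moved(push=0.8, threshold=1.0, jiggle=0.0))      # 0
# ...but adding fluctuations lets the system creep forward.
print(distance_moved(push=0.8, threshold=1.0, jiggle=0.5) > 0)  # True
```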

For now, the robot collective has only been tested with about two dozen physical units. But computer simulations of roughly 400 units suggest their physical dynamics remain the same and the setup is scalable.

The team is envisioning miniaturizing the system. They’re also eager to explore the technology in soft robots. Like living cells, each unit would be able to stretch and change its shape or size. Although these robots would likely be limited by material properties, a swarm could still significantly change the overall structure and flexibility of any final architecture.

Add in a dose of state-of-the-art control methods—such as AI—to further fine-tune how the units interact and the results could “lead to exciting emergent capabilities in robotic materials,” wrote the authors.

The post This Robot Swarm Can Flow Like Liquid and Support a Human’s Weight appeared first on SingularityHub.


This Week’s Awesome Tech Stories From Around the Web (Through February 22)

Singularity HUB - 22 February 2025 - 16:00
Artificial Intelligence

Google’s New AI Generates Hypotheses for Researchers
Ryan Whitwam | Ars Technica

“Over the past few years, Google has embarked on a quest to jam generative AI into every product and initiative possible. …And sometimes, the output of generative AI systems can be surprisingly good despite lacking any real knowledge. But can they do science? Google Research is now angling to turn AI into a scientist—well, a ‘co-scientist.'”

Robotics

Norway’s 1X Is Building a Humanoid Robot for the Home
Brian Heater | TechCrunch

“Norwegian robotics firm 1X unveiled its latest home robot, Neo Gamma, on Friday. The humanoid system will succeed Neo Beta, which debuted in August. Like its predecessors, the Neo Gamma is a prototype designed for testing in the home environment. Images of the robot show it performing a number of household tasks like making coffee, doing the laundry, and vacuuming.”

Biotech

Your Next Pet Could Be a Glowing Rabbit
Emily Mullin | Wired

“Humans have been selectively breeding cats and dogs for thousands of years to make more desirable pets. A new startup called the Los Angeles Project aims to speed up that process with genetic engineering to make glow-in-the-dark rabbits, hypoallergenic cats and dogs, and possibly, one day, actual unicorns.”

Artificial Intelligence

DeepSeek Goes Beyond ‘Open Weights’ AI With Plans for Source Code Release
Kyle Orland | Ars Technica

“Last month, DeepSeek turned the AI world on its head with the release of a new, competitive simulated reasoning model that was free to download and use under an MIT license. Now, the company is preparing to make the underlying code behind that model more accessible, promising to release five open source repos starting next week.”

3D Printing

Nature-Inspired Breakthrough Yields Thinnest 3D-Printed Fibers Yet
Margherita Bassi | Gizmodo

“Professionals of all kinds—from artists to architects to scientists—have been drawing inspiration from nature for millennia. Now, engineers have managed to produce extremely fine fibers inspired by spider silk and hagfish slime. A team of international researchers has used a new 3D-printing technique to create microfibers just 1.5 microns thick.”

Future

AI Agents Will Outmaneuver Salespeople by Optimizing Persuasion
Louis Rosenberg | Big Think

“AI agents have evolved from simple heuristics to sophisticated systems that analyze human personalities in real time to optimize persuasion. These conversational agents could soon engage us daily, adapting their tactics based on our traits, emotions, and behaviors. In this op-ed, AI researcher Louis Rosenberg argues that as conversational AI agents become more interactive and personalized, they will surpass human influencers in their ability to shape our decisions without us realizing it.”

Robotics

Reinforcement Learning Triples Spot’s Running Speed
Evan Ackerman | IEEE Spectrum

“If Spot running this quickly looks a little strange, that’s probably because it is strange, in the sense that the way this robot dog’s legs and body move as it runs is not very much like how a real dog runs at all. ‘The gait is not biological, but the robot isn’t biological,’ explains Farbod Farshidian, roboticist at the RAI Institute. ‘Spot’s actuators are different from muscles, and its kinematics are different, so a gait that’s suitable for a dog to run fast isn’t necessarily best for this robot.'”

Future

AI Is Prompting an Evolution, Not Extinction, for Coders
Steve Lohr | The New York Times

“‘The skills software developers need will change significantly, but AI will not eliminate the need for them,’ said Arnal Dayaratna, an analyst at IDC, a technology research firm. ‘Not anytime soon anyway.’ The outlook for software engineers offers a window into the impact that generative AI—the kind behind chatbots like OpenAI’s ChatGPT—is likely to have on knowledge workers across the economy, from doctors and lawyers to marketing managers and financial analysts.”

Space

The Lunar Economy Is Coming
Jorge Garay | Wired

“The lunar economy, complete with its own supply chain, may seem like a distant concept, but its foundations are already here. It will center around using the moon’s natural resources to construct scientific infrastructure on its surface, as well as develop capacity for future space exploration (the moon is a potential spaceport for more distant destinations, such as Mars).”

Tech

Why AI Spending Isn’t Slowing Down
Christopher Mims | The Wall Street Journal

“Despite a brief period of investor doubt, money is pouring into artificial intelligence from big tech companies, national governments and venture capitalists at unprecedented levels. To understand why, it helps to appreciate the way that AI itself is changing. The technology is shifting away from conventional large language models and toward reasoning models and AI agents.”

The post This Week’s Awesome Tech Stories From Around the Web (Through February 22) appeared first on SingularityHub.
