Transhumanism
MIT’s New Robot Dog Learned to Walk and Climb in a Simulation Whipped Up by Generative AI
A big challenge when training AI models to control robots is gathering enough realistic data. Now, researchers at MIT have shown they can train a robot dog using 100 percent synthetic data.
Traditionally, robots have been hand-coded to perform particular tasks, but this approach results in brittle systems that struggle to cope with the uncertainty of the real world. Machine learning approaches that train robots on real-world examples promise to create more flexible machines, but gathering enough training data is a significant challenge.
One potential workaround is to train robots using computer simulations of the real world, which makes it far simpler to set up novel tasks or environments for them. But this approach is bedeviled by the “sim-to-real gap”—these virtual environments are still poor replicas of the real world and skills learned inside them often don’t translate.
Now, MIT CSAIL researchers have found a way to combine simulations and generative AI to enable a robot, trained on zero real-world data, to tackle a host of challenging locomotion tasks in the physical world.
“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” Shuran Song from Stanford University, who wasn’t involved in the research, said in a press release from MIT.
“The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”
Leading simulators used to train robots today can realistically reproduce the kind of physics robots are likely to encounter. But they are not so good at recreating the diverse environments, textures, and lighting conditions found in the real world. This means robots relying on visual perception often struggle in less controlled environments.
To get around this, the MIT researchers used text-to-image generators to create realistic scenes and combined these with a popular simulator called MuJoCo to map geometric and physics data onto the images. To increase the diversity of images, the team also used ChatGPT to create thousands of prompts for the image generator covering a huge range of environments.
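The article doesn't publish the team's prompting setup, but the general recipe it describes, asking a large language model for many varied scene descriptions and feeding each one to an image generator, is straightforward to sketch. Below is a minimal illustration using the OpenAI Python client; the model name, meta-prompt, and batch count are assumptions, not details from the paper.

```python
# Minimal sketch: asking an LLM for batches of varied scene descriptions to
# feed a text-to-image generator. The model name, meta-prompt, and batch
# count are illustrative assumptions, not details from the paper.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

META_PROMPT = (
    "List 20 short, concrete descriptions of places a quadruped robot might "
    "walk: vary the terrain, weather, lighting, and materials. "
    "One description per line."
)

def generate_scene_prompts(num_batches: int = 50) -> list[str]:
    prompts: list[str] = []
    for _ in range(num_batches):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[{"role": "user", "content": META_PROMPT}],
            temperature=1.0,      # favor diversity over repetition
        )
        lines = (resp.choices[0].message.content or "").splitlines()
        prompts.extend(ln.strip("-• ").strip() for ln in lines if ln.strip())
    return prompts

# Each resulting prompt would then be sent to a text-to-image model to
# render one candidate training scene.
```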
After generating these realistic environmental images, the researchers converted them into short videos from a robot’s perspective using another system they developed called Dreams in Motion. This computes how each pixel in the image would shift as the robot moves through an environment, creating multiple frames from a single image.
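The article doesn't include Dreams in Motion's code, but the operation it describes, shifting each pixel according to the scene's depth and the camera's motion, is a standard computer-vision reprojection step. A minimal numpy sketch under those assumptions, with illustrative names and numbers:

```python
# Sketch of the core reprojection step: given a depth map and a small camera
# motion, compute where every pixel of the current image lands in the next
# frame. This illustrates the general technique, not the LucidSim code.
import numpy as np

def warp_pixels(depth, K, R, t):
    """depth: (H, W) metric depth; K: (3, 3) camera intrinsics;
    R, t: the camera's rotation and translation between the two frames."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, N)

    rays = np.linalg.inv(K) @ pix               # back-project pixels to rays
    points = rays * depth.reshape(1, -1)        # scale rays by depth -> 3D points
    points_next = R @ points + t.reshape(3, 1)  # move into the next camera pose
    proj = K @ points_next                      # project back to the image plane
    uv_next = proj[:2] / proj[2:3]              # perspective divide
    return uv_next.T.reshape(H, W, 2)           # new (u, v) for every pixel

# Example: a 1 cm translation along the optical axis, facing a wall 2 m away.
depth = np.full((240, 320), 2.0)
K = np.array([[300.0, 0.0, 160.0], [0.0, 300.0, 120.0], [0.0, 0.0, 1.0]])
targets = warp_pixels(depth, K, np.eye(3), np.array([0.0, 0.0, 0.01]))
```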
The researchers dubbed this data-generation pipeline LucidSim and used it to train an AI model to control a quadruped robot using just visual input. The robot learned a series of locomotion tasks, including going up and down stairs, climbing boxes, and chasing a soccer ball.
The training process was split into two stages. First, the team trained their model on data generated by an expert AI system with access to detailed terrain information as it attempted the same tasks. This gave the model enough understanding of the tasks to attempt them in a simulation based on the data from LucidSim, which generated more data. They then retrained the model on the combined data to create the final robotic control policy.
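The article doesn't spell out the training objective, but the recipe it describes, imitating an expert that had privileged terrain information and then retraining on the combined data, resembles standard teacher-student distillation. A heavily simplified PyTorch sketch, with all shapes, names, and tensors invented as stand-ins:

```python
# Heavily simplified sketch of the two-stage recipe described above: first
# behavior-clone an expert that used privileged terrain data, then retrain
# on the expert data plus data gathered in LucidSim-augmented scenes.
import torch
import torch.nn as nn

class VisualPolicy(nn.Module):
    def __init__(self, act_dim=12):
        super().__init__()
        self.net = nn.Sequential(          # tiny stand-in for a real backbone
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 3 * 3, act_dim),  # for 64x64 inputs
        )

    def forward(self, images):
        return self.net(images)

def train(policy, images, target_actions, epochs=10):
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(images), target_actions)
        opt.zero_grad(); loss.backward(); opt.step()

# Dummy tensors standing in for the two datasets.
expert_images, expert_actions = torch.randn(64, 3, 64, 64), torch.randn(64, 12)
lucidsim_images, lucidsim_actions = torch.randn(64, 3, 64, 64), torch.randn(64, 12)

policy = VisualPolicy()
train(policy, expert_images, expert_actions)                 # stage 1: clone the expert
train(policy, torch.cat([expert_images, lucidsim_images]),   # stage 2: retrain on
      torch.cat([expert_actions, lucidsim_actions]))         # the combined data
```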
The approach matched or outperformed the expert AI system on four out of the five tasks in real-world tests, despite relying on just visual input. And on all the tasks, it significantly outperformed a model trained using “domain randomization”—a leading simulation approach that increases data diversity by applying random colors and patterns to objects in the environment.
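For contrast, the domain randomization baseline can be sketched in a few lines: recolor each object in a rendered scene at random so the policy can't latch onto any one appearance. A toy numpy version, assuming the simulator provides a per-pixel object-id mask:

```python
# Toy domain randomization: recolor each segmented object at random so a
# policy can't overfit to any fixed appearance. Assumes the simulator
# provides a per-pixel object-id mask.
import numpy as np

rng = np.random.default_rng(0)

def randomize_colors(image, seg_mask):
    """image: (H, W, 3) floats in [0, 1]; seg_mask: (H, W) integer object ids."""
    out = image.copy()
    for obj_id in np.unique(seg_mask):
        tint = rng.uniform(size=3)                     # random color per object
        region = seg_mask == obj_id
        out[region] = 0.5 * out[region] + 0.5 * tint   # blend in the tint
    return out

image = rng.uniform(size=(120, 160, 3))
seg_mask = (np.arange(120 * 160).reshape(120, 160) // 4000) % 5  # fake ids
augmented = randomize_colors(image, seg_mask)
```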
The researchers told MIT Technology Review their next goal is to train a humanoid robot on purely synthetic data generated by LucidSim. They also hope to use the approach to improve the training of robotic arms on tasks requiring dexterity.
Given the insatiable appetite for robot training data, methods like this that can provide high-quality synthetic alternatives are likely to become increasingly important in the coming years.
Image Credit: MIT CSAIL
Sweet CRISPR Tomatoes May Be Coming to a Supermarket Near You
When I was a young kid, our neighborhood didn’t have any grocery stores. The only place to buy fruits and vegetables was at our local farmer’s market. My mom would pick out the freshest tomatoes and sauté them with eggs into a simple dish that became my comfort food.
The tomatoes were hideous to look at—small, gnarled, miscolored, and nothing like the perfectly plump and bright beefsteak or Roma tomatoes that eventually flooded supermarkets. But they were oh-so-tasty, with a perfect ratio of tart and sweet flavors that burst in my mouth.
These days, when I ask for the same dish, my mom will always say, “Tomatoes just don’t taste the same anymore.”
She’s not alone. Many people have noticed that today’s produce is watery, waxy, and lacking in flavor, despite looking ripe and inviting. One reason is that it was bred that way. Today’s crops are often genetically selected to prioritize appearance, size, shelf life, and transportability. But these perks can come at the cost of taste, most often in the form of sugar. Even broccoli, known for its bitterness, has variants that accumulate sugar inside their stems for a slightly sweeter taste.
The problem is that larger fruits are often less sweet, explains Sanwen Huang, whose team is based in Shenzhen, China. The key is to break that correlation. His team may have found a way using a globally popular crop, the tomato, as an example.
By comparing wild and domesticated tomatoes, the team hunted down a set of genes that put the brakes on sugar production. Inhibiting those genes using CRISPR-Cas9, the popular gene-editing tool, bumped up the fruit’s sugar content by 30 percent—enough for a consumer panel to find a noticeable increase in sweetness—without sacrificing size or yields.
Seeds from the edited plants germinated as usual, allowing the edits to pass on to subsequent generations.
The study isn’t just about satisfying our sweet tooth. Crops with higher sugar content, tomatoes included, also contain more calories, which are necessary if we’re to meet the needs of a growing global population. The analysis pipeline established in the study could also identify other genetic trade-offs between size and nutrition, with the goal of rapidly engineering better crops.
The work “represents an exciting step forward…for crop improvement worldwide,” wrote Amy Lanctot and Patrick Shih at the University of California, Berkeley, who were not involved in the study.
Hot Links
For eons, humanity has cultivated crops to enhance desirable traits: better yields, higher nutrition, or looks.
Tomatoes are a perfect example. The fruit “is the most valuable vegetable crop, worldwide, and makes substantial overall health and nutritional contributions to the human diet,” wrote the team. Its wild versions range in size from that of a cherry down to a pea, far smaller than most variants found in grocery stores today. Flavor comes largely from two sugars, glucose and fructose, packed into the fruit’s flesh.
After thousands of years of domestication, sugars remain the key ingredient in better-tasting tomatoes. But in recent decades, breeders mostly prioritized increasing fruit size. The result is tomatoes that are easily sliced for sandwiches, crushed for canning, or processed into sauces and pastes. Compared to their wild ancestors, today’s cultivated tomatoes are roughly 10 to 100 times larger, making them far more economical.
But these improvements come at a cost. Multiple studies have found that as size goes up, sugar levels and flavor tank. A similar trend has been found in other large cultivated fruits.
For years, scientists have tried teasing out the tomato’s inner workings, especially the genes that control sugar production, to restore its taste and nutritional value. One study in 2017 combined genomic analysis of nearly 400 varieties of tomatoes with results from a human taste panel to home in on a slew of metabolic chemicals that made the fruit taste better. A year later, Huang’s team, which led the new study, analyzed the genetic makeup and cell function of hundreds of tomato types. Domestication was associated with several large changes in the plant’s genome, but the team didn’t know how each genetic mutation altered the fruit’s metabolism.
It’s tough to link a gene to a trait. Our genes sit on DNA strands that are tightly wound into chromosomes. Like braided balls of yarn, these 3D structures bring genes that would otherwise be far apart on the linear strand into close proximity. This means nearby, or “linked,” genes often turn on or off together.
“Genetic linkage makes it difficult to alter one gene without affecting the other,” wrote Lanctot and Shih.
Fast Track Evolution
The new study used two technologies to overcome the problem.
The first was cheaper genetic sequencing. By scanning through genetic variations between domesticated and wild tomatoes, the team pinpointed six tomato genes likely responsible for the fruit’s sweetness.
One gene especially caught their eye. When active, it puts the brakes on the plant’s ability to accumulate sugar, and it was turned off in sweeter tomato species. Using the gene-editing tool CRISPR-Cas9, the team mutated the gene so it could no longer function and grew the edited plants, along with normal ones, under the same conditions in a garden.
The Sweet Spot
Roughly 100 volunteers tried the edited and normal tomatoes in a blind trial. The CRISPRed tomatoes won in a landslide for their perceived sweetness.
The study isn’t just about a better tomato. “This research demonstrates the value hidden in the genomes of crop species varieties and their wild relatives,” wrote Lanctot and Shih.
Domestication, while boosting a fruit’s yield or size, often decreases a species’ genetic diversity, because selected crops eventually share mostly the same genetic blueprint. Some crops, such as bananas, can’t reproduce on their own and are extremely vulnerable to fungi. Analyzing genes related to these traits could help form a defense strategy.
Conservation and taste aside, scientists have also tried to endow crops with more exotic traits. In 2021, Sanatech Seed, a company based in Japan, engineered tomatoes using CRISPR-Cas9 to increase the amount of GABA, a chemical that dampens neural transmission. According to the company, the tomatoes can lower blood pressure and help people relax. The fruit is already on the market following regulatory approval in Japan.
Studies that directly link a gene to a trait in plants are still extremely rare. Thanks to cheaper and faster DNA sequencing technologies, and increasingly precise CRISPR tools, it’s becoming easier to test these connections.
“The more researchers understand about the genetic pathways underlying these trade-offs, the more they can take advantage of modern genome-editing tools to attempt to disentangle them to boost crucial agricultural traits,” wrote Lanctot and Shih.
Image Credit: Thomas Martinsen on Unsplash
Could We Ever Decipher an Alien Language? Uncovering How AI Communicates May Be Key
In the 2016 science fiction movie Arrival, a linguist is faced with the daunting task of deciphering an alien language consisting of palindromic phrases, which read the same backwards as they do forwards, written with circular symbols. As she discovers various clues, different nations around the world interpret the messages differently—with some assuming they convey a threat.
If humanity ended up in such a situation today, our best bet may be to turn to research uncovering how artificial intelligence develops languages.
But what exactly defines a language? Most of us use at least one to communicate with people around us, but how did it come about? Linguists have been pondering this very question for decades, yet there is no easy way to find out how language evolved.
Language is ephemeral; it leaves no examinable trace in the fossil record. Unlike bones, ancient languages can’t be dug up and studied to see how they developed over time.
While we may be unable to study the true evolution of human language, perhaps a simulation could provide some insights. That’s where AI comes in, through a fascinating field of research called emergent communication, which I have spent the last three years studying.
To simulate how language may evolve, we give AI agents simple tasks that require communication, like a game where one robot must guide another to a specific location on a grid without showing it a map. We provide (almost) no restrictions on what they can say or how—we simply give them the task and let them solve it however they want.
Because solving these tasks requires the agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.
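A minimal version of such an experiment fits in a few dozen lines of code. The sketch below is a classic Lewis signaling game rather than any specific study's setup: a speaker sees one of several meanings and emits a symbol, a listener guesses the meaning from the symbol alone, and correct guesses reinforce both agents' choices until a shared lexicon emerges.

```python
# A minimal emergent-communication experiment: a Lewis signaling game trained
# by simple reinforcement. We never tell the agents what any symbol means;
# a shared meaning-to-symbol code emerges from rewarded trial and error.
import numpy as np

rng = np.random.default_rng(0)
N_MEANINGS, N_SYMBOLS = 5, 5

speaker = np.ones((N_MEANINGS, N_SYMBOLS))   # speaker[m, s]: score of symbol s for meaning m
listener = np.ones((N_SYMBOLS, N_MEANINGS))  # listener[s, m]: score of meaning m for symbol s

def sample(weights):
    p = weights / weights.sum()
    return rng.choice(len(p), p=p)

for _ in range(20_000):
    meaning = rng.integers(N_MEANINGS)   # what the speaker observes
    symbol = sample(speaker[meaning])    # the speaker's utterance
    guess = sample(listener[symbol])     # the listener's interpretation
    if guess == meaning:                 # success reinforces both choices
        speaker[meaning, symbol] += 1.0
        listener[symbol, guess] += 1.0

# Each row of `speaker` now concentrates on its own symbol: a tiny lexicon.
print(speaker.argmax(axis=1))
```

With five meanings and five symbols this usually converges to a one-word-per-meaning code; richer tasks, like the grid-navigation game above, produce richer protocols.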
Similar experiments have been done with humans. Imagine you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on a table.
You might try to gesture a cube shape with your hands and point at grass outside the window to indicate the color green. Over time, you’d develop a sort of proto-language together. Maybe you’d create specific gestures or symbols for “cube” and “green.” Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.
This works similarly for AI. Through trial and error, algorithms learn to communicate about objects they see, and their conversation partners learn to understand them.
But how do we know what they’re talking about? If they only develop this language with their artificial conversation partner and not with us, how do we know what each word means? After all, a specific word could mean “green,” “cube,” or worse—both. This challenge of interpretation is a key part of my research.
Cracking the Code
The task of understanding AI language may seem almost impossible at first. If I tried speaking Polish (my mother tongue) to a collaborator who only speaks English, we couldn’t understand each other or even know where each word begins and ends.
The challenge with AI languages is even greater, as they might organize information in ways completely foreign to human linguistic patterns.
Fortunately, linguists have developed sophisticated tools using information theory to interpret unknown languages.
Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities to human languages, and other times we discover entirely novel ways of communication.
These tools help us peek into the “black box” of AI communication, revealing how AI agents develop their own unique ways of sharing information.
My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in a language unknown to you, along with what each speaker was looking at. We can match patterns in the transcript to objects in the participant’s field of vision, building statistical connections between words and objects.
For example, perhaps the phrase “yayo” coincides with a bird flying past—we could guess that “yayo” is the speaker’s word for “bird.” Through careful analysis of these patterns, we can begin to decode the meaning behind the communication.
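One simple way to start is with co-occurrence statistics. The toy sketch below scores each word-object pair by pointwise mutual information over an invented transcript; because "yayo" keeps appearing when a bird is visible, it earns the highest score for "bird."

```python
# Toy decoding of an unknown language from paired (utterance, visible objects)
# records using pointwise mutual information (PMI). The transcript is invented.
import math
from collections import Counter

records = [
    ("yayo bu", {"bird", "tree"}),
    ("yayo ki", {"bird", "rock"}),
    ("mo bu",   {"tree", "river"}),
    ("yayo mo", {"bird", "river"}),
]

word_counts, obj_counts, pair_counts = Counter(), Counter(), Counter()
for utterance, objects in records:
    words = utterance.split()
    word_counts.update(words)
    obj_counts.update(objects)
    pair_counts.update((w, o) for w in words for o in objects)

n = len(records)

def pmi(word, obj):
    """How much more often `word` and `obj` co-occur than chance predicts."""
    joint = pair_counts[(word, obj)] / n
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((word_counts[word] / n) * (obj_counts[obj] / n)))

print(pmi("yayo", "bird"), pmi("mo", "bird"))  # "yayo" wins the bird slot
```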
In our latest paper, set to appear in the proceedings of the Neural Information Processing Systems conference (NeurIPS), my colleagues and I show that such methods can be used to reverse-engineer at least parts of the AIs’ language and syntax, giving us insights into how they might structure communication.
Aliens and Autonomous Systems
How does this connect to aliens? The methods we’re developing for understanding AI languages could help us decipher any future alien communications.
If we are able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyze them. The approaches we’re developing today could be useful tools in the future study of alien languages, known as xenolinguistics.
But we don’t need to find extraterrestrials to benefit from this research. There are numerous applications, from improving language models like ChatGPT or Claude to improving communication between autonomous vehicles or drones.
By decoding emergent languages, we can make future technology easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just creating intelligent systems—we’re learning to understand them.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Tomas Martinez on Unsplash
Make Music A Full Body Experience With A “Vibro-Tactile” Suit
Tired: Listening to music.
Wired: Feeling the music.
A mind-bending new suit straps onto your torso, ankles, and wrists, then uses actuators to translate audio into vivid vibration. The result: a new way for everyone to experience music, according to its creators. That’s especially exciting for people who have trouble hearing.
THE FEELIES
The Music: Not Impossible suit was created by design firm Not Impossible Labs and electronics manufacturing company Avnet. The suit can create sensations to go with pre-recorded music, or a “Vibrotactile DJ” can adjust the sensations in real time during a live music event.
Billboard writer Andy Hermann tried the suit out, and it sounds like a trip.
“Sure enough, a pulse timed to a kickdrum throbs into my ankles and up through my legs,” he wrote. “Gradually, [the DJ] brings in other elements: the tap of a woodblock in my wrists, a bass line massaging my lower back, a harp tickling a melody across my chest.”
MORE ACCESSIBLE
To show the suit off, Not Impossible and Avnet organized a performance this past weekend by the band Greta Van Fleet at the Life is Beautiful Festival in Las Vegas. The company allowed attendees to don the suits. Mandy Harvey, a deaf musician who stole the show on America’s Got Talent last year, talked about what the performance meant to her in a video Avnet posted to Facebook.
“It was an unbelievable experience to have an entire audience group who are all experiencing the same thing at the same time,” she said. “For being a deaf person, showing up at a concert, that never happens. You’re always excluded.”
READ MORE: Not Impossible Labs, Zappos Hope to Make Concerts More Accessible for the Deaf — and Cooler for Everyone [Billboard]
More on accessible design: New Tech Allows Deaf People To Sense Sounds
“Synthetic Skin” Could Give Prosthesis Users a Superhuman Sense of Touch
Today’s prosthetics can give people with missing limbs the ability to do almost anything — run marathons, climb mountains, you name it. But when it comes to letting those people feel what they could with a natural limb, the devices, however mechanically sophisticated, invariably fall short.
Now researchers have created a “synthetic skin” with a sense of touch that not only matches the sensitivity of natural skin but in some cases even exceeds it. The remaining challenge is getting that information back into the wearer’s nervous system.
UNDER PRESSURE
When something presses against your skin, your nerves detect that pressure and transmit it to the brain in the form of electrical signals.
To mimic that biological process, the researchers suspended a flexible polymer, dusted with magnetic particles, over a magnetic sensor. The effect is like a drum: Applying even the tiniest amount of pressure to the membrane moves the magnetic particles closer to the sensor, which registers the movement electronically.
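As a rough illustration of that readout chain (and not the design published in the paper), turning such a sensor's signal into a pressure estimate amounts to inverting a calibration curve: measure the magnetic field, then look up the pressure it corresponds to. A toy sketch with invented calibration numbers:

```python
# Toy readout for a magnetic tactile sensor: invert a field-vs-pressure
# calibration curve. All numbers are invented; the real device's calibration
# is described in the Science Robotics paper.
import numpy as np

# Hypothetical calibration: as pressure (Pa) pushes the particles closer,
# the measured magnetic flux density (uT) rises monotonically.
calib_pressure = np.array([0.0, 1.0, 5.0, 20.0, 100.0, 500.0])
calib_field = np.array([50.0, 50.4, 51.5, 55.0, 68.0, 90.0])

def field_to_pressure(field_uT):
    # np.interp needs increasing x values; the curve above is monotonic.
    return float(np.interp(field_uT, calib_field, calib_pressure))

print(field_to_pressure(51.0))  # ~3 Pa, a breeze-scale pressure
```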
The research, which could open the door to super-sensitive prosthetics, was published Wednesday in the journal Science Robotics.
SPIDEY SENSE TINGLING
Tests show the skin can sense extremely subtle pressure, such as a blowing breeze, dripping water, or crawling ants. In some cases, the synthetic skin responded to pressures so gentle that natural human skin wouldn’t be able to detect them.
While the sensing ability of this synthetic skin is remarkable, the team’s research doesn’t address how to transmit the signals to the human brain. Other scientists are working on that, though, so eventually this synthetic skin could give prosthetic wearers the ability to feel forces even their biological-limbed friends can’t detect.
READ MORE: A Skin-Inspired Tactile Sensor for Smart Prosthetics [Science Robotics]
More on synthetic skin: Electronic Skin Lets Amputees Feel Pain Through Their Prosthetics
People Are Zapping Their Brains to Boost Creativity. Experts Have Concerns.
There’s a gadget that some say can help alleviate depression and enhance creativity. All you have to do is place a pair of electrodes on your scalp and the device will deliver electrical current to your brain. It’s readily available on Amazon or you can even make your own.
But in a new paper published this week in the Creativity Research Journal, psychologists at Georgetown University warned that the practice is spreading before we have a good understanding of its health effects, especially since consumers are already buying and building unregulated devices to shock their own brains. They also cautioned that the technique, which scientists call transcranial electrical stimulation (tES), could have adverse effects on the brains of young people.
“There are multiple potential concerns with DIY-ers self-administering electric current to their brains, but this use of tES may be inevitable,” said co-author Adam Green in a press release. “And, certainly, anytime there is risk of harm with a technology, the scariest risks are those associated with kids and the developing brain.”
SHOCK JOCK
Yes, there’s evidence that tES can help patients with depression, anxiety, Parkinson’s disease, and other serious conditions, the Georgetown researchers acknowledge.
But that’s only when it’s administered by a trained health care provider. When administering tES at home, people might ignore safety directions, they wrote, or their home-brewed devices could deliver unsafe amounts of current. And because it’s not yet clear what the effects of tES might be on the still-developing brains of young people, the psychologists advise teachers and parents to resist the temptation to use the devices to encourage creativity in children.
The takeaway: tES is likely here to stay, and it may provide real benefits. But for everyone’s sake, consumer-oriented tES devices should be regulated to protect users.
READ MORE: Use of electrical brain stimulation to foster creativity has sweeping implications [Eurekalert]
More on transcranial electrical stimulation: DARPA’s New Brain Device Increases Learning Speed by 40%
Military Pilots Can Control Three Jets at Once via a Neural Implant
The military is making it easier than ever for soldiers to distance themselves from the consequences of war. When drone warfare emerged, pilots could, for the first time, sit in an office in the U.S. and drop bombs in the Middle East.
Now, one pilot can do it all, just using their mind — no hands required.
Earlier this month, DARPA, the military’s research division, unveiled a project that it had been working on since 2015: technology that grants one person the ability to pilot multiple planes and drones with their mind.
“As of today, signals from the brain can be used to command and control … not just one aircraft but three simultaneous types of aircraft,” Justin Sanchez, director of DARPA’s Biological Technologies Office, said, according to Defense One.
THE SINGULARITY
Sanchez may have unveiled this research effort at a “Trajectory of Neurotechnology” session at DARPA’s 60th anniversary event, but his team has been making steady progress for years. Back in 2016, a volunteer equipped with a brain-computer interface (BCI) was able to pilot an aircraft in a flight simulator while keeping two other planes in formation, all using just his thoughts, a spokesperson from DARPA’s Biological Technologies Office told Futurism.
In 2017, one volunteer, Nathan Copeland, was able to steer a plane through another simulation, this time receiving haptic feedback: if the plane needed to be steered in a certain direction, his neural implant would create a tingling sensation in his hands.
NOT QUITE MAGNETO
There’s a catch. The DARPA spokesperson told Futurism that because this BCI makes use of electrodes implanted in and on the brain’s sensory and motor cortices, experimentation has been limited to volunteers with varying degrees of paralysis. That is: the people steering these simulated planes already had brain electrodes, or at least already had reason to undergo surgery.
To figure out how to make this technology more accessible without requiring the surgical placement of metal probes in people’s brains, DARPA recently launched the Next-Generation Nonsurgical Neurotechnology (N3) program. The plan is to make a device with similar capabilities, but one that looks more like an EEG cap the pilot can take off once a mission is done.
“The envisioned N3 system would be a tool that the user could wield for the duration of a task or mission, then put aside,” said Al Emondi, head of N3, according to the spokesperson. “I don’t like comparisons to a joystick or keyboard because they don’t reflect the full potential of N3 technology, but they’re useful for conveying the basic notion of an interface with computers.”
READ MORE: It’s Now Possible To Telepathically Communicate with a Drone Swarm [Defense One]
More on DARPA research: DARPA Is Funding Research Into AI That Can Explain What It’s “Thinking”
Lab-Grown Bladders Can Save People From a Lifetime of Dialysis
Today, about 10 people on Earth have bladders they weren’t born with. No, they didn’t receive bladder transplants — doctors grew these folks new bladders using the recipients’ own cells.
On Tuesday, the BBC published a report on the still-nascent procedure of transplanting lab-grown bladders. In it, the publication talks to Luke Massella, who underwent the procedure more than a decade ago. Massella was born with spina bifida, which carries with it a risk of damage to the bladder and urinary tract. Now, he lives a normal life, he told the BBC.
“I was kind of facing the possibility I might have to do dialysis [blood purification via machine] for the rest of my life,” he said. “I wouldn’t be able to play sports, and have the normal kid life with my brother.”
All that changed after Anthony Atala, a surgeon at Boston Children’s Hospital, decided he was going to grow a new bladder for Massella.
ONE NEW BLADDER, COMING UP!
To do that, Atala first removed a small piece of Massella’s own bladder. He then extracted cells from this piece of bladder and multiplied them in a petri dish. Once he had enough cells, he coated a scaffold with them and placed the whole thing in a temperature-controlled, high-oxygen environment. After a few weeks, the lab-created bladder was ready for transplantation into Massella.
“So it was pretty much like getting a bladder transplant, but from my own cells, so you don’t have to deal with rejection,” said Massella.
The number of people with lab-grown bladders might still be low enough to count on your fingers, but researchers are making huge advances in growing everything from organs to skin in the lab. Eventually, we might reach a point when we can replace any body part we need to with a perfect biological match that we built ourselves.
READ MORE: “A New Bladder Made From My Cells Gave Me My Life Back” [BBC]
More on growing organs: The FDA Wants to Expedite Approval of Regenerative Organ Therapies