Scientists Now Know Why Psychedelics Conquer Depression Even Without a High

Singularity HUB - June 6, 2023 - 16:00

Everyone is raving about hallucinogens as the future of antidepressants.

LSD (better known as acid), psilocybin (the active ingredient in magic mushrooms), and the “spirit molecule” DMT are all being tested in clinical trials as fast-acting antidepressants.

And I mean fast: when carefully administered by a doctor, they can lift mood in just one session, with the results lasting for months. Meanwhile, patients on traditional antidepressants such as Prozac often wait weeks to see any improvement—if the drugs work at all.

But tripping all day is hardly a practical solution. Unlike Prozac, hallucinogens need to be carefully administered in a doctor’s office, under supervision, and in a comfortable setting for best therapeutic results. It’s a tough sell for busy individuals.

Then there’s the elephant in the room: psychedelics are still classified as Schedule I drugs at the federal level, meaning that, like heroin, their possession and consumption are illegal.

What if we could strip the trip out of psychedelics, but leave their mood-boosting magic?

This week, a new study in Nature Neuroscience suggests it’s possible. Led by Dr. Eero Castrén, a long-time champion of psychedelic research for mental health, the Finnish team dug deep into the molecular machinery that either lifts mood or gives you a trippy head rush.

The results came as a surprise: similar to traditional antidepressants, psychedelics spurred new growth in both baby and mature neurons. But the mind-bending substances were 1,000 times more efficient than Prozac at grabbing onto a key molecular hub, TrkB. With just a single dose, the drugs elevated mood in mice under chronic stress and reduced a previously-established fear. However, in mice genetically stripped of a critical protein site, LSD lost its magic.

It’s still early days for re-configuring psychedelics as antidepressants. But the results “open an avenue for structure-based design” that can skirt unwanted hallucinations while developing fast and long-lasting antidepressants, the team said.

Meet the Players

Think of a neuron as a fast-growing basil plant. It starts as a tiny sprout. With nutrition, it blooms into a bushy wonder. Pruning the plant along the way supports its health and survival.

In neurons, the main nutrient is BDNF, or brain-derived neurotrophic factor—the all-star of brain rejuvenation. In the hippocampus—a brain region critical for memory and mood—it helps nurture new neurons throughout life, cradling neural stem cell “seeds” into maturity. The protein is also essential for rewiring neural networks by pruning connections—a process called neuroplasticity. This fundamental process allows us to learn, adapt, and reason in an ever-changing world. Neuroplasticity is especially important for battling depression, as the condition often “locks” people into negative mindsets.

BDNF doesn’t act alone. It floats outside of neurons. Grabbing onto it is TrkB, a protein that usually lies low inside neurons until it’s time to rise to the top—literally. Once on the surface of neurons, it captures floating BDNF. The union then triggers a cascade of molecules that help the neuron branch and grow. Similar to roots on a growing basil plant, TrkB is key to letting brain cells absorb nutrients to promote growth.

Most conventional antidepressants, such as Celexa, Lexapro, Zoloft, and Prozac, trigger this nurturing pathway. These medications, dubbed SSRIs (selective serotonin reuptake inhibitors), block the recycling of the chemical serotonin, raising its levels in the brain to boost mood.

The downside? Serotonin signaling is also what triggers the hallucinations from psychedelics.

An Atomic Dissection

The new study focused on these two pathways: the nurturing TrkB and the classic serotonin.

In a series of experiments, the team first confirmed that psychedelics grab onto TrkB in cells in petri dishes. Think of TrkB as floating pieces of paper—two need to be united to capture the growth-supporting BDNF. Surprisingly, compared to traditional antidepressants, LSD was far better at gluing the paper pieces together, stabilizing the TrkB pair so that it captured BDNF more readily. One protein site was critical: when mutated, LSD could no longer organize the TrkB duo.

So what?

A further deep dive found that LSD activated the molecular cascade that increases BDNF, in turn nurturing a bushier neuron than traditional antidepressants do. Using fluorescent nano-spy chemicals that glow in the dark, the team could see under the microscope that psychedelics rapidly spurred the neuron into action. With a single dose, LSD moved TrkB into dendritic spines—mushroom-shaped bumps that help neurons connect with each other—a marker for neuroplasticity.

Inside the hippocampus—the hub for learning, memory, and mood regulation—a single shot of the drug boosted the number of newborn neurons in mice after four weeks. Neurogenesis, or the birth of new neurons, is a long-standing marker of antidepressant efficacy.

Here’s the crux: these neuroplasticity effects went away when the team genetically mutated TrkB. Going back to the paper analogy, it’s as if someone shredded one piece of paper so it could no longer latch onto the other.

In contrast, the high remained in mice with mutated TrkB. Although we can’t ask a mouse whether it’s tripping, mice do have a tell: repeated head twitches, as if they were head bobbing to the Grateful Dead. When the team gave them a shot that neutralizes serotonin, the mice came down.

Conclusion? LSD takes two highways in the brain: one, organized by BDNF and TrkB, boosts neural growth and neuroplasticity. The other unleashes serotonin, which helps reorganize neural networks but also triggers a trip.

Keeping Afloat

New neural growth is great. But does it mean anything?

The team put LSD to the test, pitting mice with a critical TrkB mutation—and therefore unable to capture BDNF—against their non-genetically-engineered peers in several challenges.

The first involved a kiddie pool and gauged chronic stress. Mice are natural swimmers. They just don’t like doing it too much. Like being constantly yelled at by a swimming coach, their mood eventually becomes low. With one shot of LSD, the control mice rallied and thrived in their swim tests even a week after the shot. In contrast, those with a mutated TrkB couldn’t bounce back, giving up easily.

In another test mimicking post-traumatic stress disorder (PTSD) and anxiety, a single dose of LSD helped dampen fear in control mice for a specific traumatic environment. The effects lasted for at least four weeks. Mice with a mutated TrkB didn’t fare as well, retaining their anxiety and stress when put back into the same environment throughout the trial.

To be clear, LSD isn’t a magical shot. Similar to other antidepressants that help battle PTSD, it’s all about set and setting. “LSD alone does not bring about fear extinction, as extinction training is required to produce a sustained decrease” in behaviors normally associated with fear in mice, said the authors. In other words, don’t try this at home.

LSD and other hallucinogens have a long battle ahead to shed their stigma and be accepted as antidepressants. But they have a cheerleader: esketamine, one form of the club drug Special K, was approved as an antidepressant in 2019. However, in 2022 the FDA released an alert informing health professionals that other ketamine formulations may put patients at risk, spurring scientists to seek out chemicals with a similar effect but without the high. Taking a page out of that playbook, in the same year a team screened 75 million chemical compounds related to LSD for antidepressant activity without hallucinations.

The new study hints at ways to further reboot psychedelic medicine. Before the drugs were criminalized in the 1960s, the National Institutes of Health funded over 130 studies exploring psychedelics’ potential for mental health. With AI-powered large-scale drug screening and modern biochemical techniques, we’re already in a brave new world.

We can now design a new era of antidepressants that trigger TrkB “with fast and long-lasting antidepressant action, but potentially devoid of hallucinogenic-like activity,” said the team.

Image Credit: Free Fun Art / Pixabay

Category: Transhumanism

Gently Jolting the Brain With Electrical Currents Could Boost Cognitive Function

Singularity HUB - June 5, 2023 - 22:57

Figuring out how to enhance a person’s mental capabilities has been of considerable interest to psychology and neuroscience researchers like me for decades. From improving attention in high-stakes environments, like air traffic management, to reviving memory in people with dementia, the ability to improve cognitive function can have far-reaching consequences. New research suggests that brain stimulation could help achieve the goal of boosting mental function.

In the Reinhart Lab at Boston University, my colleagues and I have been examining the effects of an emerging brain stimulation technology—transcranial alternating current stimulation, or tACS—on different mental functions in patients and healthy people.

During this procedure, people wear an elastic cap embedded with electrodes that deliver weak electrical currents oscillating at specific frequencies to their scalp. By applying these controlled currents to specific brain regions, it is possible to alter brain activity by nudging neurons to fire rhythmically.

Why would rhythmically firing neurons be beneficial? Research suggests that brain cells communicate effectively when they coordinate the rhythm of their firing. Critically, these rhythmic patterns of brain activity show marked abnormalities during neuropsychiatric illnesses. The purpose of tACS is to externally induce rhythmic brain activity that promotes healthy mental function, particularly when the brain might not be able to produce these rhythms on its own.

However, tACS is a relatively new technology, and how it works is still unclear. Whether it can strengthen or revive brain rhythms to change mental function has been a topic of considerable debate in the field of brain stimulation. While some studies find evidence of changes in brain activity and mental function with tACS, others suggest that the currents typically used in people might be too weak to have a direct effect.

When faced with conflicting data in the scientific literature, it can be helpful to conduct a type of study called a meta-analysis that quantifies how consistent the evidence is across several studies. A previous meta-analysis conducted in 2016 found promising evidence for the use of tACS in changing mental function. However, the number of studies has more than doubled since then. The design of tACS technologies has also become increasingly sophisticated.
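The pooling step at the heart of a meta-analysis can be sketched in a few lines. The snippet below is a generic fixed-effect (inverse-variance) illustration with made-up effect sizes—it is not code or data from the tACS studies discussed here:

```python
# Minimal sketch of inverse-variance (fixed-effect) meta-analysis pooling.
# The study numbers below are invented for illustration only.

def pool_effects(effects, variances):
    """Combine per-study effect sizes into one weighted estimate.

    Each study is weighted by the inverse of its variance, so more
    precise studies count for more in the pooled result.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: effect size (e.g., Hedges' g) and its variance.
effects = [0.40, 0.25, 0.55]
variances = [0.04, 0.02, 0.09]

pooled, var = pool_effects(effects, variances)
print(f"pooled effect = {pooled:.3f}, standard error = {var ** 0.5:.3f}")
```

Real meta-analyses typically also model between-study heterogeneity (random-effects models), but the weighted-average intuition is the same.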

We set out to perform a new meta-analysis of studies using tACS to change mental function. To our knowledge, this work is the largest and most comprehensive meta-analysis yet on this topic, consisting of over 100 published studies with a combined total of more than 2,800 human participants.

After compiling over 300 measures of mental function across all the studies, we observed consistent and immediate improvement in mental function with tACS. When we examined specific cognitive functions, such as memory and attention, we observed that tACS produced the strongest improvements in executive function, or the ability to adapt in the face of new, surprising, or conflicting information.

We also observed improvements in the ability to pay attention and to memorize information for both short and long periods of time. Together, these results suggest that tACS could particularly improve specific kinds of mental function, at least in the short term.

To examine the effectiveness of tACS for those particularly vulnerable to changes in mental function, we examined the data from studies that included older adults and people with neuropsychiatric conditions. In both populations, we observed reliable evidence for improvements in cognitive function with tACS.

Interestingly, we also found that a specialized type of tACS that can target two brain regions at the same time and manipulate how they communicate with each other can either enhance or reduce cognitive function. This bidirectional effect on mental function could be particularly useful in the clinic. For example, some psychiatric conditions like depression may involve a reduced ability to process rewards, while others like bipolar disorder may involve a highly active reward processing system. If tACS can change mental function in either direction, researchers may be able to develop flexible and targeted designs that cater to specific clinical needs.

Developments in the field of tACS are bringing researchers closer to being able to safely enhance mental function in a noninvasive way that doesn’t require medication. Current statistical evidence across the literature suggests that tACS holds promise, and improving its design could help it produce stronger, long-lasting changes in mental function.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Gerd Altmann / Pixabay

Category: Transhumanism

A Nasty Virus That Infects Bacteria Could Be Key to Improved Gene Therapies

Singularity HUB - June 4, 2023 - 16:00

Gene therapies could revolutionize medicine, but getting them into peoples’ bodies is harder than it might seem. A new method that re-purposes viruses that infect bacteria could provide a solution.

Finding ways to modify the DNA in the cells of living people could help treat or prevent a host of genetic diseases. It could also help re-purpose their cells to hunt down cancer or produce therapeutic molecules that could treat non-genetic conditions. But while our gene-editing tools are becoming increasingly sophisticated, getting them into peoples’ bodies is complicated.

A few gene therapies exist today, and they mostly use modified viruses, which excel at sneaking their DNA into their hosts’ cells. This makes these so-called viral vectors perfect cargo carriers for the tools and genetic material required to edit genes inside patients’ cells. But the adeno-associated viruses (AAVs) and lentiviruses that are most commonly used have a pretty small carrying capacity, which severely limits the scope of problems they can tackle.

New research from the Catholic University of America has shown that a type of bacteriophage—viruses that infect bacteria—with a much bigger cargo hold can be repurposed to deliver gene therapies. It’s also cheap to make, stable, and easy to program to carry out more complex missions.

“The actual therapy is years down the road, but this research provides a model for developing life-saving treatments and cures,” Venigalla Rao, who led the research, said in a press release. “What we are researching is like a molecular surgery that can safely and precisely correct a defect and generate therapeutic outcomes and someday cures.”

In the hunt for a more capable delivery vehicle, the researchers turned to a phage called T4, which belongs to the Straboviridae family and infects E. coli bacteria. It has a host of promising characteristics, including a much larger capsid (the main compartment where genetic material is stored), an infection efficiency of nearly 100 percent, and the ability to replicate in just 20 to 30 minutes.

What’s more, researchers have already worked out the atomic structures of the phage’s main components, making the re-engineering process much simpler. This made it possible for the group to set up what it called an “assembly-line approach” in which cargo molecules like DNA, proteins, and RNA were sequentially loaded into the empty capsid shells and attached to their outsides. The resulting viral vector is then coated in an envelope of lipid molecules, which makes it easier to infiltrate human cells.

In a paper in Nature Communications, the researchers showed that their engineered phage could hold stretches of DNA up to 171,000 base pairs long, which is roughly 20 times more than viruses used in current gene therapies can hold. To demonstrate the potential, they used this carrying capacity to deliver the entire gene for the protein dystrophin into human cells. Mutations in this gene are responsible for the genetic disorder Duchenne muscular dystrophy.

In a series of experiments, the researchers showed that the viral vector could be used to do genome editing, gene recombination, gene replacement, gene expression, and gene silencing. They also showed that it could carry complex cargoes made up of multiple stretches of DNA aimed at different genes, alongside various proteins and RNA sequences. The researchers say this could ultimately open the door to treating complex diseases that involve multiple genes like many cancers, neurodegenerative disorders, and cardiovascular diseases.

While these early results are certainly promising, Jeffrey Chamberlain at the University of Washington in Seattle told New Scientist that the team has yet to show the viruses can actually deliver genes into the body, rather than simply to human cells in a petri dish. And Rao concedes that there’s still plenty of work to do to make the jump from the lab bench to the clinic.

But the ability to custom engineer viral vectors for a wide range of applications using their assembly line is highly promising. And unlike existing viral vectors, which have to be reared in human cell cultures at considerable cost, the team’s new engineered phage can be grown far more simply in bacteria.

It’s likely to take many more years of research to bring these ideas to fruition, but if successful, this could greatly expand the scope of future gene therapies.

Image Credit: Venigalla B. Rao; Victor Padilla-Sanchez, Andrei Fokine, and Jingen Zhu. Structural model of bacteriophage T4 artificial viral vector.

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through June 3)

Singularity HUB - June 3, 2023 - 16:00

Welcome to the New Surreal. How AI-Generated Video Is Changing Film.
Will Douglas Heaven | MIT Technology Review
“The Frost nails its uncanny, disconcerting vibe in its first few shots. Vast icy mountains, a makeshift camp of military-style tents, a group of people huddled around a fire, barking dogs. It’s familiar stuff, yet weird enough to plant a growing seed of dread. There’s something wrong here. …The Frost is a 12-minute movie in which every shot is generated by an image-making AI. It’s one of the most impressive—and bizarre—examples yet of this strange new genre.”


Nanoscale Robotic ‘Hand’ Made of DNA Could Be Used to Detect Viruses
Michael Le Page | New Scientist
“Xing Wang at the University of Illinois and his colleagues constructed the nanohand using a method called DNA origami, in which a long, single strand of DNA is ‘stapled’ together by shorter DNA pieces that pair with specific sequences on the longer strand. …The four fingers of the nanohand are joined to a ‘palm’ to form a cross shape when the hand is open. Each finger is just 71 nanometers long…and has three joints, like a human finger.”


The ‘Death of Self-Driving Cars’ Has Been Greatly Exaggerated
Timothy B. Lee | Ars Technica
“[Google and Waymo] don’t believe self-driving technology is ‘decades away’ because they’re already testing it in Phoenix and San Francisco. And they are preparing to launch in additional cities in the coming months. Waymo expects to increase passenger rides tenfold between now and the summer of 2024. Cruise is aiming for $1 billion in revenue in 2025, which would require something like a 50-fold expansion of its current service.”


This Is the First X-Ray Taken of a Single Atom
Jennifer Ouellette | Ars Technica
“Atomic-scale imaging emerged in the mid-1950s and has been advancing rapidly ever since—so much so, that back in 2008, physicists successfully used an electron microscope to image a single hydrogen atom. Five years later, scientists were able to peer inside a hydrogen atom using a ‘quantum microscope,’ resulting in the first direct observation of electron orbitals. And now we have the first X-ray taken of a single atom.”


Get Ready for 3D-Printed Organs and a Knife That ‘Smells’ Tumors
Joao Madeiros | Wired
“To doctors and nurses working 75 years ago, when the UK’s National Health Service was founded, a modern ward would be completely unrecognizable. Fast-forward into the future, and hospitals are likely to look very different again. These are some of the changes you’re likely to see in years to come.”


I’m a Rational Optimist. Here’s Why I Don’t Believe in an AI Doomsday.
Rohit Krishnan | BigThink
“The systems of today are powerful. They can write, paint, direct, plan, code, and even write passable prose. And with this explosion of capabilities, we also have an explosion of worries. In seeing some of these current problems and projecting them into future non-extant problems, we find ourselves in a bit of a doom loop. The more fanciful arguments about how artificial superintelligence is inevitable and how they’re incredibly dangerous sit side by side with more understandable concerns about increasing misinformation.”


The Race to Make AI Smaller (and Smarter)
Oliver Whang | The New York Times
“[In January, a group of young AI researchers] called for teams to create functional language models ‌using data sets that are less than one-ten-thousandth the size of those used by the most advanced large language models. A successful mini-model would be nearly as capable as the high-end models but much smaller, more accessible and ‌more compatible with humans. The project is called the BabyLM Challenge.”


Judge Bans AI-Generated Filings In Court Because It Just Makes Stuff Up
Chloe Xiang | Motherboard
“This decision follows an incident where a Manhattan lawyer named Steven A. Schwartz used ChatGPT to write a 10-page brief that cited multiple cases that were made up by the chatbot, such as ‘Martinez v. Delta Air Lines,’ and ‘Varghese v. China Southern Airlines.’ After Schwartz submitted the brief to a Manhattan federal judge, no one could find the decisions or quotations included, and Schwartz later admitted in an affidavit that he had used ChatGPT to do legal research.”


The World Is Finally Spending More on Solar Than Oil Production
Casey Crownhart | MIT Technology Review
“Let’s start with what I consider to be good news: there’s a lot of money going into clean energy—including renewables, nuclear, and things that help cut emissions, like EVs and heat pumps. And not only is it a lot of money, but it’s more than the amount going toward fossil fuels. In 2022, for every dollar spent on fossil fuels, $1.70 went to clean energy. Just five years ago, it was dead even.”


The Quest to Use Quantum Mechanics to Pull Energy Out of Nothing
Charlie Wood | Wired
“In the past year, researchers have teleported energy across microscopic distances in two separate quantum devices, vindicating Hotta’s theory. The research leaves little room for doubt that energy teleportation is a genuine quantum phenomenon. ‘This really does test it,’ said Seth Lloyd, a quantum physicist at the Massachusetts Institute of Technology who was not involved in the research. ‘You are actually teleporting. You are extracting energy.’”

Image Credit: Pawel Czerwinski / Unsplash

Category: Transhumanism

JetZero’s Next-Gen Aircraft Could Change How We Fly for the First Time in Decades

Singularity HUB - June 2, 2023 - 16:00

Air travel is a major source of carbon emissions, accounting for about 2.4 percent of global emissions each year. There are a range of solutions in the works, from electrifying aircraft to using hydrogen fuel to bringing back the airship. The likelihood of any of these coming to fruition varies, and even if they do, it won’t be soon.

A California-based startup called JetZero has a different idea: changing the shape of commercial planes and the material they’re made of. The company unveiled its designs for the midsize commercial and military tanker-transport markets this spring, and has big plans to upend the way air travel looks and feels—as well as how much it costs and how much carbon it emits. Tony Fadell, founder of venture capital firm Build Collective and a JetZero investor and strategic advisor, thinks the company could be the “SpaceX of aviation” due to its potential to disrupt the existing business model.

JetZero’s planes, which are still in the concept/prototype phase, have a blended wing body design. That means the wings merge with the main body of the aircraft, rather than being attached to a hollow tube like the planes we travel in today. Picture the body of a manta ray: wide and flat, it tapers off to a narrower fin at each side, with a head and a tail. A blended wing body aircraft isn’t terribly different, though on JetZero’s models the body isn’t quite as wide.

Besides providing a lot more space, this design is more aerodynamic than tube-and-wing planes. JetZero plans to fly its planes at higher altitudes than today’s norm (40,000 to 45,000 feet rather than 30,000 to 35,000) and says its airframe will cut fuel burn and emissions in half. It plans to make its planes out of carbon fiber and Kevlar (a strong, lightweight fiber used for things like body armor, bulletproof vests, car brakes, boats, and aircraft). Thanks to their lighter weight and improved aerodynamics, the company says, its planes would fly at the same speed and range as existing midbody jetliners while burning half as much fuel.

JetZero points out that we’ve brought the traditional tube-and-wing design about as far as we possibly can in terms of efficiency gains; there’s not much more to be done to make them lighter, faster, or more fuel-efficient. At the same time, jet fuel is getting more expensive, and reducing emissions is getting more urgent. If JetZero is able to bring its blended wing body aircraft to production, it would be the first major overhaul of commercial passenger planes, well, ever. But, the company says, its planes would still fit seamlessly into airport infrastructure, utilizing existing runways and gates without requiring significant alterations.

The company is planning to test a small prototype of its design with a 23-foot wingspan this summer and hopes to secure Air Force funding to build a full-size prototype it would demo in 2027. If all goes to plan, JetZero’s planes would start commercial service in a decade or so.

There are a lot of bumps the company could encounter on the road to get there, but also a lot of incentive to make it happen. The climate impact of air travel is likely to come under increasing fire, as is happening with every industry that contributes significantly to emissions.

Some countries are trying to reduce the amount of air travel their citizens do, like France, which just banned domestic short-haul flights. Other countries including Spain and Germany are considering restricting short-haul flights or imposing an extra tax on them. Measures like these may make a small difference, but when it comes down to it, people are still going to want to travel. In fact, if the global middle class continues to grow, demand for air travel will only go up; at present, only about three percent of the global population takes regular flights.

A more planet-friendly way to do it, then, is going to be imperative. SpaceX has proven that it’s possible for one private company to come along and completely upend a massively complex industry. Could JetZero do the same?

They’re going to try.

Image Credit: JetZero

Category: Transhumanism

Did Life Evolve More Than Once? Researchers Are Closing In on an Answer

Singularity HUB - June 1, 2023 - 16:00

From its humble origin(s), life has infected the entire planet with endless beautiful forms. The genesis of life is the oldest biological event, so old that no clear evidence was left behind other than the existence of life itself. This leaves many questions open, and one of the most tantalizing is how many times life magically emerged from non-living elements.

Has all of life on Earth evolved only once, or are different living beings cut from different cloths? The question of how difficult it is for life to emerge is interesting, not least because it can shed some light on the likelihood of finding life on other planets.

The origin of life is a central question in modern biology, and probably the hardest to study. This event took place four billion years ago, and it happened at a molecular level, meaning little fossil evidence remains.

Many lively beginnings have been suggested, from unsavory primordial soups to outer space. But the current scientific consensus is that life emerged from non-living molecules in a natural process called abiogenesis, most likely in the darkness of deep-sea hydrothermal vents. But if life emerged once, why not more times?

What Is Abiogenesis?

Scientists have proposed various consecutive steps for abiogenesis. We know that early Earth was rich in several chemicals—such as amino acids, nucleotides, and sugars—that are the building blocks of life. Laboratory experiments, such as the iconic Miller-Urey experiment, have shown how these compounds can form naturally under conditions similar to those of early Earth. Some of these compounds could also have come to Earth riding meteorites.

Next, these simple molecules combined to form more complex ones, such as fats, proteins, or nucleic acids. Importantly, nucleic acids—such as double-stranded DNA or its single-stranded cousin RNA—can store the information needed to build other molecules. DNA is more stable than RNA, but in contrast, RNA can be part of chemical reactions in which a compound makes copies of itself—self-replication.

The “RNA world” hypothesis suggests that early life may have used RNA as material for both genes and replication before the emergence of DNA and proteins.

Once an information system can make copies of itself, natural selection kicks in. Some of the new copies of these molecules (which some would call “genes”) will have errors, or mutations, and some of these new mutations will improve the replication ability of the molecules. Therefore, over time, there will be more copies of these mutants than other molecules, some of which will accumulate further new mutations, making them even faster and more abundant, and so on.
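As a toy illustration of this mutation-selection loop—not a model from the article; the replication rates and mutation parameters below are invented—a few lines of Python show how faster replicators come to dominate a population:

```python
import random

random.seed(0)

# Each "molecule" is represented only by its replication rate.
# Start with a population of identical, slow replicators.
population = [1.0] * 200

for generation in range(100):
    # Selection: molecules are sampled in proportion to how fast they
    # replicate, so faster variants leave more copies on average.
    parents = random.choices(population, weights=population, k=len(population))
    # Mutation: occasional copying errors nudge a copy's rate up or down.
    population = [
        max(0.1, p + random.uniform(-0.1, 0.15)) if random.random() < 0.1 else p
        for p in parents
    ]

mean_rate = sum(population) / len(population)
print(f"mean replication rate after 100 generations: {mean_rate:.2f}")
```

Even though each mutation is random, the weighted sampling means above-average replicators leave more descendants, so the population’s mean replication rate drifts upward over the generations—exactly the ratchet described above.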

Eventually, these molecules probably evolved a lipid (fatty) boundary separating the internal environment of the organism from the exterior, forming protocells. Protocells could better concentrate and organize the molecules needed for biochemical reactions, providing a contained and efficient metabolism.

Life on Repeat?

Abiogenesis could have happened more than once. Earth could have birthed self-replicating molecules several times, and maybe early life for thousands or millions of years just consisted of a bunch of different self-replicating RNA molecules, with independent origins, competing for the same building blocks. Alas, due to the ancient and microscopic nature of this process, we may never know.

Many lab experiments have successfully reproduced different stages of abiogenesis, showing they could have happened more than once—but we can’t be certain they actually did in the past.

A related question is whether new life is emerging by abiogenesis as you read this. It’s very unlikely. Early Earth was devoid of life, and its physical and chemical conditions were very different from today’s. Nowadays, if ideal conditions for new self-replicating molecules appeared anywhere on the planet, the molecules would be promptly chomped by existing life.

What we do know is that all extant living beings descend from a single shared last universal common ancestor (also known as LUCA). If there were other ancestors, they left no descendants behind. Key pieces of evidence support the existence of LUCA. All life on Earth uses the same genetic code: the correspondence between triplets of the DNA nucleotides known as A, T, C, and G and the amino acids they encode in proteins. For example, the three-nucleotide sequence ATG always corresponds to the amino acid methionine.
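This shared codon-to-amino-acid mapping can be pictured as a lookup table. The sketch below includes only a handful of the 64 codons (the table entries are standard genetic-code facts; the DNA sequence is an invented example):

```python
# A toy illustration of the universal genetic code: translating DNA
# codons (nucleotide triplets) into amino acids. Only a few of the
# 64 codons are included here for brevity.

CODON_TABLE = {
    "ATG": "Met",   # methionine, also the usual "start" signal
    "TGG": "Trp",   # tryptophan
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TAA": "STOP",  # one of the three stop codons
}

def translate(dna):
    """Read a DNA sequence three letters at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGGCAAATAA"))  # ATG GGC AAA TAA -> ['Met', 'Gly', 'Lys']
```

The remarkable point is that, apart from a few minor lineage-specific tweaks, this same table works for every organism on Earth.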

Theoretically, however, many different genetic codes could have arisen in different lineages. Yet all life on Earth uses the same code, with only a few minor changes in some lineages. Biochemical pathways, such as those used to metabolize food, also support the existence of LUCA; many independent pathways could have evolved in different ancestors, yet some (such as those used to metabolize sugars) are shared across all living organisms. Similarly, hundreds of identical genes are present in wildly different living beings, which can only be explained by their inheritance from LUCA.

My favorite support for LUCA comes from the Tree of Life. Independent analyses, some using anatomy, metabolism, or genetic sequences, have revealed a hierarchical pattern of relatedness that can be represented as a tree. This shows we are more closely related to chimps than to any other living organism on Earth; we and chimps are together more closely related to gorillas, then to orangutans, and so on.

You can pick any random organism, from the lettuce in your salad to the bacteria in your bioactive yogurt and, if you travel back in time far enough, you will share an actual common ancestor. This is not a metaphor, but a scientific fact.

This is one of the most mind-boggling concepts in science, Darwin’s unity of life. If you are reading this text, you are here thanks to an uninterrupted chain of reproductive events going back billions of years. As exciting as it is to think about life repeatedly emerging on our planet, or elsewhere, it is even more exciting to know that we are related to every living being on the planet.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Giovanni Cancemi

Category: Transhumanism

There’s Such a Thing as Vegan Fat Now, and It’s Being Used to Make Pork Belly

Singularity HUB - 31 Květen, 2023 - 16:00

For several years now, we’ve been able to go to grocery stores or restaurants and buy all sorts of products made from plants in imitation of products from animals. Burgers. Chicken. Sausages. Bacon. Now another product is being added to the list, and it’s even more unexpected than fake meat; you could even call it downright bizarre. Companies are working on—and already selling—plant-based fat. Yes, fat.

San Francisco-based Lypid initially launched in Taiwan. The company was founded by two Taiwanese entrepreneurs who met while completing PhDs at Cornell University. Plant-based burgers made with Lypid’s vegan fat are sold at a Taiwanese national chain of cafes. Last month the company announced the launch of its first product available in the US market: plant-based pork belly.

The pork belly is made with Lypid’s PhytoFat, a vegan fat meant to mimic the taste and texture of animal fat. It’s created using vegetable oils and water, but the key is what the company calls “microencapsulation” technology. This means they designed a coating for the oils that gives them a much higher melting point. Much like a vitamin, whose solid outer shell dissolves once it hits your stomach, little nuggets of oil are encased in a capsule that’s resistant to heat up to a certain temperature.

PhytoFat maintains its animal fat-like qualities even when heated to over 329 degrees Fahrenheit (165 degrees Celsius). The company says its pork belly sizzles, smells, and tastes like the real thing, and it can be sauteed, fried, or baked—but it’s 85 percent lower in saturated fat, 39 percent lower in calories, and 69 percent lower in sodium. There are no artificial additives, no hydrogenation, and no trans fats. There are even two different flavors: pork and beef.

Lypid received $4 million in seed funding in early 2022. They’re not the only ones working on an alternative to animal fat. One team of researchers is growing real fat in a lab using animal cells, just like cultured meat is produced starting with muscle cells from a chicken, pig, etc. A Bay Area-based company called Zero Acre Farms is working on a healthier (for humans and the planet) alternative to vegetable oil, produced using microorganisms and fermentation.

Because it’s easier to synthesize fat cells than muscle cells, lab-grown fat could become commercially available before lab-grown meat. At the moment, the cultured meat industry seems to have fallen on hard times as companies realize it’s going to be a lot harder to scale economically than they thought. A recent study claimed that cultured meat could actually be worse for the planet than raising and slaughtering animals: up to 25 times worse.

If killing animals is inhumane and causes pollution, growing animal meat in labs is too expensive and energy-intensive, and fake meat made out of plants is—let’s face it—not nearly as good as the real thing, what might we find ourselves eating in the future?

More people may switch to a vegetarian diet. Cultured meat could eventually make the breakthroughs needed for economic and environmental feasibility. Or, with more products like Lypid’s fake fat, maybe plant-based meat will get closer to the real thing.

Image Credit: Lypid


First-of-Its-Kind Gene Therapy Can Be Applied to Skin Instead of Injected

Singularity HUB - 30 Květen, 2023 - 20:32

Sunburns are terrible. The skin blisters and peels. Even a light brush from putting on clothes or tucking into bed sheets is agony.

Now imagine having those blisters at just six months old. But the sun isn’t the culprit; your genes are.

Thousands of people in the US have dystrophic epidermolysis bullosa (DEB), a rare genetic disorder that affects the structure and integrity of the skin and eyes. Kids with the illness are cursed with skin similar to wet tissue paper. Chronic painful blisters and wounds—sometimes inside their throats—are a part of life since birth.

The root cause is frustratingly simple: one gene mutation, which affects a critical protein that helps support skin integrity. The single genetic error makes the illness a perfect candidate for gene therapy. Yet with the skin already fragile, injections—a current standard for gene therapy—are hard to tolerate.

What about a genetic moisturizer instead?

This month, the FDA approved the first rub-on gene therapy. Similar to aloe vera for treating sunburns, the therapy comes in a gel that’s gently massaged onto blisters and wounds to help with healing. Dubbed Vyjuvek, it directly delivers healthy copies of the mutated gene onto damaged skin. An alternative version is configured into eye drops to reconstruct the eye’s delicate architecture to better support sight.

In multiple clinical trials, with patients ranging from a year old to middle-aged, the treatment reduced painful blisters within six months. With a massage every week, over two-thirds of the patients’ wounds completely healed, compared to just one in five wounds treated with a placebo. The patients’ eyesight also improved, allowing a 13-year-old volunteer to finally play Minecraft online with his teenage peers.

The therapy is the latest addition to the expanding universe of gene therapy delivery technologies. With further development, it won’t be limited to rare skin conditions. Because the therapy targets collagen, a critical protein that helps maintain skin structure and elasticity, the rub-on treatment could launch the next generation of moisturizers to combat fine lines and crow’s feet from aging. Pittsburgh-based Krystal Biotech, which developed Vyjuvek, is already expanding into cosmetics through a subsidiary.

It’s not all superficial. Beauty aside, the FDA nod of approval “ushers in a whole new paradigm to treat genetic diseases,” said Krystal’s CEO Krish S. Krishnan.

The Perfect Package

Vyjuvek joins a prestigious roster of approved gene therapies.

These treatments have mainly battled blood cancers and disorders. Usually, doctors need to extract immune or red blood cells from a patient’s blood. The cells are then genetically enhanced and infused back into the body. Some are amped up to pursue cancer targets. Others help boost hemoglobin in red blood cells, which carries oxygen across the body.

Unlike deeper organs, blood cells are relatively easy to access, making them a valuable resource for genetic tweaks. Just last year, one team expanded gene editing’s potential by infusing CRISPR components directly into blood, which helped brush away a toxic protein made by the liver that leads to pain, numbness, and eventually heart failure in an inherited disease.

On the surface (no pun intended), the skin is another easily accessible target—just think of the thousands of skincare products on the market. Yet our external barrier is also a formidable fortress with multiple protective layers. The top layer, the epidermis, is a flexible biological shield composed of tightly-knit cells, making it difficult for large molecules—including collagen—to penetrate. Current treatments for DEB rely on skin genetically engineered outside the body and then grafted onto patients. It’s as intense as it sounds: patients face anything from a week-long hospital stay to a prolonged medically-induced coma to tolerate the procedure.

The team tackled the conundrum with a cleverly-balanced hand. First was deciphering the genetic error that leads to DEB: a gene called COL7A1, which encodes a type of collagen. Like anchors stabilizing skyscraper scaffolds, COL7 molecules arrange into long, thin but extremely strong bundles to hold the epidermis and the middle layer of skin together. When deficient, the two layers separate, leading to painful blisters and wounds that resemble severe sunburn or frostbite, often from birth.

Unfortunately, COL7A1 is also a tough gene to deliver due to its enormous size. The team chose their carrier “vector” carefully: HSV-1, a type of herpes simplex virus, extensively engineered so it doesn’t replicate inside the body or cause any disease.

The vector is the first gamble for most gene therapies. Think of them as Amazon delivery boxes. Some are extremely efficient at penetrating into cells (or into tiny mailboxes), but can only hold a small payload. Others have greater capacity, but are then dumped on your front yard—easy for others (think immune cells) to see and potentially vandalize.

Previous attempts have collected skin tissue and used viruses for cell engineering and grafting, but “correction of genetic skin diseases via direct gene transfer in vivo [inside the body] has been a longstanding yet unrealized goal in the gene therapy field,” said the team last year in a clinical study for safety.

Their final recipe worked out stunningly well. First, a healthy version of the COL7 gene was genetically packaged into the selected viral vector. The entire architecture was then suspended inside a gel, similar to molecular gastronomy, to stabilize the treatment. The end result was gene therapy inside a moisturizer bottle—a first without the need for needles or other painful procedures.

In a study with 31 patients published last December in the New England Journal of Medicine, the ointment, dubbed B-VEC, healed nearly 70 percent of painful wounds across patients, compared to just roughly 20 percent in those rubbed with a placebo. The treatment also reduced pain throughout the trial.

A New Therapy Landscape

While impressive, the treatment isn’t a cure.

Because the skin readily replaces itself with new cells that carry the same genetic defect, the gel will need to be reapplied at least weekly by a healthcare professional. With gene therapy costing up to millions of dollars, the expenses quickly stack up. But the recipients are grateful.

“With the FDA approval of Vyjuvek the DEB population has reached a monumental milestone in the treatment of this horrible disorder,” said Brett Kopelan, Executive Director of debra, an organization that supports people with the illness. “Our hopes have now been realized for a safe and effective treatment for one of the most devastating symptoms of the disorder.”

On a broader scale, the study widens the gene therapy landscape, firing the first shot at rub-on therapies—with eye drops to quickly follow.

For now, Krystal is branching out into the million-dollar skincare industry to combat aging and damaged skin. It may be tricky: unlike wounds with open skin, normal skin forms a tighter and protective barrier. But if it succeeds, the treatment may open doors to highly efficient skin care—all inside a bottle, no need for a knife.

Image Credit: liyuanalison / Pixabay


Gravitational Wave Detector LIGO Is Finally Back Online With Exciting Upgrades to Make It Way More Sensitive

Singularity HUB - 28 Květen, 2023 - 16:00

After a three-year hiatus, scientists in the US have just turned on detectors capable of measuring gravitational waves—tiny ripples in space itself that travel through the universe.

Unlike light waves, gravitational waves are nearly unimpeded by the galaxies, stars, gas, and dust that fill the universe. This means that by measuring gravitational waves, astrophysicists like me can peek directly into the heart of some of the most spectacular phenomena in the universe.

Since 2020, the Laser Interferometer Gravitational-Wave Observatory—commonly known as LIGO—has been sitting dormant while it underwent some exciting upgrades. These improvements will significantly boost LIGO’s sensitivity and should allow the facility to observe more distant objects that produce smaller ripples in spacetime.

By detecting more of the events that create gravitational waves, there will be more opportunities for astronomers to also observe the light produced by those same events. Seeing an event through multiple channels of information, an approach called multi-messenger astronomy, provides astronomers rare and coveted opportunities to learn about physics far beyond the realm of any laboratory testing.

According to Einstein’s theory of general relativity, massive objects warp space around them. Image Credit: vchal/iStock via Getty Images

Ripples in Spacetime

According to Einstein’s theory of general relativity, mass and energy warp the shape of space and time. The bending of spacetime determines how objects move in relation to one another—what people experience as gravity.

Gravitational waves are created when massive objects like black holes or neutron stars merge with one another, producing sudden, large changes in space. The process of space warping and flexing sends ripples across the universe like a wave across a still pond. These waves travel out in all directions from a disturbance, minutely bending space as they do so and ever so slightly changing the distance between objects in their way.

Even though the astronomical events that produce gravitational waves involve some of the most massive objects in the universe, the stretching and contracting of space is infinitesimally small. A strong gravitational wave passing through the Milky Way may only change the diameter of the entire galaxy by three feet (one meter).

The First Gravitational Wave Observations

Though first predicted by Einstein in 1916, scientists of that era had little hope of measuring the tiny changes in distance postulated by the theory of gravitational waves.

Around the year 2000, scientists at Caltech, the Massachusetts Institute of Technology, and other universities around the world finished constructing what is essentially the most precise ruler ever built—LIGO.

The LIGO detector in Hanford, Wash., uses lasers to measure the minuscule stretching of space caused by a gravitational wave. Image Credit: LIGO Laboratory

LIGO comprises two separate observatories, one located in Hanford, Washington, and the other in Livingston, Louisiana. Each observatory is shaped like a giant L, with two 2.5-mile-long (four-kilometer) arms extending out from the center of the facility at 90 degrees to each other.

To measure gravitational waves, researchers shine a laser from the center of the facility to the base of the L. There, the laser is split so that a beam travels down each arm, reflects off a mirror and returns to the base. If a gravitational wave passes through the arms while the laser is shining, the two beams will return to the center at ever so slightly different times. By measuring this difference, physicists can discern that a gravitational wave passed through the facility.
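The sizes involved can be checked with back-of-the-envelope arithmetic. A gravitational wave of strain h changes a length L by ΔL = h × L; the numbers below use a strain of 10⁻²¹, consistent with the galaxy-stretching example earlier:

```python
# A gravitational wave of strain h stretches a length L by delta_L = h * L.
h = 1e-21                # strain of a strong passing wave
milky_way = 9.5e20       # Milky Way diameter in meters (~100,000 light-years)
ligo_arm = 4_000         # LIGO arm length in meters

galaxy_stretch = h * milky_way   # roughly one meter, as noted above
arm_stretch = h * ligo_arm       # a few thousandths of a proton's width

print(f"galaxy stretch: {galaxy_stretch:.2f} m")
print(f"LIGO arm stretch: {arm_stretch:.1e} m")
```

The arm-length change is about 4 × 10⁻¹⁸ meters, which is why the timing difference between the two returning beams must be measured with such extraordinary precision.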

LIGO began operating in the early 2000s, but it was not sensitive enough to detect gravitational waves. So, in 2010, the LIGO team temporarily shut down the facility to perform upgrades to boost sensitivity. The upgraded version of LIGO started collecting data in 2015 and almost immediately detected gravitational waves produced from the merger of two black holes.

Since 2015, LIGO has completed three observation runs. The first, run O1, lasted about four months; the second, O2, about nine months; and the third, O3, ran for 11 months before the COVID-19 pandemic forced the facilities to close. Starting with run O2, LIGO has been jointly observing with an Italian observatory called Virgo.

Between each run, scientists improved the physical components of the detectors and data analysis methods. By the end of run O3 in March 2020, researchers in the LIGO and Virgo collaboration had detected about 90 gravitational waves from the merging of black holes and neutron stars.

The observatories still have not achieved their maximum design sensitivity. So, in 2020, both shut down for upgrades yet again.

Upgrades to the mechanical equipment and data processing algorithms should allow LIGO to detect fainter gravitational waves than in the past. Image Credit: LIGO/Caltech/MIT/Jeff Kissel, CC BY-ND

Making Some Upgrades

Scientists have been working on many technological improvements.

One particularly promising upgrade involved adding a 1,000-foot (300-meter) optical cavity to improve a technique called squeezing. Squeezing allows scientists to reduce detector noise using the quantum properties of light. With this upgrade, the LIGO team should be able to detect much weaker gravitational waves than before.

My teammates and I are data scientists in the LIGO collaboration, and we have been working on a number of upgrades to the software used to process LIGO data and to the algorithms that recognize signs of gravitational waves in that data. These algorithms work by searching for patterns that match theoretical models of millions of possible black hole and neutron star merger events. The improved algorithms should be able to pick out the faint signs of gravitational waves from background noise more easily than previous versions could.
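The pattern-matching idea can be illustrated with a minimal matched filter in numpy. The "chirp" template and noise levels here are invented for illustration; real LIGO searches use carefully modeled waveform banks and whitened detector data:

```python
import numpy as np

rng = np.random.default_rng(42)

# A crude stand-in for a merger template: a decaying chirp whose
# frequency sweeps upward, like the final orbits before a merger.
t = np.linspace(0, 1, 1024)
template = np.sin(2 * np.pi * (20 * t + 30 * t**2)) * np.exp(-3 * t)

# Bury the template in noise much louder than the signal itself.
data = rng.normal(0, 10, 8192)
inject_at = 3000
data[inject_at:inject_at + template.size] += 10 * template

# Matched filtering: slide the template along the data and correlate.
# The correlation peaks where the data best matches the template's shape.
snr = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(np.abs(snr)))
print(f"injected at sample {inject_at}, recovered near sample {recovered}")
```

Even though the injected signal is invisible to the eye in the raw noisy data, the correlation against a known template recovers its location, which is the core trick behind LIGO's search pipelines.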

Astronomers have captured both the gravitational waves and light produced by a single event, the merger of two neutron stars. The change in light can be seen over the course of a few days in the top right inset. Image Credit: Hubble Space Telescope, NASA and ESA

A Hi-Def Era of Astronomy

In early May 2023, LIGO began a short test run—called an engineering run—to make sure everything was working. On May 18, LIGO detected gravitational waves likely produced from a neutron star merging into a black hole.

LIGO’s 20-month observation run O4 officially started on May 24, and it will later be joined by Virgo and a new Japanese observatory—the Kamioka Gravitational Wave Detector, or KAGRA.

While there are many scientific goals for this run, there is a particular focus on detecting and localizing gravitational waves in real time. If the team can quickly identify a gravitational wave event, figure out where the waves came from, and alert other astronomers to the discovery, astronomers could point telescopes that collect visible light, radio waves, or other types of data at the source of the gravitational wave. Collecting multiple channels of information on a single event—multi-messenger astrophysics—is like adding color and sound to a black-and-white silent film and can provide a much deeper understanding of astrophysical phenomena.

Astronomers have only observed a single event in both gravitational waves and visible light to date—the merger of two neutron stars seen in 2017. But from this single event, physicists were able to study the expansion of the universe and confirm the origin of some of the universe’s most energetic events known as gamma-ray bursts.

With run O4, astronomers will have access to the most sensitive gravitational wave observatories in history and hopefully will collect more data than ever before. My colleagues and I are hopeful that the coming months will result in one—or perhaps many—multi-messenger observations that will push the boundaries of modern astrophysics.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA’s Goddard Space Flight Center/Scott Noble; simulation data, d’Ascoli et al. 2018


This Week’s Awesome Tech Stories From Around the Web (Through May 27)

Singularity HUB - 27 Květen, 2023 - 16:00

IBM Wants to Build a 100,000-Qubit Quantum Computer
Michael Brooks | MIT Technology Review
“Late last year, IBM took the record for the largest quantum computing system with a processor that contained 433 quantum bits, or qubits, the fundamental building blocks of quantum information processing. Now, the company has set its sights on a much bigger target: a 100,000-qubit machine that it aims to build within 10 years.”


Scientists Use AI to Discover New Antibiotic to Treat Deadly Superbug
Maya Yang | The Guardian
“After scientists trained the AI model, they used it to analyze 6,680 compounds that it had previously not encountered. The analysis took an hour and a half and ended up producing several hundred compounds, 240 of which were then tested in a laboratory. Laboratory testing ultimately revealed nine potential antibiotics, including abaucin. The scientists then tested the new molecule against A. baumannii in a wound infection model in mice and found that the molecule suppressed the infection.”


Nvidia Is Poised to Join $1 Trillion Club Thanks to AI-Driven Surge
Sharon Goldman | VentureBeat
“Nvidia’s stock soared nearly 30% after it announced its first-quarter financial results yesterday, setting the stage for Nvidia to become only the fifth publicly traded US company to be currently worth $1 trillion—joining Apple, Microsoft, Alphabet, and Amazon. And it’s all thanks to the hunger for high-powered AI chips in the era of generative AI.”


A Paralyzed Man Can Walk Naturally Again With Brain and Spine Implants
Oliver Whang | The New York Times
“In a study published on Wednesday in the journal Nature, researchers in Switzerland described implants that provided a ‘digital bridge’ between Mr. Oskam’s brain and his spinal cord, bypassing injured sections. The discovery allowed Mr. Oskam, 40, to stand, walk and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he has retained these abilities and has actually showed signs of neurological recovery, walking with crutches even when the implant was switched off.”


Humanoid Robots Are Coming of Age
Will Knight | Wired
“Eight years ago, the Pentagon’s Defense Advanced Research Projects Agency organized a painful-to-watch contest that involved robots slowly struggling (and often failing) to perform a series of human tasks, including opening doors, operating power tools, and driving golf carts. …Today the descendants of those hapless robots are a lot more capable and graceful. Several startups are developing humanoids that they claim could, in just a few years, find employment in warehouses and factories.”


Scientists Working to Generate Electricity From Thin Air Make Breakthrough
Becky Ferreira | Motherboard
“Scientists have invented a device that can continuously generate electricity from thin air, offering a glimpse of a possible sustainable energy source that can be made of almost any material and runs on the ambient humidity that surrounds all of us, reports a new study.”


How NASA Plans to Melt the Moon—and Build on Mars
Khari Johnson | Wired
“In June a four-person crew will enter a hangar at NASA’s Johnson Space Center in Houston, Texas, and spend one year inside a 3D printed building. Made of a slurry that—before it dried—looked like neatly laid lines of soft-serve ice cream, Mars Dune Alpha has crew quarters, shared living space, and dedicated areas for administering medical care and growing food.”


Replica Unveils AI-Powered Smart NPCs for Unreal Engine
Dean Takahashi | VentureBeat
“The smart NPCs are powered by OpenAI or the user’s own AI language model, and Replica’s library of over 120 ethically licensed AI voices, allowing game developers to develop games at scale and create new dynamic gaming experiences. …In Replica’s smart NPC experience, AI-powered NPCs will dynamically respond to the player’s in-game voice in real time, the company said. Characters will change their dialogue, emotional tone and body gestures in reaction to how the player speaks to them.”


‘Fluxonium’ Is the Longest Lasting Superconducting Qubit Ever
Karmela Padavic-Callaghan | NewScientist
“Somoroff says that the best transmon qubits have coherence times of hundreds of microseconds, but he and his team measured about 1.48 milliseconds for their fluxonium qubit. They also determined that they could change their qubit’s state, something that would have to happen many times during a computation on a fluxonium quantum computer, with 99.991 per cent fidelity. This makes the fluxonium qubit one of the most reliable qubits that exists, almost always changing states exactly as instructed.”


Some Neural Networks Learn Language Like Humans
Steve Nadis | Quanta
“The researchers—led by Gašper Beguš, a computational linguist at the University of California, Berkeley—compared the brain waves of humans listening to a simple sound to the signal produced by a neural network analyzing the same sound. The results were uncannily alike. ‘To our knowledge,’ Beguš and his colleagues wrote, the observed responses to the same stimulus ‘are the most similar brain and ANN signals reported thus far.’”

Image Credit: Maxim Berg / Unsplash


Generative AI Reconstructs Videos People Are Watching by Reading Their Brain Activity

Singularity HUB - 26 Květen, 2023 - 16:28

The ability of machines to read our minds has been steadily progressing in recent years. Now, researchers have used AI video generation technology to give us a window into the mind’s eye.

The main driver behind attempts to interpret brain signals is the hope that one day we might be able to offer new windows of communication for those in comas or with various forms of paralysis. But there are also hopes that the technology could create more intuitive interfaces between humans and machines that could also have applications for healthy people.

So far, most research has focused on recreating patients’ internal monologues, using AI systems to pick out the words they’re thinking of. The most promising results have come from invasive brain implants, an approach unlikely to be practical for most people.

Now though, researchers from the National University of Singapore and the Chinese University of Hong Kong have shown that they can combine non-invasive brain scans and AI image generation technology to create short snippets of video that are uncannily similar to clips that the subjects were watching when their brain data was collected.

The work is an extension of research the same authors published late last year, where they showed they could generate still images that roughly matched the pictures subjects had been shown. This was achieved by first training one model on large amounts of data collected using fMRI brain scanners. This model was then combined with the open-source image generation AI Stable Diffusion to create the pictures.

In a new paper published on the preprint server arXiv, the authors take a similar approach but adapt it so the system can interpret streams of brain data and convert them into videos rather than stills. First, they trained a model on large amounts of fMRI data so it could learn the general features of these brain scans. This model was then augmented so it could process a succession of fMRI scans rather than individual ones, and then trained again on combinations of fMRI scans, the video snippets that elicited that brain activity, and text descriptions.

Separately, the researchers adapted the pre-trained Stable Diffusion model to produce video rather than still images. It was then trained again on the same videos and text descriptions that the first model had been trained on. Finally, the two models were combined and fine-tuned together on fMRI scans and their associated videos.
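None of the models involved are reproducible here, but the core decoding idea—learning a mapping from brain-scan features into the embedding space a generative model conditions on—can be sketched with synthetic data. The dimensions, the hidden linear relation, and the least-squares fit below are drastic simplifications; the actual system uses deep networks and Stable Diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: "fMRI" feature vectors and the embedding vectors
# a generative model would condition on. Here they are linked by a
# hidden linear relation plus noise; in reality both come from trained
# deep models and the relationship is far more complex.
n_scans, fmri_dim, embed_dim = 500, 300, 64
hidden_map = rng.normal(size=(fmri_dim, embed_dim))
fmri = rng.normal(size=(n_scans, fmri_dim))
embeddings = fmri @ hidden_map + 0.1 * rng.normal(size=(n_scans, embed_dim))

# Fit a decoding map on the first 400 "scans" by least squares...
W, *_ = np.linalg.lstsq(fmri[:400], embeddings[:400], rcond=None)

# ...then decode held-out scans into predicted embeddings, which a
# generative model could in principle turn into images or video frames.
pred = fmri[400:] @ W
corr = np.corrcoef(pred.ravel(), embeddings[400:].ravel())[0, 1]
print(f"held-out correlation between decoded and true embeddings: {corr:.2f}")
```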

The resulting system was able to take fresh fMRI scans it hadn’t seen before and generate videos that broadly resembled the clips human subjects had been watching at the time. While far from a perfect match, the AI’s output was generally pretty close to the original video, accurately recreating crowd scenes or herds of horses and often matching the color palette.

To evaluate their system, the researchers used a video classifier designed to assess how well the model had understood the semantics of the scene—for instance, whether it had realized the video was of fish swimming in an aquarium or a family walking down a path—even if the imagery was slightly different. Their model scored 85 percent, which is a 45 percent improvement over the state-of-the-art.

While the videos the AI generates are still glitchy, the authors say this line of research could ultimately have applications in both basic neuroscience and also future brain-machine interfaces. However, they also acknowledge potential downsides to the technology. “Governmental regulations and efforts from research communities are required to ensure the privacy of one’s biological data and avoid any malicious usage of this technology,” they write.

That is likely a nod to concerns that AI brain scanning technology could make it possible to intrusively record others’ thoughts without their consent. Anxieties were also voiced earlier this year when researchers used a similar approach to essentially create a rough transcript of the voice inside people’s heads, though experts have pointed out that this would be impractical, if not impossible, for the foreseeable future.

But whether you see it as a creepy invasion of your privacy or an exciting new way to interface with technology, it seems machine mind readers are edging closer to reality.

Image Credit: Claudia Dewald from Pixabay


Can 3D-Printed Homes Be Built for Under $99,000? ICON Wants You to Figure It Out

Singularity HUB - 25 Květen, 2023 - 16:00

From Kenya to Mexico, Texas, and beyond, 3D-printed houses are starting to go up all over the world. Besides providing a durable and aesthetically pleasing structure, one of the biggest goals of 3D-printed homes is affordability. The technology replaces part of the human labor needed for building, cutting the cost of construction and lowering the home’s final price tag.

But so far this seems to be harder than anticipated; while a handful of 3D-printed homes have been priced well below their conventionally-built competitors, many others have sold at parity or only slightly undercut average market prices. Given that the US has a housing shortage of somewhere between 2.3 and 6.5 million homes (depending on whether multi-family construction is included), we’re going to need to do a lot better than that.

Construction technology company ICON is aiming to find a way forward. The company (which is currently building a community of 100 3D-printed homes outside Austin, Texas) is launching a competition for disruptive solutions for affordable housing. Called Initiative 99, the contest will call for 3D-printed home designs that can be built for under $99,000.

ICON released the submission details for the contest yesterday. Its parameters are fairly general, which opens up a lot of possibilities for different designs. All homes must have a minimum of one bedroom and one bathroom—sorry, studios—within flexible square footage. They have to be designed with a target group in mind, such as young families, people who were formerly homeless, the elderly, etc. The homes also have to comply with residential building code requirements, and be possible to build using ICON’s Vulcan 3D printer.

Outside of those must-haves, participants are encouraged to think about how their design could be scaled, i.e. for a community of 20 or more homes. They should take climate and sustainability into account. Rainwater collection system? Great. Solar panel ready? Even better. Hypothetical occupants should be able to expect “low, consistent, and predictable utility bills over the entire year” because the homes should be as energy-efficient as possible.

In terms of costs, the $99,000 threshold must include printing, additional construction costs, and finish-out costs (like mechanical, electrical, and plumbing systems). It doesn’t include land, labor, utility connections, or permits.
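Taken together, the cost rules above amount to a simple budget check. The sketch below encodes them as a hypothetical validator; the category names, function names, and sample figures are invented for illustration, and only the $99,000 cap and the included/excluded split come from ICON’s published parameters as described here.

```python
# Cost categories that count toward the Initiative 99 cap, per the rules
# described above. Category names themselves are invented labels.
INCLUDED = {"printing", "construction", "finish_out"}
EXCLUDED = {"land", "labor", "utility_connections", "permits"}

CAP = 99_000  # the contest's dollar threshold

def qualifying_cost(line_items: dict) -> int:
    """Sum only the cost categories the contest counts toward the cap."""
    unknown = set(line_items) - INCLUDED - EXCLUDED
    if unknown:
        raise ValueError(f"unrecognized categories: {unknown}")
    return sum(v for k, v in line_items.items() if k in INCLUDED)

def within_budget(line_items: dict) -> bool:
    return qualifying_cost(line_items) <= CAP

# A made-up sample design: land and permits don't count against the cap.
design = {
    "printing": 45_000, "construction": 30_000, "finish_out": 20_000,
    "land": 60_000, "permits": 4_000,
}
print(within_budget(design))  # qualifying cost is $95,000 -> True
```

The point of separating the two sets is that a design’s total outlay can exceed $99,000 as long as the counted categories stay under it.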

The most expensive parts of building a house the conventional way are labor and materials, with labor topping the list. You need carpenters, plumbers, electricians, and roofers. Window installers. Kitchen cabinet specialists. Someone to lay the concrete foundation. For custom homes, an architect. And the list goes on.

One of the biggest ways to save on these labor costs is to use a fixed design and panelize the building. Studs for the interior walls—that is, the two-by-fours that are put up to create rooms, then covered by drywall—can be prefabricated. Floors can be panelized too (not the finishes, like hardwood or carpet, but the structure).

Companies have come up with all sorts of creative ideas to reduce homebuilding costs in ways similar to this. Boxabl makes prefabricated “foldable” homes that can ship on an eight-foot footprint, and they start at $49,500 (though that’s for a 400-square-foot studio). Similarly, NODE makes prefab homes that ship in kits and are then assembled like Ikea furniture. Vantem Global makes energy-efficient prefabricated homes out of structural panels, and Automatic Construction is trying to build houses by pumping concrete into inflatable forms.

What sorts of similarly innovative ideas might builders and entrepreneurs come up with for ICON’s competition? How much can be done within 3D printing as a building technology to further reduce its costs and make it more scalable, all while producing appealing, comfortable homes?

Entrants will have their work cut out for them. Their design submissions won’t just be judged on constructability and innovation—the judges will also consider aesthetics, sustainability, cost, and scalability. The winning design will receive $75,000, second place $50,000, and third place $35,000. Submissions open this summer.

Image Credit: ICON


Virtual Reality Could Soon Include Smells Thanks to New Wireless Scent Interface

Singularity HUB - 24 May, 2023 - 16:00

Virtual reality experiences depend on goggles and headphones, transporting wearers to new places using sight and sound. Be it a peaceful meadow where the only sounds are birds chirping and the breeze blowing through the grass, or a packed stadium with thousands of fans cheering on a pro football team, what you see and hear are key components of an immersive experience.

But they’re not the only ones. Multiple companies are working on haptic devices, like gloves or vests, to add a sense of touch to virtual experiences. And now, researchers are aiming to integrate a fourth sense: smell.

How much more real might that peaceful meadow feel if you could smell the wildflowers and the damp earth around you? How might the scent of an ocean breeze amplify a VR experience that takes place on a boat or a beach?

Scents have a powerful effect on the brain, eliciting emotions, memories, and sometimes even fight-or-flight responses. You may feel nostalgic when you catch the cologne or perfume a favorite grandparent wore, comforted by a whiff of a favorite food, or extra-alert to your surroundings if it smells like something’s burning.

If proponents’ vision of the metaverse comes to pass, integrating scent will help make the virtual world more immersive and realistic. A team at Beihang University in China published a paper in Nature Communications this month describing a system to make it happen. Their wearable interface uses an odor generator to produce specific smells during virtual experiences.

The team created two different versions of the “olfaction interface”: one that users stick onto the patch of skin between their nose and mouth, and another that’s strapped on like a face mask. The interfaces contain odor generators in the form of miniaturized containers of paraffin wax infused with different scents. These can activate individually or be combined to create many unique smells (though the face mask version is much more versatile, with nine odor generators to the on-skin version’s two).

The scents reach the device’s wearer via an actuator and heat source that starts to melt the wax, causing it to release its scent, like a candle. The researchers claim it only takes 1.44 seconds for a scent to be generated and reach the device wearer’s nose. To make the scent stop or transition to a different one—say you’ve left the meadow and are now walking along a paved road upon which there’s a chocolate factory (mmmm)—a copper coil kicks a magnet to cover the wax and cool it down.
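The control logic such a device needs can be sketched in a few lines. The class below is a toy model of the mask-style interface, not the researchers’ actual firmware; every class and method name is invented for illustration, and only the nine-generator count and the 1.44-second delivery latency come from the reporting above.

```python
# Toy model of the mask-style olfaction interface described above.
LATENCY_S = 1.44  # reported time for a generated scent to reach the nose

class OlfactionInterface:
    def __init__(self, n_generators=9):
        self.scents = [None] * n_generators  # wax scent loaded in each slot
        self.active = set()                  # slots currently being heated

    def load(self, slot, scent):
        self.scents[slot] = scent

    def activate(self, slots):
        """Heat the chosen wax containers; their combined scent reaches
        the wearer after roughly LATENCY_S seconds."""
        self.active = set(slots)
        return sorted(self.scents[s] for s in self.active)

    def deactivate(self):
        # Models the copper coil snapping a magnet over the wax to cool it.
        self.active.clear()

iface = OlfactionInterface()
iface.load(0, "wildflowers")
iface.load(1, "damp earth")
print(iface.activate([0, 1]))  # ['damp earth', 'wildflowers']
iface.deactivate()
```

Mixing is modeled simply as heating several slots at once, which mirrors how the paper describes combining generators to produce compound smells.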

Image Credit: Xinge Yu et al.

It may make users nervous to have a device on their face that gets hot enough to melt wax. The researchers say their interface won’t burn wearers—or even come close to doing so—thanks to an open design that ventilates warm air. There’s also a piece of silicone built in to create a barrier between the interface and wearers’ skin.

In a test with 11 volunteers, the on-skin interface reached a temperature of 90°F; that’s lower than human body temperature, but not exactly cool and comfortable. The team says they’re working on solutions to make the interface run at lower temperatures. They also have yet to figure out how to program the odor generators in a way that would seamlessly integrate with VR headsets, and release the relevant scents at appropriate times.

Nonetheless, their design is a step forward. “This is quite an exciting development,” said Jas Brooks, a PhD candidate at the University of Chicago’s Human-Computer Integration Lab who has studied chemical interfaces and smell, who was not involved in the study. “It’s tackling a core problem with smell in VR: How do we miniaturize this, make it not messy, and not use liquid?”

Imagine wearing a scent-releasing device while watching The Great British Baking Show or Top Chef. If those shows were addictive (and hunger-inducing) to begin with, being able to smell the cooks’ and bakers’ creations might make us all run out to buy the closest match we can find—or the ingredients to make it ourselves.

That brings us to the final sense that may eventually be added to virtual reality: taste.

Image Credit: SimpleB /


ChatGPT Can’t Think—Consciousness Is Something Entirely Different to Today’s AI

Singularity HUB - 23 May, 2023 - 16:00

There has been shock around the world at the rapid rate of progress with ChatGPT and other artificial intelligence created with what’s known as large language models (LLMs). These systems can produce text that seems to display thought, understanding, and even creativity.

But can these systems really think and understand? This is not a question that can be answered through technological advance, but careful philosophical analysis and argument tell us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution.

In 1950, the father of modern computing, Alan Turing, published a paper that laid out a way of determining whether a computer thinks. This is now called “the Turing test.” Turing imagined a human being engaged in conversation with two interlocutors hidden from view: one another human being, the other a computer. The game is to work out which is which.

If a computer can fool 70 percent of judges in a 5-minute conversation into thinking it’s a person, the computer passes the test. Would passing the Turing test—something that now seems imminent—show that an AI has achieved thought and understanding?
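The pass criterion as described above is easy to state precisely. The sketch below encodes it; the function name and sample verdicts are illustrative, and the 70 percent threshold is the article’s framing of Turing’s benchmark.

```python
# The article's pass criterion: the machine passes if at least 70 percent
# of judges, after a five-minute conversation, mistake it for a human.

def passes(verdicts, threshold=0.70):
    """verdicts: one boolean per judge, True if that judge was fooled."""
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) >= threshold

print(passes([True, True, True, False]))   # 75% of judges fooled -> True
print(passes([True, False, False, False])) # 25% of judges fooled -> False
```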

Chess Challenge

Turing dismissed this question as hopelessly vague, and replaced it with a pragmatic definition of “thought,” whereby to think just means passing the test.

Turing was wrong, however, when he said the only clear notion of “understanding” is the purely behavioral one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of “understanding” that’s tied to consciousness. To understand in this sense is to consciously grasp some truth about reality.

In 1997, the Deep Blue AI beat chess grandmaster Garry Kasparov. On a purely behavioral conception of understanding, Deep Blue had knowledge of chess strategy that surpassed any human being’s. But it was not conscious: it didn’t have any feelings or experiences.

Humans consciously understand the rules of chess and the rationale of a strategy. Deep Blue, in contrast, was an unfeeling mechanism that had been trained to perform well at the game. Likewise, ChatGPT is an unfeeling mechanism that has been trained on huge amounts of human-made data to generate content that seems like it was written by a person.

It doesn’t consciously understand the meaning of the words it’s spitting out. If “thought” means the act of conscious reflection, then ChatGPT has no thoughts about anything.

Time to Pay Up

How can I be so sure that ChatGPT isn’t conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the “neural correlates of consciousness” in 25 years.

By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It’s about time Koch paid up, as there is zero consensus that this has happened.

This is because consciousness can’t be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects’ testimony, or on external markers of consciousness. But there are multiple ways of interpreting the data.

Some scientists believe there is a close connection between consciousness and reflective cognition—the brain’s ability to access and use information to make decisions. This leads them to think that the brain’s prefrontal cortex—where the high-level processes of acquiring knowledge take place—is essentially involved in all conscious experience. Others deny this, arguing instead that consciousness arises in whichever local brain region handles the relevant sensory processing.

Scientists have good understanding of the brain’s basic chemistry. We have also made progress in understanding the high-level functions of various bits of the brain. But we are almost clueless about the bit in between: how the high-level functioning of the brain is realized at the cellular level.

People get very excited about the potential of scans to reveal the workings of the brain. But fMRI (functional magnetic resonance imaging) has very low resolution: every voxel in a brain scan corresponds to about 5.5 million neurons, which means there’s a limit to how much detail these scans can show.
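A quick back-of-envelope calculation shows what that resolution limit means in practice. The 5.5-million-neurons-per-scan-element figure comes from the passage above; the roughly 86 billion total neurons in a human brain is a commonly cited estimate and an assumption here.

```python
# How many fMRI scan elements does it take to cover a whole brain at the
# resolution quoted above? The ~86 billion total is an assumed estimate.
neurons_total = 86e9        # commonly cited neuron count for a human brain
neurons_per_voxel = 5.5e6   # figure quoted in the passage above

voxels = neurons_total / neurons_per_voxel
print(f"{voxels:,.0f}")     # ~15,636 scan elements for the whole brain
```

In other words, the scanner summarizes the entire brain in only tens of thousands of data points, each averaging over millions of neurons.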

I believe progress on consciousness will come when we understand better how the brain works.

Pause in Development

As I argue in my forthcoming book Why? The Purpose of the Universe, consciousness must have evolved because it made a behavioral difference. Conscious systems must behave differently from, and hence survive better than, systems that lack consciousness.

If all behavior were determined by underlying chemistry and physics, natural selection would have had no reason to make organisms conscious; we would have evolved as unfeeling survival mechanisms.

My bet, then, is that as we learn more about the brain’s detailed workings, we will precisely identify which areas of the brain embody consciousness. This is because those regions will exhibit behavior that can’t be explained by currently known chemistry and physics. Already, some neuroscientists are seeking potential new explanations for consciousness to supplement the basic equations of physics.

While the processing of LLMs is now too complex for us to fully understand, we know that it could in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious.

There are many dangers posed by AI, and I fully support the recent call by tens of thousands of people, including tech leaders Steve Wozniak and Elon Musk, to pause development to address safety concerns. The potential for fraud, for example, is immense. However, the argument that near-term descendants of current AI systems will be super-intelligent, and hence a major threat to humanity, is premature.

This doesn’t mean current AI systems aren’t dangerous. But we can’t correctly assess a threat unless we accurately categorize it. LLMs aren’t intelligent. They are systems trained to give the outward appearance of human intelligence. Scary, but not that scary.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Gerd Altmann from Pixabay


Silicon Valley Is Reviving the Dream of General-Purpose Humanoid Robots

Singularity HUB - 22 May, 2023 - 16:00

Robots are nothing new. They build our cars, vacuum our floors, prepare our e-commerce orders, and even help carry out surgeries. But now the sci-fi vision of a general-purpose humanoid robot seems to be edging closer.

While disembodied artificial intelligence has seen rapid improvements in performance in recent years, most robots are still relatively dumb. For the most part, they are used for highly specialized purposes, the environments they operate in are carefully controlled, and they are not particularly autonomous.

That’s because operating in the messy uncertainty of the real world remains difficult for current AI approaches. As impressive as the recent feats of large language models have been, they are dealing with a fairly limited palette of data types that are fed to them in predictable ways.

The real world is messy and multi-faceted. A general-purpose robot needs to integrate input from multiple data sources, understand how those inputs vary at different times of the day or in different kinds of weather, predict the behavior of everything from humans to pets to vehicles, and then sync this all up with the challenging tasks of locomotion and object manipulation.

That kind of flexibility has so far eluded AI. That’s why, despite billions of dollars of investment, companies like Waymo and Cruise are still struggling to roll out autonomous vehicles even in the more restricted domain of driving.

If company announcements are anything to go by, though, many in Silicon Valley think that’s about to change. The last few months have seen a flurry of announcements from companies touting autonomous humanoid robots that could soon take on a broad gamut of tasks that currently only humans can perform.

Most recent was Sanctuary’s announcement of its new Phoenix robot last week. The company has already shown that, when tele-operated by a human, its robots can carry out more than 100 tasks in a retail environment, like packing merchandise, cleaning, and labeling products. But the new robot, which is bipedal, stands five feet seven inches tall and has a hand nearly as dexterous as a human’s. It is designed to eventually be completely autonomous.

The company plans to get there in increments, according to IEEE Spectrum. Their first step is to record the motion of humans doing all kinds of activities, then use this to build better tele-operated robots. They will gradually begin to automate some of the most common sub-tasks, while the human operator still takes care of the most complex ones. As time goes on, the company hopes to automate more and more tasks until the operator is essentially just supervising and directing. Ultimately, the goal is to be able to remove the operator completely.

It seems that human workers training their robot replacements is a popular approach. A video released by Tesla last week showed off a bunch of new features for the latest version of its Optimus robot, including improved object manipulation, environment navigation, and fine motor control. But it also included footage of engineers wearing motion capture equipment to teach the robot how to complete various tasks.

Tesla’s robot still seemed fairly slow and wobbly compared to the slick demos we’ve become used to seeing from Boston Dynamics, the original humanoid robot company. But as impressive as those demos have become, Boston Dynamics has struggled to find commercial applications for its technology. And perhaps companies with a firmer sense of what’s needed in industry or by consumers will have more luck in making humanoid robots a commercial reality.

In that vein, news of a secret robot project at Amazon also recently broke. The company has successfully deployed robots in its warehouses for many years, but its first attempt at a domestic robot called Astro was somewhat of a flop. But now, according to Insider, the tech giant is apparently planning to use large language models (LLMs) to boost the capabilities of its next-generation helper bot.

Code-named Burnham, the device will supposedly take advantage of the emergent problem-solving capabilities seen in the largest language models to improve things like conversational fluency, social awareness, and problem-solving ability.

Astro is still pretty much just a screen on wheels, so it’s not going to be fetching your morning coffee. But some of the potential applications Insider references include alerting the owner if the robot finds a stove left burning unattended, helping find lost car keys, or monitoring whether the kids have friends over after school.

Amazon might not be the only company looking to see how LLMs can push robotics forward. It was recently announced that ChatGPT creator OpenAI led a multi-million-dollar investment round in Norwegian company 1X, which is preparing to unveil a bipedal robot called NEO. While details were scant, it’s not hard to imagine that the AI leader is keen to find ways to interface its technology with the real world.

Perhaps the most intriguing of all the general-purpose robot companies, though, is Figure, which emerged from stealth in March. With a team made up of Boston Dynamics, Tesla, Cruise, and Apple veterans, and at least $100 million in funding, the company has ambitions of replacing human labor in everything from logistics to manufacturing and retail. So far though, the company hasn’t released much detail about its humanoid Figure 01 robot, and images have only been graphical renders rather than actual photographs.

This does seem par for the course. Heavily produced promotional videos and shiny computer-generated images are not a good marker of progress, so until these companies start sharing concrete demos in real-world contexts, it’s probably wise to reserve judgment. Nonetheless, there is a new sense of optimism that robots may soon be walking among us.

Image Credit: Sanctuary AI


Earth Will Likely Dodge ‘Planet Killer’ Asteroids for the Next 1,000 Years

Singularity HUB - 21 May, 2023 - 16:00

To those who study existential risk, the list of threats is lengthening. If nuclear war doesn’t end us, a designer virus or AI might. The good news? No giant asteroids will strike this millennium.

A new study by University of Colorado and NASA scientists, accepted for publication in The Astronomical Journal, extends forecasts for the biggest known near-Earth asteroids by an order of magnitude and finds that none threaten Earth in the next thousand years.

Don’t Look Up

In 1998, NASA asked scientists to find 90 percent of all near-Earth asteroids bigger than a kilometer. The 10-kilometer-wide asteroid that killed off the dinosaurs 66 million years ago belonged to this club. But even smaller strikes would be catastrophic.

“This is what we call a planet killer,” astronomer Scott Sheppard told the New York Times last year after scientists found a new 1.5-kilometer asteroid. “If this one hits the Earth, it would cause planet-wide destruction. It would be very bad for life as we know it.”

Scientists believe such impacts happen every few million years, but until recent decades, there was simply no way to predict future strikes. No one had a list of likely candidates. NASA has since discovered nearly a thousand asteroids over a kilometer wide, or around 95 percent of the total in existence.

This catalog includes observations that help astronomers calculate each asteroid’s orbit and model the likelihood it’ll impact Earth in the future. But these predictions previously maxed out at around a hundred years. As asteroids careen around the sun, their orbits are tugged about by the gravity of the planets. Gravitational encounters, especially close ones, increase the uncertainty in forecasting models. Past a certain point, astronomers can’t say exactly where an asteroid will be in its orbit.

Buzzing the Tower

The new study aims to make longer forecasts by employing some tricks to reduce the computational workload. Instead of relying on orbital position alone, they zoomed in on the most consequential moments—close flybys of Earth. These encounters, they write, can be modeled further into the future, even as orbital position becomes uncertain.

Looking ahead a thousand years, the team found the vast majority of asteroids didn’t spend much time in our neighborhood and could be ruled out as hazardous. Next, they identified the population of large asteroids that most frequently buzz by Earth. Using their new method, they modeled close encounters over the next millennium.

The asteroid with the highest probability of impact is 1994 PC1, a kilometer-wide asteroid that passes close to Earth often. The team found a 0.00151 percent chance that 1994 PC1 would pass within the moon’s orbit in the next thousand years. This is a very small risk—and yet it’s still ten times higher than any other asteroid on the list.
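To put that figure in more familiar terms, the percentage converts to odds with a line of arithmetic; only the 0.00151 percent value comes from the study as reported above.

```python
# Converting the study's reported close-approach probability into odds.
p_percent = 0.00151          # chance 1994 PC1 passes inside the moon's
p = p_percent / 100          # orbit in the next 1,000 years

odds = 1 / p
print(f"about 1 in {odds:,.0f}")  # about 1 in 66,225
```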

Using this method at least, it seems we’re very unlikely to experience a major impact any time soon.

“It’s still not likely that it’s going to collide,” the University of Colorado’s Oscar Fuentes-Muñoz, who led the team, told MIT Technology Review. “But it will be a very good scientific opportunity, because it’s going to be a huge asteroid that’s very close to us.”

Planetary Defense

Of course, there’s a chance a more dangerous asteroid is lurking in the still-undiscovered five percent of kilometer-sized objects. The space rock Sheppard was referring to last year is a member of a group of large asteroids hiding in the glare of the sun. And large comets living out in the Kuiper Belt and Oort cloud could be nudged into our path one day. But the most likely interlopers are nearby, and we’re getting a much better handle on their habits.

The team writes that they’d like to apply their approach to extend forecasts for smaller asteroids too. There are a lot more of those—around 25,000 are thought to be bigger than 140 meters, of which we’ve only discovered around 40 percent—and while they wouldn’t cause planet-wide destruction, they could certainly wreak havoc regionally or, if our luck is especially bad, in densely populated areas like cities.

Still, the forecast is encouraging. The likelihood of a significant strike soon is very low. Should we discover a dangerous smaller asteroid in the future, NASA’s DART mission last year showed we might push it off course and prevent a strike, given enough advance warning. And although there’s no proven way of avoiding the biggest impacts, we can breathe easier knowing we likely have another thousand years to strengthen our defenses.

Image Credit: NASA/JPL-Caltech


This Week’s Awesome Tech Stories From Around the Web (Through May 20)

Singularity HUB - 20 May, 2023 - 16:00

ChatGPT Is Already Obsolete
Matteo Wong | The Atlantic
“Language-only models such as the original ChatGPT are now giving way to machines that can also process images, audio, and even sensory data from robots. The new approach might reflect a more human understanding of intelligence, an early attempt to approximate how a child learns by existing in and observing the world. It might also help companies build AI that can do more stuff and therefore be packaged into more products.”


Watch 44 Million Atoms Simulated Using AI and a Supercomputer
Alex Wilkins | New Scientist
“Boris Kozinsky at Harvard University and his colleagues have developed a tool, called Allegro, that can accurately simulate systems with tens of millions of atoms using artificial intelligence. Kozinsky and his team used the world’s 8th most powerful supercomputer, Perlmutter, to simulate the 44 million atoms involved in the protein shell of HIV.”


Take Your Ultrawide Monitors Everywhere With an AR Laptop
Brenda Stolyar | Wired
“Now you can harness the power of a multi-monitor setup with a pair of augmented reality (AR) glasses and a keyboard. Created by a new company called Sightful, founded by former executives of Magic Leap, Spacetop does exactly that. As the world’s first AR laptop, it delivers the convenience of a virtual 100-inch screen with the ability to display as many windows and apps as you need to get work done from wherever you are.”


Allergic to Eggs? Not These Eggs
Lauren Leffer | Gizmodo
“Using a targeted gene-editing enzyme to knock out specific protein-coding DNA sequences, scientists can produce a safer chicken egg far less likely to trigger an allergic reaction, according to a recent study published in the journal Food and Chemical Toxicology. Not only do the edited eggs lack an important allergen, they also seem to be without any unintended, potentially harmful related byproducts.”


Long-Sought Universal Flu Vaccine: mRNA-Based Candidate Enters Clinical Trial
Beth Mole | Ars Technica
“‘A universal influenza vaccine would be a major public health achievement and could eliminate the need for both annual development of seasonal influenza vaccines, as well as the need for patients to get a flu shot each year,’ Hugh Auchincloss, acting director of the NIH’s National Institute of Allergy and Infectious Diseases, said in a news release. ‘Moreover, some strains of influenza virus have significant pandemic potential. A universal flu vaccine could serve as an important line of defense against the spread of a future flu pandemic.’”


Wendy’s Wants to Use Underground Robots to Fetch Your Order
Kevin Hurler | Gizmodo
“Fresh off the heels of Wendy’s announcing it would be using AI in its drive-thrus, the fast food franchise is hoping to add another piece of technology to its dining experience. Specifically, it wants to add a subterranean system of autonomous robots to bring customers their food beneath its parking lot.”


Why a Genome Can’t Bring Back an Extinct Animal
Isaac Schultz | Gizmodo
“Makeshift mammoths and body-double dodos are on the way—but they won’t be the genuine article. …Scientists may finally be on the verge of breakthroughs that can simulate some animals’ resurrection. But, despite what Jurassic Park led us to believe, simply having a creature’s DNA isn’t enough to bring it back from the dead.“


Just Calm Down About GPT-4 Already
Glenn Zorpette | IEEE Spectrum
“Rapid and pivotal advances in technology have a way of unsettling people, because they can reverberate mercilessly, sometimes, through business, employment, and cultural spheres. And so it is with the current shock and awe over large language models, such as GPT-4 from OpenAI. It’s a textbook example of the mixture of amazement and, especially, anxiety that often accompanies a tech triumph. And we’ve been here many times, says Rodney Brooks.”


We’re Effectively Alone in the Universe, and That’s OK
Paul Sutter | Ars Technica
“Our cosmic insignificance is the only barrier we need to explain Fermi’s great puzzle. We’re not equipped to deal with the astronomically large numbers that our galaxy casually throws around, so what appears at first glance to be a paradox is really our inability to handle truly cosmic scales. Our galaxy could be teeming with life. There could be dozens, hundreds, or even thousands of intelligent species in our galaxy right now, but the vast gulfs of nothingness that surround them make us interstellar islands.”

Image Credit: Mitchell Luo / Unsplash


Quantum Biology Could Revolutionize Our Understanding of How Life Works

Singularity HUB - 19 May, 2023 - 16:00

Imagine using your cell phone to control the activity of your own cells to treat injuries and disease. It sounds like something from the imagination of an overly optimistic science fiction writer. But this may one day be a possibility through the emerging field of quantum biology.

Over the past few decades, scientists have made incredible progress in understanding and manipulating biological systems at increasingly small scales, from protein folding to genetic engineering. And yet, the extent to which quantum effects influence living systems remains barely understood.

Quantum effects are phenomena that occur at the scale of atoms and molecules and can’t be explained by classical physics. It has been known for more than a century that the rules of classical mechanics, like Newton’s laws of motion, break down at atomic scales. Instead, tiny objects behave according to a different set of laws known as quantum mechanics.

For humans, who can only perceive the macroscopic world, or what’s visible to the naked eye, quantum mechanics can seem counterintuitive and somewhat magical. Things you might not expect happen in the quantum world, like electrons “tunneling” through tiny energy barriers and appearing on the other side unscathed, or being in two different places at the same time in a phenomenon called superposition.

I am trained as a quantum engineer. Research in quantum mechanics is usually geared toward technology. However, and somewhat surprisingly, there is increasing evidence that nature—an engineer with billions of years of practice—has learned how to use quantum mechanics to function optimally. If this is indeed true, it means that our understanding of biology is radically incomplete. It also means that we could possibly control physiological processes by using the quantum properties of biological matter.

Quantumness in Biology Is Probably Real

Researchers can manipulate quantum phenomena to build better technology. In fact, you already live in a quantum-powered world: from laser pointers to GPS, magnetic resonance imaging and the transistors in your computer—all these technologies rely on quantum effects.

In general, quantum effects only manifest at very small length and mass scales, or when temperatures approach absolute zero. This is because quantum objects like atoms and molecules lose their “quantumness” when they uncontrollably interact with each other and their environment. In other words, a macroscopic collection of quantum objects is better described by the laws of classical mechanics. Everything that starts quantum dies classical. For example, an electron can be manipulated to be in two places at the same time, but it will end up in only one place after a short while—exactly what would be expected classically.
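That decay of “quantumness” can be illustrated with a standard textbook model: the off-diagonal coherence of a qubit’s density matrix shrinking exponentially with a characteristic time T2. The exponential form and the T2 value below are generic assumptions chosen for illustration, not results from any study mentioned here.

```python
import math

# Toy decoherence model: a qubit in an equal superposition has an
# off-diagonal density-matrix element of magnitude 0.5, which decays
# exponentially through uncontrolled interaction with the environment.
T2 = 1.0  # assumed coherence time, arbitrary units

def coherence(t):
    """Magnitude of the off-diagonal element at time t (0.5 at t=0)."""
    return 0.5 * math.exp(-t / T2)

for t in [0.0, 1.0, 5.0]:
    print(f"t={t}: coherence={coherence(t):.4f}")
# At t=0 the state is fully quantum; by a few multiples of T2 the
# coherence is negligible and the system behaves classically.
```

This is the quantitative version of “everything that starts quantum dies classical”: the quantum character isn’t switched off, it leaks away continuously.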

In a complicated, noisy biological system, it is thus expected that most quantum effects will rapidly disappear, washed out in what the physicist Erwin Schrödinger called the “warm, wet environment of the cell.” To most physicists, the fact that the living world operates at elevated temperatures and in complex environments implies that biology can be adequately and fully described by classical physics: no funky barrier crossing, no being in multiple locations simultaneously.

Chemists, however, have for a long time begged to differ. Research on basic chemical reactions at room temperature unambiguously shows that processes occurring within biomolecules like proteins and genetic material are the result of quantum effects. Importantly, such nanoscopic, short-lived quantum effects are consistent with driving some macroscopic physiological processes that biologists have measured in living cells and organisms. Research suggests that quantum effects influence biological functions, including regulating enzyme activity, sensing magnetic fields, cell metabolism, and electron transport in biomolecules.

How to Study Quantum Biology

The tantalizing possibility that subtle quantum effects can tweak biological processes presents both an exciting frontier and a challenge to scientists. Studying quantum mechanical effects in biology requires tools that can measure the short time scales, small length scales, and subtle differences in quantum states that give rise to physiological changes—all integrated within a traditional wet lab environment.

In my work, I build instruments to study and control the quantum properties of small things like electrons. In the same way that electrons have mass and charge, they also have a quantum property called spin. Spin defines how the electrons interact with a magnetic field, in the same way that charge defines how electrons interact with an electric field. The quantum experiments I have been building since graduate school, and now in my own lab, aim to apply tailored magnetic fields to change the spins of particular electrons.

Research has demonstrated that many physiological processes are influenced by weak magnetic fields. These processes include stem cell development and maturation, cell proliferation rates, genetic material repair, and countless others. These physiological responses to magnetic fields are consistent with chemical reactions that depend on the spin of particular electrons within molecules. Applying a weak magnetic field to change electron spins can thus effectively control a chemical reaction’s final products, with important physiological consequences.
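To get a feel for the scales involved, here is a back-of-envelope sketch (an illustration, not a calculation from the article) of the Larmor frequency at which an electron spin precesses in a weak magnetic field, using standard physical constants.

```python
# How fast does an electron spin precess (Larmor frequency) in a weak
# magnetic field? This sets the scale at which spin-dependent chemistry
# could respond. Constants are standard CODATA values.
G_E = 2.0023          # electron g-factor (dimensionless)
MU_B = 9.274e-24      # Bohr magneton, J/T
H = 6.626e-34         # Planck constant, J*s

def larmor_hz(field_tesla):
    """Electron Larmor precession frequency, in hertz."""
    return G_E * MU_B * field_tesla / H

earth = larmor_hz(50e-6)  # Earth's field is roughly 50 microtesla
# ~1.4 MHz: a radio-frequency scale, which is why even weak ambient
# fields are plausible inputs to spin-dependent reactions.
```

At one tesla the same formula gives roughly 28 GHz, which is why laboratory spin-resonance experiments operate at microwave frequencies while fields of biological interest sit far below that.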

Currently, a lack of understanding of how such processes work at the nanoscale level prevents researchers from determining exactly what strength and frequency of magnetic fields cause specific chemical reactions in cells. Current cell phone, wearable, and miniaturization technologies are already sufficient to produce tailored, weak magnetic fields that change physiology, both for good and for bad. The missing piece of the puzzle is, hence, a “deterministic codebook” of how to map quantum causes to physiological outcomes.

In the future, fine-tuning nature’s quantum properties could enable researchers to develop therapeutic devices that are noninvasive, remotely controlled, and accessible with a mobile phone. Electromagnetic treatments could potentially be used to prevent and treat disease, such as brain tumors, as well as in biomanufacturing, such as increasing lab-grown meat production.

A Whole New Way of Doing Science

Quantum biology is one of the most interdisciplinary fields to ever emerge. How do you build community and train scientists to work in this area?

Since the pandemic, my lab at the University of California, Los Angeles and the University of Surrey’s Quantum Biology Doctoral Training Centre have organized Big Quantum Biology meetings to provide an informal weekly forum for researchers to meet and share their expertise in fields like mainstream quantum physics, biophysics, medicine, chemistry, and biology.

Research with potentially transformative implications for biology, medicine, and the physical sciences will require working within an equally transformative model of collaboration. Working in one unified lab would allow scientists from disciplines that take very different approaches to research to conduct experiments that meet the breadth of quantum biology from the quantum to the molecular, the cellular, and the organismal.

The existence of quantum biology as a discipline implies that traditional understanding of life processes is incomplete. Further research will lead to new insights into the age-old question of what life is, how it can be controlled, and how to learn from nature to build better quantum technologies.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ANIRUDH / Unsplash

Category: Transhumanism

More Than Half of Americans Think AI Poses a Threat to Humanity

Singularity HUB - May 18, 2023 - 16:00

AI is the talk of the town these days. But despite the technology’s impressive accomplishments—or perhaps because of them—not all of that talk is positive. There was a New York Times tech columnist’s piece about his unsettling interaction with ChatGPT in February; an open letter calling for a moratorium on AI research in March; “godfather of AI” Geoffrey Hinton’s dramatic resignation from Google and warning about the dangers of AI; and just this week, OpenAI CEO Sam Altman’s testimony before Congress, in which he said his “worst fear is we cause significant harm to the world” and encouraged legislation around the technology (though he also argued that generative AI should be treated differently, which would be convenient for his company).

It seems these warnings (along with all the other media circulating on the topic) have reached the American public loud and clear, and people don’t quite know what to think—but many are getting nervous. A Reuters/Ipsos poll carried out last week revealed that more than half of Americans believe AI poses a threat to humanity’s future.

The poll was conducted online between May 9 and May 15, with 4,415 adults participating, and the results were published yesterday. More than two-thirds of respondents expressed concern about possible negative impacts of AI, while 61 percent believe it could be a threat to civilization.

“It’s telling such a broad swath of Americans worry about the negative effects of AI,” said Landon Klein, director of US policy at the Future of Life Institute, the organization behind the previously mentioned open letter. “We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action.”

One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say “AI”? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones.

IBM’s definition is simple: “a field which combines computer science and robust datasets to enable problem-solving.” Google, meanwhile, defines it as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.”

It could be that people’s fear and distrust of AI come partly from a lack of understanding of it, and from a stronger focus on unsettling examples than on positive ones. The AI that can design complex proteins may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline.

In fact, biotechnology and medicine are two fields for which AI holds enormous promise, be it by modeling millions of proteins, coming up with artificial enzymes, powering brain implants that help disabled people communicate, or helping diagnose conditions like Alzheimer’s.

Sebastian Thrun, a computer science professor at Stanford who founded Google X, pointed out that there’s not enough public awareness of AI’s potential for positive impact. “The concerns are very legitimate, but I think what’s missing in the dialogue in general is why are we doing this in the first place?” he said. “AI will raise people’s quality of life, and help people be more competent and more efficient.”

While 61 percent of the poll’s respondents said AI could be a risk to humanity, only 22 percent said it won’t be a risk; the other 17 percent weren’t sure.

However, the (sort of?) good news is that AI isn’t the biggest thing Americans are losing sleep over. The top worry at the moment is, unsurprisingly, the economy (82 percent of respondents fear a looming recession), with crime coming in second (77 percent said they support increasing police funding to fight crime).

If an AI solution came along that could, say, point out economic strategies humans haven’t yet thought of, would that make people less wary of it?

Given everything else the tech can do, this doesn’t seem like such a long shot.

Image Credit: Google DeepMind / Unsplash

A Google AI Chatbot May Soon Take Your Drive-Through Food Order at Wendy’s

Singularity HUB - May 17, 2023 - 16:00

The recent proliferation of generative AI models—which are now being used to produce online search results, make art, help with customer service calls, and much more—has heightened fears of technological unemployment. Though AI is ultimately likely to create more jobs than it renders obsolete, it will indeed render some obsolete, and it seems that among these will be fast food drive-through operators.

Last week, Wendy’s and Google Cloud announced that the fast food chain will be piloting a custom-designed AI for drive-through food ordering. Wendy’s FreshAI, as the technology’s been dubbed, will reportedly give drive-through customers a better ordering experience by reducing miscommunications and errors. Since customers can tweak the restaurant’s offerings to their liking—hold the mustard, pile on some extra pickles, take out the onion and sub in more lettuce—the order combinations are endless, and the companies believe an algorithm can do a better job of keeping it all straight than a human can.

The partnership between Wendy’s and Google isn’t new. The companies started collaborating in 2021, when the fast food chain started using Google Cloud’s data analytics, AI, and hybrid cloud tools for mobile ordering and other convenient ways for “customers to access the brand.”

Their new agreement entails an order-taking, question-answering chatbot. Wendy’s says 75 to 80 percent of its orders come from drive-throughs, so the bot better know its stuff. Like OpenAI’s ChatGPT and Google’s LaMDA, the tool is a large language model (LLM), a type of deep learning algorithm trained on large datasets (as large as the entire internet, in some cases) to learn the relationships between words and the probability of different words preceding or following one another in a sentence. LLMs establish parameters that allow them to generate text based on prompts—or, in the case of ChatGPT and Wendy’s FreshAI, respond to questions from users in a human-like way in real time.
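The core statistic the paragraph above describes—the probability of one word following another—can be sketched with a toy bigram model. This is only an illustration of the idea (real LLMs like ChatGPT and Wendy's FreshAI use deep neural networks trained on vastly more data); the tiny corpus here is invented.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which, then
# turn the counts into next-word probabilities.
corpus = "hold the mustard hold the pickles hold the onion".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the words seen following `word`."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("hold"))  # {'the': 1.0}
print(next_word_probs("the"))   # mustard, pickles, onion at 1/3 each
```

Scaled up by many orders of magnitude, with context windows far longer than a single preceding word, this next-word-probability machinery is what lets an LLM generate fluent responses to open-ended prompts.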

Wendy’s FreshAI was trained on data from Wendy’s menu, the chain’s business rules, and basic conversation logic. It will be able to have conversations with customers and answer their questions, as well as confirm their orders on a screen and relay them to the cooks inside.

“It will be very conversational,” Wendy’s CEO, Todd Penegor, told the Wall Street Journal. “You won’t know you’re talking to anybody but an employee.”

The chain’s chief information officer, Kevin Vasconi, gave the AI an even heartier endorsement, saying, “It’s at least as good as our best customer service representative, and it’s probably on average better.”

The algorithm was trained to answer frequently asked questions, so it could be interesting (and entertaining) to hear what it comes up with in response to not-so-frequently-asked questions. The AI will doubtless have some perplexing late-night interactions with hungry, impatient, and inebriated customers who just want to dip their fries in a chocolate milkshake (or as Wendy’s calls it, a Frosty). In fact, Penegor said the chain plans to expand its hours and “lean into late night.”

Google has likely built some hefty guardrails into the chatbot to keep it from saying anything untoward, but even so, its rollout will be gradual. It will first launch at a couple of restaurants near Columbus, Ohio next month; if that goes well, it will expand to other locations. The pilot restaurants will have a human employee on hand to monitor the AI and take over and talk to drive-through customers if needed.

Besides making the ordering experience better for customers, the AI is meant to take some work off employees’ hands and free them up to focus on making food and keeping the restaurants running smoothly. It could also be extra good for Wendy’s bottom line (and bad for customers’ waistlines) in that it’s programmed to try to upsell people, offering them larger sizes, daily specials, and desserts.

Wendy’s isn’t the first fast food chain to integrate AI into its ordering process. Popeye’s, McDonald’s, Carl’s Jr., Hardee’s, Taco Bell, and Wingstop have all experimented with AI order-taking in drive-throughs or over the phone. A Popeye’s in Louisiana reported that after starting to use a chatbot called Tori for drive-through orders, speed of service increased by 20 percent, drink sales went up by 150 percent, and customer satisfaction improved by 20 percent—all with 99.9 percent accuracy in order-taking.

Could Wendy’s see similar results? We’ll find out, but it seems entirely possible that they will—and that people conversing with algorithms will be the most normal of everyday experiences in the not-too-distant future.

Image Credit: Michael Form / Pixabay
