Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity Group

This Week’s Awesome Tech Stories From Around the Web (Through May 18)

May 18, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

It’s Time to Believe the AI Hype
Steven Levy | Wired
“There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. …Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more.”


COMPUTING

How to Put a Datacenter in a Shoebox
Anna Herr and Quentin Herr | IEEE Spectrum
“At Imec, we have spent the past two years developing superconducting processing units that can be manufactured using standard CMOS tools. A processor based on this work would be one hundred times as energy efficient as the most efficient chips today, and it would lead to a computer that fits a data-center’s worth of computing resources into a system the size of a shoebox.”

BIOTECH

IndieBio’s SF Incubator Lineup Is Making Some Wild Biotech Promises
Devin Coldewey | TechCrunch
“We took special note of a few, which were making some major, bordering on ludicrous, claims that could pay off in a big way. Biotech has been creeping out in recent years to touch adjacent industries, as companies find how much they rely on outdated processes or even organisms to get things done. So it may not surprise you that there’s a microbiome company in the latest batch—but you might be surprised when you hear it’s the microbiome of copper ore.”

TECH

It’s the End of Google Search as We Know It
Lauren Goode | Wired
“It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI. These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.”

AUTOMATION

Waymo Says Its Robotaxis Are Now Making 50,000 Paid Trips Every Week
Mariella Moon | Engadget
“If you’ve been seeing more Waymo robotaxis recently in Phoenix, San Francisco, and Los Angeles, that’s because more and more people are hailing one for a ride. The Alphabet-owned company has announced on Twitter/X that it’s now serving more than 50,000 paid trips every week across three cities. Waymo One operates 24/7 in parts of those cities. If the company is getting 50,000 rides a week, that means it receives an average of 300 bookings every hour or five bookings every minute.”

CULTURE

Technology Is Probably Changing Us for the Worse—or So We Always Think
Timothy Maher | MIT Technology Review
“‘We’ve always greeted new technologies with a mixture of fascination and fear,’ says Margaret O’Mara, a historian at the University of Washington who focuses on the intersection of technology and American politics. ‘People think: “Wow, this is going to change everything affirmatively, positively,”’ she says. ‘And at the same time: “It’s scary—this is going to corrupt us or change us in some negative way.”’ And then something interesting happens: ‘We get used to it,’ she says. ‘The novelty wears off and the new thing becomes a habit.’”

TECH

This Is the Next Smartphone Evolution
Matteo Wong | The Atlantic
“Earlier [this week], OpenAI announced its newest product: GPT-4o, a faster, cheaper, more powerful version of its most advanced large language model, and one that the company has deliberately positioned as the next step in ‘natural human-computer interaction.’ …Watching the presentation, I felt that I was witnessing the murder of Siri, along with that entire generation of smartphone voice assistants, at the hands of a company most people had not heard of just two years ago.”

SPACE

In the Race for Space Metals, Companies Hope to Cash In
Sarah Scoles | Undark
“Previous companies have rocketed toward similar goals before but went bust about a half decade ago. In the years since that first cohort left the stage, though, ‘the field has exploded in interest,’ said Angel Abbud-Madrid, director of the Center for Space Resources at the Colorado School of Mines. …The economic picture has improved with the cost of rocket launches decreasing, as has the regulatory environment, with countries creating laws specifically allowing space mining. But only time will tell if this decade’s prospectors will cash in where others have drilled into the red or be buried by their business plans.”

FUTURE

What I Got Wrong in a Decade of Predicting the Future of Tech
Christopher Mims | The Wall Street Journal
“Anniversaries are typically a time for people to get misty-eyed and recount their successes. But after almost 500 articles in The Wall Street Journal, one thing I’ve learned from covering the tech industry is that failures are far more instructive. Especially when they’re the kind of errors made by many people. Here’s what I’ve learned from a decade of embarrassing myself in public—and having the privilege of getting an earful about it from readers.”

FUTURE OF FOOD

Lab-Grown Meat Is on Shelves Now. But There’s a Catch
Matt Reynolds | Wired
“Now cultivated meat is available in one store in Singapore. There is a catch, however: The chicken on sale at Huber’s Butchery contains just 3 percent animal cells. The rest will be made of plant protein—the same kind of ingredients you’d find in plant-based meats that are already on supermarket shelves worldwide. This might feel like a bit of a bait and switch. Didn’t cultivated meat firms promise us real chicken? And now we’re getting plant-based products with a sprinkling of animal cells? That criticism wouldn’t be entirely fair, though.”

Image Credit: Pawel Czerwinski / Unsplash

Category: Transhumanism

Smelting Steel With Sunlight: New Solar Trap Tech Could Help Decarbonize Industrial Heat

May 17, 2024 - 16:46

Some of the hardest sectors to decarbonize are industries that require high temperatures like steel smelting and cement production. A new approach uses a synthetic quartz solar trap to generate temperatures of over 1,000 degrees Celsius (1,832 degrees Fahrenheit)—hot enough for a host of carbon-intensive industries.

While most of the focus on the climate fight has been on cleaning up the electric grid and transportation, a surprisingly large amount of fossil fuel usage goes into industrial heat. As much as 25 percent of global energy consumption goes towards manufacturing glass, steel, and cement.

Electrifying these processes is challenging because it’s difficult to reach the high temperatures required. Solar receivers, which use thousands of sun-tracking mirrors to concentrate energy from the sun, have shown promise as they can hit temperatures of 3,000 C. But they’re very inefficient when processes require temperatures over 1,000 C because much of the energy is radiated back out.
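The physics behind that inefficiency can be sketched with the Stefan–Boltzmann law: the power a hot absorber radiates away grows with the fourth power of its temperature. Here is a rough back-of-the-envelope estimate—the blackbody absorber, 1 kW/m² of sunlight, and the neglect of convection are simplifying assumptions, not figures from the study:

```python
# Illustrative estimate of why solar receivers lose efficiency at high
# temperature: re-radiated power grows with T^4 (Stefan-Boltzmann law).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def receiver_efficiency(temp_c, concentration, solar_flux=1000.0, emissivity=1.0):
    """Fraction of incident concentrated sunlight retained as useful heat,
    ignoring convection and assuming a blackbody absorber (emissivity=1)."""
    temp_k = temp_c + 273.15
    incident = concentration * solar_flux          # W/m^2 hitting the absorber
    radiated = emissivity * SIGMA * temp_k ** 4    # W/m^2 re-emitted as heat
    return max(0.0, 1.0 - radiated / incident)

# At 136 suns (the flux used in the experiment), losses climb steeply:
for t in (600, 1000, 1200):
    print(f"{t} C: {receiver_efficiency(t, 136):.0%} retained")
```

In this toy model, a bare blackbody absorber at 136 suns retains about three quarters of the incoming energy at 600 C but radiates away everything it receives just below 1,000 C—which is why trapping the re-emitted heat matters.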

To get around this, researchers from ETH Zurich in Switzerland showed that adding semi-transparent quartz to a solar receiver could trap solar energy at temperatures as high as 1,050 C. That’s hot enough to replace fossil fuels in a range of highly polluting industries, the researchers say.

“Previous research has only managed to demonstrate the thermal-trap effect up to 170 C,” lead researcher Emiliano Casati said in a press release. “Our research showed that solar thermal trapping works not just at low temperatures, but well above 1,000 C. This is crucial to show its potential for real-world industrial applications.”

The researchers used a silicon carbide disk to absorb solar energy but attached a roughly one-foot-long quartz rod to it. Because quartz is semi-transparent, light is able to pass through it, but the material also readily absorbs heat and prevents it from being radiated back out.

That meant that when the researchers subjected the quartz rod to simulated sunlight equivalent to 136 suns, the solar energy readily passed through to the silicon carbide disk and was trapped there. This allowed the disk to heat up to 1,050 C, compared to just 600 C at the other end of the rod.

Simulations of the device found that the quartz’s thermal trapping capabilities could significantly boost the efficiency of solar receivers. Adding a quartz rod to a state-of-the-art receiver could boost efficiency from 40 percent to 70 percent when attempting to hit temperatures of 1,200 C. That kind of efficiency gain could drastically reduce the size, and therefore cost, of solar heat installations.
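To see why efficiency translates directly into size and cost, consider the collector area needed to meet a fixed heat demand. A minimal sketch, assuming a hypothetical 10-megawatt process-heat plant and roughly 1 kW/m² of direct sunlight (both illustrative assumptions):

```python
# How the reported efficiency gain (40% -> 70% at 1,200 C) shrinks the
# collector field needed for a fixed industrial heat demand.
SOLAR_FLUX = 1000.0  # W/m^2, rough peak direct sunlight

def mirror_area(heat_demand_w, receiver_efficiency):
    """Collector area (m^2) needed to deliver heat_demand_w of useful heat."""
    return heat_demand_w / (SOLAR_FLUX * receiver_efficiency)

demand = 10e6  # 10 MW of process heat, hypothetical plant
area_40 = mirror_area(demand, 0.40)  # state-of-the-art receiver
area_70 = mirror_area(demand, 0.70)  # receiver with quartz thermal trap
print(f"40% efficient: {area_40:,.0f} m^2 of mirrors")
print(f"70% efficient: {area_70:,.0f} m^2 of mirrors "
      f"({1 - area_70 / area_40:.0%} smaller)")
```

Under these assumptions the same heat demand needs roughly 43 percent less mirror area, which is where the cost savings would come from.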

While still just a proof of concept, the simplicity of the approach means it would probably not be too difficult to apply to existing receiver technology. Companies like Heliogen, which is backed by Bill Gates, have already developed solar furnace technology designed to generate the high temperatures required in a wide range of industries.

Casati says the promise is clear, but work remains to be done to prove its commercial feasibility.

“Solar energy is readily available, and the technology is already here,” he says. “To really motivate industry adoption, we need to demonstrate the economic viability and advantages of this technology at scale.”

But the prospect of replacing such a big chunk of our fossil fuel usage with solar power should be motivation enough to bring this technology to fruition.

Image Credit: A new solar trap built by a team of ETH Zurich scientists reaches 1050 C (Device/Casati et al.)


Scientists Step Toward Quantum Internet With Experiment Under the Streets of Boston

May 16, 2024 - 19:00

A quantum internet would essentially be unhackable. In the future, sensitive information—financial or national security data, for instance, as opposed to memes and cat pictures—would travel through such a network in parallel to a more traditional internet.

Of course, building and scaling systems for quantum communications is no easy task. Scientists have been steadily chipping away at the problem for years. A Harvard team recently took another noteworthy step in the right direction. In a paper published this week in Nature, the team says they’ve sent entangled photons between two quantum memory nodes 22 miles (35 kilometers) apart on existing fiber optic infrastructure under the busy streets of Boston.

“Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers,” Mikhail Lukin, who led the project and is a physics professor at Harvard, said in a press release.

The team leased optical fiber under the Boston streets, connecting the two memory nodes located at Harvard by way of a 22-mile (35-kilometer) loop of cable. Image Credit: Can Knaut via OpenStreetMap

One way a quantum network can transmit information is by using entanglement, a quantum property where two particles, likely photons in this case, are linked so a change in the state of one tells us about the state of the other. If the sender and receiver of information each have one of a pair of entangled photons, they can securely transmit data using them. This means quantum communications will rely on generating enormous numbers of entangled photons and reliably sending them to far-off destinations.
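The correlation described here can be illustrated with a toy simulation of a Bell pair's measurement statistics. This sketches only the statistics of entangled outcomes, not how entangled photons are actually generated or distributed:

```python
import numpy as np

# Toy illustration of entanglement correlations: measuring one half of a
# Bell pair immediately tells you the other half's measurement outcome.
rng = np.random.default_rng(0)

# Bell state (|00> + |11>) / sqrt(2), amplitudes over basis states 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2  # Born rule: probability of each joint outcome

outcomes = rng.choice(4, size=1000, p=probs)  # sample joint measurements
alice = outcomes // 2  # first qubit's bit
bob = outcomes % 2     # second qubit's bit
print("outcomes always agree:", np.all(alice == bob))  # True
```

Only the joint outcomes 00 and 11 ever occur, so either party's result fixes the other's—the statistical fingerprint that quantum networks exploit.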

Scientists have sent entangled particles long distances over fiber optic cables before, but to make a quantum internet work, particles will need to travel hundreds or thousands of miles. Because cables tend to absorb photons over such distances, the information will be lost—unless it can be periodically refreshed.

Enter quantum repeaters.

You can think of a repeater as a kind of internet gas station. Information passing through long stretches of fiber optic cables naturally degrades. A repeater refreshes that information at regular intervals, strengthening the signal and maintaining its fidelity. A quantum repeater is the same thing, only it also preserves entanglement.

That scientists have yet to build a quantum repeater is one reason we’re still a ways off from a working quantum internet at scale. Which is where the Harvard study comes in.

The team of researchers from Harvard and Amazon Web Services (AWS) has been working on quantum memory nodes. Each node houses a piece of diamond with an atom-sized hole, or silicon-vacancy center, containing two qubits: one for storage, one for communication. The nodes are basically small quantum computers, operating at near absolute zero, that can receive, record, and transmit quantum information. The Boston experiment, according to the team, is the longest distance anyone has sent information between such devices and a big step toward a quantum repeater.

“Our experiment really put us in a position where we’re really close to working on a quantum repeater demonstration,” Can Knaut, a Harvard graduate student in Lukin’s lab, told New Scientist.

Next steps include expanding the system to include multiple nodes.

Along those lines, a separate group in China, using a different technique for quantum memory involving clouds of rubidium atoms, recently said they’d linked three nodes 6 miles (10 kilometers) apart. The same group, led by Xiao-Hui Bao at the University of Science and Technology of China, had previously entangled memory nodes 13.6 miles (22 kilometers) apart.

It’ll take a lot more work to make the technology practical. Researchers need to increase the rate at which their machines entangle photons, for example. But as each new piece falls into place, the prospect of unhackable communications gets a bit closer.

Image Credit: Visax / Unsplash


‘Noise’ in the Machine: Human Differences in Judgment Lead to Problems for AI

May 14, 2024 - 19:26

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine, and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical Noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than it first appears. Seminal work dating back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the Data

On the surface, it doesn’t seem likely that noise could affect the performance of AI systems. After all, machines aren’t affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two—in the best case, perfect agreement—the machine is approaching human-level common sense, according to the test.
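That benchmarking procedure amounts to computing agreement between a model's answers and human labels. A minimal sketch—the questions and answers here are invented for illustration:

```python
# Sketch of the benchmark procedure described above: compare a model's
# answers on commonsense questions with human labels and report agreement.
def agreement(model_answers, human_answers):
    """Fraction of questions where the model's answer matches the human label."""
    matches = sum(m == h for m, h in zip(model_answers, human_answers))
    return matches / len(human_answers)

# Hypothetical yes/no commonsense questions, one label per question:
human = ["yes", "no", "yes", "yes", "no"]
model = ["yes", "no", "yes", "no", "no"]
print(f"agreement: {agreement(model, human):.0%}")  # 4 of 5 -> 80%
```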

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the following sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don’t account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge—in other words, where there is noise. Researchers still don’t know whether or how to weigh AI’s answers in that situation, but a first step is acknowledging that the problem exists.

Tracking Down Noise in the Machine

Theory aside, the question remains whether all of the above is hypothetical or whether real tests of common sense contain noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers, and have multiple people label it independently—that is, provide their own answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
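That measurement can be sketched as a per-question disagreement score across independent annotators. The labels below are invented for illustration:

```python
from collections import Counter

# Sketch of the noise measurement described above: give the same question
# to several independent labelers and quantify their disagreement.
def disagreement_rate(labels):
    """Fraction of annotators who deviate from the majority answer."""
    counts = Counter(labels)
    majority = counts.most_common(1)[0][1]  # size of the largest answer bloc
    return 1 - majority / len(labels)

# Each row: five annotators answering one yes/no commonsense question.
questions = [
    ["yes", "yes", "yes", "yes", "yes"],  # unanimous: no noise
    ["yes", "yes", "yes", "yes", "no"],   # one dissenter
    ["yes", "yes", "no", "no", "yes"],    # substantial disagreement
]
for i, labels in enumerate(questions):
    print(f"question {i}: disagreement {disagreement_rate(labels):.0%}")
```

Real studies use more careful statistics than a majority-vote rate, but the core idea is the same: the more the annotators scatter, the noisier the test item.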

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test, or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven’t been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable is noise, and is it just a matter of quality control?

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high—even universal—agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4 percent and 10 percent of a system’s performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85 percent on a test, and you built an AI system that achieved 91 percent. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we’re not sure anymore that the 6 percent improvement means much. For all we know, there may be no real improvement.
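A small simulation makes the concern concrete: score two systems whose true accuracies differ by six points against labels that are themselves sometimes wrong. All the rates here are illustrative assumptions, not values from the study:

```python
import random

# Two systems with "true" accuracies of 85% and 91% are scored against
# human labels that are flipped with some probability (the noise).
random.seed(42)

def measured_score(true_accuracy, label_noise, n_questions=1000):
    """Benchmark score against labels that are wrong with probability label_noise."""
    score = 0
    for _ in range(n_questions):
        correct = random.random() < true_accuracy   # did the system answer correctly?
        label_ok = random.random() >= label_noise   # is the label itself correct?
        # The system gets credit when its answer matches the (possibly wrong) label.
        score += correct == label_ok
    return score / n_questions

for noise in (0.0, 0.1):
    a = measured_score(0.85, noise)
    b = measured_score(0.91, noise)
    print(f"label noise {noise:.0%}: measured gap {b - a:+.1%}")
```

With clean labels the measured gap sits near the true six points; with noisy labels it shrinks and fluctuates from run to run, so a leaderboard difference of a point or two may not reflect any real improvement.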

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1 percent. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise Audits

What is the way forward? Returning to Kahneman’s book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Michael Dziedzic / Unsplash


Google and Harvard Map a Tiny Piece of the Human Brain With Extreme Precision

May 13, 2024 - 21:32

Scientists just published the most detailed map of a cubic millimeter of the human brain. Smaller than a grain of rice, the mapped section of brain includes over 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses.

The project, a collaboration between Harvard and Google, is looking to accelerate connectomics—the study of how neurons are wired together—over a much larger scale.

Our brains are like a jungle.

Neuron branches crisscross regions, forming networks that process perception, memories, and even consciousness. Blood vessels tightly wrap around these branches to provide nutrients and energy. Other brain cell types form intricate connections with neurons, support the brain’s immune function, and fine-tune neural network connections.

In biology, structure determines function. Like tracing wires of a computer, mapping components of the brain and their connections can improve our understanding of how the brain works—and when and why it goes wrong. A brain map that charts the jungle inside our heads could help us tackle some of the most perplexing neurological disorders, such as Alzheimer’s disease, and decipher the origins of emotions, thoughts, and behaviors.

Aided by machine learning tools from Google Research, the Harvard team traced neurons, blood vessels, and other brain cells at nanoscale levels. The images revealed previously unknown quirks in the human brain—including mysterious tangles in neuron wiring and neurons that connect through multiple “contacts” to other cells. Overall, the dataset incorporates a massive 1.4 petabytes of information—roughly the storage amount of a thousand high-end laptops—and is free to explore.

“It’s a little bit humbling,” Dr. Viren Jain, a neuroscientist at Google and study author, told Nature. “How are we ever going to really come to terms with all this complexity?” The database, first released as a preprint paper in 2021, has already garnered much enthusiasm in the scientific field.

“It’s probably the most computer-intensive work in all of neuroscience,” Dr. Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the project, told MIT Technology Review.

Why So Complicated?

Many types of brain maps exist. Some chart gene expression in brain cells; others map different cell types across the brain. But the goal is the same. They aim to help scientists understand how the brain works in health and disease.

The connectome details highways between brain regions that “talk” to each other. These connections, called synapses, number in the hundreds of trillions in human brains—far more than the number of stars in the Milky Way.

Decades ago, the first whole-brain wiring map detailed all 302 neurons in the roundworm Caenorhabditis elegans. Because its genetics are largely known, the lowly worm delivered insights, such as how the brain and body communicate to increase healthy longevity. Next, scientists charted the fruit fly connectome and found the underpinnings of spatial navigation.

More recently, the MouseLight Project and MICrONS have been deciphering a small chunk of a mouse’s brain—the outermost area called the cortex. It’s hoped such work can help inform neuro-inspired AI algorithms with lower power requirements and higher efficacy.

But mice are not people. In the new study, scientists mapped a cubic millimeter of human brain tissue from the temporal cortex—a nexus that’s important for memory, emotions, and sensations. Although just one-millionth of a human brain, the effort reconstructed connections in 3D at nanoscale resolution.

Slice It Up

Sourcing is a challenge when mapping the human brain. Brain tissues rapidly deteriorate after trauma or death, which changes their wiring and chemistry. Brain organoids—”mini-brains” grown in test tubes—somewhat resemble the brain’s architecture, but they can’t replicate the real thing.

Here, the team took a tiny bit of brain tissue from a 45-year-old woman with epilepsy during surgery—the last resort for those who suffer severe seizures and don’t respond to medication.

Using a machine like a deli-meat slicer armed with a diamond knife, the Harvard team, led by connectome expert Dr. Jeff Lichtman, meticulously sliced the sample into 5,019 cross sections. Each was roughly 30 nanometers thick—a fraction of the width of a human hair. They imaged the slices with an electron microscope, capturing nanoscale cellular details, including the “factories” inside cells that produce energy, eliminate waste, or transport molecules.

Piecing these 2D images into a 3D reconstruction is a total headache. A decade ago, scientists had to do it by hand. Jain’s team at Google developed an AI to automate the job. The AI was able to track fragments of whole components—say, a part of a neuron (its body or branches)—and stick them back together throughout the images.
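The stitching idea can be sketched with a union-find structure that merges fragment IDs predicted to belong to the same neuron. This is a toy illustration of the concept, not Google's actual pipeline; the fragment names and matches are invented:

```python
# Toy sketch of 3D reconstruction: fragments of the same neuron, detected
# in different 2D slices, are merged into one object with union-find.
parent = {}

def find(x):
    """Return the root id of x's group, compressing the path as we go."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    """Merge the groups containing fragments a and b."""
    parent[find(a)] = find(b)

# Pairs a (hypothetical) model says belong to the same neuron:
matches = [("slice1_frag_a", "slice2_frag_c"),
           ("slice2_frag_c", "slice3_frag_b"),
           ("slice1_frag_x", "slice2_frag_y")]
for a, b in matches:
    union(a, b)

groups = len({find(f) for f in parent})
print(f"{len(parent)} fragments merged into {groups} neurons")  # 5 into 2
```

The real system works on billions of fragments and uses learned models to propose the matches, but the merge step follows this same group-then-unify pattern.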

In total, the team pieced together thousands of neurons and over a hundred million synaptic connections. Other brain components included blood vessels and myelin—a protective molecular “sheath” covering neurons that works like electrical insulation. When myelin deteriorates, multiple brain disorders can follow.

“I remember this moment, going into the map and looking at one individual synapse from this woman’s brain, and then zooming out into these other millions of pixels,” Jain told Nature. “It felt sort of spiritual.”

A Whole New World

Even a cursory look at the data led to surprising insights into the brain’s intricate neural wiring.

Cortical neurons have a forest-like structure for input and a single “cable” that delivers output signals. Called axons, these are dotted with thousands of synapses connecting to other cells.

Usually, a synapse grabs onto just one spot of a neighboring neuron. But the new map found a rare, strange group that connects with up to 50 points. “We’ve always had a theory that there would be super connections, if you will, amongst certain cells…But it’s something we’ve never had the resolution to prove,” Dr. Tim Mosca, who was not involved in the work, told Popular Science. These could be extra-potent connections that allow neural communications to go into “autopilot mode,” like when riding a bike or navigating familiar neighborhoods.

More strange structures included “axon whorls” that wrapped around themselves like tangled headphones. An axon’s main purpose is to reach out and connect with other neurons—so why do some fold into themselves? Do they serve a purpose, or are they just a hiccup in brain wiring? It’s a mystery. Another strange observation found pairs of neurons that perfectly mirrored each other. What this symmetry does for the brain is also unknown.

The bottom line: Our understanding of the brain’s connections and inner workings is still only scratching the surface. The new database is a breakthrough, but it’s not perfect. The results are from a single person with epilepsy, which can’t represent everyone. Some wiring changes, for example, may be due to the disorder. The team is planning a follow-up to separate epilepsy-related circuits from those that are more universal in people.

Meanwhile, they’ve opened the entire database for anyone to explore. And the team is also working with scientists to manually examine the results and eliminate potential AI-induced errors during reconstruction. So far, hundreds of cells have been “proofread” and validated by humans, but it’s just a fraction of the 50,000 neurons in the database.

The technology can also be used for other species, such as the zebrafish—another animal model often used in neuroscience research—and eventually the entire mouse brain.

Although this study only traced a tiny nugget of the human brain, the atlas is a stunning way to peek inside its seemingly chaotic wiring and make sense of things. “Further studies using this resource may bring valuable insights into the mysteries of the human brain,” wrote the team.

Image Credit: Google Research and Lichtman Lab


This Week’s Awesome Tech Stories From Around the Web (Through May 11)

May 11, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

OpenAI Could Unveil Its Google Search Competitor on Monday
Jess Weatherbed | The Verge
“OpenAI is reportedly gearing up to announce a search product powered by artificial intelligence on Monday that could threaten Google’s dominance. That target date, provided to Reuters by ‘two sources familiar with the matter,’ would time the announcement a day before Google kicks off its annual I/O conference, which is expected to focus on the search giant’s own AI model offerings like Gemini and Gemma.”


ROBOTICS

DeepMind Is Experimenting With a Nearly Indestructible Robot Hand
Jeremy Hsu | New Scientist
“This latest robotic hand developed by the UK-based Shadow Robot Company can go from fully open to closed within 500 milliseconds and perform a fingertip pinch with up to 10 newtons of force. It can also withstand repeated punishment such as pistons punching the fingers from multiple angles or a person smashing the device with a hammer.”

BIOTECH

First Patient Begins Newly Approved Sickle Cell Gene Therapy
Gina Kolata | The New York Times
“On Wednesday, Kendric Cromer, a 12-year-old boy from a suburb of Washington, became the first person in the world with sickle cell disease to begin a commercially approved gene therapy that may cure the condition. For the estimated 20,000 people with sickle cell in the United States who qualify for the treatment, the start of Kendric’s monthslong medical journey may offer hope. But it also signals the difficulties patients face as they seek a pair of new sickle cell treatments.”

SPACE

Commercial Space Stations Approach Launch Phase 
Andrew Jones | IEEE Spectrum
“A changing of the guard in space stations is on the horizon as private companies work towards providing new opportunities for science, commerce, and tourism in outer space. …The challenge [new space stations like Blue Origin’s] Orbital Reef faces is considerable: reimagining successful earthbound technologies—such as regenerative life support systems, expandable habitats and 3D printing—but now in orbit, on a commercially viable platform.”

FUTURE

This Gigantic 3D Printer Could Reinvent Manufacturing
Nate Berg | Fast Company
“This machine isn’t just spitting out basic building materials like some massive glue gun. It’s also able to do subtractive manufacturing, like milling, as well as utilize a robotic arm for more complicated tasks. A built-in system allows it to lay down fibers in a printed object that give it greater structural integrity, allowing printed spans to stretch farther, and enabling factory-based 3D printed buildings to become even larger.”

AUTOMATION

Wayve Raises $1B to Take Its Tesla-Like Technology for Self-Driving to Many Carmakers
Mike Butcher | TechCrunch
“Wayve calls its hardware-agnostic mapless product an ‘Embodied AI,’ and it plans to distribute its platform not just to car makers but also to robotics companies serving manufacturers of all descriptions, allowing the platform to learn from human behavior in a wide variety of real-world environments.”

BIOTECH

The US Is Cracking Down on Synthetic DNA
Emily Mullin | Wired
“Synthesizing DNA has been possible for decades, but it’s become increasingly easier, cheaper, and faster to do so in recent years thanks to new technology that can ‘print’ custom gene sequences. Now, dozens of companies around the world make and ship synthetic nucleic acids en masse. And with AI, it’s becoming possible to create entirely new sequences that don’t exist in nature—including those that could pose a threat to humans or other living things.”

SPACE

Fall Into a Black Hole in Mind-Bending NASA Animation
Robert Lea | Space.com
“If you’ve ever wondered what would happen if you were unlucky enough to fall into a black hole, NASA has your answer. A visualization created on a NASA supercomputer to celebrate the beginning of black hole week on Monday (May 6) takes the viewer on a one-way plunge beyond the event horizon of a black hole.”

ENERGY

A Company Is Building a Giant Compressed-Air Battery in the Australian Outback
Dan Gearino | Wired
“Toronto-based Hydrostor is one of the businesses developing long-duration energy storage that has moved beyond lab scale and is now focusing on building big things. The company makes systems that store energy underground in the form of compressed air, which can be released to produce electricity for eight hours or longer.”

SCIENCE

The Way Whales Communicate Is Closer to Human Language Than We Realized
Rhiannon Williams | MIT Technology Review
“A team of researchers led by Pratyusha Sharma at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) working with Project CETI, a nonprofit focused on using AI to understand whales, used statistical models to analyze whale codas and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. Their findings represent a tool future research could use to decipher not just the structure but the actual meaning of whale sounds.”

Image Credit: Benjamin Cheng / Unsplash

Kategorie: Transhumanismus

Global Carbon Capture Capacity Quadruples as the Biggest Plant Yet Revs Up in Iceland

10 Květen, 2024 - 19:18

Pulling carbon dioxide out of the atmosphere is likely to be a crucial weapon in the battle against climate change. And now global carbon capture capacity has quadrupled with the opening of the world’s largest direct air capture plant in Iceland.

Scientists and policymakers initially resisted proposals to remove CO2 from the atmosphere, due to concerns it could lead to a reduced sense of urgency around emissions reductions. But with progress on that front falling behind schedule, there’s been growing acceptance that carbon capture will be crucial if we want to avoid the worst consequences of climate change.

A variety of approaches, including reforestation, regenerative agriculture, and efforts to lock carbon up in minerals, could play a role. But the approach garnering most of the attention is direct air capture, which relies on large facilities powered by renewable energy to suck CO2 out of the air.

One of the leaders in this space is Swiss company Climeworks, whose Orca plant in Iceland previously held the title for world’s largest. But this week, the company started operations at a new plant called Mammoth that has nearly ten times the capacity. The facility, also in Iceland, will be able to extract 36,000 tons of CO2 a year, which is nearly four times the 10,000 tons a year currently being captured globally.

“Starting operations of our Mammoth plant is another proof point in Climeworks’ scale-up journey to megaton capacity by 2030 and gigaton by 2050,” co-CEO of Climeworks Jan Wurzbacher said in a statement. “Constructing multiple real-world plants in rapid sequences makes Climeworks the most deployed carbon removal company with direct air capture at the core.”

Climeworks plants use fans to suck air into large collector units filled with a material called a sorbent, which absorbs CO2. Once the sorbent is saturated, the collector shuts and is heated to roughly 212 degrees Fahrenheit (100 degrees Celsius) to release the CO2.

The Mammoth plant will eventually feature 72 of these collector units, though only 12 are currently operational. That’s still more than Orca’s eight units, which capture roughly 4,000 tons of CO2 a year. Adding an extra level to the stacks of collectors has also reduced land use per ton of CO2 captured, while a new V-shaped configuration improves airflow, boosting performance.
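As a rough back-of-envelope sketch of those figures, assuming capture scales linearly with the number of collector units (a simplification; real throughput varies with conditions):

```python
# Back-of-envelope estimate of Mammoth's current capture rate, assuming
# capacity scales linearly with collector units. Figures from the article;
# the per-unit comparison with Orca is an illustrative calculation, not a
# number Climeworks has published.

FULL_CAPACITY_TONS = 36_000   # tons CO2/year with all 72 collectors running
TOTAL_UNITS = 72
ACTIVE_UNITS = 12

tons_per_unit = FULL_CAPACITY_TONS / TOTAL_UNITS   # ~500 tons/unit/year
current_rate = tons_per_unit * ACTIVE_UNITS        # implied rate today

# Orca: 8 units capturing roughly 4,000 tons/year -> ~500 tons/unit/year,
# suggesting the gains come mostly from scale and layout, not per-unit rate.
orca_per_unit = 4_000 / 8

print(f"Mammoth now: ~{current_rate:,.0f} tons/year "
      f"({tons_per_unit:.0f} vs Orca's {orca_per_unit:.0f} per unit)")
```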

To permanently store the captured carbon, Climeworks has partnered with Icelandic company Carbfix, which has developed a process to inject CO2 dissolved in water deep into porous rock formations made of basalt. Over the course of a couple years, the dissolved CO2 reacts with the rocks to form solid carbonate minerals that are stable for thousands of years.

With the Orca plant, CO2 had to be transported through hundreds of meters of pipeline to Carbfix’s storage site. But Mammoth features two injection wells on-site, reducing transportation costs. It also has a new CO2 absorption tower that dissolves the gas in water at lower pressures, reducing energy costs compared to the previous approach.

Climeworks has much bigger ambitions than Mammoth though. The US government has earmarked $3.5 billion to build four direct air capture hubs, each capable of capturing one million tons of CO2 a year, and Climeworks will provide the technology for one of the proposed facilities in Louisiana.

The company says it’s aiming to reach megaton-scale—removing one million tons a year—by 2030 and gigaton-scale—a billion tons a year—by 2050. Hopefully, they won’t be the only ones, because climate forecasts suggest we’ll need to be removing 3.5 gigatons of CO2 a year by 2050 to keep warming below 1.5 degrees Celsius.

There’s also little clarity on the economics of the approach. According to Reuters, Climeworks did not reveal how much it costs Mammoth to remove each ton of CO2, though it said it’s targeting $400-600 per ton by 2030 and $200-350 per ton by 2040. And while plants in Iceland can take advantage of abundant, green geothermal energy, it’s less clear what they will rely on elsewhere.

Either way, there’s growing agreement that carbon capture will be an important part of our efforts to tackle climate change. While Mammoth might not make much of a dent in emissions, it’s a promising sign that direct air capture technology is maturing.

Image Credit: Climeworks

Kategorie: Transhumanismus

Google DeepMind’s New AlphaFold AI Maps Life’s Molecular Dance in Minutes

9 Květen, 2024 - 23:33

Proteins are biological workhorses.

They build our bodies and orchestrate the molecular processes in cells that keep them healthy. They also present a wealth of targets for new medications. From everyday pain relievers to sophisticated cancer immunotherapies, most current drugs interact with a protein. Deciphering protein architectures could lead to new treatments.

That was the promise of AlphaFold 2, an AI model from Google DeepMind that predicted how proteins gain their distinctive shapes based on the sequences of their constituent molecules alone. Released in 2020, the tool was a breakthrough half a decade in the making.

But proteins don’t work alone. They inhabit an entire cellular universe and often collaborate with other molecular inhabitants like, for example, DNA, the body’s genetic blueprint.

This week, DeepMind and Isomorphic Labs released a big new update that allows the algorithm to predict how proteins work inside cells. Instead of only modeling their structures, the new version—dubbed AlphaFold 3—can also map a protein’s interactions with other molecules.

For example, could a protein bind to a disease-causing gene and shut it down? Can adding new genes to crops make them resilient to viruses? Can the algorithm help us rapidly engineer new vaccines to tackle existing diseases—or whatever new ones nature throws at us?

“Biology is a dynamic system…you have to understand how properties of biology emerge due to the interactions between different molecules in the cell,” said Demis Hassabis, the CEO of DeepMind, in a press conference.

AlphaFold 3 helps explain “not only how proteins talk to themselves, but also how they talk to other parts of the body,” said lead author Dr. John Jumper.

The team is releasing the new AI online for academic researchers by way of an interface called the AlphaFold Server. With a few clicks, a biologist can run a simulation of an idea in minutes, compared to the weeks or months usually needed for experiments in a lab.

Dr. Julien Bergeron at King’s College London, who builds nano-protein machines but was not involved in the work, said the AI is “transformative science” for speeding up research, which could ultimately lead to nanotech devices powered by the body’s mechanisms alone.

For Dr. Frank Uhlmann at the Francis Crick Laboratory, who gained early access to AlphaFold 3 and used it to study how DNA divides when cells divide, the AI is “democratizing discovery research.”

Molecular Universe

Proteins are finicky creatures. They’re made of strings of molecules called amino acids that fold into intricate three-dimensional shapes that determine what the protein can do.

Sometimes the folding process goes wrong. In Alzheimer’s disease, misfolded proteins clump into dysfunctional blobs that clog up around and inside brain cells.

Scientists have long tried to engineer drugs to break up disease-causing proteins. One strategy is to map protein structure—know thy enemy (and friends). Before AlphaFold, this was done with electron microscopy, which captures a protein’s structure at the atomic level. But it’s expensive, labor intensive, and not all proteins can tolerate the scan.

Which is why AlphaFold 2 was revolutionary. Using amino acid sequences alone—the constituent molecules that make up proteins—the algorithm could predict a protein’s final structure with startling accuracy. DeepMind used AlphaFold to map the structure of nearly all proteins known to science and how they interact. According to the AI lab, in just three years, researchers have mapped roughly six million protein structures using AlphaFold 2.

But to Jumper, modeling proteins isn’t enough. To design new drugs, you have to think holistically about the cell’s whole ecosystem.

It’s an idea championed by Dr. David Baker at the University of Washington, another pioneer in the protein-prediction space. In 2021, Baker’s team released AI-based software called RoseTTAFold All-Atom to tackle interactions between proteins and other biomolecules.

Picturing these interactions can help solve tough medical challenges, allowing scientists to design better cancer treatments or more precise gene therapies, for example.

“Properties of biology emerge through the interactions between different molecules in the cell,” said Hassabis in the press conference. “You can think about AlphaFold 3 as our first big sort of step towards that.”

A Revamp

AlphaFold 3 builds on its predecessor, but with significant renovations.

One way to gauge how a protein interacts with other molecules is to examine evolution. Another is to map a protein’s 3D structure and—with a dose of physics—predict how it can grab onto other molecules. While AlphaFold 2 mostly used an evolutionary approach—training the AI on what we already know about protein evolution in nature—the new version heavily embraces physical and chemical modeling.

Some of this includes chemical changes. Proteins are often tagged with different chemicals. These tags sometimes change protein structure but are essential to protein behavior—they can determine a cell’s fate: life, senescence, or death.

The algorithm’s overall setup makes some use of its predecessor’s machinery to map proteins, DNA, and other molecules and their interactions. But the team also looked to diffusion models—the algorithms behind OpenAI’s DALL-E 2 image generator—to capture structures at the atomic level. Diffusion models are trained to remove noise step by step until they arrive at a prediction of what the image (or, in this case, a 3D model of a biomolecule) should look like without it. This addition made a “substantial change” to performance, said Jumper.
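The denoising idea can be illustrated with a toy sketch: start from random coordinates and repeatedly step toward a target structure. Here the “denoiser” is an oracle that nudges points toward known positions—a stand-in for the learned network, not DeepMind’s actual model:

```python
# Toy illustration of iterative denoising, the idea AlphaFold 3 borrows from
# diffusion models. The "structure" is three fake atom coordinates and the
# denoiser is an oracle that moves points toward the truth -- a stand-in for
# a trained network, not the real AlphaFold 3 architecture.
import random

TARGET = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]  # fake "atoms"

def denoise_step(coords, target, strength=0.3):
    """Move each point a fraction of the way toward its target position."""
    return [
        tuple(c + strength * (t - c) for c, t in zip(point, goal))
        for point, goal in zip(coords, target)
    ]

random.seed(0)
# Start from pure noise...
coords = [tuple(random.gauss(0, 5) for _ in range(3)) for _ in TARGET]

# ...and refine in steps, as in a reverse diffusion process.
for _ in range(40):
    coords = denoise_step(coords, TARGET)

error = max(abs(c - t) for p, g in zip(coords, TARGET) for c, t in zip(p, g))
print(f"max coordinate error after refinement: {error:.6f}")
```

The real model learns the denoising function from data instead of being handed the answer, but the step-by-step refinement loop is the same shape.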

Like AlphaFold 2, the new version has a built-in “sanity check” that indicates how confident it is in a generated model so scientists can proofread its outputs. This has been a core component of all their work, said the DeepMind team. They trained the AI using the Protein Data Bank, an open-source compilation of 3D protein structures that’s constantly updated, including new experimentally validated structures of proteins binding to DNA and other biomolecules.

Pitted against existing software, AlphaFold 3 broke records. One test for molecular interactions between proteins and small molecules—ones that could become medications—succeeded 76 percent of the time. Previous attempts were successful in roughly 42 percent of cases.

When it comes to deciphering protein functions, AlphaFold 3 “seeks to solve the exact same problem [as RoseTTAFold All-Atom]…but is clearly more accurate,” Baker told Singularity Hub.

But the tool’s accuracy depends on which interaction is being modeled. The algorithm isn’t yet great at protein-RNA interactions, for example, Columbia University’s Mohammed AlQuraishi told MIT Technology Review. Overall, accuracy ranged from 40 to more than 80 percent.

AI to Real Life

Unlike previous iterations, DeepMind isn’t open-sourcing AlphaFold 3’s code. Instead, they’re releasing the tool as a free online platform, called AlphaFold Server, that allows scientists to test their ideas for protein interactions with just a few clicks.

AlphaFold 2 required technical expertise to install and run the software. The server, in contrast, can help people unfamiliar with code to use the tool. It’s for non-commercial use only and can’t be reused to train other machine learning models for protein prediction. But it is freely available for scientists to try. The team envisions the software helping develop new antibodies and other treatments at a faster rate. Isomorphic Labs, a spin-off of DeepMind, is already using AlphaFold 3 to develop medications for a variety of diseases.

For Bergeron, the upgrade is “transformative.” Instead of spending years in the lab, it’s now possible to mimic protein interactions in silico—a computer simulation—before beginning the labor- and time-intensive work of investigating promising solutions using cells.

“I’m pretty certain that every structural biology and protein biochemistry research group in the world will immediately adopt this system,” he said.

Image Credit: Google DeepMind

Kategorie: Transhumanismus

Astronomers Discover 27,500 New Asteroids Lurking in Archival Images

8 Květen, 2024 - 21:03

There are well over a million asteroids in the solar system. Most don’t cross paths with Earth, but some do, and there’s a risk one of these will collide with our planet. Taking a census of nearby space rocks, then, is prudent. As conventional wisdom would have it, we’ll need lots of telescopes, time, and teams of astronomers to find them.

But maybe not, according to the B612 Foundation’s Asteroid Institute.

In tandem with Google Cloud, the Asteroid Institute recently announced they’ve spotted 27,500 new asteroids—more than all discoveries worldwide last year—without requiring a single new observation. Instead, over a period of just a few weeks, the team used new software to scour 1.7 billion points of light in some 400,000 images taken over seven years and archived by the National Optical-Infrared Astronomy Research Laboratory (NOIRLab).

To discover new asteroids, astronomers usually need multiple images over several nights (or more) to find moving objects and calculate their orbits. This means they have to make new observations with asteroid discovery in mind. There is also, however, a trove of existing one-time observations made for other purposes, and these are likely packed with photobombing asteroids. But identifying them is difficult and computationally intensive.

Working with the University of Washington, the Asteroid Institute team developed an algorithm, Tracklet-less Heliocentric Orbit Recovery, or THOR, to scan archived images recorded at different times or even by different telescopes. The tool can tell if moving points of light recorded in separate images are the same object. Many of these will be asteroids.
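The core idea—deciding whether detections from different times belong to one moving body—can be sketched in a toy form. Real THOR fits heliocentric orbits; the constant-velocity model and tolerance below are deliberate simplifications for illustration:

```python
# Toy version of the linking problem behind THOR: given point detections from
# images taken at different times, test whether they are consistent with a
# single body in steady motion. The real algorithm fits heliocentric orbits;
# straight-line motion here is a deliberate simplification.

def consistent_with_one_object(detections, tol=0.05):
    """detections: list of (time, x, y) in arbitrary sky coordinates.
    Fit a velocity from the first two points, then check the remaining
    detections fall near the extrapolated track."""
    (t0, x0, y0), (t1, x1, y1) = detections[0], detections[1]
    vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
    for t, x, y in detections[2:]:
        px, py = x0 + vx * (t - t0), y0 + vy * (t - t0)
        if abs(x - px) > tol or abs(y - py) > tol:
            return False
    return True

# An asteroid drifting steadily across three archival images...
asteroid = [(0.0, 10.0, 5.0), (1.0, 10.2, 5.1), (3.0, 10.6, 5.3)]
# ...versus an unrelated transient that doesn't fit the track.
unrelated = [(0.0, 10.0, 5.0), (1.0, 10.2, 5.1), (3.0, 12.0, 9.0)]

print(consistent_with_one_object(asteroid))   # True
print(consistent_with_one_object(unrelated))  # False
```

Testing every candidate grouping this way across 1.7 billion detections is what makes the search so computationally intensive—hence the cloud-scale compute.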

Running THOR on Google Cloud, the team scoured the NOIRLab data and found plenty. Most of the new asteroids are in the main asteroid belt, but more than 100 are near-Earth asteroids. Though the team classified their findings as “high-confidence,” these near-Earth asteroids have not yet been confirmed. They’ll submit their findings to the Minor Planet Center, and ESA and NASA will then verify orbits and assess risk. (The team says they have no reason to believe any pose a risk to Earth.)

While the new software could speed up the pace of discovery, the process still requires volunteers and scientists to manually review the algorithm’s finds. The team plans to use the raw data from the recent run, including the human review, to train an AI model. The hope is that some or all of the manual review process can be automated, making the process even faster.

In the future, the algorithm will go to work on data from the Vera C. Rubin Observatory, a telescope in Chile’s Atacama desert. The telescope, set to begin operations next year, will make twice nightly observations of the sky with asteroid detection in mind. THOR may be able to make discoveries with only one nightly run, freeing the telescope up for other work.

All this is in service of the plan to discover as many Earth-crossing asteroids as possible.

According to NASA, we’ve found over 1.3 million asteroids, 35,000 of which are near-Earth asteroids. Of these, over 90 percent of the biggest and most dangerous—in the same class as the impact that ended the dinosaurs—have been discovered. Scientists are now filling out the list of smaller but still dangerous asteroids. The vast majority of all known asteroids were catalogued this century. Before that we were flying blind.

While no dangerous asteroids are known to be headed our way soon, space agencies are working on a plan of action—sans nukes and Bruce Willis—should we discover one.

In 2022, NASA rammed the DART spacecraft into an asteroid, Dimorphos, to see if it would deflect the space rock’s orbit. This is a planetary defense strategy known as a “kinetic impactor.” Scientists thought DART might change the asteroid’s orbit by 7 minutes. Instead, DART changed Dimorphos’ orbit by a whopping 33 minutes, much of which was due to recoil produced by a giant plume of material ejected by the impact.

The conclusion of scientists studying the aftermath? “Kinetic impactor technology is a viable technique to potentially defend Earth if necessary.” With the caveat: If we have enough time. Such impacts amount to a nudge, so we need years of advance notice.

Algorithms like THOR could help give us that crucial heads up.

Image Credit: B612 Foundation

Kategorie: Transhumanismus

AI Can Now Generate Entire Songs on Demand. What Does This Mean for Music as We Know It?

7 Květen, 2024 - 19:28

In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor—Udio—arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.

After playing with Suno and Udio, I’ve been thinking about what it is exactly they change—and what they might mean not only for the way professionals and amateur artists create music, but the way all of us consume it.

Expressing Emotion Without Feeling It

Generating audio from text prompts in itself is nothing new. However, Suno and Udio have made an obvious development: from a simple text prompt, they generate song lyrics (using a ChatGPT-like text generator), feed them into a generative voice model, and integrate the “vocals” with generated music to produce a coherent song segment.
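The pipeline described above can be sketched with stub functions standing in for each generative model. All names here are hypothetical—neither Suno nor Udio publishes its internals:

```python
# Sketch of the pipeline the article describes: prompt -> lyrics -> vocals ->
# integrated song. The stub functions below stand in for the generative
# models; every name and return value is illustrative, not an actual API.

def generate_lyrics(prompt: str) -> str:
    # Stand-in for a ChatGPT-like text generator producing song lyrics.
    return f"[verse about {prompt}]"

def synthesize_vocals(lyrics: str) -> str:
    # Stand-in for a generative voice model "singing" the lyrics.
    return f"vocals({lyrics})"

def generate_backing_track(prompt: str) -> str:
    # Stand-in for the music-generation model.
    return f"music in the style of {prompt}"

def make_song(prompt: str) -> str:
    lyrics = generate_lyrics(prompt)
    vocals = synthesize_vocals(lyrics)
    backing = generate_backing_track(prompt)
    # Integrating vocals and music into a coherent segment is the hard part.
    return f"mix({vocals} + {backing})"

song = make_song("dark techno")
print(song)
```

Each stage existed separately before; the notable step is the final integration into a coherent, sung song segment.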

This integration is a small but remarkable feat. The systems are very good at making up coherent songs that sound expressively “sung” (there I go anthropomorphizing).

The effect can be uncanny. I know it’s AI, but the voice can still cut through with emotional impact. When the music performs a perfectly executed end-of-bar pirouette into a new section, my brain gets some of those little sparks of pattern-processing joy that I might get listening to a great band.

To me this highlights something sometimes missed about musical expression: AI doesn’t need to experience emotions and life events to successfully express them in music that resonates with people.

Music as an Everyday Language

Like other generative AI products, Suno and Udio were trained on vast amounts of existing work by real humans—and there is much debate about those humans’ intellectual property rights.

Nevertheless, these tools may mark the dawn of mainstream AI music culture. They offer new forms of musical engagement that people will just want to use, to explore, to play with, and actually listen to for their own enjoyment.

AI capable of “end-to-end” music creation is arguably not technology for makers of music, but for consumers of music. For now it remains unclear whether users of Udio and Suno are creators or consumers—or whether the distinction is even useful.

A long-observed phenomenon in creative technologies is that as something becomes easier and cheaper to produce, it is used for more casual expression. As a result, the medium goes from an exclusive high art form to more of an everyday language—think what smartphones have done to photography.

So imagine you could send your father a professionally produced song all about him for his birthday, with minimal cost and effort, in a style of his preference—a modern-day birthday card. Researchers have long considered this eventuality, and now we can do it. Happy birthday, Dad!

Mr Bown’s Blues. Generated by Oliver Bown using Udio [3.75 MB (download)]

Can You Create Without Control?

Whatever these systems have achieved and may achieve in the near future, they face a glaring limitation: the lack of control.

Text prompts are often not much good as precise instructions, especially in music. So these tools are fit for blind search—a kind of wandering through the space of possibilities—but not for accurate control. (That’s not to diminish their value. Blind search can be a powerful creative force.)

Viewing these tools as a practicing music producer, things look very different. Although Udio’s about page says “anyone with a tune, some lyrics, or a funny idea can now express themselves in music,” I don’t feel I have enough control to express myself with these tools.

I can see them being useful to seed raw materials for manipulation, much like samples and field recordings. But when I’m seeking to express myself, I need control.

Using Suno, I had some fun finding the most gnarly dark techno grooves I could get out of it. The result was something I would absolutely use in a track.

Cheese Lovers’ Anthem. Generated by Oliver Bown using Suno [2.75 MB (download)]


But I found I could also just gladly listen. I felt no compulsion to add anything or manipulate the result to add my mark.

And many jurisdictions have declared that you won’t be awarded copyright for something just because you prompted it into existence with AI.

For a start, the output depends just as much on everything that went into the AI—including the creative work of millions of other artists. Arguably, you didn’t do the work of creation. You simply requested it.

New Musical Experiences in the No-Man’s Land Between Production and Consumption

So Udio’s declaration that anyone can express themselves in music is an interesting provocation. The people who use tools like Suno and Udio may be considered more consumers of music AI experiences than creators of music AI works. Or, as with many technological impacts, we may need to come up with new concepts for what they’re doing.

A shift to generative music may draw attention away from current forms of musical culture, just as the era of recorded music saw the diminishing (but not death) of orchestral music, which was once the only way to hear complex, timbrally rich and loud music. If engagement in these new types of music culture and exchange explodes, we may see reduced engagement in the traditional music consumption of artists, bands, radio and playlists.

While it is too early to tell what the impact will be, we should be attentive. The effort to defend existing creators’ intellectual property protections, a significant moral rights issue, is part of this equation.

But even if it succeeds, I believe it won’t fundamentally address this potentially explosive shift in culture. Claims that such music is inferior have historically done little to halt cultural change, as with techno, or even jazz long ago. Government AI policies may need to look beyond these issues to understand how music works socially and to ensure that our musical cultures are vibrant, sustainable, enriching, and meaningful for both individuals and communities.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Pawel Czerwinski / Unsplash

Kategorie: Transhumanismus

A Massive Study Is Revealing Why Exercise Is So Good for Our Health

6 Květen, 2024 - 21:30

We all know that exercise is good for us.

A brisk walk of roughly an hour a day can stave off chronic diseases, including heart or blood vessel issues and Type 2 diabetes. Regular exercise delays memory loss due to aging, boosts the immune system, slashes stress, and may even increase lifespan.

For decades, scientists have tried to understand why. Throughout the body, our organs and tissues release a wide variety of molecules during—and even after—exercise to reap its benefits. But no single molecule works alone. The hard part is understanding how they collaborate in networks after exercise.

Enter the Molecular Transducers of Physical Activity Consortium (MoTrPAC) project. Established nearly a decade ago and funded by the National Institutes of Health (NIH), the project aims to create comprehensive molecular maps of how genes and proteins change after exercise in both rodents and people. Rather than focusing on single proteins or genes, the project takes a Google Earth approach—let’s see the overall picture.

It’s not simply for scientific curiosity. If we can find important molecular processes that trigger exercise benefits, we could potentially mimic those reactions using medications and help people who physically can’t work out—a sort of “exercise in a pill.”

This month, the project announced multiple results.

In one study, scientists built an atlas of bodily changes before, during, and after exercise in rats. Altogether, the team collected nearly 9,500 samples across multiple tissues to examine how exercise changes gene expression across the body. Another study detailed differences between sexes after exercise. A third team mapped exercise-related genes to those associated with diseases.

According to the project’s NIH webpage: “When the MoTrPAC study is completed, it will be the largest research study examining the link between exercise and its improvement of human health.”

Work It

Our tissues are chatterboxes. The gut “talks” to the brain through a vast maze of molecules. Muscles pump out proteins to fine-tune immune system defenses. Plasma—the liquid part of blood—can transfer the learning and memory benefits of running when injected into “couch potato” mice and delay cognitive decline.

Over the years, scientists have identified individual molecules and processes that could mediate these effects, but the health benefits are likely due to networks of molecules working together.

“MoTrPAC was launched to fill an important gap in exercise research,” said former NIH director Dr. Francis Collins in a 2020 press release. “It shifts focus from a specific organ or disease to a fundamental understanding of exercise at the molecular level—an understanding that may lead to personalized, prescribed exercise regimens based on an individual’s needs and traits.”

The project has two arms. One observes rodents before, during, and after wheel running to build comprehensive maps of molecular changes due to exercise. These maps aim to capture gene expression alongside metabolic and epigenetic changes in multiple organs.

Another arm will recruit roughly 2,600 healthy volunteers aged 10 to over 60 years old. With a large pool of participants, the team hopes to account for variation between people and even identify differences in the body’s response to exercise based on age, gender, or race. The volunteers will undergo 12 weeks of exercise, either endurance training—such as long-distance running—or weightlifting.

Altogether, the goal is to detect how exercise affects cells at a molecular level in multiple tissue types—blood, fat, and muscle.

Exercise Encyclopedia

Last week, MoTrPAC released an initial wave of findings.

In one study, the group collected blood and 18 different tissue samples from adult rats, both male and female, as they happily ran for a week to two months. The team then screened how the body changes with exercise by comparing rats that work out with “couch potato” rats as a baseline. Physical training increased the rats’ aerobic capacity—the amount of oxygen the body can use—by roughly 17 percent.

Next, the team analyzed the molecular fingerprints of exercise in whole blood, plasma, and 18 solid tissues, including heart, liver, lung, kidney, fat tissue, and the hippocampus, a brain region associated with memory. They used an impressive array of tools that, for example, captured changes in overall gene expression and the epigenetic landscape. Others mapped differences in the body’s proteins, fat, immune system, and metabolism.

“Altogether, datasets were generated from 9,466 assays across 211 combinations of tissues and molecular platforms,” wrote the team.

Using an AI-based method, they integrated the results across time into a comprehensive molecular map. The map pinpointed multiple molecular changes that could counter liver disease and inflammatory bowel disease and protect against heart problems and tissue injury.

All this represents “the first whole-organism molecular map” capturing how exercise changes the body, wrote the team. (All of the data is free to explore.)

Venus and Mars

Most previous studies on exercise in rodents focused on males. What about the ladies?

After analyzing the MoTrPAC database, another study found that exercise changes the body’s molecular signaling differently depending on biological sex.

After running, female rats triggered genes in white fat—the type under the skin—related to insulin signaling and the body’s ability to form fat. Meanwhile, males showed molecular signatures of a ramped up metabolism.

With consistent exercise, male rats rapidly lost fat and weight, whereas females maintained their curves but with improved insulin signaling, which might protect them against heart diseases.

A third study integrated gene expression data collected from exercised rats with disease-relevant gene databases previously assembled from human studies. The goal was to link workout-related genes in a particular organ or tissue with a disease or other health outcome—what the authors call “trait-tissue-gene triplets.” Overall, they found 5,523 triplets “to serve as a valuable starting point for future investigations,” they wrote.

We’re only scratching the surface of the complex puzzle that is exercise. Through extensive mapping efforts, the project aims to eventually tailor workout regimens for people with chronic diseases or identify key “druggable” components that could confer some health benefits of exercise with a pill.

“This is an unprecedented large-scale effort to begin to explore—in extreme detail—the biochemical, physiological, and clinical impact of exercise,” Dr. Russell Tracy at the University of Vermont, a MoTrPAC member, said in a press release.

Image Credit: Fitsum Admasu / Unsplash

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through May 4)

May 4, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Sam Altman Says Helpful Agents Are Poised to Become AI’s Killer Function
James O’Donnell | MIT Technology Review
“Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a ‘super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.’ It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.”

COMPUTING

Expect a Wave of Wafer-Scale Computers
Samuel K. Moore | IEEE Spectrum
“At TSMC’s North American Technology Symposium on Wednesday, the company detailed both its semiconductor technology and chip-packaging technology road maps. While the former is key to keeping the traditional part of Moore’s Law going, the latter could accelerate a trend toward processors made from more and more silicon, leading quickly to systems the size of a full silicon wafer. …In 2027, you will get a full-wafer integration that delivers 40 times as much compute power, more than 40 reticles’ worth of silicon, and room for more than 60 high-bandwidth memory chips, TSMC predicts.”

FUTURE

Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?
Will Knight | Wired
“With the publication of his last book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity. …Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World, considers a future in which humanity has successfully developed superintelligent machines but averted disaster.”

TECH

AI Start-Ups Face a Rough Financial Reality Check
Cade Metz, Karen Weise, and Tripp Mickle | The New York Times
“The AI revolution, it is becoming clear in Silicon Valley, is going to come with a very big price tag. And the tech companies that have bet their futures on it are scrambling to figure out how to close the gap between those expenses and the profits they hope to make somewhere down the line.”

ROBOTICS

Every Tech Company Wants to Be Like Boston Dynamics
Jacob Stern | The Atlantic
“Clips of robots running faster than Usain Bolt and dancing in sync, among many others, have helped [Boston Dynamics] reach true influencer status. Its videos have now been viewed more than 800 million times, far more than those of much bigger tech companies, such as Tesla and OpenAI. The creator of Black Mirror even admitted that an episode in which killer robot dogs chase a band of survivors across an apocalyptic wasteland was directly inspired by Boston Dynamics’ videos.”

ETHICS

ChatGPT Shows Better Moral Judgment Than a College Undergrad
Kyle Orland | Ars Technica
“In ‘Attributions toward artificial agents in a modified Moral Turing Test’…[Georgia State University] researchers found that morality judgments given by ChatGPT4 were ‘perceived as superior in quality to humans’ along a variety of dimensions like virtuosity and intelligence. But before you start to worry that philosophy professors will soon be replaced by hyper-moral AIs, there are some important caveats to consider.”

SPACE

New Space Company Seeks to Solve Orbital Mobility With High Delta-V Spacecraft
Eric Berger | Ars Technica
“[Portal Space Systems founder, Jeff Thornburg] envisions a fleet of refuelable Supernova vehicles at medium-Earth and geostationary orbit capable of swooping down to various orbits and providing services such as propellant delivery, mobility, and observation for commercial and military satellites. His vision is to provide real-time, responsive capability for existing satellites. If one needs to make an emergency maneuver, a Supernova vehicle could be there within a couple of hours. ‘If we’re going to have a true space economy, that means logistics and supply services,’ he said.”

AUTOMATION

Google’s Waymo Is Expanding Its Self-Driving ‘Robotaxi’ Testing
William Gavin | Quartz
“Waymo plans to soon start testing fully autonomous rides across California’s San Francisco Peninsula, despite criticism and concerns from residents and city officials. In the coming weeks, Waymo employees will begin testing rides without a human driver on city streets north of San Mateo, the company said Friday.”

VIRTUAL REALITY

Ukraine Unveils AI-Generated Foreign Ministry Spokesperson
Agence France-Presse | The Guardian
“Dressed in a dark suit, the spokesperson introduced herself as Victoria Shi, a ‘digital person,’ in a presentation posted on social media. The figure gesticulates with her hands and moves her head as she speaks. The foreign ministry’s press service said that the statements given by Shi would not be generated by AI but ‘written and verified by real people.'”

Image Credit: Drew Walker / Unsplash

Category: Transhumanism

This Plastic Is Embedded With Bacterial Spores That Break It Down After It’s Thrown Out

May 2, 2024 - 20:49

Getting microbes to eat plastic is a frequently touted solution to our growing waste problem, but making the approach practical is tricky. A new technique that impregnates plastic with the spores of plastic-eating bacteria could make the idea a reality.

The impact of plastic waste on the environment and our health has gained increasing attention in recent years. The latest round of UN talks aiming for a global treaty to end plastic pollution concluded in Ottawa, Canada, earlier this week, though considerable disagreements remain.

Recycling will inevitably be a crucial ingredient in any plan to deal with the problem. But a 2022 report from the Organization for Economic Cooperation and Development found only 9 percent of plastic waste ever gets recycled. That’s partly because existing recycling approaches are energy intensive and time consuming.

This has spurred a search for new approaches, and one of the most promising is the use of bacteria to break down plastics, either by rendering them harmless or using them to produce building blocks that can be repurposed into other valuable materials and chemicals. The main problem with the approach is making sure plastic waste ends up in the same place as these plastic-loving bacteria.

Now, researchers have come up with an ingenious solution: embed microbes in plastic during the manufacturing process. Not only did the approach result in 93 percent of the plastic biodegrading within five months, but it even increased the strength and stretchability of the material.

“What’s remarkable is that our material breaks down even without the presence of additional microbes,” project co-leader Jon Pokorski from the University of California San Diego said in a press release.

“Chances are, most of these plastics will likely not end up in microbially rich composting facilities. So this ability to self-degrade in a microbe-free environment makes our technology more versatile.”

The main challenge when it came to incorporating bacteria into plastics was making sure they survived the high temperatures involved in manufacturing the material. The researchers worked with a soft plastic called thermoplastic polyurethane (TPU), which is used in footwear, cushions, and memory foam. TPU is manufactured by melting pellets of the material at around 275 degrees Fahrenheit and then extruding it into the desired shape.

Given the need to survive these high temperatures, the researchers selected a plastic-eating bacterium called Bacillus subtilis, which can form spores that allow it to survive harsh conditions. Even then, they discovered more than 90 percent of the bacteria were killed in under a minute at those temperatures.

So, the team used a technique called adaptive laboratory evolution to create a more heat-tolerant strain of the bacteria. They dunked the spores in boiling water for increasing lengths of time, collecting the survivors, growing the population back up, and then repeating the process. Over time, this selected for mutations that conferred greater heat tolerance, until the researchers were left with a strain that was able to withstand the manufacturing process.
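The selection loop described above can be sketched in code. This is a purely illustrative toy model, not the study's actual protocol; the population size, tolerance values, and mutation step are invented numbers chosen only to show how repeated heat-shock-and-regrow cycles ratchet a trait upward.

```python
import random

random.seed(1)

# Toy sketch of adaptive laboratory evolution (illustrative numbers only, not
# from the study): each spore has a heat tolerance; every round the population
# is "boiled," the survivors regrow to full size with small random mutations,
# and the cycle repeats.
POP_SIZE = 1000

def evolve_heat_tolerance(rounds, heat=50.0):
    population = [random.gauss(40, 5) for _ in range(POP_SIZE)]  # starting tolerances
    start_mean = sum(population) / POP_SIZE
    for _ in range(rounds):
        # the "boiling water" step: only sufficiently heat-tolerant spores survive
        survivors = [t for t in population if t > heat] or [max(population)]
        # survivors repopulate, each offspring mutating slightly
        population = [random.choice(survivors) + random.gauss(0, 1) for _ in range(POP_SIZE)]
    return start_mean, sum(population) / POP_SIZE

before, after = evolve_heat_tolerance(rounds=10)
print(after > before)  # True: repeated selection shifts the whole population
```

Each round discards the bulk of the population and rebuilds it from the tolerant tail, which is why only a handful of cycles produce a strain far hardier than the starting stock.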

When they incorporated the spores into the plastic, they were surprised to find the bacteria actually improved the mechanical properties of the material. In essence, the spores acted like steel rebar in concrete, making it harder to break and increasing its stretchability.

To test whether the impregnated spores could help the plastic biodegrade, the researchers took small strips of the plastic and put them in sterilized compost. After five months, they found the strips had lost 93 percent of their mass compared to 44 percent for TPU without spores, which suggests the spores were reactivated by nutrients in the compost and helped degrade the plastic substantially faster.

It’s unclear if the approach would work with other plastics, though the researchers say they plan to find out. There is also a danger the spores could reactivate before the plastic is disposed of, which could shorten the life of any products made with it. Perhaps most crucially, plastics researcher Steve Fletcher from the University of Portsmouth in the UK told the BBC that this kind of technology could distract from efforts to limit plastic waste.

“Care must be taken with potential solutions of this sort, which could give the impression that we should worry less about plastic pollution because any plastic leaking into the environment will quickly, and ideally safely, degrade,” he said. “For the vast majority of plastics, this is not the case.”

Given the scale of the plastic pollution problem today though, any attempt to mitigate the harm should be welcomed. While it’s early days, the prospect of making plastic that can biodegrade itself could go a long way towards tackling the problem.

Image Credit: David Baillot/UC San Diego Jacobs School of Engineering

Category: Transhumanism

AI Is Gathering a Growing Amount of Training Data Inside Virtual Worlds

May 1, 2024 - 18:52

To anyone living in a city where autonomous vehicles operate, it would seem they need a lot of practice. Robotaxis travel millions of miles a year on public roads in an effort to gather data from sensors—including cameras, radar, and lidar—to train the neural networks that operate them.

In recent years, due to a striking improvement in the fidelity and realism of computer graphics technology, simulation is increasingly being used to accelerate the development of these algorithms. Waymo, for example, says its autonomous vehicles have already driven some 20 billion miles in simulation. In fact, all kinds of machines, from industrial robots to drones, are gathering a growing amount of their training data and practice hours inside virtual worlds.

According to Gautham Sholingar, a senior manager at Nvidia focused on autonomous vehicle simulation, one key benefit is accounting for obscure scenarios for which it would be nearly impossible to gather training data in the real world.

“Without simulation, there are some scenarios that are just hard to account for. There will always be edge cases which are difficult to collect data for, either because they are dangerous and involve pedestrians or things that are challenging to measure accurately like the velocity of faraway objects. That’s where simulation really shines,” he told me in an interview for Singularity Hub.

While it isn’t ethical to have someone run unexpectedly into a street to train AI to handle such a situation, it’s significantly less problematic for an animated character inside a virtual world.

Industrial use of simulation has been around for decades, as Sholingar pointed out, but a convergence of improvements in computing power, the ability to model complex physics, and the development of the GPUs powering today’s graphics suggests we may be witnessing a turning point in the use of simulated worlds for AI training.

Graphics quality matters because of the way AI “sees” the world.

When a neural network processes image data, it’s converting each pixel’s color into a corresponding number. For black and white images, the number ranges from 0, which indicates a fully black pixel, up to 255, which is fully white, with numbers in between representing some variation of grey. For color images, the widely used RGB (red, green, blue) model can correspond to over 16 million possible colors. So as graphics rendering technology becomes ever more photorealistic, the distinction between pixels captured by real-world cameras and ones rendered in a game engine is falling away.
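The pixel-to-number conversion described above can be shown in a few lines of plain Python (a minimal sketch; real pipelines use array libraries, but the arithmetic is the same):

```python
# A tiny 2x2 grayscale "image": each pixel is an integer from 0 (black) to 255 (white).
gray = [
    [0, 128],
    [200, 255],
]

# A color pixel in the RGB model holds three such channels, so there are
# 256 ** 3 = 16,777,216 possible colors—the "over 16 million" in the text.
red_pixel = (255, 0, 0)
num_colors = 256 ** 3

# Neural networks typically scale these integers to floats in [0, 1] before training.
normalized = [[value / 255 for value in row] for row in gray]
print(num_colors)        # 16777216
print(normalized[1][1])  # 1.0
```

Whether those numbers came from a camera sensor or a game engine's renderer is invisible at this level, which is why photorealistic simulation can stand in for real footage.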

Simulation is also a powerful tool because it’s increasingly able to generate synthetic data for sensors beyond just cameras. While high-quality graphics are both appealing and familiar to human eyes, which is useful in training camera sensors, rendering engines are also able to generate radar and lidar data as well. Combining these synthetic datasets inside a simulation allows the algorithm to train using all the various types of sensors commonly used by AVs.

Due to its expertise in producing the GPUs needed to generate high-quality graphics, Nvidia has positioned itself as a leader in the space. In 2021, the company launched Omniverse, a simulation platform capable of rendering high-quality synthetic sensor data and modeling real-world physics relevant to a variety of industries. Now, developers are using Omniverse to generate sensor data to train autonomous vehicles and other robotic systems.

In our discussion, Sholingar described some specific ways these types of simulations may be useful in accelerating development. The first involves the fact that with a bit of retraining, perception algorithms developed for one type of vehicle can be re-used for other types as well. However, because the new vehicle has a different sensor configuration, the algorithm will be seeing the world from a new point of view, which can reduce its performance.

“Let’s say you developed your AV on a sedan, and you need to go to an SUV. Well, to train it then someone must change all the sensors and remount them on an SUV. That process takes time, and it can be expensive. Synthetic data can help accelerate that kind of development,” Sholingar said.

Another area involves training algorithms to accurately detect faraway objects, especially in highway scenarios at high speeds. Since objects over 200 meters away often appear as just a few pixels and can be difficult for humans to label, there isn’t typically enough training data for them.

“For the far ranges, where it’s hard to annotate the data accurately, our goal was to augment those parts of the dataset,” Sholingar said. “In our experiment, using our simulation tools, we added more synthetic data and bounding boxes for cars at 300 meters and ran experiments to evaluate whether this improves our algorithm’s performance.”

According to Sholingar, these efforts allowed their algorithm to detect objects more accurately beyond 200 meters, something only made possible by their use of synthetic data.

While many of these developments are due to better visual fidelity and photorealism, Sholingar also stressed that appearance is only one aspect of what makes a simulation realistic.

“There is a tendency to get caught up in how beautiful the simulation looks since we see these visuals, and it’s very pleasing. What really matters is how the AI algorithms perceive these pixels. But beyond the appearance, there are at least two other major aspects which are crucial to mimicking reality in a simulation.”

First, engineers need to ensure there is enough representative content in the simulation. This is important because an AI must be able to detect a diversity of objects in the real world, including pedestrians with different colored clothes or cars with unusual shapes, like roof racks with bicycles or surfboards.

Second, simulations have to depict a wide range of pedestrian and vehicle behavior. Machine learning algorithms need to know how to handle scenarios where a pedestrian stops to look at their phone or pauses unexpectedly when crossing a street. Other vehicles can behave in unexpected ways too, like cutting in close or pausing to wave an oncoming vehicle forward.

“When we say realism in the context of simulation, it often ends up being associated only with the visual appearance part of it, but I usually try to look at all three of these aspects. If you can accurately represent the content, behavior, and appearance, then you can start moving in the direction of being realistic,” he said.

It also became clear in our conversation that while simulation will be an increasingly valuable tool for generating synthetic data, it isn’t going to replace real-world data collection and testing.

“We should think of simulation as an accelerator to what we do in the real world. It can save time and money and help us with a diversity of edge-case scenarios, but ultimately it is a tool to augment datasets collected from real-world data collection,” he said.

Beyond Omniverse, the wider industry of helping “things that move” develop autonomy is undergoing a shift toward simulation. Tesla has announced it’s using similar technology to develop automation in Unreal Engine, while Canadian startup Waabi is taking a simulation-first approach to training its self-driving software. Microsoft, meanwhile, has experimented with a similar tool to train autonomous drones, although the project was recently discontinued.

While training and testing in the real world will remain a crucial part of developing autonomous systems, the continued improvement of physics and graphics engine technology means that virtual worlds may offer a low-stakes sandbox for machine learning algorithms to mature into functional tools that can power our autonomous future.

Image Credit: Nvidia

Category: Transhumanism

Mind-Bending Math Could Stop Quantum Hackers—but Few Understand It

April 30, 2024 - 20:04

Imagine the tap of a card that bought you a cup of coffee this morning also let a hacker halfway across the world access your bank account and buy themselves whatever they liked. Now imagine it wasn’t a one-off glitch, but it happened all the time: Imagine the locks that secure our electronic data suddenly stopped working.

This is not a science fiction scenario. It may well become a reality when sufficiently powerful quantum computers come online. These devices will use the strange properties of the quantum world to untangle secrets that would take ordinary computers more than a lifetime to decipher.

We don’t know when this will happen. However, many people and organizations are already concerned about so-called “harvest now, decrypt later” attacks, in which cybercriminals or other adversaries steal encrypted data now and store it away for the day when they can decrypt it with a quantum computer.

As the advent of quantum computers grows closer, cryptographers are trying to devise new mathematical schemes to secure data against their hypothetical attacks. The mathematics involved is highly complex—but the survival of our digital world may depend on it.

‘Quantum-Proof’ Encryption

The task of cracking much current online security boils down to the mathematical problem of finding two numbers that, when multiplied together, produce a third number. You can think of this third number as a key that unlocks the secret information. As this number gets bigger, the amount of time it takes an ordinary computer to solve the problem becomes longer than our lifetimes.
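A toy version of this problem fits in a few lines (illustrative only, not real cryptography; the primes here are tiny, whereas real keys use primes hundreds of digits long, which is what puts brute force beyond a classical computer's lifetime):

```python
# Toy illustration of factoring-based security: the public "key" is the product
# of two secret primes, and breaking it means recovering those primes.
def crack_by_trial_division(n):
    """Find a factor of n by brute force—the work grows with the size of n."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return None  # n is prime

p, q = 2003, 3989           # tiny secret primes, for illustration
public_key = p * q          # multiplying is easy: one operation
print(crack_by_trial_division(public_key))  # (2003, 3989) — feasible only because n is tiny
```

The asymmetry is the whole trick: computing `p * q` is instant, while undoing it scales badly. Shor's algorithm on a quantum computer would erase that asymmetry, which is why these schemes need replacing.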

Future quantum computers, however, should be able to crack these codes much more quickly. So the race is on to find new encryption algorithms that can stand up to a quantum attack.

The US National Institute of Standards and Technology has been calling for proposed “quantum-proof” encryption algorithms for years, but so far few have withstood scrutiny. (One proposed algorithm, called Supersingular Isogeny Key Encapsulation, was dramatically broken in 2022 with the aid of Australian mathematical software called Magma, developed at the University of Sydney.)

The race has been heating up this year. In February, Apple updated the security system for the iMessage platform to protect data that may be harvested for a post-quantum future.

Two weeks ago, scientists in China announced they had installed a new “encryption shield” to protect the Origin Wukong quantum computer from quantum attacks.

Around the same time, cryptographer Yilei Chen announced he had found a way quantum computers could attack an important class of algorithms based on the mathematics of lattices, which were considered some of the hardest to break. Lattice-based methods are part of Apple’s new iMessage security, as well as two of the three frontrunners for a standard post-quantum encryption algorithm.

What Is a Lattice-Based Algorithm?

A lattice is an arrangement of points in a repeating structure, like the corners of tiles in a bathroom or the atoms in a diamond crystal. The tiles are two dimensional and the atoms in diamond are three dimensional, but mathematically we can make lattices with many more dimensions.

Most lattice-based cryptography is based on a seemingly simple question: If you hide a secret point in such a lattice, how long will it take someone else to find the secret location starting from some other point? This game of hide and seek can underpin many ways to make data more secure.

A variant of the lattice problem called “learning with errors” is considered to be too hard to break even on a quantum computer. As the size of the lattice grows, the amount of time it takes to solve is believed to increase exponentially, even for a quantum computer.
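The "learning with errors" idea can be sketched concretely: hide a secret vector behind linear equations that are each off by a tiny random error. This is a toy sketch with made-up parameters (real schemes use far larger moduli and dimensions), not any standardized algorithm:

```python
import random

random.seed(0)
q = 97                       # small modulus, for illustration only
n = 4                        # lattice dimension; security grows with n
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample(secret):
    """One noisy equation: (a, b) with b = <a, secret> + small error (mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    error = random.choice([-1, 0, 1])  # the small "error" that hides the secret
    b = (sum(x * s for x, s in zip(a, secret)) + error) % q
    return a, b

samples = [lwe_sample(secret) for _ in range(6)]

# Each published equation is off by at most 1 from the true inner product:
a, b = samples[0]
residual = (b - sum(x * s for x, s in zip(a, secret))) % q
print(residual in (0, 1, q - 1))  # True
```

Without the error term, a handful of samples and Gaussian elimination would recover the secret immediately; with it, the noise compounds under elimination, and recovering the secret is believed to take exponentially longer as the dimension grows, even for a quantum computer.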

The lattice problem—like the problem of finding the factors of a large number on which so much current encryption depends—is closely related to a deep open problem in mathematics called the “hidden subgroup problem.”

Yilei Chen’s approach suggested quantum computers may be able to solve lattice-based problems more quickly under certain conditions. Experts scrambled to check his results—and rapidly found an error. After the error was discovered, Chen published an updated version of his paper describing the flaw.

Despite this discovery, Chen’s paper has made many cryptographers less confident in the security of lattice-based methods. Some are still assessing whether Chen’s ideas can be extended to new pathways for attacking these methods.

More Mathematics Required

Chen’s paper set off a storm in the small community of cryptographers who are equipped to understand it. However, it received almost no attention in the wider world—perhaps because so few people understand this kind of work or its implications.

Last year, when the Australian government published a national quantum strategy to make the country “a leader of the global quantum industry” where “quantum technologies are integral to a prosperous, fair and inclusive Australia,” there was an important omission: It didn’t mention mathematics at all.

Australia does have many leading experts in quantum computing and quantum information science. However, making the most of quantum computers—and defending against them—will require deep mathematical training to produce new knowledge and research.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ZENG YILI / Unsplash

Category: Transhumanism

Scientists Find a Surprising Way to Transform A and B Blood Types Into Universal Blood

April 30, 2024 - 00:04

Blood transfusions save lives. In the US alone, people receive around 10 million units each year. But banked blood is always in short supply—especially the “universal donor” type O.

Surprisingly, the gut microbiome may hold a solution for boosting universal blood supplies by chemically converting other blood types into the universal O.

Infusing the wrong blood type—say, giving type A blood to a type B patient—triggers deadly immune reactions. Type O blood, however, is compatible with nearly everyone. It’s in especially high demand following hurricanes, earthquakes, wildfires, and other crises because doctors have to rapidly treat as many people as possible.

Sometimes, blood banks have an imbalance of different blood types—for example, too much type A, not enough universal O. This week, a team from Denmark and Sweden discovered a cocktail of enzymes that readily converts type A and type B blood into the universal donor. Found in gut bacteria, the enzymes chew up an immune-stimulating sugar molecule dotted on the surfaces of type A and B blood cells, removing their tendency to spark an immune response.

Compared to previous attempts, the blend of enzymes converted A and B blood types to type O blood with “remarkably high efficiencies,” the authors wrote.

Wardrobe Change

Blood types can be characterized in multiple ways, but roughly speaking, the types come in four main forms: A, B, AB, and O.

These types are distinguished by what kinds of sugar molecules—called antigens—cover the surfaces of red blood cells. Antigens can trigger immune rejection if mismatched. Type A blood has A antigens; type B has B antigens; type AB has both. Type O has neither.

This is why type O blood can be used for most people. It doesn’t normally trigger an immune response and is highly coveted during emergencies when it’s difficult to determine a person’s blood type. One obvious way to boost type O stock is to recruit more donors, but that’s not always possible. As a workaround, scientists have tried to artificially produce type O blood using stem cell technology. While successful in the lab, it’s expensive and hard to scale up for real-world demands.

An alternative is removing the A and B antigens from donated blood. First proposed in the 1980s, this approach uses enzymes to break down the immune-stimulating sugar molecules. Like licking an ice cream cone, as the antigens gradually melt away, the blood cells are stripped of their A or B identity, eventually transforming into the universal O blood type.

The technology sounds high-tech, but breaking down sugars is something our bodies naturally do every day, thanks to microbes in the gut that happily digest our food. This got scientists wondering: Can we hunt down enzymes in the digestive tract to convert blood types?

Over a half decade ago, a team from the University of British Columbia made headlines by using bacterial enzymes found in the gut microbiome to transform type A blood to type O. Some gut bugs eat away at mucus—a slimy substance made of sugary molecules covering the gut. These mucus linings are molecularly similar to the antigens on red blood cells.

So, digestive enzymes from gut microbes could potentially chomp away A and B antigens.

In one test, the team took samples of human poop (yup), which carry enzymes from the gut microbiome, and looked for DNA encoding enzymes that could break down red blood cell sugar chains.

They eventually discovered two enzymes from a single bacterial strain. Tested in human blood, the duo readily stripped away type A antigens, converting it into universal type O.

The study was a proof of concept for transforming one blood type into another, with potentially real-world implications. Type A blood—common in Europe and the US—makes up roughly one-third of the supply of donations. A technology that converts it to universal O could boost blood transplant resources in this part of the world.

“This is a first, and if these data can be replicated, it is certainly a major advance,” Dr. Harvey Klein at the National Institutes of Health’s Clinical Center, who was not involved in the work, told Science at the time.

There’s one problem though. Converted blood doesn’t always work.

Let’s Talk ABO+

When tested in clinical trials, converted blood has raised safety concerns. Even when the A or B antigens were completely removed from donated blood, earlier studies found hints of an immune mismatch between the transformed donor blood and the recipient. In other words, the engineered O blood sometimes still triggered an immune response.

Why?

There’s more to blood types than classic ABO. Type A is composed of two different subtypes—one with higher A antigen levels than the other. Type B, common in people of Asian and African descent, also comes in “extended” forms. These recently discovered sugar chains are longer and harder to break down than in the classic versions. Called “extended antigens,” they could be why some converted blood still stimulates the immune system after transfusion.

The new study tackled these extended forms by again peeking into gut bacteria DNA. One bacterial strain, A. muciniphila, stood out. These bugs contain enzymes that work like a previously discovered version that chops up type A and B antigens, but surprisingly, they also strip away extended versions of both antigens.

These enzymes weren’t previously known to science, sharing just 30 percent similarity with a previously characterized benchmark enzyme that cuts up B and extended B antigens.

Using cells from different donors, the scientists engineered an enzyme soup that rapidly wiped out blood antigens. The strategy is “unprecedented,” wrote the team.

Although the screen found multiple enzymes capable of blood type conversion, each had limited effects on its own. But when mixed and matched, the recipe transformed donated type B cells into type O, with limited immune responses when mixed with other blood types.

A similar strategy yielded three different enzymes to cut out the problematic A antigen and, in turn, transform the blood to type O. Some people secrete the antigen into other bodily fluids—for example, saliva, sweat, or tears. Others, dubbed non-secreters, have less of these antigens floating around their bodies. Using blood donated from both secreters and non-secreters, the team treated red blood cells to remove the A antigen and its extended versions.

When mixed with other blood types, the enzyme cocktail lowered their immune response, although with lower efficacy than cells transformed from type B to O.

By mapping the structures of these enzymes, the team found some parts increased their ability to chop up sugar chains. Focusing on these hot-spot structures, scientists are set to hunt down other naturally-derived enzymes—or use AI to engineer ones with better efficacy and precision.

The system still needs to be tested in humans. And the team didn’t address other blood antigens, such as the Rh system, which is what makes blood types positive or negative. Still, bacterial enzymes appear to be an unexpected but promising way to engineer universal blood.

Image Credit: Zeiss Microscopy / Flickr

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through April 27)

April 27, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Meta’s Open Source Llama 3 Is Already Nipping at OpenAI’s Heels
Will Knight | Wired
“OpenAI changed the world with ChatGPT, setting off a wave of AI investment and drawing more than 2 million developers to its cloud APIs. But if open source models prove competitive, developers and entrepreneurs may decide to stop paying to access the latest model from OpenAI or Google and use Llama 3 or one of the other increasingly powerful open source models that are popping up.”

BIOTECH

‘Real Hope’ for Cancer Cure as Personal mRNA Vaccine for Melanoma Trialed
Andrew Gregory | The Guardian
“Experts are testing new jabs that are custom-built for each patient and tell their body to hunt down cancer cells to prevent the disease ever coming back. A phase 2 trial found the vaccines dramatically reduced the risk of the cancer returning in melanoma patients. Now a final, phase 3, trial has been launched and is being led by University College London Hospitals NHS Foundation Trust (UCLH). Dr Heather Shaw, the national coordinating investigator for the trial, said the jabs had the potential to cure people with melanoma and are being tested in other cancers, including lung, bladder and kidney.”

DIGITAL MEDIA

An AI Startup Made a Hyperrealistic Deepfake of Me That’s So Good It’s Scary
Melissa Heikkilä | MIT Technology Review
“Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.”

ENERGY

Nuclear Fusion Experiment Overcomes Two Key Operating Hurdles
Matthew Sparkes | New Scientist
“A nuclear fusion reaction has overcome two key barriers to operating in a ‘sweet spot’ needed for optimal power production: boosting the plasma density and keeping that denser plasma contained. The milestone is yet another stepping stone towards fusion power, although a commercial reactor is still probably years away.”

FUTURE

Daniel Dennett: ‘Why Civilization Is More Fragile Than We Realized’
Tom Chatfield | BBC
“[Dennett’s] warning was not of a takeover by some superintelligence, but of a threat he believed that nonetheless could be existential for civilization, rooted in the vulnerabilities of human nature. ‘If we turn this wonderful technology we have for knowledge into a weapon for disinformation,’ he told me, ‘we are in deep trouble.’ Why? ‘Because we won’t know what we know, and we won’t know who to trust, and we won’t know whether we’re informed or misinformed. We may become either paranoid and hyper-skeptical, or just apathetic and unmoved. Both of those are very dangerous avenues. And they’re upon us.'”

ENVIRONMENT

California Just Went 9.25 Hours Using Only Renewable Energy
Adele Peters | Fast Company
“Last Saturday, as 39 million Californians went about their daily lives—taking showers, doing laundry, or charging their electric cars—the whole state ran on 100% clean electricity for more than nine hours. The same thing happened on Sunday, as the state was powered without fossil fuels for more than eight hours. It was the ninth straight day that solar, wind, hydropower, geothermal, and battery storage fully powered the electric grid for at least some portion of the time. Over the last six and a half weeks, that’s happened nearly every day. In some cases, it’s just for 15 minutes. But often it’s for hours at a time.”


TECH

AI Hype Is Deflating. Can AI Companies Find a Way to Turn a Profit?
Gerrit De Vynck | The Washington Post
“Some once-promising start-ups have cratered, and the suite of flashy products launched by the biggest players in the AI race—OpenAI, Microsoft, Google and Meta—have yet to upend the way people work and communicate with one another. While money keeps pouring into AI, very few companies are turning a profit on the tech, which remains hugely expensive to build and run. The road to widespread adoption and business success is still looking long, twisty and full of roadblocks, say tech executives, technologists and financial analysts.”

ARTIFICIAL INTELLIGENCE

Apple Releases Eight Small AI Language Models Aimed at On-Device Use
Benj Edwards | Ars Technica
“In the world of AI, what might be called ‘small language models’ have been growing in popularity recently because they can be run on a local device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They’re mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.”

SPACE

If Starship Is Real, We’re Going to Need Big Cargo Movers on the Moon and Mars
Eric Berger | Ars Technica
“Unloading tons of cargo on the Moon may seem like a preposterous notion. During Apollo, mass restrictions were so draconian that the Lunar Module could carry two astronauts, their spacesuits, some food, and just 300 pounds (136 kg) of scientific payload down to the lunar surface. By contrast, Starship is designed to carry 100 tons, or more, to the lunar surface in a single mission. This is an insane amount of cargo relative to anything in spaceflight history, but that’s the future that [Jaret] Matthews is aiming toward.”

Image Credit: CARTIST / Unsplash

Category: Transhumanism

How Quantum Computers Could Illuminate the Full Range of Human Genetic Diversity

April 26, 2024 - 19:26

Genomics is revolutionizing medicine and science, but current approaches still struggle to capture the breadth of human genetic diversity. Pangenomes that incorporate many people’s DNA could be the answer, and a new project thinks quantum computers will be a key enabler.

When the Human Genome Project published its first reference genome in 2001, it was based on DNA from just a handful of humans. While less than one percent of our DNA varies from person to person, this can still leave important gaps and limit what we can learn from genomic analyses.

That’s why the concept of a pangenome has become increasingly popular. This refers to a collection of genomic sequences from many different people that have been merged to cover a much greater range of human genetic possibilities.

Assembling these pangenomes is tricky though, and their size and complexity make carrying out computational analyses on them daunting. That’s why the University of Cambridge, the Wellcome Sanger Institute, and the European Molecular Biology Laboratory’s European Bioinformatics Institute have teamed up to see if quantum computers can help.

“We’ve only just scratched the surface of both quantum computing and pangenomics,” David Holland of the Wellcome Sanger Institute said in a press release. “So to bring these two worlds together is incredibly exciting. We don’t know exactly what’s coming, but we see great opportunities for major new advances.”

Pangenomes could be crucial for discovering how different genetic variants impact human biology, or that of other species. The current reference genome is used as a guide to assemble genetic sequences, but due to the variability of human genomes there are often significant chunks of DNA that don’t match up. A pangenome would capture a lot more of that diversity, making it easier to connect the dots and giving us a more complete view of possible human genomes.

Despite their power, pangenomes are difficult to work with. While the genome of a single person is just a linear sequence of genetic data, a pangenome is a complex network that tries to capture all the ways in which its constituent genomes do and don’t overlap.

These so-called “sequence graphs” are challenging to construct and even more challenging to analyze. And it will require high levels of computational power and novel techniques to make use of the rich representation of human diversity contained within.
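The sequence-graph idea can be sketched in a few lines: nodes hold DNA fragments, edges record which fragments can follow one another, and each individual genome is one path through the graph (the fragments below are invented for illustration):

```python
# Minimal sketch of a pangenome "sequence graph": shared segments appear once,
# variant segments branch, and a path through the graph spells a linear genome.

nodes = {1: "GATT", 2: "ACA", 3: "GCA", 4: "TTC"}  # shared and variant segments
edges = {1: [2, 3], 2: [4], 3: [4]}                # nodes 2 and 3 are alternatives

def spell(path):
    """Reconstruct a linear genome from a path of node ids."""
    return "".join(nodes[n] for n in path)

# Two individuals share the flanking sequence but differ at the middle node.
print(spell([1, 2, 4]))  # GATTACATTC
print(spell([1, 3, 4]))  # GATTGCATTC
```

Even this toy shows why analysis gets hard: the number of possible paths grows multiplicatively with every variant site.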

That’s where this new project sees quantum computers lending a hand. Relying on the quirks of quantum mechanics, they can tackle certain computational problems that are near impossible for classical computers.

While there’s still considerable uncertainty about what kinds of calculations quantum computers will actually be able to run, many hope they will dramatically improve our ability to solve problems relating to complex systems with large numbers of variables. This new project is aimed at developing quantum algorithms that speed up both the production and analysis of pangenomes, though the researchers admit it’s early days.

“We’re starting from scratch because we don’t even know yet how to represent a pangenome in a quantum computing environment,” David Yuan from the European Bioinformatics Institute said in the press release. “If you compare it to the first moon landings, this project is the equivalent of designing a rocket and training the astronauts.”

The project has been awarded $3.5 million, which will be used to develop new algorithms and then test them on simulated quantum hardware using supercomputers. The researchers think the tools they develop could lead to significant breakthroughs in personalized medicine. They could also be applied to pangenomes of viruses and bacteria, improving our ability to track and manage disease outbreaks.

Given its exploratory nature and the difficulty of getting quantum computers to do anything practical, it could be some time before the project bears fruit. But if they succeed, the researchers could significantly expand our ability to make sense of the genes that shape our lives.

Image Credit: Gerd Altmann / Pixabay

Category: Transhumanism

This AI Just Designed a More Precise CRISPR Gene Editor for Human Cells From Scratch

April 25, 2024 - 22:25

CRISPR has revolutionized science. AI is now taking the gene editor to the next level.

Thanks to its ability to accurately edit the genome, CRISPR tools are now widely used in biotechnology and across medicine to tackle inherited diseases. In late 2023, a therapy using the Nobel Prize-winning tool gained approval from the FDA to treat sickle cell disease. CRISPR has also enabled CAR T cell therapy to battle cancers and been used to lower dangerously high cholesterol levels in clinical trials.

Outside medicine, CRISPR tools are changing the agricultural landscape, with projects ongoing to engineer hornless bulls, nutrient-rich tomatoes, and livestock and fish with more muscle mass.

Despite its real-world impact, CRISPR isn’t perfect. The tool snips both strands of DNA, which can cause dangerous mutations. It also can inadvertently nip unintended areas of the genome and trigger unpredictable side effects.

CRISPR was first discovered in bacteria as a defense mechanism, suggesting that nature hides a bounty of CRISPR components. For the past decade, scientists have screened different natural environments—for example, pond scum—to find other versions of the tool that could potentially increase its efficacy and precision. While successful, this strategy depends on what nature has to offer. Some benefits, such as a smaller size or greater longevity in the body, often come with trade-offs like lower activity or precision.

Rather than relying on evolution, can we fast-track better CRISPR tools with AI?

This week, Profluent, a startup based in California, outlined a strategy that uses AI to dream up a new universe of CRISPR gene editors. Based on large language models—the technology behind the popular ChatGPT—the AI designed several new gene-editing components.

In human cells, the components meshed to reliably edit targeted genes. The efficiency matched classic CRISPR, but with far more precision. The most promising editor, dubbed OpenCRISPR-1, could also precisely swap out single DNA letters—a technology called base editing—with an accuracy that rivals current tools.

“We demonstrate the world’s first successful editing of the human genome using a gene editing system where every component is fully designed by AI,” wrote the authors in a blog post.

Match Made in Heaven

CRISPR and AI have had a long romance.

The CRISPR recipe has two main parts: A “scissor” Cas protein that cuts or nicks the genome and a “bloodhound” RNA guide that tethers the scissor protein to the target gene.

By varying these components, the system becomes a toolbox, with each setup tailored to perform a specific type of gene editing. Some Cas proteins cut both strands of DNA; others give just one strand a quick snip. Alternative versions can also cut RNA, a type of genetic material found in viruses, and can be used as diagnostic tools or antiviral treatments.
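The "bloodhound" role of the RNA guide can be sketched as a string search: for Cas9, the guide's 20-letter spacer must match the genome right next to a short "NGG" motif called a PAM. The sequences below are made up and shortened for illustration:

```python
# Sketch of how a guide sequence directs Cas9 to its target: scan the genome
# for the spacer sequence immediately followed by an N-G-G PAM motif.

def find_cas9_site(genome, spacer):
    """Return the index of the spacer if it sits next to an NGG PAM, else -1."""
    for i in range(len(genome) - len(spacer) - 2):
        if genome[i:i + len(spacer)] == spacer:
            pam = genome[i + len(spacer): i + len(spacer) + 3]
            if pam[1:] == "GG":  # PAM is any base followed by two Gs
                return i
    return -1

genome = "TTTGATTACAAGGCCC"
print(find_cas9_site(genome, "GATTACA"))  # 3: spacer match with "AGG" PAM
print(find_cas9_site("TTTGATTACATTTCCC", "GATTACA"))  # -1: no PAM, no cut
```

The PAM requirement is part of why different Cas proteins behave differently: each family recognizes its own motif, changing where in a genome it can cut.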

Different versions of Cas proteins are often found by searching natural environments or engineered through a process called directed evolution. Here, scientists swap out parts of the Cas protein and screen the resulting variants to boost efficacy.

It’s a highly time-consuming process. Which is where AI comes in.

Machine learning has already helped predict off-target effects in CRISPR tools. It’s also homed in on smaller Cas proteins to make downsized editors easier to deliver into cells.

Profluent used AI in a novel way: Rather than boosting current systems, they designed CRISPR components from scratch using large language models.

The basis of ChatGPT and DALL-E, these models launched AI into the mainstream. They learn from massive amounts of text, images, music, and other data to distill patterns and concepts. It’s how the algorithms generate images from a single text prompt—say, “unicorn with sunglasses dancing over a rainbow”—or mimic the music style of a given artist.

The same technology has also transformed the protein design world. Like words in a book, proteins are strung from individual molecular “letters” into chains, which then fold in specific ways to make the proteins work. By feeding protein sequences into AI, scientists have already fashioned antibodies and other functional proteins unknown to nature.

“Large generative protein language models capture the underlying blueprint of what makes a natural protein functional,” wrote the team in the blog post. “They promise a shortcut to bypass the random process of evolution and move us towards intentionally designing proteins for a specific purpose.”
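The sequences-as-text framing can be illustrated with a deliberately tiny stand-in: real protein language models are deep transformers trained on millions of sequences, but even a bigram counter conveys how "natural" patterns in amino acid chains can be learned and scored (the training fragments below are invented):

```python
# Toy illustration of the language-model idea applied to proteins: learn which
# amino acid letter tends to follow which, then score how familiar a new
# sequence looks against those learned patterns.

from collections import defaultdict

training = ["MKVLA", "MKVIA", "MKVLG"]  # made-up "natural" protein fragments

counts = defaultdict(lambda: defaultdict(int))
for seq in training:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def score(seq):
    """Count how many adjacent pairs in seq were ever seen in training."""
    return sum(1 for a, b in zip(seq, seq[1:]) if counts[a][b] > 0)

print(score("MKVLA"))  # 4: every adjacent pair matches a training pattern
print(score("WWWWW"))  # 0: tryptophan runs never appear in the training data
```

A generative model inverts this idea, sampling new sequences whose local and global patterns look like the training set's.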

Do AIs Dream of CRISPR Sheep?

All large language models need training data. The same is true for an algorithm that generates gene editors. Unlike text, images, or videos that can be easily scraped online, a CRISPR database is harder to find.

The team first screened over 26 terabytes of data about current CRISPR systems and built a CRISPR-Cas atlas—the most extensive to date, according to the researchers.

The search revealed millions of CRISPR-Cas components. The team then trained their ProGen2 language model—which was fine-tuned for protein discovery—using the CRISPR atlas.

The AI eventually generated four million protein sequences with potential Cas activity. After filtering out obvious deadbeats with another computer program, the team zeroed in on a new universe of Cas “protein scissors.”
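Such coarse filtering can be sketched as a simple predicate over each candidate; the criteria below (a Cas9-like length window and the presence of a histidine residue) are invented placeholders, not the program the team actually used:

```python
# Sketch of a crude first-pass filter for generated protein sequences:
# discard candidates outside a plausible length range or missing a residue
# the family's catalytic machinery depends on. Criteria are hypothetical.

def plausible_cas(seq, min_len=950, max_len=1650):
    """Keep sequences in a Cas9-like length range that contain histidine (H)."""
    return min_len <= len(seq) <= max_len and "H" in seq

candidates = ["M" * 100,                      # far too short
              "M" * 1200,                     # right length, no histidine
              "M" * 600 + "H" + "M" * 600]    # passes both checks
print([plausible_cas(c) for c in candidates])  # [False, False, True]
```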

The algorithm didn’t just dream up proteins like Cas9. Cas proteins come in families, each with its own quirks in gene-editing ability. The AI also designed proteins resembling Cas13, which targets RNA, and Cas12a, which is more compact than Cas9.

Overall, the results expanded the universe of potential Cas proteins nearly five-fold. But do any of them work?

Hello, CRISPR World

For the next test, the team focused on Cas9, because it’s already widely used in biomedical and other fields. They trained the AI on roughly 240,000 different Cas9 protein sequences from a wide range of microbes, with the goal of generating similar proteins to replace natural ones, but with higher efficacy or precision.

The initial results were surprising: The generated sequences, roughly a million of them, were totally different than natural Cas9 proteins. But using DeepMind’s AlphaFold2, a protein structure prediction AI, the team found the generated protein sequences could adopt similar shapes.
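That divergence is usually quantified as percent sequence identity: the fraction of aligned positions where two sequences share the same letter. The ten-residue fragments below are invented; real comparisons align full-length proteins of over a thousand residues:

```python
# Percent identity over two pre-aligned, equal-length protein fragments.
# Low identity with a conserved fold is exactly the pattern AlphaFold2
# revealed for the generated sequences.

def percent_identity(a, b):
    """Naive identity: matching positions divided by alignment length."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

natural   = "MDKKYSIGLD"
generated = "MDAKFSVGLE"
print(percent_identity(natural, generated))  # 60.0
```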

Cas proteins can’t function without a bloodhound RNA guide. With the CRISPR-Cas atlas, the team also trained AI to generate an RNA guide when given a protein sequence.

The result is a CRISPR gene editor with both components—Cas protein and RNA guide—designed by AI. Dubbed OpenCRISPR-1, its gene editing activity was similar to classic CRISPR-Cas9 systems when tested in cultured human kidney cells. Surprisingly, the AI-generated version slashed off-target editing by roughly 95 percent.

With a few tweaks, OpenCRISPR-1 could also perform base editing, which can change single DNA letters. Compared to classic CRISPR, base editing is likely more precise as it limits damage to the genome. In human kidney cells, OpenCRISPR-1 reliably converted one DNA letter to another in three sites across the genome, with an editing rate similar to current base editors.
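At the sequence level, a base edit is a single-letter substitution at a guided position, with both DNA strands left otherwise uncut. The sequence and position below are invented; the C-to-T conversion mirrors what a cytosine base editor does:

```python
# Sketch of a base edit as a string operation: convert one DNA letter at a
# targeted position, failing loudly if the expected base isn't there.

def base_edit(dna, position, base_from, base_to):
    """Swap a single base at `position`, checking the site first."""
    if dna[position] != base_from:
        raise ValueError(f"expected {base_from} at {position}, found {dna[position]}")
    return dna[:position] + base_to + dna[position + 1:]

site = "ATGCCGTA"
print(base_edit(site, 3, "C", "T"))  # ATGTCGTA
```

The contrast with classic CRISPR is that nothing is snipped: no double-strand break means less opportunity for dangerous mutations at the site.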

To be clear, the AI-generated CRISPR tools have only been tested in cells in a dish. For treatments to reach the clinic, they’d need to undergo careful testing for safety and efficacy in living creatures, which can take a long time.

Profluent is openly sharing OpenCRISPR-1 with researchers and commercial groups but keeping the AI that created the tool in-house. “We release OpenCRISPR-1 publicly to facilitate broad, ethical usage across research and commercial applications,” they wrote.

As a preprint, the paper describing their work has yet to be analyzed by expert peer reviewers. Scientists will also have to show OpenCRISPR-1 or variants work in multiple organisms, including plants, mice, and humans. But tantalizingly, the results open a new avenue for generative AI—one that could fundamentally change our genetic blueprint.

Image Credit: Profluent

Category: Transhumanism

The Crucial Building Blocks of Life on Earth Form More Easily in Outer Space

April 23, 2024 - 16:00

The origin of life on Earth is still enigmatic, but we are slowly unraveling the steps involved and the necessary ingredients. Scientists believe life arose in a primordial soup of organic chemicals and biomolecules on the early Earth, eventually leading to actual organisms.

It’s long been suspected that some of these ingredients may have been delivered from space. Now a new study, published in Science Advances, shows that a special group of molecules, known as peptides, can form more easily under the conditions of space than those found on Earth. That means they could have been delivered to the early Earth by meteorites or comets—and that life may be able to form elsewhere, too.

The functions of life are upheld in our cells (and those of all living beings) by large, complex carbon-based (organic) molecules called proteins. How to make the large variety of proteins we need to stay alive is encoded in our DNA, which is itself a large and complex organic molecule.

However, these complex molecules are assembled from a variety of small and simple molecules such as amino acids—the so-called building blocks of life.

To explain the origin of life, we need to understand how and where these building blocks form and under what conditions they spontaneously assemble themselves into more complex structures. Finally, we need to understand the step that enables them to become a confined, self-replicating system—a living organism.

This latest study sheds light on how some of these building blocks might have formed and assembled and how they ended up on Earth.

Steps to Life

Proteins are built from about 20 different amino acids. Like letters of the alphabet, these are arranged in different combinations to make the huge variety of proteins we need, following instructions encrypted in our genetic code.

Peptides are also assemblages of amino acids in a chain-like structure. They can be made up of as few as two amino acids but can also run to hundreds.
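The combinatorics behind these chains is worth a quick look: with 20 standard amino acids, the number of distinct peptides of length n is 20 to the power n, which grows explosively.

```python
# Counting distinct peptides: each position in a chain of length n can be
# any of the 20 standard amino acids, so there are 20**n possibilities.

AMINO_ACIDS = 20

def peptide_count(length):
    """Number of distinct peptide sequences of a given length."""
    return AMINO_ACIDS ** length

for n in (2, 5, 10):
    print(n, peptide_count(n))
# 2 400
# 5 3200000
# 10 10240000000000
```

This is why which peptides actually form, and under what conditions, is such a central question: nature samples only a sliver of the possibilities.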

The assemblage of amino acids into peptides is an important step because peptides provide functions such as catalyzing, or enhancing, reactions that are important to maintaining life. They are also candidate molecules that could have been further assembled into early versions of membranes, confining functional molecules in cell-like structures.

However, despite their potentially important role in the origin of life, it was not so straightforward for peptides to form spontaneously under the environmental conditions on the early Earth. In fact, the scientists behind the current study had previously shown that the cold conditions of space are actually more favorable to the formation of peptides.

The interstellar medium. Image Credit: Charles Carter/Keck Institute for Space Studies

In the very low density clouds of molecules and dust particles in a part of space called the interstellar medium (see above), single atoms of carbon can stick to the surfaces of dust grains together with carbon monoxide and ammonia molecules. They then react to form amino acid-like molecules. When such a cloud becomes denser and dust particles also start to stick together, these molecules can assemble into peptides.
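The surface reaction described above can be checked with simple atom bookkeeping. The specific product shown here, aminoketene (NH2-CH=C=O), is the intermediate reported in this group's earlier work and is treated as an assumption; a valid reaction must conserve every atom:

```python
# Atom balance for the dust-grain reaction: a carbon atom, carbon monoxide,
# and ammonia combining into an amino acid-like molecule (aminoketene).

from collections import Counter

reactants = (Counter({"C": 1})                 # atomic carbon
             + Counter({"C": 1, "O": 1})       # carbon monoxide
             + Counter({"N": 1, "H": 3}))      # ammonia
product = Counter({"C": 2, "H": 3, "N": 1, "O": 1})  # aminoketene, C2H3NO

print(reactants == product)  # True: the reaction is atom-balanced
```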

In their new study, the scientists look at the dense environment of dusty disks, from which a new solar system with a star and planets emerges eventually. Such disks form when clouds suddenly collapse under the force of gravity. In this environment, water molecules are much more prevalent—forming ice on the surfaces of any growing agglomerates of particles that could inhibit the reactions that form peptides.

By emulating the reactions likely to occur in the interstellar medium in the laboratory, the study shows that, although the formation of peptides is slightly diminished, it is not prevented. Instead, as rocks and dust combine to form larger bodies such as asteroids and comets, these bodies heat up and allow for liquids to form. This boosts peptide formation in these liquids, and there’s a natural selection of further reactions resulting in even more complex organic molecules. These processes would have occurred during the formation of our own solar system.

Many of the building blocks of life such as amino acids, lipids, and sugars can form in the space environment. Many have been detected in meteorites.

Because peptide formation is more efficient in space than on Earth, and because they can accumulate in comets, their impacts on the early Earth might have delivered loads that boosted the steps towards the origin of life on Earth.

So, what does all this mean for our chances of finding alien life? Well, the building blocks for life are available throughout the universe. How specific the conditions need to be to enable them to self-assemble into living organisms is still an open question. Once we know that, we’ll have a good idea of how widespread, or not, life might be.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Aldebaran S / Unsplash

Category: Transhumanism