Transhumanismus

Technology is The How, Not the Why or What

Singularity Weblog - May 23, 2024 - 13:00
 Technology is the new religion, Silicon Valley – the new Promised Land, and entrepreneurs – the new prophets. They promise a future of abundance and immortality—a techno-heaven beyond our wildest dreams. And we are all believers. We often forget technology is the how, not the why or what. It is a means to an […]
Category: Transhumanismus

Dyson Spheres: Astronomers Report Potential Candidates for Alien Megastructures—Here’s What to Make of It

Singularity HUB - May 22, 2024 - 20:22

There are three ways to look for evidence of alien technological civilizations. One is to look out for deliberate attempts by them to communicate their existence, for example, through radio broadcasts. Another is to look for evidence of them visiting the solar system. And a third option is to look for signs of large-scale engineering projects in space.

A team of astronomers has taken the third approach by searching through recent astronomical survey data to identify seven candidates for alien megastructures, known as Dyson spheres, “deserving of further analysis.”

This is a detailed study looking for “oddballs” among stars—objects that might be alien megastructures. However, the authors are careful not to make any overblown claims. The seven objects, all located within 1,000 light-years of Earth, are “M-dwarfs”—a class of stars that are smaller and less bright than the sun.

Dyson spheres were first proposed by the physicist Freeman Dyson in 1960 as a way for an advanced civilization to harness a star’s power. Consisting of floating power collectors, factories, and habitats, they’d take up more and more space until they eventually surrounded almost the entire star like a sphere.

What Dyson realized is that these megastructures would have an observable signature. Dyson’s signature (which the team searched for in the recent study) is a significant excess of infrared radiation. That’s because megastructures would absorb visible light given off by the star, but they wouldn’t be able to harness it all. Instead, they’d have to “dump” excess energy as infrared light with a much longer wavelength.
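The wavelength shift Dyson described follows from basic blackbody physics: an object in thermal equilibrium re-emits at a peak wavelength set by its temperature (Wien's displacement law). A rough sketch in Python, assuming an M-dwarf surface of about 3,000 K and a megastructure dumping waste heat at roughly room temperature—both illustrative figures, not values from the study:

```python
# Wien's displacement law: peak emission wavelength scales as 1/T.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k: float) -> float:
    """Peak blackbody emission wavelength in micrometers."""
    return WIEN_B / temp_k * 1e6

star = peak_wavelength_um(3000)      # M-dwarf photosphere (illustrative)
structure = peak_wavelength_um(300)  # waste-heat radiators (illustrative)

print(f"star peaks near {star:.2f} um (near-infrared)")
print(f"megastructure peaks near {structure:.1f} um (mid-infrared)")
print(f"waste heat emerges at ~{structure / star:.0f}x longer wavelength")
```

A roughly tenfold shift toward longer wavelengths is exactly the kind of infrared excess the survey hunted for.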

Unfortunately, such light can also be a signature of a lot of other things, such as a disc of gas and dust or discs of comets and other debris. But the seven promising candidates aren’t obviously due to a disc, as they weren’t good fits to disc models.

It is worth noting there is another signature of a Dyson sphere: that visible light from the star dips as the megastructure passes in front of it. Such a signature has been found before. There was a lot of excitement about Tabby’s Star, or KIC 8462852, which showed many really unusual dips in its light that could be due to an alien megastructure.

Tabby’s Star in infrared (left) and ultraviolet (right). Image Credit: Infrared: IPAC/NASA / Ultraviolet: STScI /NASA via Wikimedia Commons

It almost certainly isn’t an alien megastructure. A variety of natural explanations have been proposed, such as clouds of comets passing through a dust cloud. But it is an odd observation. An obvious follow-up on the seven candidates would be to look for this signature as well.

The Case Against Dyson Spheres

Dyson spheres may well not even exist, however. I think they are unlikely to be there. That’s not to say they couldn’t exist, rather that any civilization capable of building them would probably not need to (unless it was some mega art project).

Dyson’s reasoning for considering such megastructures assumed that advanced civilizations would have vast power requirements. Around the same time, astronomer Nikolai Kardashev proposed a scale on which to rate the advancement of civilizations, which was based almost entirely on their power consumption.

In the 1960s, this sort of made sense. Looking back over history, humanity had just kept exponentially increasing its power use as technology advanced and the number of people increased, so they just extrapolated this ever-expanding need into the future.

However, our global energy use has started to grow much more slowly over the past 50 years, and especially over the last decade. What’s more, Dyson and Kardashev never specified what these vast levels of power would be used for; they just (fairly reasonably) assumed they’d be needed to do whatever it is that advanced alien civilizations do.

But as we now look ahead to future technologies, we see efficiency, miniaturization, and nanotechnologies promise vastly lower power use (the performance per watt of pretty much all technologies is constantly improving).

A quick calculation reveals that, if we wanted to collect 10 percent of the sun’s energy at the distance the Earth is from the sun, we’d need a surface area equal to 1 billion Earths. And if we had a super-advanced technology that could make the megastructure only 10 kilometers thick, we’d need about a million Earths’ worth of material to build it.

A significant problem is that our solar system only contains about 100 Earths’ worth of solid material, so our advanced alien civilization would need to dismantle all the planets in 10,000 planetary systems and transport the material to the star to build its Dyson sphere. To do it with the material available in a single system, each part of the megastructure could only be one meter thick.
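The arithmetic in the last two paragraphs can be reproduced to order of magnitude. A sketch, assuming collectors at 1 AU, a 10-kilometer panel thickness, and Earth's surface area and volume as the units of comparison—the article's rounder figures evidently use different baselines, but the conclusion (far more solid material than one planetary system holds) survives any reasonable choice:

```python
import math

AU = 1.496e11        # Earth-sun distance, m
R_EARTH = 6.371e6    # Earth radius, m
V_EARTH = 1.083e21   # Earth volume, m^3

# Area needed to intercept 10 percent of the sun's output at 1 AU.
collector_area = 0.1 * 4 * math.pi * AU**2
earth_areas = collector_area / (4 * math.pi * R_EARTH**2)

# Material needed if the collectors are 10 km thick.
earth_volumes = collector_area * 10_000 / V_EARTH

print(f"collector area ~ {earth_areas:.1e} Earth surface areas")
print(f"material needed ~ {earth_volumes:.1e} Earth volumes")
```

Tens of millions of Earth-areas and hundreds of thousands of Earth-volumes, against roughly 100 Earths of solid matter per planetary system: the mismatch of several orders of magnitude is the point, whichever exact baseline is used.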

This is assuming they use all the elements available in a planetary system. If they needed, say, lots of carbon to make their structures, then we’re looking at dismantling millions of planetary systems to get hold of it. Now, I’m not saying a super-advanced alien civilization couldn’t do this, but it is one hell of a job.

I’d also strongly suspect that by the time a civilization got to the point of having the ability to build a Dyson sphere, they’d have a better way of getting the power than using a star, if they really needed it (I have no idea how, but they are a super-advanced civilization).

Maybe I’m wrong, but it can’t hurt to look.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Kevin Gill / Flickr

Category: Transhumanismus

Can ChatGPT Mimic Theory of Mind? Psychology Is Probing AI’s Inner Workings

Singularity HUB - May 21, 2024 - 23:26

If you’ve ever vented to ChatGPT about troubles in life, the responses can sound empathetic. The chatbot delivers affirming support, and—when prompted—even gives advice like a best friend.

Unlike older chatbots, the seemingly “empathic” nature of the latest AI models has already galvanized the psychotherapy community, with many wondering whether they can assist with therapy.

The ability to infer other people’s mental states is a core aspect of everyday interaction. Called “theory of mind,” it lets us guess what’s going on in someone else’s mind, often by interpreting speech. Are they being sarcastic? Are they lying? Are they implying something that’s not overtly said?

“People care about what other people think and expend a lot of effort thinking about what is going on in other minds,” wrote Dr. Cristina Becchio and colleagues at the University Medical Center Hamburg-Eppendorf in a new study in Nature Human Behaviour.

In the study, the scientists asked if ChatGPT and other similar chatbots—which are based on machine learning algorithms called large language models—can also guess other people’s mindsets. Using a series of psychology tests tailored to certain aspects of theory of mind, they pitted two families of large language models—OpenAI’s GPT series and Meta’s LLaMA 2—against over 1,900 human participants.

GPT-4, the algorithm behind ChatGPT, performed at, or even above, human levels in some tasks, such as identifying irony. Meanwhile, LLaMA 2 beat both humans and GPT at detecting faux pas—when someone says something they’re not meant to say but don’t realize it.

To be clear, the results don’t confirm LLMs have theory of mind. Rather, they show these algorithms can mimic certain aspects of this core concept that “defines us as humans,” wrote the authors.

What’s Not Said

By roughly four years old, children already know that people don’t always think alike. We have different beliefs, intentions, and needs. By placing themselves into other people’s shoes, kids can begin to understand other perspectives and gain empathy.

First introduced in 1978, theory of mind is a lubricant for social interactions. For example, if you’re standing near a closed window in a stuffy room, and someone nearby says, “It’s a bit hot in here,” you have to think about their perspective to intuit they’re politely asking you to open the window.

When the ability breaks down—for example, in autism—it becomes difficult to grasp other people’s emotions, desires, and intentions, or to pick up on deception. And we’ve all seen texts or emails lead to misunderstandings when a recipient misinterprets the sender’s meaning.

So, what about the AI models behind chatbots?

Man Versus Machine

Back in 2018, Dr. Alan Winfield, a professor in the ethics of robotics at the University of the West of England, championed the idea that theory of mind could let AI “understand” people and other robots’ intentions. At the time, he proposed giving an algorithm a programmed internal model of itself, with common sense about social interactions built in rather than learned.

Large language models take a completely different approach, ingesting massive datasets to generate human-like responses that feel empathetic. But do they exhibit signs of theory of mind?

Over the years, psychologists have developed a battery of tests to study how we gain the ability to model another’s mindset. The new study pitted two versions of OpenAI’s GPT models (GPT-4 and GPT-3.5) and Meta’s LLaMA-2-Chat against 1,907 healthy human participants. Based solely on text descriptions of social scenarios, and using comprehensive tests spanning different aspects of theory of mind, they had to gauge a fictional person’s “mindset.”

Each test is well established in psychology for measuring theory of mind in humans.

The first, called “false belief,” is often used to test toddlers as they gain a sense of self and recognition of others. As an example, you listen to a story: Lucy and Mia are in the kitchen with a carton of orange juice in the cupboard. When Lucy leaves, Mia puts the juice in the fridge. Where will Lucy look for the juice when she comes back?

Both humans and AI guessed nearly perfectly that the person who’d left the room when the juice was moved would look for it where they last remembered seeing it. But slight changes tripped the AI up. When the scenario changed—for example, if the juice was moved between two transparent containers—GPT models struggled to guess the answer. (Though, for the record, humans weren’t perfect on this either.)

A more advanced test is “strange stories,” which relies on multiple levels of reasoning to test for advanced mental capabilities, such as misdirection, manipulation, and lying. For example, both human volunteers and AI models were told the story of Simon, who often lies. His brother Jim knows this and one day found his Ping-Pong paddle missing. He confronts Simon and asks if it’s under the cupboard or his bed. Simon says it’s under the bed. The test asks: Why would Jim look in the cupboard instead?

Out of all AI models, GPT-4 had the most success, reasoning that “the big liar” must be lying, and so it’s better to choose the cupboard. Its performance even trumped human volunteers.

Then came the “faux pas” study. In prior research, GPT models struggled to decipher these social situations. During testing, one example depicted a person shopping for new curtains, and while putting them up, a friend casually said, “Oh, those curtains are horrible, I hope you’re going to get some new ones.” Both humans and AI models were presented with multiple similar cringe-worthy scenarios and asked if the witnessed response was appropriate. “The correct answer is always no,” wrote the team.

GPT-4 correctly identified that the comment could be hurtful, but when asked whether the friend knew about the context—that the curtains were new—it struggled to answer correctly. This could be because the AI couldn’t infer the mental state of the person, and because recognizing a faux pas in this test relies on context and social norms not directly explained in the prompt, explained the authors. In contrast, LLaMA-2-Chat outperformed humans, achieving nearly 100 percent accuracy except for one run. It’s unclear why it has such an advantage.

Under the Bridge

Much of communication isn’t what’s said, but what’s implied.

Irony is perhaps one of the hardest concepts to translate between languages. When tested with a psychological test adapted from autism research, GPT-4 surprisingly outperformed human participants at recognizing ironic statements—of course, through text only, without the usual accompanying eye-roll.

The AI also outperformed humans on a hinting task—basically, understanding an implied message. Derived from a test for assessing schizophrenia, it measures reasoning that relies on both memory and the cognitive ability to weave and assess a coherent narrative. Both participants and AI models were given 10 written short skits, each depicting an everyday social interaction. Each story ended with a hint about how best to respond, and answers were open-ended. Over the 10 stories, GPT-4 beat humans.

For the authors, the results don’t mean LLMs already have theory of mind. Each AI struggled with some aspects. Rather, they think the work highlights the importance of using multiple psychology and neuroscience tests—rather than relying on any one—to probe the opaque inner workings of machine minds. Psychology tools could help us better understand how LLMs “think”—and in turn, help us build safer, more accurate, and more trustworthy AI.

There’s some promise that “artificial theory of mind may not be too distant an idea,” wrote the authors.

Image Credit: Abishek / Unsplash

Category: Transhumanismus

Scientists Are Working Towards a Unified Theory of Consciousness

Singularity HUB - May 20, 2024 - 19:49

The origin of consciousness has teased the minds of philosophers and scientists for centuries. In the last decade, neuroscientists have begun to piece together its neural underpinnings—that is, how the brain, through its intricate connections, transforms electrical signaling between neurons into consciousness.

Yet the field is fragmented, an international team of neuroscientists recently wrote in a new paper in Neuron. Many theories of consciousness contradict each other, with different ideas about where and how consciousness emerges in the brain.

Some theories are even duking it out in a mano-a-mano test by imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.

But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project—a massive European endeavor to map and understand the brain—and specialize in decoding brain signals related to consciousness.

Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily mutually incompatible—in fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that help unravel one of the brain’s greatest mysteries.

If successful, the joint mission could extend beyond our own noggins. Brain organoids, or “mini-brains,” that roughly mimic early human development are becoming increasingly sophisticated, spurring ethical concerns about their potential for developing self-awareness (to be clear, there are no signs of this yet). Meanwhile, similar questions have been raised about AI. A general theory of consciousness, based on the human mind, could potentially help us evaluate these artificial constructs.

“Is it realistic to reconcile theories, or even aspire to a unified theory of consciousness?” the authors asked. “We take the standpoint that the existence of multiple theories is a sign of healthiness in this nascent field…such that multiple theories can simultaneously contribute to our understanding.”

Lost in Translation

I’m conscious. You are too. We see, smell, hear, and feel. We have an internal world that tells us what we’re experiencing. But the lines get blurry for people in different stages of coma or for those who are locked in—still able to perceive their surroundings but unable to physically respond. We lose consciousness in sleep every night and during anesthesia. Yet, somehow, we regain it. How?

With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.

Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.

The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.

Both are aspects of consciousness, but the distinction was hardly delineated in previous work. That makes comparing theories difficult, wrote the authors, but that’s what they set out to do.

Meet the Contenders

Using their “two experience” framework, they examined five prominent consciousness theories.

The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.

The second, integrated information theory, takes a more global view. The idea is that consciousness stems from a series of cause-effect reactions from the brain’s networks. With the right neural architecture, connections, and network complexity, consciousness naturally emerges. The theory suggests the back of the brain sparks consciousness.

Then there’s dendritic integration theory, the coolest new kid in town. Unlike previous ideas, this theory waves goodbye to the front-versus-back debate and instead zooms in on single neurons in the cortex, the outermost part of the brain and a hub for higher cognitive functions such as reasoning and planning.

The cortex has extensive connections to other parts of the brain—for example, those that encode memories and emotions. One type of neuron, deep inside the cortex, especially stands out. Physically, these neurons resemble trees with extensive “roots” and “branches.” The roots connect to other parts of the brain, whereas the upper branches help calculate errors in the neuron’s computing. In turn, these upper branches generate an error signal that corrects mistakes through multiple rounds of learning.

The two compartments, while physically connected, go about their own business—turning a single neuron into multiple computers. Here’s the crux: There’s a theoretical “gate” between the upper and lower neural “offices” for each neuron. During consciousness, the gate opens, allowing information to flow between the cortex and other brain regions. In dreamless sleep and other unconscious states, the gate closes.

This theory suggests that consciousness is supported by flicking individual neuron gates on and off, like light switches, on a grand scale.

The last two theories propose that recurrent processing in the brain—that is, learning from previous experiences—is essential for consciousness. Instead of passively “experiencing” the world, the brain builds an internal simulation that constantly predicts the “here and now” to control what we perceive.

A Unified Theory?

All the theories have extensive experiments to back up their claims. So, who’s right? To the authors, the key is to consider consciousness not as a singular concept, but as a “ladder” of sorts. The brain functions at multiple levels: cells, local networks, brain regions, and finally, the whole brain.

When examining theories of consciousness, it also makes sense to delineate between different levels. For example, the dendritic integration theory—which considers neurons and their connections—is on the level of single cells and how they contribute to consciousness. It makes the theory “neutral,” in that it can easily fit into ideas at a larger scale—those that mostly rely on neural network connections or across larger brain regions.

Although it’s seemingly difficult to reconcile various ideas about consciousness, two principles tie them together, wrote the team. One is that consciousness requires feedback, within local neural circuits and throughout the brain. The other is integration, in that any feedback signals need to be readily incorporated back into neural circuits, so they can change their outputs. Finally, all authors agree that local, short connections are vital but not enough. Long distance connections from the cortex to deeper brain areas are required for consciousness.

So, is an integrated theory of consciousness possible? The authors are optimistic. By defining multiple aspects of consciousness—immediate responses versus internal thoughts—it’ll be clearer how to explore and compare results from different experiments. For now, the global neuronal workspace theory mostly focuses on the “inner experience” that leads to consciousness, whereas others try to tackle the “outer experience”—what we immediately experience.

For the theories to merge, the latter groups will have to explain how consciousness is used for attention and planning, which are hallmarks for immediate responses. But fundamentally, wrote the authors, they are all based on different aspects of neuronal connections near and far. With more empirical experiments, and as increasingly more sophisticated brain atlases come online, they’ll move the field forward.

Hopefully, the authors write, “an integrated theory of consciousness…may come within reach within the next years or decades.”

Image Credit: SIMON LEE / Unsplash

Category: Transhumanismus

From Mutual Dependence to Obsolescence: The Future of Labor in an AI-Driven Economy

Singularity Weblog - May 20, 2024 - 12:00
 Throughout history, capital and labor have been interdependent forces driving economic growth. Capital relies on labor to generate returns on investment, while labor depends on capital for wages. Despite historical fluctuations in their balance of power, classical economics suggests a theoretical long-term equilibrium where both parties benefit—capital sees growing returns, and labor enjoys rising […]
Category: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through May 18)

Singularity HUB - May 18, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

It’s Time to Believe the AI Hype
Steven Levy | Wired
“There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. …Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more.”

COMPUTING

How to Put a Datacenter in a Shoebox
Anna Herr and Quentin Herr | IEEE Spectrum
“At Imec, we have spent the past two years developing superconducting processing units that can be manufactured using standard CMOS tools. A processor based on this work would be one hundred times as energy efficient as the most efficient chips today, and it would lead to a computer that fits a data-center’s worth of computing resources into a system the size of a shoebox.”

BIOTECH

IndieBio’s SF Incubator Lineup Is Making Some Wild Biotech Promises
Devin Coldewey | TechCrunch
“We took special note of a few, which were making some major, bordering on ludicrous, claims that could pay off in a big way. Biotech has been creeping out in recent years to touch adjacent industries, as companies find how much they rely on outdated processes or even organisms to get things done. So it may not surprise you that there’s a microbiome company in the latest batch—but you might be surprised when you hear it’s the microbiome of copper ore.”

TECH

It’s the End of Google Search as We Know It
Lauren Goode | Wired
“It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI. These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.”

AUTOMATION

Waymo Says Its Robotaxis Are Now Making 50,000 Paid Trips Every Week
Mariella Moon | Engadget
“If you’ve been seeing more Waymo robotaxis recently in Phoenix, San Francisco, and Los Angeles, that’s because more and more people are hailing one for a ride. The Alphabet-owned company has announced on Twitter/X that it’s now serving more than 50,000 paid trips every week across three cities. Waymo One operates 24/7 in parts of those cities. If the company is getting 50,000 rides a week, that means it receives an average of 300 bookings every hour or five bookings every minute.”
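The quoted rates are straightforward to check (assuming round-the-clock service across the week, as the article notes Waymo One operates 24/7):

```python
# Back-of-the-envelope check of the quoted Waymo booking rates.
trips_per_week = 50_000
per_hour = trips_per_week / (7 * 24)  # hours in a week
per_minute = per_hour / 60

print(f"~{per_hour:.0f} trips per hour, ~{per_minute:.1f} per minute")
# Roughly matching the article's "average of 300 bookings every hour
# or five bookings every minute."
```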

CULTURE

Technology Is Probably Changing Us for the Worse—or So We Always Think
Timothy Maher | MIT Technology Review
“We’ve always greeted new technologies with a mixture of fascination and fear, says Margaret O’Mara, a historian at the University of Washington who focuses on the intersection of technology and American politics. ‘People think: “Wow, this is going to change everything affirmatively, positively,”’ she says. ‘And at the same time: “It’s scary—this is going to corrupt us or change us in some negative way.”’ And then something interesting happens: ‘We get used to it,’ she says. ‘The novelty wears off and the new thing becomes a habit.’”

TECH

This Is the Next Smartphone Evolution
Matteo Wong | The Atlantic
“Earlier [this week], OpenAI announced its newest product: GPT-4o, a faster, cheaper, more powerful version of its most advanced large language model, and one that the company has deliberately positioned as the next step in ‘natural human-computer interaction.’ …Watching the presentation, I felt that I was witnessing the murder of Siri, along with that entire generation of smartphone voice assistants, at the hands of a company most people had not heard of just two years ago.”

SPACE

In the Race for Space Metals, Companies Hope to Cash In
Sarah Scoles | Undark
“Previous companies have rocketed toward similar goals before but went bust about a half decade ago. In the years since that first cohort left the stage, though, ‘the field has exploded in interest,’ said Angel Abbud-Madrid, director of the Center for Space Resources at the Colorado School of Mines. …The economic picture has improved with the cost of rocket launches decreasing, as has the regulatory environment, with countries creating laws specifically allowing space mining. But only time will tell if this decade’s prospectors will cash in where others have drilled into the red or be buried by their business plans.”

FUTURE

What I Got Wrong in a Decade of Predicting the Future of Tech
Christopher Mims | The Wall Street Journal
“Anniversaries are typically a time for people to get misty-eyed and recount their successes. But after almost 500 articles in The Wall Street Journal, one thing I’ve learned from covering the tech industry is that failures are far more instructive. Especially when they’re the kind of errors made by many people. Here’s what I’ve learned from a decade of embarrassing myself in public—and having the privilege of getting an earful about it from readers.”

FUTURE OF FOOD

Lab-Grown Meat Is on Shelves Now. But There’s a Catch
Matt Reynolds | Wired
“Now cultivated meat is available in one store in Singapore. There is a catch, however: The chicken on sale at Huber’s Butchery contains just 3 percent animal cells. The rest will be made of plant protein—the same kind of ingredients you’d find in plant-based meats that are already on supermarket shelves worldwide. This might feel like a bit of a bait and switch. Didn’t cultivated meat firms promise us real chicken? And now we’re getting plant-based products with a sprinkling of animal cells? That criticism wouldn’t be entirely fair, though.”

Image Credit: Pawel Czerwinski / Unsplash

Category: Transhumanismus

Smelting Steel With Sunlight: New Solar Trap Tech Could Help Decarbonize Industrial Heat

Singularity HUB - May 17, 2024 - 16:46

Some of the hardest sectors to decarbonize are industries that require high temperatures like steel smelting and cement production. A new approach uses a synthetic quartz solar trap to generate temperatures of over 1,000 degrees Celsius (1,832 degrees Fahrenheit)—hot enough for a host of carbon-intensive industries.

While most of the focus on the climate fight has been on cleaning up the electric grid and transportation, a surprisingly large amount of fossil fuel usage goes into industrial heat. As much as 25 percent of global energy consumption goes towards manufacturing glass, steel, and cement.

Electrifying these processes is challenging because it’s difficult to reach the high temperatures required. Solar receivers, which use thousands of sun-tracking mirrors to concentrate energy from the sun, have shown promise as they can hit temperatures of 3,000 C. But they’re very inefficient when processes require temperatures over 1,000 C because much of the energy is radiated back out.

To get around this, researchers from ETH Zurich in Switzerland showed that adding semi-transparent quartz to a solar receiver could trap solar energy at temperatures as high as 1,050 C. That’s hot enough to replace fossil fuels in a range of highly polluting industries, the researchers say.

“Previous research has only managed to demonstrate the thermal-trap effect up to 170 C,” lead researcher Emiliano Casati said in a press release. “Our research showed that solar thermal trapping works not just at low temperatures, but well above 1,000 C. This is crucial to show its potential for real-world industrial applications.”

The researchers used a silicon carbide disk to absorb solar energy but attached a roughly one-foot-long quartz rod to it. Because quartz is semi-transparent, light is able to pass through it, but it also readily absorbs heat and prevents it from being radiated back out.

That meant that when the researchers subjected the quartz rod to simulated sunlight equivalent to 136 suns, the solar energy readily passed through to the silicon carbide disk and was trapped there. This allowed the disk to heat up to 1,050 C, compared to just 600 C at the other end of the rod.
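The re-radiation problem the quartz rod solves can be made concrete with the Stefan-Boltzmann law. A sketch, assuming an ideal blackbody absorber and one "sun" of 1,000 watts per square meter—standard textbook physics, not the study's detailed device model:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
ONE_SUN = 1000.0   # nominal solar flux, W/m^2

incident = 136 * ONE_SUN       # concentrated input flux, W/m^2
t_hot = 1050 + 273.15          # absorber temperature, K
reradiated = SIGMA * t_hot**4  # ideal blackbody loss flux, W/m^2

print(f"incident:   {incident / 1000:.0f} kW/m^2")
print(f"reradiated: {reradiated / 1000:.0f} kW/m^2")
# An unshielded blackbody at 1,050 C would radiate away more than it
# absorbs at 136 suns -- which is why suppressing thermal emission
# matters so much at these temperatures.
```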

Simulations of the device found that the quartz’s thermal trapping capabilities could significantly boost the efficiency of solar receivers. Adding a quartz rod to a state-of-the-art receiver could boost efficiency from 40 percent to 70 percent when attempting to hit temperatures of 1,200 C. That kind of efficiency gain could drastically reduce the size, and therefore cost, of solar heat installations.
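To see why ordinary receivers struggle above 1,000 C, consider a back-of-envelope energy balance. The sketch below treats the absorber as an ideal blackbody at 136 suns and ignores convection and conduction; the numbers are illustrative assumptions, not the ETH Zurich team's model.

```python
# Back-of-envelope energy balance for a concentrated solar absorber.
# Illustrative only: ideal blackbody, no convection or conduction.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
SUN = 1000.0      # approximate peak solar flux, W/m^2

def net_flux(concentration, temp_c, emissivity=1.0):
    """Absorbed minus re-radiated flux (W/m^2) for an open absorber."""
    t_k = temp_c + 273.15
    absorbed = concentration * SUN
    radiated = emissivity * SIGMA * t_k ** 4
    return absorbed - radiated

# At 136 suns, an unshielded blackbody at 1,050 C radiates away more
# power than it absorbs, so that temperature is unreachable without
# something like a thermal trap suppressing re-radiation.
print(net_flux(136, 1050))  # negative: net loss
print(net_flux(136, 600))   # positive: sustainable
```

The negative balance at 1,050 C is the whole problem the quartz rod addresses: by blocking re-radiation, it shifts the balance back in the absorber's favor.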

While still just a proof of concept, the simplicity of the approach means it probably wouldn't be too difficult to apply to existing receiver technology. Heliogen, a company backed by Bill Gates, has already developed solar furnace technology designed to generate the high temperatures required in a wide range of industries.

Casati says the promise is clear, but work remains to be done to prove its commercial feasibility.

“Solar energy is readily available, and the technology is already here,” he says. “To really motivate industry adoption, we need to demonstrate the economic viability and advantages of this technology at scale.”

But the prospect of replacing such a big chunk of our fossil fuel usage with solar power should be motivation enough to bring this technology to fruition.

Image Credit: A new solar trap built by a team of ETH Zurich scientists reaches 1050 C (Device/Casati et al.)

Kategorie: Transhumanismus

Scientists Step Toward Quantum Internet With Experiment Under the Streets of Boston

Singularity HUB - 16 May, 2024 - 19:00

A quantum internet would essentially be unhackable. In the future, sensitive information—financial or national security data, for instance, as opposed to memes and cat pictures—would travel through such a network in parallel to a more traditional internet.

Of course, building and scaling systems for quantum communications is no easy task. Scientists have been steadily chipping away at the problem for years. A Harvard team recently took another noteworthy step in the right direction. In a paper published this week in Nature, the team says they’ve sent entangled photons between two quantum memory nodes 22 miles (35 kilometers) apart on existing fiber optic infrastructure under the busy streets of Boston.

“Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers,” Mikhail Lukin, who led the project and is a physics professor at Harvard, said in a press release.

The team leased optical fiber under the Boston streets, connecting the two memory nodes located at Harvard by way of a 22-mile (35-kilometer) loop of cable. Image Credit: Can Knaut via OpenStreetMap

One way a quantum network can transmit information is by using entanglement, a quantum property where two particles, likely photons in this case, are linked so a change in the state of one tells us about the state of the other. If the sender and receiver of information each have one of a pair of entangled photons, they can securely transmit data using them. This means quantum communications will rely on generating enormous numbers of entangled photons and reliably sending them to far-off destinations.

Scientists have sent entangled particles long distances over fiber optic cables before, but to make a quantum internet work, particles will need to travel hundreds or thousands of miles. Because cables tend to absorb photons over such distances, the information will be lost—unless it can be periodically refreshed.
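To see why distance is the bottleneck, here is a minimal sketch of photon loss in fiber. It assumes a typical telecom attenuation of about 0.2 dB per kilometer, an illustrative round figure rather than a measurement from the Harvard experiment.

```python
# Photon survival probability in optical fiber.
# Assumes ~0.2 dB/km attenuation (illustrative, not measured here).
def survival_probability(distance_km, attenuation_db_per_km=0.2):
    loss_db = attenuation_db_per_km * distance_km
    return 10 ** (-loss_db / 10)

for km in (35, 100, 500, 1000):
    print(f"{km:>5} km: {survival_probability(km):.2e}")
```

At 35 kilometers, roughly a fifth of photons survive; at 1,000 kilometers, the odds drop to about one in 10^20, which is why repeaters that refresh the signal are essential.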

Enter quantum repeaters.

You can think of a repeater as a kind of internet gas station. Information passing through long stretches of fiber optic cables naturally degrades. A repeater refreshes that information at regular intervals, strengthening the signal and maintaining its fidelity. A quantum repeater is the same thing, only it also preserves entanglement.

That scientists have yet to build a quantum repeater is one reason we’re still a ways off from a working quantum internet at scale. Which is where the Harvard study comes in.

The team of researchers from Harvard and Amazon Web Services (AWS) have been working on quantum memory nodes. Each node houses a piece of diamond with an atom-sized hole, or silicon-vacancy center, containing two qubits: one for storage, one for communication. The nodes are basically small quantum computers, operating at near absolute zero, that can receive, record, and transmit quantum information. The Boston experiment, according to the team, is the longest distance anyone has sent information between such devices and a big step towards a quantum repeater.

“Our experiment really put us in a position where we’re really close to working on a quantum repeater demonstration,” Can Knaut, a Harvard graduate student in Lukin’s lab, told New Scientist.

Next steps include expanding the system to include multiple nodes.

Along those lines, a separate group in China, using a different technique for quantum memory involving clouds of rubidium atoms, recently said they’d linked three nodes 6 miles (10 kilometers) apart. The same group, led by Xiao-Hui Bao at the University of Science and Technology of China, had previously entangled memory nodes 13.6 miles (22 kilometers) apart.

It’ll take a lot more work to make the technology practical. Researchers need to increase the rate at which their machines entangle photons, for example. But as each new piece falls into place, the prospect of unhackable communications gets a bit closer.

Image Credit: Visax / Unsplash


‘Noise’ in the Machine: Human Differences in Judgment Lead to Problems for AI

Singularity HUB - 14 May, 2024 - 19:26

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine, and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical Noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than it initially appears. Seminal work dating back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the Data

On the surface, it doesn’t seem likely that noise could affect the performance of AI systems. After all, machines aren’t affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two—in the best case, perfect agreement—the machine is approaching human-level common sense, according to the test.

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the following sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don’t account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge—in other words, where there is noise. Researchers still don’t know whether or how to weigh AI’s answers in that situation, but a first step is acknowledging that the problem exists.

Tracking Down Noise in the Machine

Theory aside, the question still remains whether all of the above is hypothetical or whether real tests of common sense contain noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers, and have multiple people label it independently, meaning provide their own answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
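As a minimal sketch of how such disagreement can be quantified, the snippet below scores hypothetical labels from five annotators by the share who disagree with the majority answer. The questions and labels are invented for illustration, and this is not the statistic used in the published study.

```python
from collections import Counter

# Hypothetical annotations: five independent labelers, three questions.
answers = {
    "rock_on_paper_table": ["yes", "yes", "yes", "yes", "yes"],
    "dog_plays_volleyball": ["no", "no", "yes", "no", "yes"],
    "ice_melts_in_sun":     ["yes", "yes", "yes", "no", "yes"],
}

def disagreement(labels):
    """Fraction of labelers who disagree with the majority answer."""
    majority_count = Counter(labels).most_common(1)[0][1]
    return 1 - majority_count / len(labels)

for question, labels in answers.items():
    print(f"{question}: noise = {disagreement(labels):.2f}")
```

The unanimous question scores zero noise, while the "dog plays volleyball" question, where two of five labelers dissent, scores 0.40.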

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test, or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven’t been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable is noise, and is it just a matter of quality control?

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high—even universal—agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4 percent and 10 percent of a system’s performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85 percent on a test, and you built an AI system that achieved 91 percent. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we're no longer sure the 6 percentage point improvement means much. For all we know, there may be no real improvement.

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1 percent. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.
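A toy simulation makes the problem concrete: two systems with identical true accuracy, scored against gold labels of which a fraction are effectively random, can still show an apparent gap of several points. Every parameter below is an illustrative assumption, not a figure from the study.

```python
import random

random.seed(0)

def measured_score(true_acc, n_items=1000, noise_frac=0.10):
    """Score a system against gold labels, 10% of which are coin flips."""
    score = 0
    for _ in range(n_items):
        if random.random() < noise_frac:
            score += random.random() < 0.5   # noisy gold label
        else:
            score += random.random() < true_acc
    return score / n_items

# Two systems with the SAME true accuracy, compared 200 times.
gaps = [measured_score(0.85) - measured_score(0.85) for _ in range(200)]
print(f"max apparent gap between identical systems: "
      f"{max(abs(g) for g in gaps):.3f}")
```

Even with no real difference between the systems, the measured gap routinely reaches a few percentage points, which is larger than the sub-1-percent margins separating models on many leaderboards.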

Noise Audits

What is the way forward? Returning to Kahneman’s book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Michael Dziedzic / Unsplash


Google and Harvard Map a Tiny Piece of the Human Brain With Extreme Precision

Singularity HUB - 13 May, 2024 - 21:32

Scientists just published the most detailed map of a cubic millimeter of the human brain. Smaller than a grain of rice, the mapped section of brain includes over 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses.

The project, a collaboration between Harvard and Google, is looking to accelerate connectomics—the study of how neurons are wired together—over a much larger scale.

Our brains are like a jungle.

Neuron branches crisscross regions, forming networks that process perception, memories, and even consciousness. Blood vessels tightly wrap around these branches to provide nutrients and energy. Other brain cell types form intricate connections with neurons, support the brain’s immune function, and fine-tune neural network connections.

In biology, structure determines function. Like tracing wires of a computer, mapping components of the brain and their connections can improve our understanding of how the brain works—and when and why it goes wrong. A brain map that charts the jungle inside our heads could help us tackle some of the most perplexing neurological disorders, such as Alzheimer’s disease, and decipher the origins of emotions, thoughts, and behaviors.

Aided by machine learning tools from Google Research, the Harvard team traced neurons, blood vessels, and other brain cells at nanoscale levels. The images revealed previously unknown quirks in the human brain—including mysterious tangles in neuron wiring and neurons that connect through multiple “contacts” to other cells. Overall, the dataset incorporates a massive 1.4 petabytes of information—roughly the storage amount of a thousand high-end laptops—and is free to explore.

“It’s a little bit humbling,” Dr. Viren Jain, a neuroscientist at Google and study author, told Nature. “How are we ever going to really come to terms with all this complexity?” The database, first released as a preprint paper in 2021, has already garnered much enthusiasm in the scientific field.

“It’s probably the most computer-intensive work in all of neuroscience,” Dr. Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the project, told MIT Technology Review.

Why So Complicated?

Many types of brain maps exist. Some chart gene expression in brain cells; others map different cell types across the brain. But the goal is the same. They aim to help scientists understand how the brain works in health and disease.

The connectome details highways between brain regions that “talk” to each other. These connections, called synapses, number in the hundreds of trillions in human brains—hundreds of times the number of stars in the Milky Way.

Decades ago, the first whole-brain wiring map detailed all 302 neurons in the roundworm Caenorhabditis elegans. Because its genetics are largely known, the lowly worm delivered insights, such as how the brain and body communicate to increase healthy longevity. Next, scientists charted the fruit fly connectome and found the underpinnings of spatial navigation.

More recently, the MouseLight Project and MICrONS have been deciphering a small chunk of a mouse’s brain—the outermost area called the cortex. It’s hoped such work can help inform neuro-inspired AI algorithms with lower power requirements and higher efficacy.

But mice are not people. In the new study, scientists mapped a cubic millimeter of human brain tissue from the temporal cortex—a nexus that’s important for memory, emotions, and sensations. Although just one-millionth of a human brain, the effort reconstructed connections in 3D at nanoscale resolution.

Slice It Up

Sourcing is a challenge when mapping the human brain. Brain tissues rapidly deteriorate after trauma or death, which changes their wiring and chemistry. Brain organoids—”mini-brains” grown in test tubes—somewhat resemble the brain’s architecture, but they can’t replicate the real thing.

Here, the team took a tiny bit of brain tissue from a 45-year-old woman with epilepsy during surgery—the last resort for those who suffer severe seizures and don’t respond to medication.

Using a machine like a deli-meat slicer armed with a diamond knife, the Harvard team, led by connectome expert Dr. Jeff Lichtman, meticulously sliced the sample into 5,019 cross sections. Each was roughly 30 nanometers thick—a fraction of the width of a human hair. They imaged the slices with an electron microscope, capturing nanoscale cellular details, including the “factories” inside cells that produce energy, eliminate waste, or transport molecules.

Piecing these 2D images into a 3D reconstruction is a total headache. A decade ago, scientists had to do it by hand. Jain’s team at Google developed an AI to automate the job. The AI was able to track fragments of whole components—say, a part of a neuron (its body or branches)—and stick them back together throughout the images.

In total, the team pieced together thousands of neurons and over a hundred million synaptic connections. Other brain components included blood vessels and myelin—a protective molecular “sheath” covering neurons. Myelin works like electrical insulation, and its deterioration is linked to multiple brain disorders.

“I remember this moment, going into the map and looking at one individual synapse from this woman’s brain, and then zooming out into these other millions of pixels,” Jain told Nature. “It felt sort of spiritual.”

A Whole New World

Even a cursory look at the data led to surprising insights into the brain’s intricate neural wiring.

Cortical neurons have a forest-like structure for input and a single “cable” that delivers output signals. Called an axon, this cable is dotted with thousands of synapses connecting to other cells.

Usually, a synapse grabs onto just one spot of a neighboring neuron. But the new map found a rare, strange group that connects with up to 50 points. “We’ve always had a theory that there would be super connections, if you will, amongst certain cells…But it’s something we’ve never had the resolution to prove,” Dr. Tim Mosca, who was not involved in the work, told Popular Science. These could be extra-potent connections that allow neural communications to go into “autopilot mode,” like when riding a bike or navigating familiar neighborhoods.

More strange structures included “axon whorls” that wrapped around themselves like tangled headphones. An axon’s main purpose is to reach out and connect with other neurons—so why do some fold into themselves? Do they serve a purpose, or are they just a hiccup in brain wiring? It’s a mystery. Another strange observation found pairs of neurons that perfectly mirrored each other. What this symmetry does for the brain is also unknown.

The bottom line: Our understanding of the brain’s connections and inner workings is still only scratching the surface. The new database is a breakthrough, but it’s not perfect. The results are from a single person with epilepsy, which can’t represent everyone. Some wiring changes, for example, may be due to the disorder. The team is planning a follow-up to separate epilepsy-related circuits from those that are more universal in people.

Meanwhile, they’ve opened the entire database for anyone to explore. And the team is also working with scientists to manually examine the results and eliminate potential AI-induced errors during reconstruction. So far, hundreds of cells have been “proofread” and validated by humans, but it’s just a fraction of the 50,000 neurons in the database.

The technology can also be used for other species, such as the zebrafish—another animal model often used in neuroscience research—and eventually the entire mouse brain.

Although this study only traced a tiny nugget of the human brain, the atlas is a stunning way to peek inside its seemingly chaotic wiring and make sense of things. “Further studies using this resource may bring valuable insights into the mysteries of the human brain,” wrote the team.

Image Credit: Google Research and Lichtman Lab


This Week’s Awesome Tech Stories From Around the Web (Through May 11)

Singularity HUB - 11 May, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

OpenAI Could Unveil Its Google Search Competitor on Monday
Jess Weatherbed | The Verge
“OpenAI is reportedly gearing up to announce a search product powered by artificial intelligence on Monday that could threaten Google’s dominance. That target date, provided to Reuters by ‘two sources familiar with the matter,’ would time the announcement a day before Google kicks off its annual I/O conference, which is expected to focus on the search giant’s own AI model offerings like Gemini and Gemma.”


ROBOTICS

DeepMind Is Experimenting With a Nearly Indestructible Robot Hand
Jeremy Hsu | New Scientist
“This latest robotic hand developed by the UK-based Shadow Robot Company can go from fully open to closed within 500 milliseconds and perform a fingertip pinch with up to 10 newtons of force. It can also withstand repeated punishment such as pistons punching the fingers from multiple angles or a person smashing the device with a hammer.”

BIOTECH

First Patient Begins Newly Approved Sickle Cell Gene Therapy
Gina Kolata | The New York Times
“On Wednesday, Kendric Cromer, a 12-year-old boy from a suburb of Washington, became the first person in the world with sickle cell disease to begin a commercially approved gene therapy that may cure the condition. For the estimated 20,000 people with sickle cell in the United States who qualify for the treatment, the start of Kendric’s monthslong medical journey may offer hope. But it also signals the difficulties patients face as they seek a pair of new sickle cell treatments.”

SPACE

Commercial Space Stations Approach Launch Phase 
Andrew Jones | IEEE Spectrum
“A changing of the guard in space stations is on the horizon as private companies work towards providing new opportunities for science, commerce, and tourism in outer space. …The challenge [new space stations like Blue Origin’s] Orbital Reef faces is considerable: reimagining successful earthbound technologies—such as regenerative life support systems, expandable habitats and 3D printing—but now in orbit, on a commercially viable platform.”

FUTURE

This Gigantic 3D Printer Could Reinvent Manufacturing
Nate Berg | Fast Company
“This machine isn’t just spitting out basic building materials like some massive glue gun. It’s also able to do subtractive manufacturing, like milling, as well as utilize a robotic arm for more complicated tasks. A built-in system allows it to lay down fibers in a printed object that give it greater structural integrity, allowing printed spans to stretch farther, and enabling factory-based 3D printed buildings to become even larger.”

AUTOMATION

Wayve Raises $1B to Take Its Tesla-Like Technology for Self-Driving to Many Carmakers
Mike Butcher | TechCrunch
“Wayve calls its hardware-agnostic mapless product an ‘Embodied AI,’ and it plans to distribute its platform not just to car makers but also to robotics companies serving manufacturers of all descriptions, allowing the platform to learn from human behavior in a wide variety of real-world environments.”

BIOTECH

The US Is Cracking Down on Synthetic DNA
Emily Mullin | Wired
“Synthesizing DNA has been possible for decades, but it’s become increasingly easier, cheaper, and faster to do so in recent years thanks to new technology that can ‘print’ custom gene sequences. Now, dozens of companies around the world make and ship synthetic nucleic acids en masse. And with AI, it’s becoming possible to create entirely new sequences that don’t exist in nature—including those that could pose a threat to humans or other living things.”

SPACE

Fall Into a Black Hole in Mind-Bending NASA Animation
Robert Lea | Space.com
“If you’ve ever wondered what would happen if you were unlucky enough to fall into a black hole, NASA has your answer. A visualization created on a NASA supercomputer to celebrate the beginning of black hole week on Monday (May 6) takes the viewer on a one-way plunge beyond the event horizon of a black hole.”

ENERGY

A Company Is Building a Giant Compressed-Air Battery in the Australian Outback
Dan Gearino | Wired
“Toronto-based Hydrostor is one of the businesses developing long-duration energy storage that has moved beyond lab scale and is now focusing on building big things. The company makes systems that store energy underground in the form of compressed air, which can be released to produce electricity for eight hours or longer.”

SCIENCE

The Way Whales Communicate Is Closer to Human Language Than We Realized
Rhiannon Williams | MIT Technology Review
“A team of researchers led by Pratyusha Sharma at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) working with Project CETI, a nonprofit focused on using AI to understand whales, used statistical models to analyze whale codas and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. Their findings represent a tool future research could use to decipher not just the structure but the actual meaning of whale sounds.”

Image Credit: Benjamin Cheng / Unsplash


Global Carbon Capture Capacity Quadruples as the Biggest Plant Yet Revs Up in Iceland

Singularity HUB - 10 May, 2024 - 19:18

Pulling carbon dioxide out of the atmosphere is likely to be a crucial weapon in the battle against climate change. And now global carbon capture capacity has quadrupled with the opening of the world’s largest direct air capture plant in Iceland.

Scientists and policymakers initially resisted proposals to remove CO2 from the atmosphere, due to concerns it could lead to a reduced sense of urgency around emissions reductions. But with progress on that front falling behind schedule, there’s been growing acceptance that carbon capture will be crucial if we want to avoid the worst consequences of climate change.

A variety of approaches, including reforestation, regenerative agriculture, and efforts to lock carbon up in minerals, could play a role. But the approach garnering most of the attention is direct air capture, which relies on large facilities powered by renewable energy to suck CO2 out of the air.

One of the leaders in this space is Swiss company Climeworks, whose Orca plant in Iceland previously held the title for world’s largest. But this week, the company started operations at a new plant called Mammoth that has nearly ten times the capacity. The facility, also in Iceland, will be able to extract 36,000 tons of CO2 a year, which is nearly four times the 10,000 tons a year currently being captured globally.

“Starting operations of our Mammoth plant is another proof point in Climeworks’ scale-up journey to megaton capacity by 2030 and gigaton by 2050,” co-CEO of Climeworks Jan Wurzbacher said in a statement. “Constructing multiple real-world plants in rapid sequences makes Climeworks the most deployed carbon removal company with direct air capture at the core.”

Climeworks plants use fans to suck air into large collector units filled with a material called a sorbent, which absorbs CO2. Once the sorbent is saturated, the collector shuts and is heated to roughly 212 degrees Fahrenheit (100 degrees Celsius) to release the CO2.

The Mammoth plant will eventually feature 72 of these collector units, though only 12 are currently operational. That's already more than the eight units at Orca, which capture roughly 4,000 tons of CO2 a year. Adding an extra level to the stacks of collectors has also reduced land use per ton of CO2 captured, while a new V-shaped configuration improves airflow, boosting performance.
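As a quick sanity check on the capacities quoted here, the per-collector throughput of the two plants works out to be identical:

```python
# Consistency check on the article's figures: tons of CO2 captured
# per collector unit per year at each plant's full build-out.
mammoth_tons, mammoth_units = 36_000, 72
orca_tons, orca_units = 4_000, 8

print(mammoth_tons / mammoth_units)  # 500.0 tons per unit per year
print(orca_tons / orca_units)        # 500.0 -- the same per-unit rate
```

In other words, Mammoth's near-tenfold capacity gain comes from deploying nine times as many collectors, not from each collector capturing more.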

To permanently store the captured carbon, Climeworks has partnered with Icelandic company Carbfix, which has developed a process to inject CO2 dissolved in water deep into porous rock formations made of basalt. Over the course of a couple years, the dissolved CO2 reacts with the rocks to form solid carbonate minerals that are stable for thousands of years.

With the Orca plant, CO2 had to be transported through hundreds of meters of pipeline to Carbfix's storage site. But Mammoth features two on-site injection wells, reducing transportation costs. It also has a new CO2 absorption tower that dissolves the gas in water at lower pressures, reducing energy costs compared to the previous approach.

Climeworks has much bigger ambitions than Mammoth though. The US government has earmarked $3.5 billion to build four direct air capture hubs, each capable of capturing one million tons of CO2 a year, and Climeworks will provide the technology for one of the proposed facilities in Louisiana.

The company says it's aiming to reach megaton scale—removing one million tons a year—by 2030 and gigaton scale—a billion tons a year—by 2050. Hopefully, they won't be the only ones, because climate forecasts suggest we'll need to be removing 3.5 gigatons of CO2 a year by 2050 to keep warming below 1.5 degrees Celsius.

There’s also little clarity on the economics of the approach. According to Reuters, Climeworks did not reveal how much it costs Mammoth to remove each ton of CO2, though it said it’s targeting $400-600 per ton by 2030 and $200-350 per ton by 2040. And while plants in Iceland can take advantage of abundant, green geothermal energy, it’s less clear what they will rely on elsewhere.

Either way, there’s growing agreement that carbon capture will be an important part of our efforts to tackle climate change. While Mammoth might not make much of a dent in emissions, it’s a promising sign that direct air capture technology is maturing.

Image Credit: Climeworks


Google DeepMind’s New AlphaFold AI Maps Life’s Molecular Dance in Minutes

Singularity HUB - 9 May, 2024 - 23:33

Proteins are biological workhorses.

They build our bodies and orchestrate the molecular processes in cells that keep them healthy. They also present a wealth of targets for new medications. From everyday pain relievers to sophisticated cancer immunotherapies, most current drugs interact with a protein. Deciphering protein architectures could lead to new treatments.

That was the promise of AlphaFold 2, an AI model from Google DeepMind that predicted how proteins gain their distinctive shapes based on the sequences of their constituent molecules alone. Released in 2020, the tool was a breakthrough half a decade in the making.

But proteins don’t work alone. They inhabit an entire cellular universe and often collaborate with other molecular inhabitants like, for example, DNA, the body’s genetic blueprint.

This week, DeepMind and Isomorphic Labs released a big new update that allows the algorithm to predict how proteins work inside cells. Instead of only modeling their structures, the new version—dubbed AlphaFold 3—can also map a protein’s interactions with other molecules.

For example, could a protein bind to a disease-causing gene and shut it down? Can adding new genes to crops make them resilient to viruses? Can the algorithm help us rapidly engineer new vaccines to tackle existing diseases—or whatever new ones nature throws at us?

“Biology is a dynamic system…you have to understand how properties of biology emerge due to the interactions between different molecules in the cell,” said Demis Hassabis, the CEO of DeepMind, in a press conference.

AlphaFold 3 helps explain “not only how proteins talk to themselves, but also how they talk to other parts of the body,” said lead author Dr. John Jumper.

The team is releasing the new AI online for academic researchers by way of an interface called the AlphaFold Server. With a few clicks, a biologist can run a simulation of an idea in minutes, compared to the weeks or months usually needed for experiments in a lab.

Dr. Julien Bergeron at King’s College London, who builds nano-protein machines but was not involved in the work, said the AI is “transformative science” for speeding up research, which could ultimately lead to nanotech devices powered by the body’s mechanisms alone.

For Dr. Frank Uhlmann at the Francis Crick Laboratory, who gained early access to AlphaFold 3 and used it to study how DNA divides when cells divide, the AI is “democratizing discovery research.”

Molecular Universe

Proteins are finicky creatures. They’re made of strings of molecules called amino acids that fold into intricate three-dimensional shapes that determine what the protein can do.

Sometimes the folding process goes wrong. In Alzheimer’s disease, misfolded proteins clump into dysfunctional blobs that clog up around and inside brain cells.

Scientists have long tried to engineer drugs to break up disease-causing proteins. One strategy is to map protein structure—know thy enemy (and friends). Before AlphaFold, this was done with electron microscopy, which captures a protein’s structure at the atomic level. But it’s expensive, labor intensive, and not all proteins can tolerate the scan.

Which is why AlphaFold 2 was revolutionary. Using amino acid sequences alone—the constituent molecules that make up proteins—the algorithm could predict a protein’s final structure with startling accuracy. DeepMind used AlphaFold to map the structure of nearly all proteins known to science and how they interact. According to the AI lab, in just three years, researchers have mapped roughly six million protein structures using AlphaFold 2.

But to Jumper, modeling proteins isn’t enough. To design new drugs, you have to think holistically about the cell’s whole ecosystem.

It’s an idea championed by Dr. David Baker at the University of Washington, another pioneer in the protein-prediction space. In 2021, Baker’s team released AI-based software called RoseTTAFold All-Atom to tackle interactions between proteins and other biomolecules.

Picturing these interactions can help solve tough medical challenges, allowing scientists to design better cancer treatments or more precise gene therapies, for example.

“Properties of biology emerge through the interactions between different molecules in the cell,” said Hassabis in the press conference. “You can think about AlphaFold 3 as our first big sort of step towards that.”

A Revamp

AlphaFold 3 builds on its predecessor, but with significant renovations.

One way to gauge how a protein interacts with other molecules is to examine evolution. Another is to map a protein’s 3D structure and—with a dose of physics—predict how it can grab onto other molecules. While AlphaFold 2 mostly used an evolutionary approach—training the AI on what we already know about protein evolution in nature—the new version heavily embraces physical and chemical modeling.

Some of this includes chemical changes. Proteins are often tagged with different chemicals. These tags sometimes change protein structure but are essential to protein behavior—they can determine a cell’s fate: life, senescence, or death.

The algorithm’s overall setup makes some use of its predecessor’s machinery to map proteins, DNA, and other molecules and their interactions. But the team also looked to diffusion models—the algorithms behind OpenAI’s DALL-E 2 image generator—to capture structures at the atomic level. Diffusion models are trained to remove noise step by step until they arrive at a prediction of what the image (or, in this case, a 3D model of a biomolecule) should look like without the noise. This addition made a “substantial change” to performance, said Jumper.
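To make the reverse-diffusion idea concrete, here is a minimal toy sketch of iterative denoising. This illustrates only the general technique the article describes, not AlphaFold 3’s actual model: the `denoiser` below is a hypothetical stand-in that pulls coordinates toward a known target, where a real system would use a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=(10, 3))        # pretend 3D coordinates of 10 atoms

def denoiser(x, noise_level):
    # Hypothetical stand-in for a learned network that predicts the clean
    # structure from a noisy one. Here we simply return the known target.
    return target

x = rng.normal(size=(10, 3))             # start from pure Gaussian noise
noise_levels = np.linspace(1.0, 0.0, 50) # schedule from high to low noise

for t in noise_levels:
    x_pred = denoiser(x, t)                          # predicted clean structure
    x = x + 0.1 * (x_pred - x)                       # small step toward the prediction
    x = x + t * 0.01 * rng.normal(size=x.shape)      # re-inject shrinking noise

# Relative error between the denoised result and the target structure.
error = np.linalg.norm(x - target) / np.linalg.norm(target)
```

Starting from noise and repeatedly alternating “predict clean structure, step toward it, add a little less noise” is the core loop shared by image generators and, per the paper’s description, AlphaFold 3’s structure module.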

Like AlphaFold 2, the new version has a built-in “sanity check” that indicates how confident it is in a generated model so scientists can proofread its outputs. This has been a core component of all their work, said the DeepMind team. They trained the AI using the Protein Data Bank, an open-source compilation of 3D protein structures that’s constantly updated, including new experimentally validated structures of proteins binding to DNA and other biomolecules.

Pitted against existing software, AlphaFold 3 broke records. One test for molecular interactions between proteins and small molecules—ones that could become medications—succeeded 76 percent of the time. Previous attempts were successful in roughly 42 percent of cases.

When it comes to deciphering protein functions, AlphaFold 3 “seeks to solve the exact same problem [as RoseTTAFold All-Atom]…but is clearly more accurate,” Baker told Singularity Hub.

But the tool’s accuracy depends on which interaction is being modeled. The algorithm isn’t yet great at protein-RNA interactions, for example, Columbia University’s Mohammed AlQuraishi told MIT Technology Review. Overall, accuracy ranged from 40 to more than 80 percent.

AI to Real Life

Unlike previous iterations, DeepMind isn’t open-sourcing AlphaFold 3’s code. Instead, they’re releasing the tool as a free online platform, called AlphaFold Server, that allows scientists to test their ideas for protein interactions with just a few clicks.

AlphaFold 2 required technical expertise to install and run the software. The server, in contrast, can help people unfamiliar with code to use the tool. It’s for non-commercial use only and can’t be reused to train other machine learning models for protein prediction. But it is freely available for scientists to try. The team envisions the software helping develop new antibodies and other treatments at a faster rate. Isomorphic Labs, a spin-off of DeepMind, is already using AlphaFold 3 to develop medications for a variety of diseases.

For Bergeron, the upgrade is “transformative.” Instead of spending years in the lab, it’s now possible to mimic protein interactions in silico—a computer simulation—before beginning the labor- and time-intensive work of investigating promising solutions using cells.

“I’m pretty certain that every structural biology and protein biochemistry research group in the world will immediately adopt this system,” he said.

Image Credit: Google DeepMind

Kategorie: Transhumanismus

Astronomers Discover 27,500 New Asteroids Lurking in Archival Images

Singularity HUB - 8 Květen, 2024 - 21:03

There are well over a million asteroids in the solar system. Most don’t cross paths with Earth, but some do and there’s a risk one of these will collide with our planet. Taking a census of nearby space rocks, then, is prudent. As conventional wisdom would have it, we’ll need lots of telescopes, time, and teams of astronomers to find them.

But maybe not, according to the B612 Foundation’s Asteroid Institute.

In tandem with Google Cloud, the Asteroid Institute recently announced they’ve spotted 27,500 new asteroids—more than all discoveries worldwide last year—without requiring a single new observation. Instead, over a period of just a few weeks, the team used new software to scour 1.7 billion points of light in some 400,000 images taken over seven years and archived by the National Optical-Infrared Astronomy Research Laboratory (NOIRLab).

To discover new asteroids, astronomers usually need multiple images over several nights (or more) to find moving objects and calculate their orbits. This means they have to make new observations with asteroid discovery in mind. There is also, however, a trove of existing one-time observations made for other purposes, and these are likely packed with photobombing asteroids. But identifying them is difficult and computationally intensive.

Working with the University of Washington, the Asteroid Institute team developed an algorithm, Tracklet-less Heliocentric Orbit Recovery, or THOR, to scan archived images recorded at different times or even by different telescopes. The tool can tell if moving points of light recorded in separate images are the same object. Many of these will be asteroids.
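A drastically simplified sketch of the linking problem THOR solves: decide whether points of light detected at different times are consistent with a single moving object. This toy version assumes constant apparent motion on the sky and is only an illustration; THOR’s actual test fits heliocentric orbits, which is far more involved.

```python
import numpy as np

def consistent_with_motion(detections, tol=1e-3):
    """detections: list of (time_days, ra_deg, dec_deg).
    Fit a linear motion model (position = p0 + v * t) to each coordinate
    and check the residuals against a tolerance in degrees."""
    t = np.array([d[0] for d in detections])
    pos = np.array([[d[1], d[2]] for d in detections])
    A = np.stack([np.ones_like(t), t], axis=1)     # design matrix [1, t]
    coeffs, *_ = np.linalg.lstsq(A, pos, rcond=None)
    residuals = pos - A @ coeffs
    return np.max(np.abs(residuals)) < tol

# Three detections of a hypothetical asteroid drifting 0.05 deg/day in RA:
obj = [(0.0, 10.00, 5.00), (1.0, 10.05, 5.01), (2.0, 10.10, 5.02)]
# The same first two detections plus a stray point that breaks the trend:
stray = obj[:2] + [(2.0, 10.30, 5.20)]
```

Applied across 1.7 billion detections, even this simple consistency test hints at why the search is computationally intensive: candidate linkages grow combinatorially with the number of points.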

Running THOR on Google Cloud, the team scoured the NOIRLab data and found plenty. Most of the new asteroids are in the main asteroid belt, but more than 100 are near-Earth asteroids. Though the team classified their findings as “high-confidence,” these near-Earth asteroids have not yet been confirmed. They’ll submit their findings to the Minor Planet Center, and ESA and NASA will then verify orbits and assess risk. (The team says they have no reason to believe any pose a risk to Earth.)

While the new software could speed up the pace of discovery, the process still requires volunteers and scientists to manually review the algorithm’s finds. The team plans to use the raw data from the recent run, including the human reviews, to train an AI model. The hope is that some or all of the manual review process can be automated, making the process even faster.

In the future, the algorithm will go to work on data from the Vera C. Rubin Observatory, a telescope in Chile’s Atacama desert. The telescope, set to begin operations next year, will make twice nightly observations of the sky with asteroid detection in mind. THOR may be able to make discoveries with only one nightly run, freeing the telescope up for other work.

All this is in service of the plan to discover as many Earth-crossing asteroids as possible.

According to NASA, we’ve found over 1.3 million asteroids, 35,000 of which are near-Earth asteroids. Of these, over 90 percent of the biggest and most dangerous—in the same class as the impact that ended the dinosaurs—have been discovered. Scientists are now filling out the list of smaller but still dangerous asteroids. The vast majority of all known asteroids were catalogued this century. Before that we were flying blind.

While no dangerous asteroids are known to be headed our way soon, space agencies are working on a plan of action—sans nukes and Bruce Willis—should we discover one.

In 2022, NASA rammed the DART spacecraft into an asteroid, Dimorphos, to see if it would deflect the space rock’s orbit. This is a planetary defense strategy known as a “kinetic impactor.” Scientists thought DART might change the asteroid’s orbit by 7 minutes. Instead, DART changed Dimorphos’ orbit by a whopping 33 minutes, much of which was due to recoil produced by a giant plume of material ejected by the impact.

The conclusion of scientists studying the aftermath? “Kinetic impactor technology is a viable technique to potentially defend Earth if necessary.” With the caveat: If we have enough time. Such impacts amount to a nudge, so we need years of advance notice.

Algorithms like THOR could help give us that crucial heads up.

Image Credit: B612 Foundation

Kategorie: Transhumanismus

AI Can Now Generate Entire Songs on Demand. What Does This Mean for Music as We Know It?

Singularity HUB - 7 Květen, 2024 - 19:28

In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor—Udio—arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.

After playing with Suno and Udio, I’ve been thinking about what it is exactly they change—and what they might mean not only for the way professionals and amateur artists create music, but the way all of us consume it.

Expressing Emotion Without Feeling It

Generating audio from text prompts in itself is nothing new. However, Suno and Udio have made an obvious development: from a simple text prompt, they generate song lyrics (using a ChatGPT-like text generator), feed them into a generative voice model, and integrate the “vocals” with generated music to produce a coherent song segment.

This integration is a small but remarkable feat. The systems are very good at making up coherent songs that sound expressively “sung” (there I go anthropomorphizing).

The effect can be uncanny. I know it’s AI, but the voice can still cut through with emotional impact. When the music performs a perfectly executed end-of-bar pirouette into a new section, my brain gets some of those little sparks of pattern-processing joy that I might get listening to a great band.

To me this highlights something sometimes missed about musical expression: AI doesn’t need to experience emotions and life events to successfully express them in music that resonates with people.

Music as an Everyday Language

Like other generative AI products, Suno and Udio were trained on vast amounts of existing work by real humans—and there is much debate about those humans’ intellectual property rights.

Nevertheless, these tools may mark the dawn of mainstream AI music culture. They offer new forms of musical engagement that people will just want to use, to explore, to play with, and actually listen to for their own enjoyment.

AI capable of “end-to-end” music creation is arguably not technology for makers of music, but for consumers of music. For now it remains unclear whether users of Udio and Suno are creators or consumers—or whether the distinction is even useful.

A long-observed phenomenon in creative technologies is that as something becomes easier and cheaper to produce, it is used for more casual expression. As a result, the medium goes from an exclusive high art form to more of an everyday language—think what smartphones have done to photography.

So imagine you could send your father a professionally produced song all about him for his birthday, with minimal cost and effort, in a style of his preference—a modern-day birthday card. Researchers have long considered this eventuality, and now we can do it. Happy birthday, Dad!

Mr Bown’s Blues. Generated by Oliver Bown using Udio [3.75 MB (download)]

Can You Create Without Control?

Whatever these systems have achieved and may achieve in the near future, they face a glaring limitation: the lack of control.

Text prompts are often not much good as precise instructions, especially in music. So these tools are fit for blind search—a kind of wandering through the space of possibilities—but not for accurate control. (That’s not to diminish their value. Blind search can be a powerful creative force.)

Viewing these tools as a practicing music producer, things look very different. Although Udio’s about page says “anyone with a tune, some lyrics, or a funny idea can now express themselves in music,” I don’t feel I have enough control to express myself with these tools.

I can see them being useful to seed raw materials for manipulation, much like samples and field recordings. But when I’m seeking to express myself, I need control.

Using Suno, I had some fun finding the most gnarly dark techno grooves I could get out of it. The result was something I would absolutely use in a track.

Cheese Lovers’ Anthem. Generated by Oliver Bown using Suno [2.75 MB (download)]
But I found I could also just gladly listen. I felt no compulsion to add anything or manipulate the result to add my mark.

And many jurisdictions have declared that you won’t be awarded copyright for something just because you prompted it into existence with AI.

For a start, the output depends just as much on everything that went into the AI—including the creative work of millions of other artists. Arguably, you didn’t do the work of creation. You simply requested it.

New Musical Experiences in the No-Man’s Land Between Production and Consumption

So Udio’s declaration that anyone can express themselves in music is an interesting provocation. The people who use tools like Suno and Udio may be considered more consumers of music AI experiences than creators of music AI works, or as with many technological impacts, we may need to come up with new concepts for what they’re doing.

A shift to generative music may draw attention away from current forms of musical culture, just as the era of recorded music saw the diminishing (but not death) of orchestral music, which was once the only way to hear complex, timbrally rich and loud music. If engagement in these new types of music culture and exchange explodes, we may see reduced engagement in the traditional music consumption of artists, bands, radio and playlists.

While it is too early to tell what the impact will be, we should be attentive. The effort to defend existing creators’ intellectual property protections, a significant moral rights issue, is part of this equation.

But even if it succeeds, I believe it won’t fundamentally address this potentially explosive cultural shift. Claims that such music is inferior have historically done little to halt cultural change—as happened with techno, and jazz long before it. Government AI policies may need to look beyond these issues to understand how music works socially and to ensure that our musical cultures are vibrant, sustainable, enriching, and meaningful for both individuals and communities.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Pawel Czerwinski / Unsplash

Kategorie: Transhumanismus

A Massive Study Is Revealing Why Exercise Is So Good for Our Health

Singularity HUB - 6 Květen, 2024 - 21:30

We all know that exercise is good for us.

A brisk walk of roughly an hour a day can stave off chronic diseases, including heart or blood vessel issues and Type 2 diabetes. Regular exercise delays memory loss due to aging, boosts the immune system, slashes stress, and may even increase lifespan.

For decades, scientists have tried to understand why. Throughout the body, our organs and tissues release a wide variety of molecules during—and even after—exercise to reap its benefits. But no single molecule works alone. The hard part is understanding how they collaborate in networks after exercise.

Enter the Molecular Transducers of Physical Activity Consortium (MoTrPAC) project. Established nearly a decade ago and funded by the National Institutes of Health (NIH), the project aims to create comprehensive molecular maps of how genes and proteins change after exercise in both rodents and people. Rather than focusing on single proteins or genes, the project takes a Google Earth approach—let’s see the overall picture.

It’s not simply for scientific curiosity. If we can find important molecular processes that trigger exercise benefits, we could potentially mimic those reactions using medications and help people who physically can’t work out—a sort of “exercise in a pill.”

This month, the project announced multiple results.

In one study, scientists built an atlas of bodily changes before, during, and after exercise in rats. Altogether, the team collected nearly 9,500 samples across multiple tissues to examine how exercise changes gene expression across the body. Another study detailed differences between sexes after exercise. A third team mapped exercise-related genes to those associated with diseases.

According to the project’s NIH webpage: “When the MoTrPAC study is completed, it will be the largest research study examining the link between exercise and its improvement of human health.”

Work It

Our tissues are chatterboxes. The gut “talks” to the brain through a vast maze of molecules. Muscles pump out proteins to fine-tune immune system defenses. Plasma—the liquid part of blood—can transfer the learning and memory benefits of running when injected into “couch potato” mice and delay cognitive decline.

Over the years, scientists have identified individual molecules and processes that could mediate these effects, but the health benefits are likely due to networks of molecules working together.

“MoTrPAC was launched to fill an important gap in exercise research,” said former NIH director Dr. Francis Collins in a 2020 press release. “It shifts focus from a specific organ or disease to a fundamental understanding of exercise at the molecular level—an understanding that may lead to personalized, prescribed exercise regimens based on an individual’s needs and traits.”

The project has two arms. One observes rodents before, during, and after wheel running to build comprehensive maps of molecular changes due to exercise. These maps aim to capture gene expression alongside metabolic and epigenetic changes in multiple organs.

Another arm will recruit roughly 2,600 healthy volunteers aged 10 to over 60 years old. With a large pool of participants, the team hopes to account for variation between people and even identify differences in the body’s response to exercise based on age, gender, or race. The volunteers will undergo 12 weeks of exercise, either endurance training—such as long-distance running—or weightlifting.

Altogether, the goal is to detect how exercise affects cells at a molecular level in multiple tissue types—blood, fat, and muscle.

Exercise Encyclopedia

Last week, MoTrPAC released an initial wave of findings.

In one study, the group collected blood and 18 different tissue samples from adult rats, both male and female, as they ran over periods ranging from one week to two months. The team then screened how the body changes with exercise by comparing rats that worked out with sedentary “couch potato” rats as a baseline. Physical training increased the rats’ aerobic capacity—the amount of oxygen the body can use—by roughly 17 percent.

Next, the team analyzed the molecular fingerprints of exercise in whole blood, plasma, and 18 solid tissues, including heart, liver, lung, kidney, fat tissue, and the hippocampus, a brain region associated with memory. They used an impressive array of tools that, for example, captured changes in overall gene expression and the epigenetic landscape. Others mapped differences in the body’s proteins, fat, immune system, and metabolism.

“Altogether, datasets were generated from 9,466 assays across 211 combinations of tissues and molecular platforms,” wrote the team.

Using an AI-based method, they integrated the results across time into a comprehensive molecular map. The map pinpointed multiple molecular changes that could ease liver disease and inflammatory bowel disease and protect against heart problems and tissue injuries.

All this represents “the first whole-organism molecular map” capturing how exercise changes the body, wrote the team. (All of the data is free to explore.)

Venus and Mars

Most previous studies on exercise in rodents focused on males. What about the ladies?

After analyzing the MoTrPAC database, another study found that exercise changes the body’s molecular signaling differently depending on biological sex.

After running, female rats showed activation of genes in white fat—the type under the skin—related to insulin signaling and the body’s ability to form fat. Meanwhile, males showed molecular signatures of a ramped-up metabolism.

With consistent exercise, male rats rapidly lost fat and weight, whereas females maintained their curves but with improved insulin signaling, which might protect them against heart diseases.

A third study integrated gene expression data collected from exercised rats with disease-relevant gene databases previously found in humans. The goal is to link workout-related genes in a particular organ or tissue with a disease or other health outcome—what the authors call “trait-tissue-gene triplets.” Overall, they found 5,523 triplets “to serve as a valuable starting point for future investigations,” they wrote.

We’re only scratching the surface of the complex puzzle that is exercise. Through extensive mapping efforts, the project aims to eventually tailor workout regimens for people with chronic diseases or identify key “druggable” components that could confer some health benefits of exercise with a pill.

“This is an unprecedented large-scale effort to begin to explore—in extreme detail—the biochemical, physiological, and clinical impact of exercise,” Dr. Russell Tracy at the University of Vermont, a MoTrPAC member, said in a press release.

Image Credit: Fitsum Admasu / Unsplash

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through May 4)

Singularity HUB - 4 Květen, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Sam Altman Says Helpful Agents Are Poised to Become AI’s Killer Function
James O’Donnell | MIT Technology Review
“Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a ‘super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.’ It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.”

COMPUTING

Expect a Wave of Wafer-Scale Computers
Samuel K. Moore | IEEE Spectrum
“At TSMC’s North American Technology Symposium on Wednesday, the company detailed both its semiconductor technology and chip-packaging technology road maps. While the former is key to keeping the traditional part of Moore’s Law going, the latter could accelerate a trend toward processors made from more and more silicon, leading quickly to systems the size of a full silicon wafer. …In 2027, you will get a full-wafer integration that delivers 40 times as much compute power, more than 40 reticles’ worth of silicon, and room for more than 60 high-bandwidth memory chips, TSMC predicts.”

FUTURE

Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?
Will Knight | Wired
“With the publication of his last book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity. …Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World, considers a future in which humanity has successfully developed superintelligent machines but averted disaster.”

TECH

AI Start-Ups Face a Rough Financial Reality Check
Cade Metz, Karen Weise, and Tripp Mickle | The New York Times
“The AI revolution, it is becoming clear in Silicon Valley, is going to come with a very big price tag. And the tech companies that have bet their futures on it are scrambling to figure out how to close the gap between those expenses and the profits they hope to make somewhere down the line.”

ROBOTICS

Every Tech Company Wants to Be Like Boston Dynamics
Jacob Stern | The Atlantic
“Clips of robots running faster than Usain Bolt and dancing in sync, among many others, have helped [Boston Dynamics] reach true influencer status. Its videos have now been viewed more than 800 million times, far more than those of much bigger tech companies, such as Tesla and OpenAI. The creator of Black Mirror even admitted that an episode in which killer robot dogs chase a band of survivors across an apocalyptic wasteland was directly inspired by Boston Dynamics’ videos.”

ETHICS

ChatGPT Shows Better Moral Judgment Than a College Undergrad
Kyle Orland | Ars Technica
“In ‘Attributions toward artificial agents in a modified Moral Turing Test’…[Georgia State University] researchers found that morality judgments given by ChatGPT4 were ‘perceived as superior in quality to humans’ along a variety of dimensions like virtuosity and intelligence. But before you start to worry that philosophy professors will soon be replaced by hyper-moral AIs, there are some important caveats to consider.”

SPACE

New Space Company Seeks to Solve Orbital Mobility With High Delta-V Spacecraft
Eric Berger | Ars Technica
“[Portal Space Systems founder, Jeff Thornburg] envisions a fleet of refuelable Supernova vehicles at medium-Earth and geostationary orbit capable of swooping down to various orbits and providing services such as propellant delivery, mobility, and observation for commercial and military satellites. His vision is to provide real-time, responsive capability for existing satellites. If one needs to make an emergency maneuver, a Supernova vehicle could be there within a couple of hours. ‘If we’re going to have a true space economy, that means logistics and supply services,’ he said.”

AUTOMATION

Google’s Waymo Is Expanding Its Self-Driving ‘Robotaxi’ Testing
William Gavin | Quartz
“Waymo plans to soon start testing fully autonomous rides across California’s San Francisco Peninsula, despite criticism and concerns from residents and city officials. In the coming weeks, Waymo employees will begin testing rides without a human driver on city streets north of San Mateo, the company said Friday.”

VIRTUAL REALITY

Ukraine Unveils AI-Generated Foreign Ministry Spokesperson
Agence France-Presse | The Guardian
“Dressed in a dark suit, the spokesperson introduced herself as Victoria Shi, a ‘digital person,’ in a presentation posted on social media. The figure gesticulates with her hands and moves her head as she speaks. The foreign ministry’s press service said that the statements given by Shi would not be generated by AI but ‘written and verified by real people.'”

Image Credit: Drew Walker / Unsplash

Kategorie: Transhumanismus

This Plastic Is Embedded With Bacterial Spores That Break It Down After It’s Thrown Out

Singularity HUB - 2 Květen, 2024 - 20:49

Getting microbes to eat plastic is a frequently touted solution to our growing waste problem, but making the approach practical is tricky. A new technique that impregnates plastic with the spores of plastic-eating bacteria could make the idea a reality.

The impact of plastic waste on the environment and our health has gained increasing attention in recent years. The latest round of UN talks aiming for a global treaty to end plastic pollution just concluded in Ottawa, Canada earlier this week, though considerable disagreements remain.

Recycling will inevitably be a crucial ingredient in any plan to deal with the problem. But a 2022 report from the Organization for Economic Cooperation and Development found only 9 percent of plastic waste ever gets recycled. That’s partly due to the fact that existing recycling approaches are energy intensive and time consuming.

This has spurred a search for new approaches, and one of the most promising is the use of bacteria to break down plastics, either by rendering them harmless or using them to produce building blocks that can be repurposed into other valuable materials and chemicals. The main problem with the approach is making sure plastic waste ends up in the same place as these plastic-loving bacteria.

Now, researchers have come up with an ingenious solution: embed microbes in plastic during the manufacturing process. Not only did the approach result in 93 percent of the plastic biodegrading within five months, but it even increased the strength and stretchability of the material.

“What’s remarkable is that our material breaks down even without the presence of additional microbes,” project co-leader Jon Pokorski from the University of California San Diego said in a press release.

“Chances are, most of these plastics will likely not end up in microbially rich composting facilities. So this ability to self-degrade in a microbe-free environment makes our technology more versatile.”

The main challenge when it came to incorporating bacteria into plastics was making sure they survived the high temperatures involved in manufacturing the material. The researchers worked with a soft plastic called thermoplastic polyurethane (TPU), which is used in footwear, cushions, and memory foam. TPU is manufactured by melting pellets of the material at around 275 degrees Fahrenheit and then extruding it into the desired shape.

Given the need to survive these high temperatures, the researchers selected a plastic-eating bacterium called Bacillus subtilis, which can form spores allowing it to survive harsh conditions. Even then, they discovered more than 90 percent of the bacteria were killed in under a minute at those temperatures.

So, the team used a technique called adaptive laboratory evolution to create a more heat-tolerant strain of the bacteria. They dunked the spores in boiling water for increasing lengths of time, collected the survivors, grew the population back up, and repeated the process. Over time, this selected for mutations that conferred greater heat tolerance, until the researchers were left with a strain able to withstand the manufacturing process.
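
The selection loop described above can be sketched as a toy simulation. This is purely illustrative: the population sizes, tolerance scores, and shock durations below are invented, not taken from the study.

```python
import random

def heat_shock_selection(population_size=1000, rounds=10, seed=42):
    """Toy model of adaptive laboratory evolution for heat tolerance.

    Each spore gets a 'tolerance' score (seconds it survives in boiling
    water). Every round, lengthen the heat shock, keep only the
    survivors, and regrow the population with small mutation noise.
    All numbers here are illustrative, not from the study.
    """
    rng = random.Random(seed)
    population = [rng.gauss(10, 2) for _ in range(population_size)]
    shock = 5.0  # seconds of boiling-water exposure
    for _ in range(rounds):
        survivors = [t for t in population if t >= shock]
        if not survivors:  # shock too harsh for everyone; stop
            break
        # Regrow the population from survivors, with mutation noise
        population = [rng.choice(survivors) + rng.gauss(0, 0.5)
                      for _ in range(population_size)]
        shock += 2.0  # select harder next round
    return sum(population) / len(population)

mean_tolerance = heat_shock_selection()
```

Each round truncates the low end of the tolerance distribution, so the population mean drifts upward — the same ratchet the researchers exploited in the lab.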

When they incorporated the spores into the plastic, they were surprised to find the bacteria actually improved the mechanical properties of the material. In essence, the spores acted like steel rebar in concrete, making it harder to break and increasing its stretchability.

To test whether the impregnated spores could help the plastic biodegrade, the researchers took small strips of the plastic and put them in sterilized compost. After five months, they found the strips had lost 93 percent of their mass compared to 44 percent for TPU without spores, which suggests the spores were reactivated by nutrients in the compost and helped degrade the plastic substantially faster.

It’s unclear if the approach would work with other plastics, though the researchers say they plan to find out. There is also a danger the spores could reactivate before the plastic is disposed of, which could shorten the life of any products made with it. Perhaps most crucially, plastics researcher Steve Fletcher from the University of Portsmouth in the UK told the BBC that this kind of technology could distract from efforts to limit plastic waste.

“Care must be taken with potential solutions of this sort, which could give the impression that we should worry less about plastic pollution because any plastic leaking into the environment will quickly, and ideally safely, degrade,” he said. “For the vast majority of plastics, this is not the case.”

Given the scale of the plastic pollution problem today, though, any attempt to mitigate the harm should be welcomed. While it's early days, the prospect of making plastic that can biodegrade itself could go a long way toward tackling the problem.

Image Credit: David Baillot/UC San Diego Jacobs School of Engineering

Kategorie: Transhumanismus

AI Is Gathering a Growing Amount of Training Data Inside Virtual Worlds

Singularity HUB - May 1, 2024 - 18:52

To anyone living in a city where autonomous vehicles operate, it would seem they need a lot of practice. Robotaxis travel millions of miles a year on public roads in an effort to gather data from sensors—including cameras, radar, and lidar—to train the neural networks that operate them.

In recent years, due to a striking improvement in the fidelity and realism of computer graphics technology, simulation is increasingly being used to accelerate the development of these algorithms. Waymo, for example, says its autonomous vehicles have already driven some 20 billion miles in simulation. In fact, all kinds of machines, from industrial robots to drones, are gathering a growing amount of their training data and practice hours inside virtual worlds.

According to Gautham Sholingar, a senior manager at Nvidia focused on autonomous vehicle simulation, one key benefit is accounting for obscure scenarios for which it would be nearly impossible to gather training data in the real world.

“Without simulation, there are some scenarios that are just hard to account for. There will always be edge cases which are difficult to collect data for, either because they are dangerous and involve pedestrians or things that are challenging to measure accurately like the velocity of faraway objects. That’s where simulation really shines,” he told me in an interview for Singularity Hub.

While it isn’t ethical to have someone run unexpectedly into a street to train AI to handle such a situation, it’s significantly less problematic for an animated character inside a virtual world.

Industrial use of simulation has been around for decades, as Sholingar pointed out, but a convergence of improvements in computing power, the ability to model complex physics, and the development of the GPUs powering today's graphics indicates we may be witnessing a turning point in the use of simulated worlds for AI training.

Graphics quality matters because of the way AI “sees” the world.

When a neural network processes image data, it’s converting each pixel’s color into a corresponding number. For black and white images, the number ranges from 0, which indicates a fully black pixel, up to 255, which is fully white, with numbers in between representing some variation of grey. For color images, the widely used RGB (red, green, blue) model can correspond to over 16 million possible colors. So as graphics rendering technology becomes ever more photorealistic, the distinction between pixels captured by real-world cameras and ones rendered in a game engine is falling away.
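
The encoding described above takes only a few lines to show. The tiny images here are invented examples; the point is that a network receives the same kind of numbers whether they came from a camera or a renderer.

```python
# A 2x2 grayscale image: each pixel is one number from 0 (black) to 255 (white).
gray_image = [
    [0, 128],
    [255, 64],
]

# The same idea in color: each pixel is an (R, G, B) triple, so there are
# 256 * 256 * 256 = 16,777,216 possible colors.
rgb_image = [
    [(255, 0, 0), (0, 255, 0)],      # red, green
    [(0, 0, 255), (255, 255, 255)],  # blue, white
]

num_colors = 256 ** 3  # 16,777,216

# A neural network sees only these numbers -- whether they were captured
# by a physical camera or rendered by a game engine is invisible to it.
flat = [channel for row in rgb_image for pixel in row for channel in pixel]
```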

Simulation is also a powerful tool because it’s increasingly able to generate synthetic data for sensors beyond just cameras. While high-quality graphics are both appealing and familiar to human eyes, which is useful in training camera sensors, rendering engines are also able to generate radar and lidar data as well. Combining these synthetic datasets inside a simulation allows the algorithm to train using all the various types of sensors commonly used by AVs.

Due to its expertise in producing the GPUs needed to generate high-quality graphics, Nvidia has positioned itself as a leader in the space. In 2021, the company launched Omniverse, a simulation platform capable of rendering high-quality synthetic sensor data and modeling real-world physics relevant to a variety of industries. Now, developers are using Omniverse to generate sensor data to train autonomous vehicles and other robotic systems.

In our discussion, Sholingar described some specific ways these types of simulations may be useful in accelerating development. The first involves the fact that with a bit of retraining, perception algorithms developed for one type of vehicle can be re-used for other types as well. However, because the new vehicle has a different sensor configuration, the algorithm will be seeing the world from a new point of view, which can reduce its performance.

“Let’s say you developed your AV on a sedan, and you need to go to an SUV. Well, to train it then someone must change all the sensors and remount them on an SUV. That process takes time, and it can be expensive. Synthetic data can help accelerate that kind of development,” Sholingar said.

Another area involves training algorithms to accurately detect faraway objects, especially in highway scenarios at high speeds. Since objects over 200 meters away often appear as just a few pixels and can be difficult for humans to label, there isn’t typically enough training data for them.

“For the far ranges, where it’s hard to annotate the data accurately, our goal was to augment those parts of the dataset,” Sholingar said. “In our experiment, using our simulation tools, we added more synthetic data and bounding boxes for cars at 300 meters and ran experiments to evaluate whether this improves our algorithm’s performance.”

According to Sholingar, these efforts allowed their algorithm to detect objects more accurately beyond 200 meters, something only made possible by their use of synthetic data.
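
The article doesn't describe Nvidia's actual data pipeline, but the augmentation idea — keep all real labels and fill only the long-range gap with synthetic ones — can be sketched as follows. All class names, field names, and distances here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LabeledObject:
    distance_m: float  # distance from the ego vehicle
    source: str        # "real" or "synthetic"

def augment_long_range(real_labels, synthetic_labels, cutoff_m=200.0):
    """Keep all real labels, and add synthetic ones only beyond the range
    where human annotation becomes unreliable (the gap described above)."""
    far_synthetic = [s for s in synthetic_labels if s.distance_m > cutoff_m]
    return real_labels + far_synthetic

real = [LabeledObject(50.0, "real"), LabeledObject(180.0, "real")]
synthetic = [LabeledObject(120.0, "synthetic"), LabeledObject(300.0, "synthetic")]

training_set = augment_long_range(real, synthetic)
# Only the 300 m synthetic label passes the cutoff; the 120 m one is
# dropped because real data already covers that range.
```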

While many of these developments are due to better visual fidelity and photorealism, Sholingar also stressed this is only one aspect of what makes capable real-world simulations.

“There is a tendency to get caught up in how beautiful the simulation looks since we see these visuals, and it’s very pleasing. What really matters is how the AI algorithms perceive these pixels. But beyond the appearance, there are at least two other major aspects which are crucial to mimicking reality in a simulation.”

First, engineers need to ensure there is enough representative content in the simulation. This is important because an AI must be able to detect a diversity of objects in the real world, including pedestrians with different colored clothes or cars with unusual shapes, like roof racks with bicycles or surfboards.

Second, simulations have to depict a wide range of pedestrian and vehicle behavior. Machine learning algorithms need to know how to handle scenarios where a pedestrian stops to look at their phone or pauses unexpectedly when crossing a street. Other vehicles can behave in unexpected ways too, like cutting in close or pausing to wave an oncoming vehicle forward.

“When we say realism in the context of simulation, it often ends up being associated only with the visual appearance part of it, but I usually try to look at all three of these aspects. If you can accurately represent the content, behavior, and appearance, then you can start moving in the direction of being realistic,” he said.

It also became clear in our conversation that while simulation will be an increasingly valuable tool for generating synthetic data, it isn’t going to replace real-world data collection and testing.

“We should think of simulation as an accelerator to what we do in the real world. It can save time and money and help us with a diversity of edge-case scenarios, but ultimately it is a tool to augment datasets collected from real-world data collection,” he said.

Beyond Omniverse, the wider industry of helping "things that move" develop autonomy is undergoing a shift toward simulation. Tesla announced they're using similar technology to develop automation in Unreal Engine, while the Canadian startup Waabi is taking a simulation-first approach to training its self-driving software. Microsoft, meanwhile, has experimented with a similar tool to train autonomous drones, although the project was recently discontinued.

While training and testing in the real world will remain a crucial part of developing autonomous systems, the continued improvement of physics and graphics engine technology means that virtual worlds may offer a low-stakes sandbox for machine learning algorithms to mature into functional tools that can power our autonomous future.

Image Credit: Nvidia

Kategorie: Transhumanismus

Mind-Bending Math Could Stop Quantum Hackers—but Few Understand It

Singularity HUB - April 30, 2024 - 20:04

Imagine that the tap of the card that bought you a cup of coffee this morning also let a hacker halfway across the world access your bank account and buy whatever they liked. Now imagine it wasn't a one-off glitch, but happened all the time: Imagine the locks that secure our electronic data suddenly stopped working.

This is not a science fiction scenario. It may well become a reality when sufficiently powerful quantum computers come online. These devices will use the strange properties of the quantum world to untangle secrets that would take ordinary computers more than a lifetime to decipher.

We don’t know when this will happen. However, many people and organizations are already concerned about so-called “harvest now, decrypt later” attacks, in which cybercriminals or other adversaries steal encrypted data now and store it away for the day when they can decrypt it with a quantum computer.

As the advent of quantum computers grows closer, cryptographers are trying to devise new mathematical schemes to secure data against their hypothetical attacks. The mathematics involved is highly complex—but the survival of our digital world may depend on it.

‘Quantum-Proof’ Encryption

The task of cracking much current online security boils down to the mathematical problem of finding the two secret numbers that, when multiplied together, produce a given third number. You can think of this third number as a key that unlocks the secret information. As this number gets bigger, the time it takes an ordinary computer to find its factors becomes longer than our lifetimes.
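
A brute-force factoring sketch makes the asymmetry concrete. The number below is a toy example; real encryption keys use primes hundreds of digits long, far beyond any trial-division search.

```python
def trial_division_factor(n):
    """Find the smallest prime factor of n by brute force.

    The loop runs up to sqrt(n), so the work grows rapidly with the size
    of n -- which is why factoring the enormous numbers used in real
    encryption is hopeless for classical computers, while Shor's
    algorithm on a quantum computer would crack them quickly.
    """
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# A tiny example: 2021 = 43 * 47.
p = trial_division_factor(2021)
q = 2021 // p
```

Multiplying 43 by 47 is instant; going the other way already takes a search, and that gap widens astronomically as the numbers grow.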

Future quantum computers, however, should be able to crack these codes much more quickly. So the race is on to find new encryption algorithms that can stand up to a quantum attack.

The US National Institute of Standards and Technology has been calling for proposed “quantum-proof” encryption algorithms for years, but so far few have withstood scrutiny. (One proposed algorithm, called Supersingular Isogeny Key Encapsulation, was dramatically broken in 2022 with the aid of Australian mathematical software called Magma, developed at the University of Sydney.)

The race has been heating up this year. In February, Apple updated the security system for the iMessage platform to protect data that may be harvested for a post-quantum future.

Two weeks ago, scientists in China announced they had installed a new “encryption shield” to protect the Origin Wukong quantum computer from quantum attacks.

Around the same time, cryptographer Yilei Chen announced he had found a way quantum computers could attack an important class of algorithms based on the mathematics of lattices, which were considered some of the hardest to break. Lattice-based methods are part of Apple’s new iMessage security, as well as two of the three frontrunners for a standard post-quantum encryption algorithm.

What Is a Lattice-Based Algorithm?

A lattice is an arrangement of points in a repeating structure, like the corners of tiles in a bathroom or the atoms in a diamond crystal. The tiles are two dimensional and the atoms in diamond are three dimensional, but mathematically we can make lattices with many more dimensions.

Most lattice-based cryptography is based on a seemingly simple question: If you hide a secret point in such a lattice, how long will it take someone else to find the secret location starting from some other point? This game of hide and seek can underpin many ways to make data more secure.

A variant of the lattice problem called “learning with errors” is considered to be too hard to break even on a quantum computer. As the size of the lattice grows, the amount of time it takes to solve is believed to increase exponentially, even for a quantum computer.
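
A toy version of "learning with errors" shows where the hardness comes from. The dimensions and modulus below are deliberately tiny and purely illustrative; real schemes use far larger parameters.

```python
import random

def lwe_sample(secret, q=97, noise=1, rng=random):
    """One learning-with-errors sample: a random vector a, and
    b = <a, secret> + small error (mod q).

    Without the error term, collecting a few samples and solving for
    `secret` is simple linear algebra. The tiny error is what is
    believed to make recovery hard, even for a quantum computer.
    """
    a = [rng.randrange(q) for _ in secret]
    e = rng.randrange(-noise, noise + 1)
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

rng = random.Random(0)
secret = [3, 14, 15, 9]  # the hidden point, known only to the key holder
samples = [lwe_sample(secret, rng=rng) for _ in range(5)]
```

Each sample narrows down where the secret could be, but the noise blurs every clue — the hide-and-seek game the article describes, played in hundreds of dimensions.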

The lattice problem—like the problem of finding the factors of a large number on which so much current encryption depends—is closely related to a deep open problem in mathematics called the “hidden subgroup problem.”

Yilei Chen’s approach suggested quantum computers may be able to solve lattice-based problems more quickly under certain conditions. Experts scrambled to check his results—and rapidly found an error. After the error was discovered, Chen published an updated version of his paper describing the flaw.

Despite this discovery, Chen’s paper has made many cryptographers less confident in the security of lattice-based methods. Some are still assessing whether Chen’s ideas can be extended to new pathways for attacking these methods.

More Mathematics Required

Chen’s paper set off a storm in the small community of cryptographers who are equipped to understand it. However, it received almost no attention in the wider world—perhaps because so few people understand this kind of work or its implications.

Last year, when the Australian government published a national quantum strategy to make the country “a leader of the global quantum industry” where “quantum technologies are integral to a prosperous, fair and inclusive Australia,” there was an important omission: It didn’t mention mathematics at all.

Australia does have many leading experts in quantum computing and quantum information science. However, making the most of quantum computers—and defending against them—will require deep mathematical training to produce new knowledge and research.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ZENG YILI / Unsplash

Kategorie: Transhumanismus