This Week’s Awesome Tech Stories From Around the Web (Through December 9)

Singularity HUB - December 9, 2023 - 16:00

Google’s Gemini Is the Real Start of the Generative AI Boom
Will Knight | Wired
“[Google DeepMind’s Demis Hassabis] was predictably effusive about Gemini, claiming it introduces new capabilities that will eventually make Google’s products stand out. But Hassabis also said that to deliver AI systems that can understand the world in ways that today’s chatbots can’t, LLMs will need to be combined with other AI techniques. Hassabis is in an aggressive competition with OpenAI, but the rivals seem to agree that radical new approaches are needed.”


Google DeepMind’s New Gemini Model Looks Amazing—but Could Signal Peak AI Hype
Melissa Heikkilä and Will Douglas Heaven | MIT Technology Review
“Hype about Gemini, Google DeepMind’s long-rumored response to OpenAI’s GPT-4, has been building for months. Today the company finally revealed what it has been working on in secret all this time. Was the hype justified? Yes—and no. …It’s a big step for Google, but not necessarily a giant leap for the field as a whole.”


The First Crispr Medicine Is Now Approved in the US
Emily Mullin | Wired
“The US Food and Drug Administration [on Friday] approved a first-of-its kind medical treatment that uses CRISPR gene editing. Called Casgevy, the therapy is intended to treat patients with sickle cell disease, an inherited blood disorder that affects more than 100,000 people in the United States. The UK first approved the groundbreaking treatment on November 16. …The treatment aims to eliminate episodes of debilitating pain that are a hallmark of sickle cell disease.”


I Received the New Gene-Editing Drug for Sickle Cell Disease. It Changed My Life.
Jimi Olaghere | MIT Technology Review
“After I received exa-cel, I started to experience things I had only dreamt of: boundless energy and the ability to recover by merely sleeping. My physical symptoms—including a yellowish tint in my eyes caused by the rapid breakdown of malfunctioning red blood cells—virtually disappeared overnight. Most significantly, I gained the confidence that sickle cell disease won’t take me away from my family, and a sense of control over my own destiny.”


AMD’s Next GPU Is a 3D-Integrated Superchip
Samuel K. Moore | IEEE Spectrum
“AMD lifted the hood on its next AI accelerator chip, the Instinct MI300, at the AMD Advancing AI event today, and it’s an unprecedented feat of 3D integration. MI300, a version of which will power the El Capitan supercomputer, is a layer cake of computing, memory, and communication that’s three slices of silicon high and that can sling as much as 17 terabytes of data vertically between those slices. The result is as much as a 3.4-fold boost in speed for certain machine-learning-critical calculations.”


The EU Just Passed Sweeping New Rules to Regulate AI
Morgan Meaker | Wired
“The European Union agreed on terms of the AI Act, a major new set of rules that will govern the building and use of AI and have major implications for Google, OpenAI, and others racing to develop AI systems. …It’s a milestone law that, lawmakers hope, will create a blueprint for the rest of the world.”


DNA Nanobots Can Exponentially Self-Replicate
Matthew Sparkes | New Scientist
“Feng Zhou at New York University and his colleagues created the tiny machines, which are just 100 nanometers across, using four strands of DNA. The nanorobots are held in a solution with these DNA-strand raw materials, which they arrange into copies of themselves one at a time by using their own structure as a scaffold.”


SpaceX Shares Cinematic Footage of Last Month’s Starship Mission
Trevor Mogg | Digital Trends
“SpaceX has shared spectacular new footage of last month’s launch of the most powerful rocket ever to fly. The cinematic content (see video below) shows the first-stage Super Heavy booster and the Starship spacecraft (collectively known as the Starship) blasting skyward in the second integrated test flight of the vehicle, which could one day carry astronauts to the moon, Mars, and beyond.”


The Binance Crackdown Will Be an ‘Unprecedented’ Bonanza for Crypto Surveillance
Andy Greenberg | Wired
“[The crackdown] means that when the company is sentenced in a matter of months, it will be forced to open its past books to regulators, too. What was once a haven for anarchic crypto commerce is about to be transformed into the opposite: perhaps the most fed-friendly business in the cryptocurrency industry, retroactively offering more than a half-decade of users’ transaction records to US regulators and law enforcement.”


Can’t Sleep? Listen to an AI-Generated Bedtime Story From Jimmy Stewart.
Isabella Kwai | The New York Times
“The sleep and meditation app Calm on Tuesday released a new story for premium users told by Mr. Stewart, the beloved actor who starred in ‘It’s a Wonderful Life.’ But the voice in their ear lulling them to sleep is not from Mr. Stewart, who died in 1997. It is a version of his signature drawl generated by artificial intelligence.”


Meta and IBM Launch AI Alliance
Belle Lin | The Wall Street Journal
“The AI Alliance, whose members include Intel, Oracle, Cornell University and the National Science Foundation, said it is pooling resources to stand behind ‘open innovation and open science’ in AI. Its members largely support open source, an approach in which technology is shared free and draws on a history of collaboration among Big Tech, academics and a fervent movement of independent programmers.”

Image Credit: SpaceX

Category: Transhumanism

Building Telescopes on the Moon Could Transform Astronomy—and It’s Becoming an Achievable Goal

Singularity HUB - December 9, 2023 - 00:25

Lunar exploration is undergoing a renaissance. Dozens of missions, organized by multiple space agencies—and increasingly by commercial companies—are set to visit the moon by the end of this decade. Most of these will involve small robotic spacecraft, but NASA’s ambitious Artemis program aims to return humans to the lunar surface by the middle of the decade.

There are various reasons for all this activity, including geopolitical posturing and the search for lunar resources, such as water-ice at the lunar poles, which can be extracted and turned into hydrogen and oxygen propellant for rockets. However, science is also sure to be a major beneficiary.

The moon still has much to tell us about the origin and evolution of the solar system. It also has scientific value as a platform for observational astronomy.

The potential role for astronomy on Earth’s natural satellite was discussed at a Royal Society meeting earlier this year. The meeting itself had, in part, been sparked by the enhanced access to the lunar surface now in prospect.

Far Side Benefits

Several types of astronomy would benefit. The most obvious is radio astronomy, which can be conducted from the side of the moon that always faces away from Earth—the far side.

The lunar far side is permanently shielded from the radio signals generated by humans on Earth. During the lunar night, it is also protected from the sun. These characteristics make it probably the most “radio-quiet” location in the whole solar system, as no other planet or moon has a side that permanently faces away from Earth. It is therefore ideally suited for radio astronomy.

Radio waves are a form of electromagnetic energy—as are, for example, infrared, ultraviolet, and visible-light waves. They are defined by having different wavelengths in the electromagnetic spectrum.

Radio waves with wavelengths longer than about 15 meters are blocked by Earth’s ionosphere. But radio waves at these wavelengths reach the moon’s surface unimpeded. For astronomy, this is the last unexplored region of the electromagnetic spectrum, and it is best studied from the lunar far side.

Observations of the cosmos at these wavelengths come under the umbrella of “low-frequency radio astronomy.” These wavelengths are uniquely able to probe the structure of the early universe, especially the cosmic “dark ages”—an era before the first galaxies formed.

At that time, most of the matter in the universe, excluding the mysterious dark matter, was in the form of neutral hydrogen atoms. These emit and absorb radiation with a characteristic wavelength of 21 centimeters. Radio astronomers have been using this property to study hydrogen clouds in our own galaxy—the Milky Way—since the 1950s.

Because the universe is constantly expanding, the 21-centimeter signal generated by hydrogen in the early universe has been shifted to much longer wavelengths. As a result, hydrogen from the cosmic “dark ages” will appear to us with wavelengths greater than 10 meters. The lunar far side may be the only place where we can study this.
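The wavelength-stretching argument above comes down to a few lines of arithmetic. The sketch below is a minimal illustration, not from the article; the redshift range used for the dark ages (roughly z ≈ 30–150) is an assumed ballpark figure.

```python
# A minimal sketch of the redshift arithmetic behind the "dark ages" argument:
# the 21 cm hydrogen line, stretched by cosmic expansion, arrives at
# wavelengths the ionosphere blocks from reaching the ground.

C = 299_792_458.0        # speed of light, m/s
REST_WAVELENGTH = 0.21   # 21 cm hydrogen line, meters
IONOSPHERE_CUTOFF = 15.0 # wavelengths beyond ~15 m are blocked (see text)

def observed_wavelength(z: float) -> float:
    """Wavelength today of light emitted at redshift z: lambda_obs = lambda_rest * (1 + z)."""
    return REST_WAVELENGTH * (1 + z)

def frequency_mhz(wavelength_m: float) -> float:
    """Convert a wavelength in meters to a frequency in MHz."""
    return C / wavelength_m / 1e6

# Assumed ballpark: the cosmic dark ages roughly span redshifts z ~ 30-150.
for z in (30, 50, 100, 150):
    lam = observed_wavelength(z)
    status = "blocked by ionosphere" if lam > IONOSPHERE_CUTOFF else "partially observable from Earth"
    print(f"z = {z:>3}: {lam:5.1f} m ({frequency_mhz(lam):5.1f} MHz), {status}")
```

Even at the low end of the range the signal arrives near or beyond the ionospheric cutoff, which is why the far side is so attractive for these observations.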

The astronomer Jack Burns provided a good summary of the relevant science background at the recent Royal Society meeting, calling the far side of the moon a “pristine, quiet platform to conduct low-radio-frequency observations of the early Universe’s Dark Ages, as well as space weather and magnetospheres associated with habitable exoplanets.”

Signals From Other Stars

As Burns says, another potential application of far side radio astronomy is trying to detect radio waves from charged particles trapped by magnetic fields—magnetospheres—of planets orbiting other stars.

This would help to assess how capable these exoplanets are of hosting life. Radio waves from exoplanet magnetospheres would probably have wavelengths greater than 100 meters, so they would require a radio-quiet environment in space. Again, the far side of the moon will be the best location.

A similar argument can be made for attempts to detect signals from intelligent aliens. And, by opening up an unexplored part of the radio spectrum, there is also the possibility of making serendipitous discoveries of new phenomena.

Artist’s conception of the LuSEE-Night radio astronomy experiment on the moon. Image Credit: NASA / Tricia Talbert

We should get an indication of the potential of these observations when NASA’s LuSEE-Night mission lands on the lunar far side in 2025 or 2026.

Crater Depths

The moon also offers opportunities for other types of astronomy. Astronomers have lots of experience with optical and infrared telescopes operating in free space, such as the Hubble Space Telescope and JWST. However, the stability of the lunar surface may confer advantages for these types of instruments.

Moreover, there are craters at the lunar poles that receive no sunlight. Telescopes that observe the universe at infrared wavelengths are very sensitive to heat and therefore have to operate at low temperatures. JWST, for example, needs a huge sunshield to protect it from the sun’s rays. On the moon, a natural crater rim could provide this shielding for free.

Permanently shadowed craters at the lunar poles could eventually host infrared telescopes. Image Credit: LROC / ASU / NASA

The moon’s low gravity may also enable the construction of much larger telescopes than is feasible for free-flying satellites. These considerations have led the astronomer Jean-Pierre Maillard to suggest that the moon may be the future of infrared astronomy.

The cold, stable environment of permanently shadowed craters may also have advantages for the next generation of instruments to detect gravitational waves—“ripples” in space-time caused by processes such as exploding stars and colliding black holes.

Moreover, for billions of years the moon has been bombarded by charged particles from the sun—solar wind—and galactic cosmic rays. The lunar surface may contain a rich record of these processes. Studying them could yield insights into the evolution of both the sun and the Milky Way.

For all these reasons, astronomy stands to benefit from the current renaissance in lunar exploration. In particular, astronomy is likely to benefit from the infrastructure built up on the moon as lunar exploration proceeds. This will include both transportation infrastructure—rockets, landers, and other vehicles—to access the surface, as well as humans and robots on-site to construct and maintain astronomical instruments.

But there is also a tension here: human activities on the lunar far side may create unwanted radio interference, and plans to extract water-ice from shadowed craters might make it difficult for those same craters to be used for astronomy. As my colleagues and I recently argued, we will need to ensure that lunar locations that are uniquely valuable for astronomy are protected in this new age of lunar exploration.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA / Ernie Wright

Category: Transhumanism

Mice Just Passed the Mirror Test. Here’s What That Says About Our Sense of Self

Singularity HUB - December 8, 2023 - 00:26

Here’s a fun test: Dab some blush onto the forehead of a six-month-old baby and plop them in front of a mirror. They might look at their reflection with curiosity but ignore the rouge. Redo the experiment at two years old. Now they’ll likely furrow their brows, touch the blush, and try to wipe it off.

In other words, with a few years of life experience, they’ve learned to see the person in the mirror as “me.”

The so-called mirror test has been a staple in cognitive science to gauge self-recognition—the ability to realize that a reflection of you is you and learn how you differ from other people. It’s a skill that naturally comes to babies, but how this works in the brain has long baffled scientists.

This week, a study in Neuron suggests that mice may also have a rudimentary sense of self.

When the scientists dabbed white ink on the foreheads of mice with black fur, they readily groomed it off when looking at themselves in a mirror, but ignored the stain if it matched their fur tone. Like when we peer in the mirror and see a pimple, the mice “recognized” their reflection and realized something was wrong. Similar to other species—including humans—they could better “recognize” themselves when raised with other mice.

The scientists then used gene mapping technologies to hunt down the neurons involved in self-recognition. Buried in the hippocampus, a brain region associated with memory and the regulation of emotions, cells lit up when the mice saw their reflections in the mirror and also seemed related to their grooming behavior. The mice ignored the white blob on their foreheads when these cells were dampened—as if they no longer recognized themselves.

These lowly rodents join an elite group of animals that has passed the mirror test, including our closest evolutionary cousin, the chimpanzee. Because we can readily record the electrical chatter in their brains, the mice could help unveil the neural circuits behind self-recognition.

To study author Dr. Takashi Kitamura at the University of Texas Southwestern Medical Center, self-recognition isn’t about vanity; it’s about constructing a sense of self.

As we go about our lives, the brain stores information “about where, what, when and who, and the most important component is self-information,” he said in a press release. “Researchers usually examine how the brain encodes or recognizes others,” but how the brain constructs a model of the self is a mystery. These mice may finally crack the black box of self-recognition.

Mirror, Mirror, on the Wall

Glance at a mirror, and you’ll immediately recognize yourself. We take the skill for granted.

Under the hood, constructing a visual sense of “me” takes complex cognitive gymnastics. A dramatic new haircut or pair of glasses can make your reflection strange or even unrecognizable. The brain must gradually recalibrate how you see yourself and still know that it’s you. It’s thought self-recognition relies on high-level cognitive processes, but because it’s based on an internal “sense,” the mechanism has been difficult to gauge objectively.

Here’s where the mirror test comes in. Developed by Dr. Gordon Gallup Jr. in the 1970s, it became a staple among scientists testing self-recognition in an array of species, from killer whales to magpies.

Here’s how it works. Put a mark onto the face of any cooperating species and place them in front of a mirror. Do they recognize that the mark on the face in the mirror is a mark on their own face? Gallup tried it with chimps. “What they did was to reach up and touch and examine the marks on their faces that could only be seen in the mirror,” Gallup told NPR in 2020.

Over the decades, the test was used widely to study childhood development and self-recognition in animals. But because it requires heavy cognitive power, mice were written off.

Not so fast, the new study says.

A Social Reflection

The team first tested mice with glossy black fur to see how they reacted to a mirror.

The mice happily roamed around an “apartment” with two rooms. One side of the “wall” had a mirror, the other did not. To make things more challenging, the mirror wall was moved around every day. When first faced with their reflection, most mice reared up in an aggressive attacking pose—suggesting they didn’t realize they were looking at themselves. Two weeks later, they mostly ignored the reflection.

But is it because they learned to recognize themselves, or that they were happy to live with a strange doppelgänger?

For an answer, the team squeezed a dab of either white or black ink directly onto the mice’s foreheads and set them loose in the chamber. Using deep learning software to detect different types of behavior, the team found that larger white ink stains—but not ones that matched their fur color—caused a grooming frenzy when they saw themselves in the mirror.

The mice furiously pawed at the inkblots but groomed other body parts—whiskers and tails—as usual (despite their reputation, mice love to clean themselves). It’s like finding a sauce splatter on your forehead after seeing yourself in the mirror. You recognize yourself, see the stain, and try to brush it off.

Not all mice behaved the same way. Those raised by foster mice with lighter fur—or those raised alone without social interactions—didn’t mind the white ink blot. Previous studies in gorillas reported similar results, showing that social experiences are critical for self-recognition, explained the team.

Who Am I Inside

To be very clear: The study isn’t saying the mice are self-aware or conscious.

But the setup could help us track down the neurons supporting our sense of self. In one test, the team mapped gene expression changes in the whole brain after the mirror test to see which neurons were activated and then traced their connections.

A small part of the hippocampus, a brain region that encodes and retrieves memories, lit up. When the team dampened these neurons’ activity, the mice no longer groomed the white ink blob in front of the mirror.

Surprisingly, these neurons also sparked to life when the mice saw peers that looked like them. The brain network seems to not only support self-recognition, but also recognition of others that look like us—like a parent.

The study is just a first step toward unraveling the mechanisms behind self-recognition.

And it has flaws. For example, the mirror test doesn’t account for behaviors specific to different species. The urge to wipe off a stain is a very primate-like response and relies on vision. Some species, such as Asian elephants or dogs, both of which have tried the mirror test, may not care about a stain, or they may heavily rely on other senses. Many animals also avoid eye contact—including when looking at themselves in the mirror—as it can be a sign of hostility. While the mice showed signs of self-recognition, they needed far more training and visual cues than a human baby.

But to the authors, the results are a start. Next, they plan to see if mice can recognize themselves with virtual filters—like puppy face ones in social media apps—and hunt down other potential brain regions allowing us to build a visual image of “me.”

Image Credit: Nick Fewings / Unsplash

Category: Transhumanism

IBM Is Planning to Build Its First Fault-Tolerant Quantum Computer by 2029

Singularity HUB - December 7, 2023 - 01:50

This week, IBM announced a pair of shiny new quantum computers.

The company’s Condor processor is the first quantum chip of its kind with over 1,000 qubits, a feat that would have made big headlines just a few years ago. But earlier this year, a startup, Atom Computing, unveiled a 1,180-qubit quantum computer using a different approach. And although IBM says Condor demonstrates it can reliably produce high-quality qubits at scale, it’ll likely be the largest single chip the company makes until sometime next decade.

Instead of growing the number of qubits crammed onto each chip, IBM will focus on getting the most out of the qubits it has. In this respect, the second chip announced, Heron, is the future.

Though Heron has fewer qubits than Condor—just 133—it’s significantly faster and less error-prone. The company plans to combine several of these smaller chips into increasingly more powerful systems, a bit like the multicore processors powering smartphones. The first of these, System Two, also announced this week, contains three linked Heron chips.

IBM also updated its quantum roadmap, a timeline of key engineering milestones, through 2033. Notably, the company is aiming to complete a fault-tolerant quantum computer by 2029. The machine won’t be large enough to run complex quantum algorithms, like the one expected to one day break standard encryption. Still, it’s a bold promise.

Quantum Correction

Practical quantum computers will be able to tackle problems that can’t be solved using classical computers. But today’s systems are far too small and error-ridden to realize that dream. To get there, engineers are working on a solution called error-correction.

A qubit is the fundamental unit of a quantum computer. In your laptop, the basic unit of information is a 1 or 0 represented by a transistor that’s either on or off. In a quantum computer, the unit of information is 1, 0, or—thanks to quantum weirdness—some combination of the two. The physical component can be an atom, electron, or tiny superconducting loop of wire.

Opting for the latter, IBM makes its quantum computers by cooling loops of wire, or transmons, to temperatures near absolute zero and placing them into quantum states. Here’s the problem. Qubits are incredibly fragile, easily falling out of these quantum states throughout a calculation. This introduces errors that make today’s machines unreliable.

One way to solve this problem is to minimize errors. IBM’s made progress here. Heron uses some new hardware to significantly speed up how quickly the system places pairs of qubits into quantum states—an operation known as a “gate”—limiting the number of errors that crop up and spread to neighboring qubits (researchers call this “crosstalk”).

“It’s a beautiful device,” Jay Gambetta, vice president of IBM Quantum, told Ars Technica. “It’s five times better than the previous devices, the errors are way less, [and] crosstalk can’t really be measured.”

But you can’t totally eliminate errors. In the future, redundancy will also be key.

By spreading information between a group of qubits, you can reduce the impact of any one error and also check for and correct errors in the group. Because it takes multiple physical qubits to form one of these error-corrected “logical qubits,” you need an awful lot of them to complete useful calculations. This is why scale matters.
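The redundancy idea above can be illustrated with a classical repetition code. To be clear, this is only an analogy sketched for intuition; real quantum error correction is far subtler (it must also protect information it cannot directly read out). But it shows why spreading one logical unit across more physical units drives the logical error rate down.

```python
import random

# Classical repetition-code analogy (not actual quantum error correction):
# spread one logical bit across several physical copies, then recover it
# by majority vote, which succeeds unless more than half the copies flip.

def encode(bit: int, n_copies: int = 3) -> list[int]:
    """Encode one logical bit as n identical physical bits."""
    return [bit] * n_copies

def apply_noise(bits: list[int], flip_prob: float, rng: random.Random) -> list[int]:
    """Independently flip each physical bit with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote over the physical copies."""
    return int(sum(bits) > len(bits) / 2)

def logical_error_rate(flip_prob: float, n_copies: int, trials: int = 20_000) -> float:
    """Estimate how often the decoded logical bit is wrong."""
    rng = random.Random(42)
    errors = sum(
        decode(apply_noise(encode(1, n_copies), flip_prob, rng)) != 1
        for _ in range(trials)
    )
    return errors / trials

# With a 10% physical error rate, adding redundancy suppresses logical errors.
for n in (1, 3, 7):
    print(f"{n} physical bits per logical bit -> logical error rate ~ {logical_error_rate(0.1, n):.3f}")
```

The same trade-off shows up in the article's framing: each error-corrected logical qubit consumes many physical qubits, which is why the total qubit count matters so much.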

Software can also help. IBM is already employing a technique called error mitigation, announced earlier this year, in which it simulates likely errors and subtracts them from calculations. They’ve also identified a method of error-correction that reduces the number of physical qubits in a logical qubit by nearly an order of magnitude. But all this will require advanced forms of connectivity between qubits, which could be the biggest challenge ahead.

“You’re going to have to tie them together,” Dario Gil, senior vice president and director of research at IBM, told Reuters. “You’re going to have to do many of these things together to be practical about it. Because if not, it’s just a paper exercise.”

On the Road

Something that makes IBM unique in the industry is that it publishes a roadmap looking a decade into the future.

This may seem risky, but to date, they’ve stuck to it. Alongside the Condor and Heron news, IBM also posted an updated version of its roadmap.

Next year, they’ll release an upgraded version of Heron capable of 5,000 gate operations. After Heron comes Flamingo. They’ll link seven of these Flamingo chips into a single system with over 1,000 qubits. They also plan to grow Flamingo’s gate count by roughly 50 percent a year until it hits 15,000 in 2028. In parallel, the company will work on error-correction, beginning with memory, then moving on to communication and gates.
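As a quick sanity check on the growth figure quoted above, compounding Heron’s 5,000 gate operations at roughly 50 percent a year does pass 15,000 within about three years. The sketch below is our own arithmetic, not an IBM projection.

```python
# Compound-growth check of the roadmap claim: ~50% gate-count growth per
# year from 5,000 gate operations, aiming for 15,000 by 2028.

def projected_gates(start: int, growth: float, years: int) -> int:
    """Gate count after compounding `growth` for `years` years."""
    return round(start * growth ** years)

start_gates = 5_000  # upgraded Heron, per the article
for year in range(5):
    print(f"year {year}: ~{projected_gates(start_gates, 1.5, year):,} gate operations")
# 5,000 -> 7,500 -> 11,250 -> 16,875: three years of 50% growth clears
# the 15,000 target, consistent with "roughly 50 percent a year" by 2028.
```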

All this will culminate in a 200-qubit, fault-tolerant chip called Starling in 2029 and a leap in gate operations to 100 million. Starling will give way to the bigger Blue Jay in 2033.

Heisenberg’s Horse Race

Though it may be the most open about them, IBM isn’t alone in its ambitions.

Google is pursuing the same type of quantum computer and has been focused on error-correction over scaling for a few years. Then there are other kinds of quantum computers entirely—some use charged ions as qubits while others use photons, electrons, or like Atom Computing, neutral atoms. Each approach has its tradeoffs.

“When it comes down to it, there’s a simple set of metrics for you to compare the performance of the quantum processors,” Jerry Chow, director of quantum systems at IBM, told The Verge. “It’s scale: what number of qubits can you get to and build reliably? Quality: how long do those qubits live for you to perform operations and calculations on? And speed: how quickly can you actually run executions and problems through these quantum processors?”

Atom Computing favors neutral atoms because they’re identical—eliminating the possibility of manufacturing flaws—can be controlled wirelessly, and operate at room temperature. Chow agrees there are interesting things happening in the neutral atom space, but speed is a drawback. “It comes down to that speed,” he said. “Anytime you have these actual atomic items, either an ion or an atom, your clock rates end up hurting you.”

The truth is the race isn’t yet won, and won’t be for a while yet. New advances or unforeseen challenges could rework the landscape. But Chow said the company’s confidence in its approach is what allows it to look ahead 10 years.

“And to me it’s more that there are going to be innovations within that are going to continue to compound over those 10 years, that might make it even more attractive as time goes on. And that’s just the nature of technology,” he said.

Image Credit: IBM

Category: Transhumanism

Brain Implant Sparks Remarkable Recovery in Patients With Severe Brain Injury

Singularity HUB - December 4, 2023 - 23:15

At 21 years old, a young woman suffered a blow to the head and a severe brain injury in a devastating traffic accident that turned her life upside down.

She’s been living with the consequences ever since, struggling to focus long enough to complete simple everyday tasks. Juggling multiple chores was nearly impossible. Her memory would slip. Words would get stuck on the tip of her tongue. Her body seemed to have a mind of its own. Constantly in motion, it was difficult to sit still. Depression and anxiety clouded her mind.

Eighteen years later, she underwent a surgery that again changed her life. After carefully mapping her brain, surgeons implanted electrodes deep into the thalamus. Made of two bulbous structures—one on each hemisphere—the thalamus is the Grand Central Station of the brain, its connections reaching far and wide across multiple regions. A stimulator, implanted near her collarbone, automatically activated the neural implant for 12 hours a day.

The results were striking. In just three months, her scores improved on a standard test measuring myriad cognitive functions. For the first time in decades, she no longer felt overwhelmed throughout her day. She began to love reading and other hobbies.

“I just—I want to think,” she told the researchers. “I am using my mind…I don’t know why, it just makes me laugh, but it’s amazing to me that I enjoy doing these things.”

The woman, known as P1, took part in a small, ambitious trial seeking to reverse cognitive troubles from brain injuries. Led by Dr. Jaimie Henderson at Stanford University, the clinical trial recruited six people to see if electrically stimulating the thalamus restored the participants’ ability to logically reason, make plans, and focus on a given task.

Five of the participants’ scores improved by up to 52 percent, far outperforming the team’s modest goals by over five-fold. Because the stimulation is automatic, the volunteers went about their daily lives as the implant worked its therapeutic effects under the hood.

The benefits were noticeable. One participant said he could finally concentrate on TV shows, whereas previously he struggled due to short attention span. Another said he could now track multiple activities and switch attention—like keeping up a conversation while putting groceries away.

While promising, the therapy requires brain surgery, which can be risky. One participant withdrew midway due to infection. But for those who tolerated the therapy, it’s been a life-changer not just for them, but for their families.

“I got my daughter back. It’s a miracle,” said a member of P1’s family.

Tunneling Deep

Deep brain stimulation, the core of the therapy, has a long history.

The idea is simple. The brain relies on multiple circuits working in tandem. These connections can break due to disease or injury, making it impossible for electrical signals to coordinate and form thoughts or decisions.

One solution is to bridge broken brain networks with a neural implant. Thanks to sophisticated implants and AI, we can now tap into the brain and spinal cord’s electrical chatter, decode their intent, and use this “neural code” to drive robotic arms or allow paralyzed people to walk again.

While powerful, these implants often sit on the outer layer of the brain or around nerves in the spinal cord that are relatively easy to access.

Deep brain stimulation presents a challenge because it targets regions buried inside the brain. Invented in the 1980s to treat motor symptoms in Parkinson’s disease, the technology has since been used to battle depression, with just a few zaps easing symptoms in the severely depressed.

The new study built on these results. People with long-term traumatic brain injury often struggle with mood and attention span, making it difficult to balance multiple tasks without headaches and fatigue. They also struggle to sit still.

These functions are controlled by different areas of the brain. But one critical link is the thalamus, a hub that connects regions supporting attention, mood, and movement. The thalamus is made up of two garlic-shaped bulbs, one nestled in each of the brain’s hemispheres, that coordinate signals from across the brain. A major sensory relay station, it’s been dubbed “the gateway to consciousness.”

Previous studies in mice pinpointed part of the thalamus as a potential therapeutic hub for traumatic brain injury. Other studies found that stimulating the region was safe in people with minimal consciousness and helped them recover. That’s the region the new study targeted.

Zapping Away

The team narrowed down over 400 volunteers to just six—four men and two women with moderate to severe traumatic brain injury symptoms. Before surgery, they were given multiple tests to gauge their baseline cognitive abilities, mood, and general outlook on life.

Each participant had a commercially available neurostimulator implanted into their thalamus in both brain hemispheres. To catch potential early effects after implantation, they were assigned to three groups based on how soon the implant was turned on post-surgery.

The participants experimented with different zapping patterns for two weeks. Like scrolling through Spotify playlists, each eventually found a pattern optimized to their neural makeup: The stimulation’s timing and intensity allowed them to think clearer and feel better, with minimal side effects. The implant then stimulated their thalamus 12 hours a day for three months.

The results were impressive. Overall, the participants improved between 15 and 52 percent as measured by the same cognitive test used for their baseline. Two patients, including P1, improved so much that they no longer met the criteria for lower moderate disability. This boost in mental capacity suggests the participants can tackle work and reconnect with friends and family with minimal struggle, the team wrote in the study.

Another test halted the stimulation in a handful of participants for nearly a month. Neither the researchers nor participants initially knew whose implants were turned off. Within weeks, two patients noticed they felt much worse and withdrew from the test. Of the three people remaining, two improved—and one got worse—with the stimulator on. Further investigation found the implant was erroneously zapping the non-responsive patient’s brain when it should have been turned off.

Side effects were minimal and didn’t disrupt the participants’ lives. The zapping caused odd jaw muscle sensations in a few people. P1, for example, found she slurred her words when on the highest stimulation intensity. Another person had trouble staying still, and some experienced mood changes.

The study is still early, and many questions remain unanswered. For example, does the treatment work regardless of where the brain was injured? The volunteers were only tested for three months after surgery, meaning longer-term improvements, if any, remain a mystery. That said, multiple participants signed on to keep their implants and participate in future studies.

Even with these caveats, participants and their loved ones were thankful. “It’s so profound to us,” P1’s family member said. “I never would’ve believed it. It’s beyond my hopes, beyond anticipation. Somebody turned the lights back on.”

Image Credit: National Institute of Mental Health, National Institutes of Health

Category: Transhumanism

Are We Ready to Head to Mars? Not So Fast.

Singularity HUB - December 3, 2023 - 16:00

In August 1998, 700 people came to Boulder, Colorado to attend the founding convention of the Mars Society. The group’s cofounder and president, Robert Zubrin, extolled the virtues of sending humans to Mars to terraform the planet and establish a human colony. The Mars Society’s founding declaration began, “The time has come for humanity to journey to the planet Mars,” and declared that “Given the will, we could have our first crews on Mars within a decade.” That was two and a half decades ago.

In their hilarious, highly informative and cheeky book, A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?, Kelly and Zach Weinersmith inventory the challenges standing in the way of Zubrin-like visions for Mars settlement. The wife-and-husband team serves a strong, but never stern, counterargument to the visionaries promising that we’ll put humans on Mars in the very near future. “Think of this book as the straight-talking homesteader’s guide to the rest of the solar system,” they write.

Just as in their previous book, Soonish: Ten Emerging Technologies That’ll Improve and/or Ruin Everything, the authors—she’s a faculty member in the biosciences department at Rice University and he’s a cartoonist—use humor and science to douse techno dreams with a dose of reality. “After a few years of researching space settlements, we began in secret to refer to ourselves as the ‘space bastards’ because we found we were more pessimistic than almost everyone in the space-settlement field,” they write. “We weren’t always this way. The data made us do it.”

While working on their deeply researched book, the Weinersmiths came to view sending people to Mars as a problem far more complicated and difficult than you’d know by listening to enthusiasts like Elon Musk or Robert Zubrin. It’s a challenge that “won’t be solved simply by ambitious fantasies or giant rockets.” Eventually humans are likely to expand into space, the Weinersmiths write, but for now, “the discourse needs more realism—not in order to ruin everyone’s fun, but to provide guardrails against genuinely dangerous directions for planet Earth.”

Figuring out rocket technology and determining the power needs of a settlement or the available minerals on different planets or asteroids is the easy part. The bigger challenges, they argue, are “the big, open questions about things like medicine, reproduction, law, ecology, economics, sociology, and warfare.”

Take physiology. Although we now have a small number of astronauts who have experienced living at the International Space Station for long stretches, these astronauts have not had to deal with nearly as much radiation as would befall travelers venturing far beyond low Earth orbit. “With current knowledge, it’s hard to predict the effect of radiation on the body,” the Weinersmiths write, adding that the need to manage exposure to radiation is “one of the major factors that will shape human habitation designs off-world.”

In the book, they recount architect Brent Sherwood dismissing those popular images of crystalline domes with sweeping views of space as “baseless.” As Sherwood wrote, “Such architecture would bake the inhabitants and their parklands in strong sunlight while poisoning them with space radiation at the same time.” Instead, spomes (short for “space homes”) are likely to be placed underground or, at the very least, surrounded by rocks to protect against radiation.

What’s more, if we’re going to sustain a population far away from Earth, we’ll need to figure out space sex, and the book spends several pages covering the debate over whether this activity has or has not happened yet. Although there’s been speculation that the 1992 space shuttle flight with married couple Mark Lee and Jan Davis would have provided a plausible opportunity for a successful “rendezvous and docking,” the authors write that there’s no evidence this actually happened, and that the five other crew members, all potential witnesses, left the couple little room for privacy.

If space travelers were somehow able to create a pregnancy, it would be no easy ride, the Weinersmiths write. We simply don’t know which, if any, part of the developmental process requires constant gravity, and the mother’s bones would be weakened in microgravity, which could make childbirth risky. If artificial gravity couldn’t be provided to the mother-to-be, an alternative might be a human-sized centrifuge to spin the pregnant person around. Such a device, called an “Apparatus for Facilitating the Birth of a Child by Centrifugal Force,” was patented in 1963, and Zach Weinersmith sketches a diagram of it that shows it to be just as bizarre as it sounds. In fact, his sketches often serve to demonstrate just how absurd some of the ideas promoted around space habitation really are.

What astronauts really long for when they’re away from home is, well, home. Anything that can help them recreate Earth far from home can provide some comfort. The book recalls how cosmonaut Anatoly Berezovoy loved to listen to cassette tapes with recordings of nature sounds like thunder, rain, and birdsongs during his 211-day spaceflight in 1982, saying, “We never grew tired of them.”

Living on Mars, a planet with no birds or rain that gets less than half the sunlight per area Earth does and is often plagued by dust storms that further blot out the sun, could be a soul-deadening experience.

The book spends several chapters covering space law and governance, which, in the Weinersmiths’ hands, is more interesting than it sounds. They explore the philosophical question of “who owns the universe?” and shoot down a common argument “that all law is pointless because if Elon Musk has a Mars settlement, who’s going to stop him?” (“One of your authors has a brother who makes this argument. His name is Marty and he is wrong.”)

In fact, there are already frameworks that could guide space law, and the book covers them, and their alternatives, in detail. They use Earth-bound examples, like the breakup of the former Socialist Federal Republic of Yugoslavia and the governance of Antarctica, to explore how various governance scenarios might play out on other planets.

Mostly though, the Weinersmiths use facts to debunk grand ideas about how fun and easy life will be on Mars. “An Earth with climate change and nuclear war and, like, zombies and werewolves is still a way better place than Mars,” they write.

They also run through a list of “Bad Arguments for Space Settlement,” which include “Space Will Save Humanity from Near-Term Calamity by Providing a New Home,” and “Space Exploration Is a Natural Human Urge.” These detailed examinations of the stark realities regarding space travel and habitation serve as a foil to the breathlessly optimistic accounts that are so ubiquitous in popular media.

Despite often sounding like a couple of Debbie Downers, they somehow succeed at keeping the narrative upbeat and interesting. They do this with humor, frankness, and Zach’s fun sketches. Even as they shoot down a long list of space fantasies, they explore a lot of really interesting research and anecdotes (“Did you know the Colombian constitution asserts a claim to a specific region of space?”), so there’s rarely a dull moment.

The Weinersmiths view themselves not as “barriers on the road to progress” but as “guardrails” who want us to go to Mars as much as anybody. The trouble is that these self-professed science geeks (who watch late-night rocket launches with their kids) “just cannot convince ourselves that the usual arguments for space settlements are good.”

But they also assert, rather earnestly, that “If you hate our conclusions here, we have excellent news: we are not powerful people.”

This article was originally published on Undark. Read the original article.

Image Credit: NASA/Pat Rawlings, SAIC

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through December 2)

Singularity HUB - December 2, 2023 - 19:00

When AI Unplugs, All Bets Are Off
Matthew Smith | IEEE Spectrum
“The next great chatbot will run at lightning speed on your laptop PC—no internet connection required. …Every big name in consumer tech, from Apple to Qualcomm, is racing to optimize its hardware and software to run artificial intelligence at the ‘edge’—meaning on local hardware, not remote cloud servers. The goal? Personalized, private AI so seamless you might forget it’s ‘AI’ at all.”


The ‘Self-Operating’ Computer Emerges
Bryson Masse | VentureBeat
“As [OthersideAI developer Josh] Bickett described, the [self-operating computer] framework ‘lets the AI control both the mouse where it clicks and all the keyboard triggers essentially. It’s like an agent like autoGPT except it’s not text based. It’s vision based so it takes a screenshot of the computer and then it decides mouse clicks and keyboards, exactly like a person would.'”


These Clues Hint at the True Nature of OpenAI’s Shadowy Q* Project
Will Knight | Wired
“Reports of a mysterious breakthrough called Q* at OpenAI sparked anxious rumors. …What could Q* be? Combining a close read of the initial reports with consideration of the hottest problems in AI right now suggests it may be related to a project that OpenAI announced in May, claiming powerful new results from a technique called ‘process supervision.’ The project involved Ilya Sutskever, OpenAI’s chief scientist and cofounder, who helped oust Altman but later recanted—The Information says he led work on Q*.”


The Inside Story of Microsoft’s Partnership with OpenAI
Charles Duhigg | The New Yorker
“At around 11:30 am on the Friday before Thanksgiving, Microsoft’s chief executive, Satya Nadella, was having his weekly meeting with senior leaders when a panicked colleague told him to pick up the phone. An executive from OpenAI, an artificial intelligence startup into which Microsoft had invested a reported thirteen billion dollars, was calling to explain that within the next twenty minutes the company’s board would announce that it had fired Sam Altman, OpenAI’s CEO and co-founder.”


Could a Drug Give Your Pet More Dog Years?
Emily Anthes | The New York Times
“Aging may be an inevitability, but it is not an unyielding one. Scientists have created longer-lived worms, flies, and mice by tweaking key aging-related genes. These findings have raised the tantalizing possibility that scientists might be able to find drugs that had the same life-extending effects in people. That remains an active area of research, but canine longevity has recently started to attract more attention, in part because dogs are good models for human aging and in part because many pet owners would love more time with their furry family members.”


Making an Image With Generative AI Uses as Much Energy as Charging Your Phone
Melissa Heikkilä | MIT Technology Review
“This is the first time the carbon emissions caused by using an AI model for different tasks have been calculated. …Luccioni and her team looked at the emissions associated with 10 popular AI tasks on the Hugging Face platform, such as question answering, text generation, image classification, captioning, and image generation. They ran the experiments on 88 different models.”


Admit It, the Cybertruck Is Awesome
Saahil Desai | The Atlantic
“‘This car is very amateurish,’ Adrian Clarke, a former car designer for Land Rover and a writer for the Autopian, told me. But at least it’s different. Most other EVs can’t say as much, even though the electric age can and should be a chance to make cars not just harder, faster, stronger, and better, but also stranger.”


Robots Made from Human Cells Can Move on Their Own and Heal Wounds
Philip Ball | Scientific American
“In 2020 biologist Michael Levin and his colleagues reported that they had made ‘biological robots’ by shaping clusters of [frog] cells into tiny artificial forms that could ‘walk’ around on surfaces. …Some researchers argued that such behavior wasn’t so surprising in the cells of amphibians, which are renowned for their ability to regenerate body parts if damaged. But now Levin and his colleagues at Tufts University report in Advanced Science that they have made similar ‘robotlike’ entities from human cells. They call them anthrobots.”


Exactly How Much Life Is on Earth?
Dennis Overbye | The New York Times
“What’s in a number? According to a recent calculation by a team of biologists and geologists, there are more living cells on Earth—a million trillion trillion, or 10^30 in math notation, a 1 followed by 30 zeros—than there are stars in the universe or grains of sand on our planet.”

Image Credit: Maxim Berg / Unsplash

Category: Transhumanism

This DeepMind AI Rapidly Learns New Skills Just by Watching Humans

Singularity HUB - December 1, 2023 - 20:00

Teaching algorithms to mimic humans typically requires hundreds or thousands of examples. But a new AI from Google DeepMind can pick up new skills from human demonstrators on the fly.

One of humanity’s greatest tricks is our ability to acquire knowledge rapidly and efficiently from each other. This kind of social learning, often referred to as cultural transmission, is what allows us to show a colleague how to use a new tool or teach our children nursery rhymes.

It’s no surprise that researchers have tried to replicate the process in machines. Imitation learning, in which AI watches a human complete a task and then tries to mimic their behavior, has long been a popular approach for training robots. But even today’s most advanced deep learning algorithms typically need to see many examples before they can successfully copy their trainers.

When humans learn through imitation, they can often pick up new tasks after just a handful of demonstrations. Now, Google DeepMind researchers have taken a step toward rapid social learning in AI with agents that learn to navigate a virtual world from humans in real time.

“Our agents succeed at real-time imitation of a human in novel contexts without using any pre-collected human data,” the researchers write in a paper in Nature Communications. “We identify a surprisingly simple set of ingredients sufficient for generating cultural transmission.”

The researchers trained their agents in a specially designed simulator called GoalCycle3D. The simulator uses an algorithm to generate an almost endless number of different environments based on rules about how the simulation should operate and what aspects of it should vary.

In each environment, small blob-like AI agents must navigate uneven terrain and various obstacles to pass through a series of colored spheres in a specific order. The bumpiness of the terrain, the density of obstacles, and the configuration of the spheres vary between environments.

The agents are trained to navigate using reinforcement learning. They earn a reward for passing through the spheres in the correct order and use this signal to improve their performance over many trials. In addition, the environments feature an expert agent—either hard-coded or controlled by a human—that already knows the correct route through the course.
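
This reward scheme can be sketched in a few lines. The snippet below is a hypothetical toy, not DeepMind's actual GoalCycle3D code: an agent earns a reward only when it touches the next sphere in the prescribed order (the detail that the sequence cycles back to the start is an assumption suggested by the simulator's name).

```python
# Toy sketch of the reward signal described above: the agent is rewarded
# only for passing through colored spheres in the correct order. This is
# a hypothetical simplification, not DeepMind's actual GoalCycle3D code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GoalCycleRewards:
    sphere_order: List[str]  # e.g. ["red", "green", "blue"]
    next_idx: int = 0        # index of the sphere to visit next

    def step(self, sphere_touched: Optional[str]) -> float:
        """Return the reward for one timestep."""
        if sphere_touched is None:
            return 0.0  # no sphere touched, no signal
        if sphere_touched == self.sphere_order[self.next_idx]:
            # Correct sphere: reward, then advance (cycling to the start).
            self.next_idx = (self.next_idx + 1) % len(self.sphere_order)
            return 1.0
        return -1.0  # wrong sphere: penalize

env = GoalCycleRewards(["red", "green", "blue"])
print([env.step(s) for s in ["red", "blue", "green", "blue"]])
# [1.0, -1.0, 1.0, 1.0]
```

In the real system this scalar reward drives a deep reinforcement learning agent; the point of the sketch is only the ordering constraint that makes imitating the expert the fastest path to reward.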

Over many training runs, the AI agents learn not only the fundamentals of how the environments operate, but also that the quickest way to solve each problem is to imitate the expert. To ensure the agents were learning to imitate rather than just memorizing the courses, the team trained them on one set of environments and then tested them on another. Crucially, after training, the team showed that their agents could imitate an expert and continue to follow the route even without the expert.

This required a few tweaks to standard reinforcement learning approaches.

The researchers made the algorithm focus on the expert by having it predict the location of the other agent. They also gave it a memory module. During training, the expert would drop in and out of environments, forcing the agent to memorize its actions for when it was no longer present. The AI also trained on a broad set of environments, which ensured it saw a wide range of possible tasks.

It might be difficult to translate the approach to more practical domains though. A key limitation is that when the researchers tested if the AI could learn from human demonstrations, the expert agent was controlled by one person during all training runs. That makes it hard to know whether the agents could learn from a variety of people.

More pressingly, the ability to randomly alter the training environment would be difficult to recreate in the real world. And the underlying task was simple, requiring no fine motor control and occurring in highly controlled virtual environments.

Still, social learning progress in AI is welcome. If we’re to live in a world with intelligent machines, finding efficient and intuitive ways to share our experience and expertise with them will be crucial.

Image Credit: Juliana e Mariana Amorim / Unsplash

Category: Transhumanism

A Google DeepMind AI Just Discovered 380,000 New Materials. This Robot Is Cooking Them Up.

Singularity HUB - November 30, 2023 - 23:40

A robot chemist just teamed up with an AI brain to create a trove of new materials.

Two collaborative studies from Google DeepMind and the University of California, Berkeley, describe a system that predicts the properties of new materials—including those potentially useful in batteries and solar cells—and produces them with a robotic arm.

We take everyday materials for granted: plastic cups for a holiday feast, components in our smartphones, or synthetic fibers in jackets that keep us warm when chilly winds strike.

Scientists have painstakingly discovered roughly 20,000 different types of materials that let us build anything from computer chips to puffy coats and airplane wings. Tens of thousands more potentially useful materials are in the works. Yet we’ve only scratched the surface.

The Berkeley team developed a chef-like robot that mixes and heats ingredients, automatically transforming recipes into materials. As a “taste test,” the system, dubbed the A-Lab, analyzes the chemical properties of each final product to see if it hits the mark.

Meanwhile, DeepMind’s AI dreamed up myriad recipes for the A-Lab chef to cook. It’s a hefty list. Using a popular machine learning strategy, the AI found two million chemical structures and 380,000 new stable materials—many counter to human intuition. The work is an “order-of-magnitude” expansion on the materials that we currently know, the authors wrote.

Using DeepMind’s cookbook, A-Lab ran for 17 days and synthesized 41 out of 58 target chemicals—a win that would’ve taken months, if not years, of traditional experiments.

Together, the collaboration could launch a new era of materials science. “It’s very impressive,” said Dr. Andrew Rosen at Princeton University, who was not involved in the work.

Let’s Talk Chemicals

Look around you. Many things we take for granted—that smartphone screen you may be scrolling on—are based on materials chemistry.

Scientists have long used trial and error to discover chemically stable structures. Like Lego blocks, these components can be built into complex materials that resist dramatic temperature changes or high pressures, allowing us to explore the world from the deep sea to outer space.

Once mapped, scientists capture the crystal structures of these components and save those structures for reference. Tens of thousands are already deposited into databanks.

In the new study, DeepMind took advantage of these known crystal structures. The team trained an AI system on a massive library with hundreds of thousands of materials called the Materials Project. The library includes materials we’re already familiar with and use, alongside thousands of structures with unknown but potentially useful properties.

DeepMind’s new AI trained on 20,000 known inorganic crystals—and another 28,000 promising candidates—from the Materials Project to learn what properties make a material desirable.

Essentially, the AI works like a cook testing recipes: Add a little something here, change some ingredients there, and through trial-and-error, it reaches the desired results. Fed data from the dataset, it generated predictions for potentially stable new chemicals, along with their properties. The results were fed back into the AI to further hone its “recipes.”

Over many rounds, the training allowed the AI to make small, low-stakes mistakes. Rather than swapping out multiple chemical components at the same time—a potentially catastrophic move—the AI iteratively evaluated small chemical changes. For example, instead of fully replacing one component with another, it could substitute only half. If a swap didn’t work, no problem: the system weeded out any candidates that weren’t stable.
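
The substitute-and-filter loop can be sketched as follows. Everything here is a stand-in: the element list, the compositions, and especially the stability test, which in the real pipeline is a learned prediction of formation energy rather than a hand-written rule.

```python
# Toy sketch of the iterative generate-and-filter loop described above:
# propose single-element substitutions on known compositions, then keep
# only candidates that pass a stability check. The check below is a
# hypothetical stand-in for DeepMind's learned energy predictions.
ELEMENTS = ["Li", "Na", "K", "Mg", "Ca"]

def substitutions(composition):
    """Yield compositions differing from the input by one element."""
    for i, old in enumerate(composition):
        for new in ELEMENTS:
            if new != old:
                candidate = list(composition)
                candidate[i] = new
                yield tuple(candidate)

def is_stable(composition):
    # Stand-in rule; the real system predicts each candidate's energy
    # relative to the convex hull of known stable phases.
    return "K" not in composition

known = [("Li", "Mg"), ("Na", "Ca")]
candidates = {c for comp in known for c in substitutions(comp)
              if is_stable(c) and c not in known}
print(len(candidates))  # 10 stable new candidates from 2 known ones
```

Scaled up across hundreds of thousands of starting structures, this kind of propose-then-filter loop is how a small edit policy can fan out into millions of candidates while discarding the unstable ones along the way.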

The AI eventually produced 2.2 million chemical structures, 380,000 of which it predicted would be stable if synthesized. Over 500 of the newly found materials were related to lithium-ion conductors, which play a critical part in today’s batteries.

“This is like ChatGPT for materials discovery,” said Dr. Carla Gomes at Cornell University, who was not involved in the research.

Mind to Matter

DeepMind’s AI predictions are just that: What looks good on paper may not always work out.

Here’s where A-Lab comes in. A team led by Dr. Gerbrand Ceder at UC Berkeley and the Lawrence Berkeley National Laboratory built an automated robotic system directed by an AI trained on more than 30,000 published chemical recipes. Using robotic arms, A-Lab builds new materials by picking, mixing, and heating ingredients according to a recipe.

Over two weeks of training, A-Lab produced a string of recipes for 41 new materials without any human input. It wasn’t a total success: 17 materials failed to meet their mark. However, with a dash of human intervention, the robot synthesized these materials without a hitch.

Together, the two studies open a universe of novel compounds that might meet today’s global challenges. Next steps include adding chemical and physical properties to the algorithm to further improve its understanding of the physical world and synthesizing more materials for testing.

DeepMind is releasing its AI and some of its chemical recipes to the public. Meanwhile, A-Lab is running recipes from the database and uploading the results to the Materials Project.

To Ceder, an AI-generated map of new materials could “change the world.” It’s not A-Lab itself, he said. Rather, it’s “the knowledge and information that it generates.”

Image Credit: Marilyn Sargent/Berkeley Lab

Category: Transhumanism

Merriam-Webster’s Word of the Year Reflects Growing Concerns Over AI’s Ability to Deceive

Singularity HUB - November 28, 2023 - 21:38

When Merriam-Webster announced that its word of the year for 2023 was “authentic,” it did so with over a month to go in the calendar year.

Even then, the dictionary publisher was late to the game.

In a lexicographic form of Christmas creep, Collins English Dictionary announced its 2023 word of the year, “AI,” on October 31. Cambridge University Press followed suit on November 15 with “hallucinate,” a word used to refer to incorrect or misleading information provided by generative AI programs.

At any rate, terms related to artificial intelligence appear to rule the roost, with “authentic” also falling under that umbrella.

AI and the Authenticity Crisis

For the past 20 years, Merriam-Webster, the oldest dictionary publisher in the US, has chosen a word of the year—a term that encapsulates, in one form or another, the zeitgeist of that past year. In 2020, the word was “pandemic.” The next year’s winner? “Vaccine.”

“Authentic” is, at first glance, a little less obvious.

According to the publisher’s editor-at-large, Peter Sokolowski, 2023 represented “a kind of crisis of authenticity.” He added that the choice was also informed by the number of online users who looked up the word’s meaning throughout the year.

The word “authentic,” in the sense of something that is accurate or authoritative, has its roots in French and Latin. The Oxford English Dictionary has identified its usage in English as early as the late 14th century.

And yet the concept—particularly as it applies to human creations and human behavior—is slippery.

Is a photograph made from film more authentic than one made from a digital camera? Does an authentic scotch have to be made at a small-batch distillery in Scotland? When socializing, are you being authentic—or just plain rude—when you skirt niceties and small talk? Does being your authentic self mean pursuing something that feels natural, even at the expense of cultural or legal constraints?

The more you think about it, the more it seems like an ever-elusive ideal—one further complicated by advances in artificial intelligence.

How Much Human Touch?

Intelligence of the artificial variety—as in nonhuman, inauthentic, computer-generated intelligence—was the technology story of the past year.

At the end of 2022, OpenAI publicly released ChatGPT, a chatbot derived from so-called large language models. It was widely seen as a breakthrough in artificial intelligence, but its rapid adoption led to questions about the accuracy of its answers.

The chatbot also became popular among students, which compelled teachers to grapple with how to ensure their assignments weren’t being completed by ChatGPT.

Issues of authenticity have arisen in other areas as well. In November 2023, a track described as the “last Beatles song” was released. “Now and Then” is a compilation of music originally written and performed by John Lennon in the 1970s, with additional music recorded by the other band members in the 1990s. A machine learning algorithm was recently employed to separate Lennon’s vocals from his piano accompaniment, and this allowed a final version to be released.

But is it an authentic Beatles song? Not everyone is convinced.

Advances in technology have also allowed the manipulation of audio and video recordings. Referred to as “deepfakes,” such transformations can make it appear that a celebrity or a politician said something that they did not—a troubling prospect as the US heads into what is sure to be a contentious 2024 election season.

Writing for The Conversation in May 2023, education scholar Victor R. Lee explored the AI-fueled authenticity crisis.

Our judgments of authenticity are knee-jerk, he explained, honed over years of experience. Sure, occasionally we’re fooled, but our antennae are generally reliable. Generative AI short-circuits this cognitive framework.

“That’s because back when it took a lot of time to produce original new content, there was a general assumption … that it only could have been made by skilled individuals putting in a lot of effort and acting with the best of intentions,” he wrote.

“These are not safe assumptions anymore,” he added. “If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.”

Though there seems to be a general understanding that human minds and human hands must play some role in creating something authentic or being authentic, authenticity has always been a difficult concept to define.

So it’s somewhat fitting that as our collective handle on reality has become ever more tenuous, an elusive word for an abstract ideal is Merriam-Webster’s word of the year.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: 愚木混株 cdd20 / Unsplash 

Category: Transhumanism

An AI Tool Just Revealed Almost 200 New Systems for CRISPR Gene Editing

Singularity HUB - November 27, 2023 - 23:36

CRISPR has a problem: an embarrassment of riches.

Ever since the gene editing system rocketed to fame, scientists have been looking for variants with better precision and accuracy.

One search method screens for genes related to CRISPR-Cas9 in the DNA of bacteria and other creatures. Another artificially evolves CRISPR components in the lab to give them better therapeutic properties—like greater stability, safety, and efficiency inside the human body.

This data is stored in databases containing billions of genetic sequences. While there may be exotic CRISPR systems hidden in these libraries, there are simply too many entries to search.

This month, a team at MIT and Harvard led by CRISPR pioneer Dr. Feng Zhang took inspiration from an existing big-data approach and used AI to narrow the sea of genetic sequences to a handful that are similar to known CRISPR systems.

The AI scoured open-source databases with genomes from uncommon bacteria—including those found in breweries, coal mines, chilly Antarctic shores, and (no kidding) dog saliva.

In just a few weeks, the algorithm pinpointed thousands of potential new biological “parts” that could make up 188 new CRISPR-based systems—including some that are exceedingly rare.

Several of the new candidates stood out. For example, some could more precisely lock onto the target gene for editing with fewer side effects. Other variations aren’t directly usable but could provide insight into how some existing CRISPR systems work—for example, those targeting RNA, the “messenger” molecule directing cells to build proteins from DNA.

“Biodiversity is such a treasure trove,” said Zhang. “Doing this analysis kind of allows us to kill two birds with one stone: both study biology and also potentially find useful things,” he added.

A Wild Hunt

Although CRISPR is known for its gene editing prowess in humans, scientists first discovered the system in bacteria where it combats viral infections.

Scientists have long collected bacterial samples from nooks and crannies all over the globe. Thanks to increasingly affordable and efficient DNA sequencing, many of these samples—some from unexpected sources such as pond scum—have had their genetic blueprint mapped out and deposited into databases.

Zhang is no stranger to the hunt for new CRISPR systems. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” Zhang told MIT News earlier this year.

CRISPR is made up of two structures. One is a “bloodhound” guide RNA sequence, usually about 20 bases long, that targets a particular gene. The other is the scissors-like Cas protein. Once inside a cell, the bloodhound finds the target, and the scissors snip the gene. More recent versions of the system, such as base editing or prime editing, use different types of Cas proteins to perform single-letter DNA swaps or even edit RNA targets.

Back in 2021, Zhang’s lab traced the origins of the CRISPR family tree, identifying an entirely new family line. Dubbed OMEGA, these systems use foreign guide RNAs and protein scissors, yet they could still readily snip DNA in human cells cultured in petri dishes.

More recently, the team expanded their search to a new branch of life: eukaryotes. Members in this family—including plants, animals, and humans—have their DNA tightly wrapped inside a nut-like structure. Bacteria, in contrast, don’t have these structures. By screening fungi, algae, and clams (yup, biodiversity is weird and awesome), the team found proteins they call Fanzors that can be reprogrammed to edit human DNA—a first proof that a CRISPR-like mechanism also exists in eukaryotes.

But the goal isn’t to hunt down shiny, new gene editors just for the sake of it. Rather, it’s to tap nature’s gene editing prowess to build a collection of gene editors, each with its own strengths, that can treat genetic disorders and help us understand our body’s inner workings.

Collectively, scientists have discovered six main CRISPR systems—some collaborate with different Cas enzymes, for instance, while others specialize in either DNA or RNA.

“Nature is amazing. There’s so much diversity,” Zhang said. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

Bioengineering Scrabble

That’s what the team built the new AI, called FLSHclust, to do. They adapted technology for analyzing bewilderingly large datasets—like software that finds similarities across vast collections of document, audio, or image files—into a tool to hunt genes related to CRISPR.

Once complete, the algorithm analyzed gene sequences from bacteria and collected them into groups—a bit like clustering colors into a rainbow, grouping similar colors together so it’s easier to find the shade you’re after. From here, the team homed in on genes associated with CRISPR.

The algorithm combed through multiple open-source databases including hundreds of thousands of genomes from bacteria and archaea and millions of mystery DNA sequences. In all, it scanned billions of protein-encoding genes and grouped them into roughly 500 million clusters. In these, the team identified thousands of genes no one had yet associated with CRISPR, which could make up 188 new CRISPR systems.
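
The big-data trick behind this kind of clustering—locality-sensitive hashing—can be sketched in a few lines. The toy below groups sequences by shared k-mers using MinHash buckets; it is only an illustration of the general technique, not the actual FLSHclust implementation, and all sequences and parameters are invented.

```python
import hashlib
from collections import defaultdict

def kmers(seq, k=4):
    """Break a sequence into its set of overlapping k-mers."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(kmer_set, num_hashes=16):
    """One MinHash value per salted hash function: similar k-mer sets
    share many minimum values; dissimilar sets share almost none."""
    return [
        min(int(hashlib.md5(f"{salt}:{km}".encode()).hexdigest(), 16)
            for km in kmer_set)
        for salt in range(num_hashes)
    ]

def lsh_cluster(seqs, k=4, num_hashes=16):
    """Bucket sequences that agree on any MinHash value, then merge
    overlapping buckets (a rough single pass, not full union-find)."""
    buckets = defaultdict(set)
    for name, seq in seqs.items():
        for salt, h in enumerate(minhash_signature(kmers(seq, k), num_hashes)):
            buckets[(salt, h)].add(name)
    clusters = []
    for members in buckets.values():
        for cluster in clusters:
            if cluster & members:
                cluster |= members
                break
        else:
            clusters.append(set(members))
    return clusters

seqs = {
    "sysA": "ATGGCGTACGTTAGC",
    "sysB": "ATGGCGTACGTTAGG",   # near-identical variant of sysA
    "sysC": "TTTTCCCCGGGGAAAA",  # unrelated sequence
}
print(lsh_cluster(seqs))
```

Because similar sequences collide in at least one hash bucket while dissimilar ones almost never do, the expensive all-pairs comparison of billions of genes collapses into cheap bucket lookups.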

Two systems, developed from microbes in the guts of animals and in the Black Sea, used a 32-base guide RNA instead of the usual 20 used in CRISPR-Cas9. Like a search query, the longer the guide, the more precise the results. These longer guide RNA “queries” suggest the systems could have fewer side effects. Another system resembles a previous CRISPR-based diagnostic called SHERLOCK, which can rapidly sense a single DNA or RNA molecule from an infectious invader.
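
The search-query intuition is easy to quantify: a random genomic position matches a guide of length L with probability 4^-L, so each extra base cuts the expected number of chance perfect matches by a factor of four. A back-of-the-envelope sketch (a naive model that ignores mismatch tolerance and base composition; the genome size is a rough human figure):

```python
def expected_random_matches(guide_len, genome_size=3_200_000_000):
    """Expected number of chance perfect matches for a guide of
    guide_len bases in a genome of genome_size bases, assuming each
    position matches independently with probability 4**-guide_len."""
    return genome_size * 4 ** -guide_len

for length in (20, 32):
    print(f"{length}-base guide: ~{expected_random_matches(length):.2e} chance matches")
```

Under this model a 32-base guide has roughly ten million times fewer expected spurious matches than a 20-base one, which is the basic reason longer guides promise fewer off-target edits.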

When tested in cultured human cells, both systems could snip a single strand of the targeted gene and insert small genetic sequences at roughly 13 percent efficiency. It doesn’t sound like much, but it’s a baseline that can be improved.

The team also uncovered genes for a CRISPR system targeting RNA that was previously unknown to science. Found only after close scrutiny, this version—and likely others yet to be discovered—isn’t easily captured by sampling bacteria around the world, suggesting such systems are extremely rare in nature.

“Some of these microbial systems were exclusively found in water from coal mines,” said study author Dr. Soumya Kannan. “If someone hadn’t been interested in that, we may never have seen those systems.”

It’s still too early to know whether these systems can be used in human gene editing. Those that randomly chop up DNA, for example, would be useless for therapeutic purposes. However, the AI can mine a vast universe of genetic data to find potential “unicorn” gene sequences and is now available to other scientists for further exploration.

Image Credit: NIH


DeepMind Defines Artificial General Intelligence and Ranks Today’s Leading Chatbots

Singularity HUB - 26 November, 2023 - 16:00

Artificial general intelligence, or AGI, has become a much-abused buzzword in the AI industry. Now, Google DeepMind wants to put the idea on a firmer footing.

The concept at the heart of the term AGI is that a hallmark of human intelligence is its generality. While specialist computer programs might easily outperform us at picking stocks or translating French to German, our superpower is the fact we can learn to do both.

Recreating this kind of flexibility in machines is the holy grail for many AI researchers, and is often speculated to be the first step towards artificial superintelligence. But what exactly people mean by AGI is rarely specified, and the idea is frequently described in binary terms, where AGI represents a piece of software that has crossed some mythical boundary, and once on the other side, it’s on par with humans.

Researchers at Google DeepMind are now attempting to make the discussion more precise by concretely defining the term. Crucially, they suggest that rather than approaching AGI as an end goal, we should instead think about different levels of AGI, with today’s leading chatbots representing the first rung on the ladder.

“We argue that it is critical for the AI research community to explicitly reflect on what we mean by AGI, and aspire to quantify attributes like the performance, generality, and autonomy of AI systems,” the team writes in a preprint published on arXiv.

The researchers note that they took inspiration from autonomous driving, where capabilities are split into six levels of autonomy, which they say enable clear discussion of progress in the field.

To work out what they should include in their own framework, they studied some of the leading definitions of AGI proposed by others. By looking at some of the core ideas shared across these definitions, they identified six principles any definition of AGI needs to conform with.

For a start, a definition should focus on capabilities rather than the specific mechanisms AI uses to achieve them. This removes the need for AI to think like a human or be conscious to qualify as AGI.

They also suggest that generality alone is not enough for AGI, the models also need to hit certain thresholds of performance in the tasks they perform. This performance doesn’t need to be proven in the real world, they say—it’s enough to simply demonstrate a model has the potential to outperform humans at a task.

While some believe true AGI will not be possible unless AI is embodied in physical robotic machinery, the DeepMind team say this is not a prerequisite. The focus, they say, should be on tasks in the cognitive and metacognitive realms—for instance, learning to learn.

Another requirement is that benchmarks for progress have “ecological validity,” which means AI is measured on real-world tasks valued by humans. And finally, the researchers say the focus should be on charting progress in the development of AGI rather than fixating on a single endpoint.

Based on these principles, the team proposes a framework they call “Levels of AGI” that outlines a way to categorize algorithms based on their performance and generality. The levels range from “emerging,” which refers to a model equal to or slightly better than an unskilled human, to “competent,” “expert,” “virtuoso,” and “superhuman,” which denotes a model that outperforms all humans. These levels can be applied to either narrow or general AI, which helps distinguish between highly specialized programs and those designed to solve a wide range of tasks.
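
A classification framework like this is easy to sketch in code. The levels below follow the ladder described above; the percentile cutoffs in `classify` are illustrative assumptions for this sketch, not the paper’s exact definitions.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """The proposed performance rungs, ordered weakest to strongest."""
    EMERGING = 1     # equal to or somewhat better than an unskilled human
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5   # outperforms all humans

def classify(percentile):
    """Map a model's task performance (percentile vs. skilled adults)
    to a level. Cutoffs are illustrative, not the paper's."""
    if percentile >= 100:
        return AGILevel.SUPERHUMAN
    if percentile >= 99:
        return AGILevel.VIRTUOSO
    if percentile >= 90:
        return AGILevel.EXPERT
    if percentile >= 50:
        return AGILevel.COMPETENT
    return AGILevel.EMERGING

print(classify(55).name)  # → COMPETENT
```

The second axis of the framework—narrow vs. general—would simply be a second label attached alongside the performance level, which is what lets the same scale rank both a protein-folding specialist and a general chatbot.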

The researchers say some narrow AI algorithms, like DeepMind’s protein-folding algorithm AlphaFold, for instance, have already reached the superhuman level. More controversially, they suggest leading AI chatbots like OpenAI’s ChatGPT and Google’s Bard are examples of emerging AGI.

Julian Togelius, an AI researcher at New York University, told MIT Technology Review that separating out performance and generality is a useful way to distinguish previous AI advances from progress towards AGI. And more broadly, the effort helps to bring some precision to the AGI discussion. “This provides some much-needed clarity on the topic,” he says. “Too many people sling around the term AGI without having thought much about what they mean.”

The framework outlined by the DeepMind team is unlikely to win everyone over, and there are bound to be disagreements about how different models should be ranked. But with any luck, it will get people to think more deeply about a critical concept at the heart of the field.

Image Credit: Resource Database / Unsplash


Did This Chemical Reaction Create the Building Blocks of Life on Earth?

Singularity HUB - 25 November, 2023 - 16:00

How did life begin? How did chemical reactions on the early Earth create complex, self-replicating structures that developed into living things as we know them?

According to one school of thought, before the current era of DNA-based life, there was a kind of molecule called RNA (or ribonucleic acid). RNA—which is still a crucial component of life today—can replicate itself and catalyze other chemical reactions.

But RNA molecules themselves are made from smaller components called ribonucleotides. How would these building blocks have formed on the early Earth and then combined into RNA?

Chemists like me are trying to recreate the chain of reactions required to form RNA at the dawn of life, but it’s a challenging task. We know whatever chemical reaction created ribonucleotides must have been able to happen in the messy, complicated environment found on our planet billions of years ago.

I have been studying whether “autocatalytic” reactions may have played a part. These are reactions that produce chemicals that encourage the same reaction to happen again, which means they can sustain themselves in a wide range of circumstances.

In our latest work, my colleagues and I have integrated autocatalysis into a well-known chemical pathway for producing the ribonucleotide building blocks, which could have plausibly happened with the simple molecules and complex conditions found on the early Earth.

The Formose Reaction

Autocatalytic reactions play crucial roles in biology, from regulating our heartbeats to forming patterns on seashells. In fact, the replication of life itself, where one cell takes in nutrients and energy from the environment to produce two cells, is a particularly complicated example of autocatalysis.

A chemical reaction called the formose reaction, first discovered in 1861, is one of the best examples of an autocatalytic reaction that could have happened on the early Earth.

The formose reaction was discovered by Russian chemist Alexander Butlerov in 1861. Image Credit: Butlerov, A. M. 1828-1886 / Wikimedia

In essence, the formose reaction starts with one molecule of a simple compound called glycolaldehyde (made of hydrogen, carbon and oxygen) and ends with two. The mechanism relies on a constant supply of another simple compound called formaldehyde.

A reaction between glycolaldehyde and formaldehyde makes a bigger molecule, splitting off fragments that feed back into the reaction and keep it going. However, once the formaldehyde runs out, the reaction stops, and the products start to degrade from complex sugar molecules into tar.
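
This grow-then-stall dynamic can be captured in a toy simulation. The model below is a deliberate oversimplification (one rate constant, forward Euler integration, made-up numbers), not real formose kinetics: it treats the reaction as a single net step, glycolaldehyde + 2 formaldehyde → 2 glycolaldehyde.

```python
def simulate_autocatalysis(g0=1e-6, f0=1.0, k=5.0, dt=0.01, steps=2000):
    """Toy model of G + 2F -> 2G. The rate is proportional to G itself,
    so a trace of product grows exponentially -- until the formaldehyde
    feedstock (F) is exhausted and the reaction stalls."""
    g, f = g0, f0
    for _ in range(steps):
        rate = k * g * f
        g += rate * dt          # one net glycolaldehyde gained per event
        f -= 2 * rate * dt      # two formaldehyde consumed per event
        f = max(f, 0.0)         # concentrations can't go negative
    return g, f

g_end, f_end = simulate_autocatalysis()
print(f"glycolaldehyde: {g_end:.3f}, formaldehyde left: {f_end:.6f}")
```

Starting from a vanishing trace of glycolaldehyde, the product concentration explodes and then plateaus once the formaldehyde is gone—which is exactly why the real reaction stops and the sugars begin degrading to tar without a constant formaldehyde supply.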

The formose reaction shares some common ingredients with a well-known chemical pathway to make ribonucleotides, known as the Powner–Sutherland pathway. However, until now no one has tried to connect the two—with good reason.

The formose reaction is notorious for being “unselective.” This means it produces a lot of useless molecules alongside the actual products you want.

An Autocatalytic Twist in the Pathway to Ribonucleotides

In our study, we tried adding another simple molecule called cyanamide to the formose reaction. This makes it possible for some of the molecules made during the reaction to be “siphoned off” to produce ribonucleotides.

The reaction still does not produce a large quantity of ribonucleotide building blocks. However, the ones it does produce are more stable and less likely to degrade.

What’s interesting about our study is the integration of the formose reaction and ribonucleotide production. Previous investigations have studied each separately, which reflects how chemists usually think about making molecules.

Generally speaking, chemists tend to avoid complexity so as to maximize the quantity and purity of a product. However, this reductionist approach can prevent us from investigating dynamic interactions between different chemical pathways.

These interactions, which happen everywhere in the real world outside the lab, are arguably the bridge between chemistry and biology.

Industrial Applications

Autocatalysis also has industrial applications. When you add cyanamide to the formose reaction, another of the products is a compound called 2-aminooxazole, which is used in chemistry research and the production of many pharmaceuticals.

Conventional 2-aminooxazole production often uses cyanamide and glycolaldehyde, the latter of which is expensive. If it can be made using the formose reaction, only a small amount of glycolaldehyde will be needed to kickstart the reaction, cutting costs.

Our lab is currently optimizing this procedure in the hope we can manipulate the autocatalytic reaction to make common chemical reactions cheaper and more efficient, and their pharmaceutical products more accessible. Maybe it won’t be as big a deal as the creation of life itself, but we think it could still be worthwhile.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Sangharsh Lohakare / Unsplash 


Scientists 3D Print a Complex Robotic Hand With Bones, Tendons, and Ligaments

Singularity HUB - 24 November, 2023 - 16:00

We don’t think twice about using our hands throughout the day for tasks that still thwart sophisticated robots—pouring coffee without spilling when half-awake, folding laundry without ripping delicate fabrics.

The complexity of our hands is partly to thank. They are wonders of biological engineering: Hard skeleton keeps their shape and integrity and lets fingers bear weight. Soft tissues, such as muscles and ligaments, give them dexterity. Thanks to evolution, all these “biomaterials” self-assemble.

Recreating them artificially is another matter.

Scientists have tried to use additive manufacturing—better known as 3D printing—to recreate complex structures from hands to hearts. But the technology stumbles when integrating multiple materials into one printing process. 3D printing a robotic hand, for example, requires multiple printers—one to make the skeleton, another for soft tissue materials—and the assembly of parts. These multiple steps increase manufacturing time and complexity.

Scientists have long sought to combine different materials into a single 3D printing process. A team from the soft robotics lab at ETH Zurich has found a way.

The team equipped a 3D inkjet printer—which is based on the same technology in normal office printers—with machine vision, allowing it to rapidly adapt to different materials. The approach, called vision-controlled jetting, continuously gathers information about a structure’s shape during printing to fine-tune how it prints the next layer, regardless of the type of material.

In a test, the team 3D printed a synthetic hand in one go. Complete with skeleton, ligaments, and tendons, the hand can grasp different objects when it “feels” pressure at its fingertips.

They also 3D printed a structure like a human heart, complete with chambers and one-way valves, that could pump fluid at roughly 40 percent the rate of an adult human heart.

The study is “very impressive,” Dr. Yong Lin Kong at the University of Utah, who was not involved in the work but wrote an accompanying commentary, told Nature. 3D inkjet printing is already a mature technology, he added, but this study shows machine vision makes it possible to expand the technology’s capabilities to more complex structures and multiple materials.

The Problem With 3D Inkjet Printing

Recreating a structure using conventional methods is tedious and error-prone. Engineers cast a mold to form the desired shape—say, the skeleton of a hand—then combine the initial structure with other materials.

It’s a mind-numbing process requiring careful calibration. Like installing a cabinet door, any errors leave it lopsided. For something as complex as a robot hand, the results can be rather Frankenstein.

Traditional methods also make it difficult to incorporate materials with different properties, and they tend to lack the fine details required in something as complex as a synthetic hand. All these limitations kneecap what a robotic hand—and other functional structures—can do.

Then 3D inkjet printing came along. Common versions of these printers squeeze a liquid resin material through hundreds of thousands of individually controlled nozzles—like an office printer printing a photo at high resolution. Once a layer is printed, a UV light “sets” the resin, turning it from liquid to solid. Then the printer gets to work on the next layer. In this way, the printer builds a 3D object, layer by layer, at the microscopic level.

Although incredibly quick and precise, the technology has its problems. It isn’t great at binding different materials together, for instance. To 3D print a functional robot, engineers must either print parts with multiple printers and then assemble them after, or they can print an initial structure, cast around the part, and add additional types of materials with desired properties.

One main drawback is the thickness of each layer isn’t always the same. Differences in the speed of “ink,” interference between nozzles, and shrinkage during the “setting” process can all cause tiny differences. But these inconsistencies add up with more layers, resulting in malfunctioning objects and printing failure.

Engineers tackle this problem by adding a blade or roller. Like flattening newly laid concrete during roadwork, this step levels each layer before the next one starts. The solution, unfortunately, comes with other headaches. Because the rollers are only compatible with some materials—others gunk up the scraper—they limit the range of materials that can be used.

What if we don’t need this step at all?

Eyes on the Prize

The team’s solution is machine vision. Rather than scraping away extra material, scanning each layer as it’s printing helps the system detect and compensate for small mistakes in real time.

The machine vision system uses four cameras and two lasers to scan the entire printing surface at microscopic resolution.

This process helps the printer self-correct, explained the team. By understanding where there’s too much or too little material, the printer can change the amount of ink deposited in the next layer, essentially filling previous “potholes.” The result is a powerful 3D printing system in which extra material doesn’t need to be scraped off.
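
A rough illustration of why scanning beats scraping: model each nozzle as consistently over- or under-depositing by a fixed factor. A closed loop that scans after every layer and commands “whatever is missing” keeps the surface near target regardless of bias, while an open loop drifts. Everything here (biases, dimensions) is invented for illustration and is unrelated to the actual controller in the study.

```python
def print_with_feedback(num_layers=50, layer_height=1.0):
    """Closed-loop deposition: scan the true height after each layer and
    command the shortfall for the next one, filling 'potholes' instead
    of scraping off excess."""
    bias = [0.9, 1.1, 0.8, 1.2, 1.0]       # per-nozzle deposition bias
    height = [0.0] * len(bias)             # true surface height per column
    for layer in range(1, num_layers + 1):
        target = layer * layer_height      # where this layer should end up
        for i, b in enumerate(bias):
            command = target - height[i]   # deposit exactly what's missing
            height[i] += command * b       # physics applies the nozzle bias
    return height

def print_open_loop(num_layers=50, layer_height=1.0):
    """Same nozzles, no scanning: the per-layer error accumulates."""
    bias = [0.9, 1.1, 0.8, 1.2, 1.0]
    return [num_layers * layer_height * b for b in bias]

print(print_with_feedback())   # all columns close to 50
print(print_open_loop())       # columns drift to 45, 55, 40, 60, 50
```

With feedback, each column’s error stays bounded by its nozzle’s bias (a fraction of one layer); without it, the error grows linearly with the number of layers—the “inconsistencies add up” failure mode described above.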

This isn’t the first time machine vision has been used in 3D printers. But the new system can scan 660 times faster than older ones, and it can analyze the growing structure’s physical shape in less than a second, wrote Kong. This allows the 3D printer to access a much larger library of materials, including substances that support complex structures during printing but are removed later.

Translation? The system can print a new generation of bio-inspired robots far faster than any previous technologies.

As a test, the team printed a synthetic hand with two types of materials: a rigid, load-bearing material to act as a skeleton and a soft bendable material to make tendons and ligaments. They printed channels throughout the hand to control its movement with air pressure and at the same time integrated a membrane to sense touch—essentially, the fingertips.

They hooked the hand to external electrical components and integrated it into a little walking robot. Thanks to its pressure-sensing fingertips, it could pick up different objects—a pen or an empty plastic water bottle.

The system also printed a human-like heart structure with multiple chambers. When pressurized, the synthetic heart pumped fluid like its biological counterpart.

Everything was printed in one go.

Next Steps

The results are fascinating because they feel like a breakthrough for a technology that’s already in a mature state, Kong said. Though commercially available for decades, the technology gains new life from the addition of machine vision alone.

“Excitingly, these diverse examples were printed using just a few materials,” he added. The team aims to expand the materials they can print with and directly add electronic sensors for sensing and movement during printing. The system could also incorporate other fabrication methods—for example, spraying a coat of biologically active molecules onto the surface of the hands.

Robert Katzschmann, a professor at ETH Zurich and an author on the new paper, is optimistic about the system’s broader use. “You could think of medical implants…[or] use this for prototyping things in tissue engineering,” he said. “The technology itself will only grow.”

Image Credit: ETH Zurich/Thomas Buchner


OpenAI Mayhem: What We Know Now, Don’t Know Yet, and What Could Be Next

Singularity HUB - 23 November, 2023 - 02:47

If you’d never heard of OpenAI before last week, you probably have now. The level of attention given to recent mayhem at the company leading the AI boom underscores how much this moment has captured the collective imagination.

Last Friday, OpenAI’s board of directors fired the company’s cofounder, CEO, and fellow board member, Sam Altman. The decision was led by OpenAI cofounder and chief scientist, Ilya Sutskever, and three independent board members. Greg Brockman, cofounder and OpenAI president, was also forced out as chairman and chose to resign instead of remaining at the company. Altman was replaced by interim CEO Mira Murati, formerly the company’s CTO.

It was a shocking turn of events for the hottest thing in tech. And rumors swirled in the vacuum left by a vague statement explaining the decision. But these were only the first shots in the neck-snapping round of power ping-pong to come.

Over the weekend, details emerged that the board’s decision was not due to “malfeasance” on Altman’s part. Altman was said to be negotiating a return as CEO; then he was considering founding a new AI startup with Brockman; then the two were headed to Microsoft, after CEO Satya Nadella said he’d hire them to lead a new AI lab at his company.

On Monday, interim CEO Emmett Shear—the former CEO of Twitch who’d replaced Murati the night before—was facing open revolt. Over 95 percent of OpenAI employees signed a letter demanding the board resign and Altman be reinstated. If that didn’t happen, they would follow him to Microsoft, which was offering jobs and matching compensation. Also signing the letter were Murati and Sutskever, who now said he regretted his involvement in the board’s decision to remove Altman.

Finally, Tuesday night, after earlier rumors that negotiations were back on, the company announced the various parties had reached a tentative agreement to rehire Altman as CEO.

Two original board members would depart—Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology, and Tasha McCauley, an entrepreneur and researcher at RAND Corporation—while Quora CEO Adam D’Angelo would stay on. The company would also add two new board members—the economist Larry Summers and former Salesforce co-CEO Bret Taylor (as chairman)—and likely expand the board further at some point in the future. There would also be an independent investigation into Altman’s conduct and the process by which he was removed.

That’s what we know. But the unpredictability of events so far suggests the story isn’t over. Here’s what we still don’t know and what might be next.

How OpenAI Is Organized

The events of the last five days are extraordinary in the world of tech, not least because founders usually hold significant power on their own boards.

But OpenAI is different.

The company was originally founded in 2015 as a nonprofit with the audacious mission of building artificial general intelligence broadly beneficial to humankind. They hoped that by divorcing the organization’s mission from financial incentives, both goals could be achieved.

But in 2018, OpenAI leaders realized they needed a lot more computing power and financial backing to make progress. They created a capped-profit company—controlled by the nonprofit board and its mission—to work on commercializing products and attracting talent and investors. Microsoft led the way and has poured over $10 billion into the company.

Crucially, however, Microsoft and other investors had little control over the business in the traditional sense. The buck stopped with the non-profit board.

OpenAI organizational structure. Source: OpenAI

Then came ChatGPT. The chatbot sensation kicked off a hype cycle not seen in years (which is saying something). With Altman at the helm, OpenAI has pushed to commercialize ChatGPT at a rapid pace, culminating in its first developer conference earlier this month but also putting significant strain on the company’s non-profit mission.

What We Don’t Know

All this set the stage for Altman’s sudden ouster and comeback. But not every detail is in stone yet, and more of the story is likely to unfold in the days ahead.

Let’s lay some of that out.

The current agreement looks durable, but given recent history…

The wording used to describe Altman’s return as CEO isn’t ironclad. The phrase “agreement in principle” means it could yet unravel. But though the details are still being hammered out, given intense pressure from investors and employees, it appears very likely the path of least resistance will be to keep the team together and move on.

The board’s reasons for acting have not been confirmed in detail.

There’s been plenty of speculation and commentary from sources about why the board chose to fire Altman.

One explanation is that the growing tension between the board’s mission to keep AI safe and the company’s commercial activity and pace of development boiled over. Sutskever and fellow board members Toner, McCauley, and D’Angelo were focused on minimizing AI risk, a crucial part of the non-profit’s mandate. They believe AI must be developed with the utmost care lest it cause irreparable damage to society at large. The heavy push to move fast and sell products is at odds with this view. It’s a schism that extends beyond OpenAI to the tech community more generally.

Reports also suggest Altman’s other activities in the area—like an AI chipmaking project and his rumored talks with Jony Ive about an AI device—or recent breakthroughs that haven’t yet been announced may have contributed to the decision as well. But ultimately, we don’t know, and so far, official details have yet to be shared by those involved.

How the company will be organized in the future is TBD.

The company called the new roster of board members “initial,” suggesting it could grow. If OpenAI’s organizational structure brought on the chaos, it’s reasonable to expect investors will demand change. After watching the company nearly evaporate, an expanded board may offer seats to those with financial stakes in the company, and its structure may be reworked. But again, the details have yet to be ironed out. For now, it remains an open question.

What’s Next?

The last five days seem to have blindsided pretty much everyone, but that, itself, is somewhat surprising. OpenAI’s organizational structure was no secret. Nor was the inherent tension between its mission and commercial activities. Now it seems nearly certain, however, that the tension between the two will yield to financial forces.

Altman has been vocal about his desire to keep AI safe: It’s a reason he helped found the company. But he’s also pushed OpenAI to do business in the name of progress. As the organization continues to work with Microsoft and court new investors—a deal with Thrive Capital valuing the company at $80 billion was in the works before the madness—guardrails, assurances, and more control will likely be prerequisites.

“I think we definitely would want some governance changes,” Microsoft CEO Satya Nadella told Bloomberg News Monday. “Surprises are bad, and we just want to make sure that things are done in a way that will allow us [to] continue to partner well.”

Perhaps this outcome is just confirmation of how things already stood, despite OpenAI’s organizational structure. That is, the company and nearly all involved were already operating as if it were a more traditional for-profit venture.

Meanwhile, though the board’s actions might have been motivated by a desire to slow things down, they may end up having the opposite effect. OpenAI is set to pick up where it left off: Same CEO, team, investors, products, and pace but perhaps fewer dissenting voices.

It also means those worried about the most advanced AI being controlled by a handful of corporations will push more urgently for regulation, call on governments to better fund academic research, or put their faith in open-source AI efforts as a counterbalance.

No matter the exact outcome—expect the wild ride in AI to continue.

Image Credit: OpenAI


‘Breakthrough’ CRISPR Treatment Slashes Cholesterol in First Human Clinical Trial

Singularity HUB - 21 November, 2023 - 16:00

CRISPR-based therapies just hit another milestone.

In a small clinical trial with 10 people genetically prone to dangerously high levels of cholesterol, a single infusion of the precision gene editor slashed the artery-clogging fat by up to 55 percent. If all goes well, the one-shot treatment could last a lifetime.

The trial, led by Verve Therapeutics, is the first to explore CRISPR for a chronic disease that’s usually managed with decades of daily pills. It also marks the first use of a newer class of gene editors directly in humans. Called base editing, the technology is more precise—and potentially safer—than the original set of CRISPR tools. The new treatment, VERVE-101, uses a base editor to disable a gene encoding a liver protein that regulates cholesterol.

To be clear, these results are just a sneak peek into the trial, which was designed to test for safety, rather than the treatment’s efficacy. Not all participants responded well. Two people suffered severe heart issues, with one case potentially related to the treatment.

Nevertheless, “it is a breakthrough to have shown in humans that in vivo [in the body] base editing works efficiently in the liver,” Dr. Gerald Schwank at the University of Zurich, who wasn’t involved in the trial, told Science.

Give Your Heart a Break

CRISPR has worked wonders for previously untreatable cancers. Last week, it was also approved in the United Kingdom for the blood diseases sickle cell and beta thalassemia.

For these treatments, scientists extract immune cells or blood cells from the body, edit the cells using CRISPR to correct the genetic mistake, and reinfuse the treated cells into the patient. For edited cells to “take,” patients must undergo a grueling treatment to wipe out existing diseased cells in the bone marrow and open space for the edited replacements.

Verve is taking a different approach: Instead of isolating cells for gene editing, the tools are infused into the bloodstream where they edit genes directly inside the body. It’s a big gamble. Most of our cells contain the same DNA. Once injected, the tools could go on a rampage and edit the targeted gene throughout the body, causing dangerous side effects.

Verve tackled this concern head on by pairing base editing with nanoparticles.

The trial targeted PCSK9, which encodes a liver protein that regulates levels of low-density lipoprotein (LDL), or “bad cholesterol.” In familial hypercholesterolemia, a single mutated letter in PCSK9 alters its function, causing LDL levels to climb dangerously. People with this inherited disorder are at risk of life-threatening heart problems by the age of 50 and need to take statin drugs to keep their cholesterol in check. But the lifelong regimen is tough to maintain.

A Targeted CRISPR Torpedo

Verve designed a “one-and-done” treatment to correct the PCSK9 mutation in these patients.

The therapy employs two key strategies to boost efficacy.

The first is called base editing. The original CRISPR toolset acts like scissors, cutting both strands of DNA, making the edit, and patching the ends back together. The process often leaves room for mistakes, such as the unintended rearranging of sequences that could turn on cancer genes, leading some experts to call it “genetic vandalism.” Base editing, in contrast, is far more precise. Like a scalpel, base editors only nick one DNA strand, and are therefore far less likely to injure non-targeted parts of the genome.

Verve’s treatment encodes the base editor in two different RNA molecules. One instructs the cells to make the components of the gene editing tool—similar to how Covid-19 vaccines work. The other strand of RNA guides the tool to PCSK9. Once edited, the treated gene produces a shortened, non-functional version of the faulty protein responsible for the condition.

The delivery method also boosts efficacy. Base editing components can be encoded into harmless viruses or wrapped inside fatty nanoparticles for delivery. Verve took the second approach because these nanoparticles are often first shuttled into the liver—exactly where the treatment should go—and are less likely to cause an immune reaction than viruses.

There’s just one problem. Base editing has never been used to edit genes in the body before.

A non-human trial in 2021 showed the idea could work. In macaque monkeys, a single shot of the editor into the bloodstream reduced the gene’s function in the liver, causing LDL levels to drop 60 percent. The treatment lasted at least eight months with barely any side effects.

Safety First

The new trial built on previous results to assess the treatment’s safety in 10 patients with familial hypercholesterolemia. One patient dropped out before completing the trial.

The team was careful. To detect potential side effects, six patients were treated with a small dose unlikely to reverse the disorder.

Three patients received a higher dose of the base editor and saw dramatic effects. PCSK9 protein levels in their livers dropped between 47 and 84 percent. Circulating LDL fell to about half its prior levels—an effect that lasted at least six months. Follow-ups are ongoing.

The efficacy of the higher dose came at a price. At lower doses, the treatment was well tolerated overall with minimal side effects. But at higher doses, it seemed to temporarily tax the liver, bumping up markers for liver stress that gradually subsided.

More troubling were two severe events in patients with advanced heart blockage. One person receiving a low dose died from cardiac arrest about five weeks after the treatment. According to a review board, the death was likely due to underlying conditions, not the treatment.

Another patient infused with a higher dose suffered a heart attack a day after treatment, suggesting the episode could have been related. However, he had intermittent chest pains before the infusion that hadn’t been disclosed to the team. His symptoms would have excluded him from the trial.

A Promising Path

Overall, an independent board monitoring data and safety determined the treatment safe. Still, there are plenty of unknowns. Like other gene editing tools, base editing poses the risk of off-target snips—something this trial did not specifically examine. Long-term safety and efficacy of the treatment are also unknown.

But the team is encouraged by these early results. “We are excited to have reached this milestone of positive first-in-human data supporting the significant potential for in vivo liver gene editing as a treatment for patients with [familial hypercholesterolemia],” said Dr. Sekar Kathiresan, CEO and cofounder of Verve.

The trial was conducted in the United Kingdom and New Zealand. Recently, US regulators approved the therapy for testing, and the company plans to enroll roughly 40 more patients.

Meanwhile, a new version of the therapy, VERVE-102, is already in the works. The newcomer uses a similar base editing technology and an upgraded nanoparticle carrier with potentially better targeting.

If all goes well, the team will launch a randomized, placebo-controlled trial by 2025. So far, the company hasn’t released a price tag for the therapy. But the cost of existing gene therapies can run into the millions of dollars.

To Kathiresan, treatments like this one could benefit more than patients with familial hypercholesterolemia. High cholesterol is a leading health problem. A dose of the base editor in middle age could potentially nip cholesterol buildup in the bud—and in turn, lower risk of heart disease and death.

“That’s the ultimate vision,” he said.

Image Credit: Scientific Animations / Wikimedia Commons


DeepMind Says New Multi-Game AI Is a Step Toward More General Intelligence

Singularity HUB - 20 November 2023 - 16:00

AI has mastered some of the most complex games known to man, but models are generally tailored to solve specific kinds of challenges. A new DeepMind algorithm that can tackle a much wider variety of games could be a step towards more general AI, its creators say.

Using games as a benchmark for AI has a long pedigree. When IBM’s Deep Blue algorithm beat chess world champion Garry Kasparov in 1997, it was hailed as a milestone for the field. Similarly, when DeepMind’s AlphaGo defeated one of the world’s top Go players, Lee Sedol, in 2016, it led to a flurry of excitement about AI’s potential.

DeepMind built on this success with AlphaZero, a model that mastered a wide variety of games, including chess and shogi. But as impressive as this was, AlphaZero only worked with perfect information games where every detail of the game, other than the opponent’s intentions, is visible to both players. This includes games like Go and chess where both players can always see all the pieces on the board.

In contrast, imperfect information games involve some details being hidden from the other player. Poker is a classic example because players can’t see what hands their opponents are holding. There are now models that can beat professionals at these kinds of games too, but they use an entirely different approach than algorithms like AlphaZero.

Now, researchers at DeepMind have combined elements of both approaches to create a model that can beat humans at chess, Go, and poker. The team claims the breakthrough could accelerate efforts to create more general AI algorithms that can learn to solve a wide variety of tasks.

Researchers building AI to play perfect information games have generally relied on an approach known as tree search. This explores a multitude of ways the game could progress from its current state, with different branches mapping out potential sequences of moves. AlphaGo combined tree search with a machine learning technique in which the model refines its skills by playing itself repeatedly and learning from its mistakes.
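
The tree search idea can be sketched in a few lines. This is a deliberately toy, depth-limited minimax over an invented game (the `children` and `evaluate` functions here are made up for illustration); AlphaGo-style systems instead use Monte Carlo tree search guided by learned networks.

```python
# Deliberately toy, depth-limited minimax over an invented game. The
# `children` and `evaluate` functions are made up for illustration;
# AlphaGo-style systems use Monte Carlo tree search guided by learned
# networks rather than exhaustive search.

def minimax(state, depth, maximizing, children, evaluate):
    """Explore branches of possible moves and return the best value
    achievable from `state`, assuming both players play optimally."""
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    values = [minimax(m, depth - 1, not maximizing, children, evaluate)
              for m in moves]
    return max(values) if maximizing else min(values)

# Toy game: a state is a number, a move adds 1 or 2, and the evaluator
# favors even numbers for the maximizing player.
children = lambda s: [s + 1, s + 2] if s < 6 else []
evaluate = lambda s: 1 if s % 2 == 0 else -1

print(minimax(0, 3, True, children, evaluate))
```

Self-play then wraps a loop around search like this: the model plays games against itself and uses the outcomes to improve the evaluation function.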

When it comes to imperfect information games, researchers tend to instead rely on game theory, using mathematical models to map out the most rational solutions to strategic problems. Game theory is used extensively in economics to understand how people make choices in different situations, many of which involve imperfect information.

In 2016, an AI called DeepStack beat human professionals at no-limit poker, but the model was highly specialized for that particular game. Much of the DeepStack team now works at DeepMind, however, and they’ve combined the techniques they used to build DeepStack with those used in AlphaZero.

The new algorithm, called Student of Games, uses a combination of tree search, self-play, and game theory to tackle both perfect and imperfect information games. In a paper in Science, the researchers report that the algorithm beat the best openly available poker-playing AI, Slumbot, and could also play Go and chess at the level of a human professional, though it couldn’t match specialized algorithms like AlphaZero.

But being a jack-of-all-trades rather than a master of one is arguably a bigger prize in AI research. While deep learning can often achieve superhuman performance on specific tasks, developing more general forms of AI that can be applied to a wide range of problems is trickier. The researchers say a model that can tackle both perfect and imperfect information games is “an important step toward truly general algorithms for arbitrary environments.”

It’s important not to extrapolate too much from the results, Michael Rovatsos from the University of Edinburgh, UK, told New Scientist. The AI was still operating within the simple and controlled environment of a game, where the number of possible actions is limited and the rules are clearly defined. That’s a far cry from the messy realities of the real world.

But even if this is a baby step, being able to combine the leading approaches to two very different kinds of game in a single model is a significant achievement, and one that could serve as a blueprint for more capable, general models in the future.

Image Credit: Hassan Pasha / Unsplash


This Week’s Awesome Tech Stories From Around the Web (Through November 18)

Singularity HUB - 18 November 2023 - 16:00

Google DeepMind’s AI Pop Star Clone Will Freak You Out
Angela Watercutter | Wired
“Two new tools using DeepMind’s music generation algorithm Lyria let anyone make YouTube shorts using the AI-generated vocals of Demi Lovato, T-Pain, Troye Sivan and others. …All anyone has to do is type in a topic and pick an artist off a carousel, and the tool writes the lyrics, produces the backing track, and sings the song in the style of the musician selected. It’s wild.”


The First CRISPR Medicine Just Got Approved
Emily Mullin | Wired
“The first medical treatment that uses CRISPR gene editing was authorized Thursday by the United Kingdom. The one-time therapy, which will be sold under the brand name Casgevy, is for patients with sickle cell disease and a related blood disorder called beta thalassemia, both of which are inherited. The UK approval marks a historic moment for CRISPR, the molecular equivalent of scissors that won its inventors a Nobel Prize in 2020.”


Google DeepMind Wants to Define What Counts as Artificial General Intelligence
Will Douglas Heaven | MIT Technology Review
“AGI, or artificial general intelligence, is one of the hottest topics in tech today. It’s also one of the most controversial. A big part of the problem is that few people agree on what the term even means. Now a team of Google DeepMind researchers has put out a paper that cuts through the cross talk with not just one new definition for AGI but a whole taxonomy of them.”


Why Tech Giants Are Hedging Their Bets on OpenAI
Michelle Cheng | Quartz
“Microsoft owns a 49% stake in OpenAI, having invested billions of dollars in the maker of ChatGPT. But the tech titan is also an investor in Inflection AI, which has a chatbot called Pi and is seen as a rival to OpenAI. …Last week, Reuters reported that Google plans to invest hundreds of millions in Character.AI, which builds personalized bots. In late October, Google said it had agreed to sink up to $2 billion into Anthropic, a key rival to OpenAI. What’s happening here?”


Start-Ups With Laser Beams: The Companies Trying to Ignite Fusion Energy
Kenneth Chang | The New York Times
“Take a smidgen of hydrogen, then blast it with lasers to set off a small thermonuclear explosion. Do it right, and maybe you can solve the world’s energy needs. A small group of start-ups has embarked on this quest, pursuing their own variations on this theme—different lasers, different techniques to set off the fusion reactions, different elements to fuse together. ‘There has been rapid growth,’ said Andrew Holland, chief executive of the Fusion Industry Association, a trade group lobbying for policies to speed the development of fusion.”


Young Children Trounce Large Language Models in a Simple Problem-Solving Task
Ross Pomeroy | Big Think
“Despite their genuine potential to change how society works and functions, large language models get trounced by young children in basic problem-solving tasks testing their ability to innovate, according to new research. The study reveals a key weakness of large language models: They do not innovate. If large language models can someday become innovation engines, their programmers should try to emulate how children learn, the authors contend.”


Sphere and Loathing in Las Vegas
Charlie Warzel | The Atlantic
“I wanted to be cynical about the Sphere and all it represents—our phones as appendages, screens as a mediated form of experiencing the world. There’s plenty to dislike about the thing—the impersonal flashiness of it all, its $30 tequila sodas, the likely staggering electricity bills. But it is also my solemn duty to report to you that the Sphere slaps, much in the same way that, say, the Super Bowl slaps. It’s gaudy, overly commercialized, and cool as hell: a brand-new, non-pharmaceutical sensory experience.”


Meta Brings Us a Step Closer to AI-Generated Movies
Kyle Wiggers | TechCrunch
“Like ‘Avengers’ director Joe Russo, I’m becoming increasingly convinced that fully AI-generated movies and TV shows will be possible within our lifetimes. …Now, video generation tech isn’t new. Meta’s experimented with it before, as has Google. …But Emu Video’s 512×512, 16-frames-per-second clips are easily among the best I’ve seen in terms of their fidelity—to the point where my untrained eye has a tough time distinguishing them from the real thing.”


Joby, Volocopter Fly Electric Air Taxis Over New York City
Aria Alamalhodaei | TechCrunch
“Joby Aviation and Volocopter gave the public a vivid glimpse of what the future of aviation might look like [last] weekend, with both companies performing brief demonstration flights of their electric aircraft in New York City. The demonstration flights were conducted during a press conference on Sunday, during which New York City Mayor Eric Adams announced that the city would electrify two of the three heliports located in Manhattan—Downtown Manhattan Heliport and East 34th Street.”


Google’s ChatGPT Competitor Will Have to Wait
Maxwell Zeff | Gizmodo
“Google is having a hard time catching up with OpenAI. Google’s competitor to ChatGPT will not be ready until early 2024, after previously telling some cloud customers it would get to use Gemini AI in November of this year, sources told The Information Thursday. …Google’s Gemini was reportedly set to debut in 2023 with image and voice recognition capabilities. The chatbot would have been competitive with OpenAI’s GPT-4, and Anthropic’s Claude 2.”

Image Credit: Brian McGowan / Unsplash


What Is Quantum Advantage? The Moment Extremely Powerful Quantum Computers Will Arrive

Singularity HUB - 17 November 2023 - 21:03

Quantum advantage is the milestone the field of quantum computing is fervently working toward, when a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.

There are some types of problems that are impractical for classical computers to solve, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.

I am a physicist who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.

The Source of Quantum Computing’s Power

Central to quantum computing is the quantum bit, or qubit. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state, neither just 1 nor just 0, is known as a quantum superposition. With every additional qubit, the number of states the qubits can represent doubles.
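
The doubling can be seen numerically. This is only an illustration of the bookkeeping, not a quantum computation: an n-qubit state is conventionally represented as a vector of 2^n complex amplitudes.

```python
# Bookkeeping illustration only, not a quantum computation: an n-qubit
# state is conventionally a vector of 2**n complex amplitudes, so each
# added qubit doubles the number of basis states held in superposition.
import numpy as np

def equal_superposition(n_qubits):
    """Equal amplitude on every basis state of an n-qubit register."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 2, 3, 4):
    state = equal_superposition(n)
    print(n, len(state))  # amplitude count doubles: 2, 4, 8, 16
```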

This property is often mistaken for the source of quantum computing’s power. Instead, that power comes from an intricate interplay of superposition, interference, and entanglement.

Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves—like sound waves or ocean waves—combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.
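
A minimal numeric sketch of interference, using the standard matrix picture of a single qubit: the Hadamard gate puts a qubit into superposition, and applying it a second time makes the two computational paths recombine, constructively for the 0 outcome and destructively for the 1 outcome.

```python
# Numeric sketch of interference in the standard single-qubit matrix
# picture: the Hadamard gate creates a superposition, and a second
# Hadamard makes the two paths recombine, constructively for |0> and
# destructively for |1>, returning the qubit to |0> with certainty.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1.0, 0.0])                   # qubit prepared in |0>

superposed = H @ zero   # equal amplitudes on |0> and |1>
final = H @ superposed  # interference: the |1> amplitude cancels

print(np.round(final, 10))
```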

Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.
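
Entanglement can be illustrated the same way with the two-qubit Bell state, a numeric sketch rather than a physical experiment: all amplitude sits on the outcomes 00 and 11, so the two qubits' measurement results always agree.

```python
# Numeric sketch (not a physical experiment) of the two-qubit Bell
# state: all amplitude sits on |00> and |11>, so measuring the qubits
# always yields matching results, and neither qubit has a state of its
# own independent of the other.
import numpy as np

bell = np.zeros(4, dtype=complex)
bell[0b00] = 1 / np.sqrt(2)  # amplitude on |00>
bell[0b11] = 1 / np.sqrt(2)  # amplitude on |11>

probs = np.abs(bell) ** 2    # measurement probabilities per outcome
print(probs.round(3))        # 0.5 on |00>, 0.5 on |11>, 0 elsewhere
```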

Applications of Quantum Computing

Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the potential to decipher current encryption algorithms, such as the widely used RSA scheme.
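
A toy sketch of why this matters for RSA, using absurdly small textbook numbers: RSA's security rests on the difficulty of factoring the public modulus, and that factoring step is exactly what Shor's algorithm would make feasible on a large quantum computer. Brute-force trial division stands in for the quantum step here.

```python
# Toy illustration with absurdly small textbook numbers: RSA's security
# rests on factoring the public modulus n being infeasible. Brute-force
# trial division stands in here for the step Shor's quantum algorithm
# would make feasible at real (2048-bit) key sizes.
n, e = 3233, 17  # public key: n = 53 * 61, public exponent e

p = next(d for d in range(2, n) if n % d == 0)  # trivial at this size
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent recovered from the factors

ciphertext = pow(65, e, n)    # encrypt the message 65 with the public key
print(pow(ciphertext, d, n))  # the recovered key decrypts it: 65
```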

One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of post-quantum cryptography. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.

In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman envisioned this possibility more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties.

Another use of quantum information technology is quantum sensing: detecting and measuring physical properties like electromagnetic energy, gravity, pressure, and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as environmental monitoring, geological exploration, medical imaging, and surveillance.

Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks—including those using quantum computers.

Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage—in particular in machine learning—remains a critical area of ongoing research.

A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves. Image Credit: Guoqing Wang, CC BY-NC-ND

Staying Coherent and Overcoming Errors

The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits.

Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, an area my own research is focused on.

In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.

Quantum Advantage Coming Into View

Quantum computing may one day be as disruptive as the arrival of generative AI. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. Researchers at Google and later a team of researchers in China demonstrated quantum advantage for generating a list of random numbers with certain properties. My research team demonstrated a quantum speed-up for a random number guessing game.

On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.

While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Google DeepMind AI Nails Super Accurate 10-Day Weather Forecasts

Singularity HUB - 16 November 2023 - 21:41

This year was a nonstop parade of extreme weather events. Unprecedented heat swept the globe. This summer was the Earth’s hottest since 1880. From flash floods in California and ice storms in Texas to devastating wildfires in Maui and Canada, weather-related events deeply affected lives and communities.

Every second counts when it comes to predicting these events. AI could help.

This week, Google DeepMind released an AI that delivers 10-day weather forecasts with unprecedented accuracy and speed. Called GraphCast, the model can churn through hundreds of weather-related datapoints for a given location and generate predictions in under a minute. When challenged with over a thousand potential weather patterns, the AI beat state-of-the-art systems roughly 90 percent of the time.

But GraphCast isn’t just about building a more accurate weather app for picking wardrobes.

Although not explicitly trained to detect extreme weather patterns, the AI picked up several atmospheric events linked to these patterns. Compared to previous methods, it more accurately tracked cyclone trajectories and detected atmospheric rivers—sinewy regions in the atmosphere associated with flooding.

GraphCast also predicted the onset of extreme temperatures well in advance of current methods. With 2024 set to be even warmer and extreme weather events on the rise, the AI’s predictions could give communities valuable time to prepare and potentially save lives.

“GraphCast is now the most accurate 10-day global weather forecasting system in the world, and can predict extreme weather events further into the future than was previously possible,” the authors wrote in a DeepMind blog post.

Rainy Days

Predicting weather patterns, even just a week ahead, is an old but extremely challenging problem. We base many decisions on these forecasts. Some are embedded in our everyday lives: Should I grab my umbrella today? Other decisions are life-or-death, like when to issue orders to evacuate or shelter in place.

Our current forecasting software is largely based on physical models of the Earth’s atmosphere. By studying the physics of weather systems, scientists have derived sets of equations, refined against decades of observational data, which supercomputers then use to generate predictions.

A prominent example is the Integrated Forecasting System at the European Center for Medium-Range Weather Forecasts. The system uses sophisticated calculations based on our current understanding of weather patterns to churn out predictions every six hours, providing the world with some of the most accurate weather forecasts available.

This system “and modern weather forecasting more generally, are triumphs of science and engineering,” wrote the DeepMind team.

Over the years, physics-based methods have rapidly improved in accuracy, in part thanks to more powerful computers. But they remain time-consuming and costly.

This isn’t surprising. Weather is one of the most complex physical systems on Earth. You might have heard of the butterfly effect: A butterfly flaps its wings, and this tiny change in the atmosphere alters the trajectory of a tornado. While just a metaphor, it captures the complexity of weather prediction.

GraphCast took a different approach. Forget physics, let’s find patterns in past weather data alone.

An AI Meteorologist

GraphCast builds on a type of neural network that’s previously been used to predict other physics-based systems, such as fluid dynamics.

It has three parts. First, the encoder maps relevant information—say, temperature and altitude at a certain location—onto an intricate graph. Think of this as an abstract infographic that machines can easily understand.

The second part is the processor, which learns to analyze and pass information to the final part, the decoder. The decoder then translates the results into a real-world weather-prediction map. Altogether, GraphCast can predict weather patterns for the next six hours.

But six hours isn’t 10 days. Here’s the kicker. The AI can learn from its own forecasts. GraphCast’s predictions are fed back into itself as input, allowing it to progressively predict weather further out in time. It’s a method that’s also used in traditional weather prediction systems, the team wrote.
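
The rollout can be sketched as a simple loop; `toy_step` below is an invented stand-in for the learned model, which is not shown.

```python
# Sketch of the autoregressive rollout described above: a model that
# only predicts six hours ahead is applied repeatedly, with each
# forecast fed back in as the next input. `toy_step` is an invented
# stand-in for the learned GraphCast model.

def toy_step(state):
    """Hypothetical 6-hour model: nudges temperature toward a mean."""
    return {"temp_c": state["temp_c"] + 0.1 * (15.0 - state["temp_c"])}

def rollout(model, state, hours):
    """Chain 6-hour predictions to cover a longer forecast window."""
    trajectory = [state]
    for _ in range(hours // 6):
        state = model(state)  # previous output becomes the next input
        trajectory.append(state)
    return trajectory

forecast = rollout(toy_step, {"temp_c": 25.0}, hours=240)  # 10 days
print(len(forecast) - 1)  # 40 six-hour steps
```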

GraphCast was trained on nearly four decades of historical weather data. Taking a divide-and-conquer strategy, the team split the planet into small patches, roughly 17 by 17 miles at the equator. This resulted in more than a million “points” covering the globe.
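
A quick back-of-the-envelope check of that figure, assuming the 0.25-degree latitude-longitude grid GraphCast is reported to use (about 17 miles per cell at the equator):

```python
# Back-of-the-envelope check of the figure above, assuming the
# 0.25-degree latitude-longitude grid GraphCast is reported to use
# (about 17 miles per cell at the equator).
n_lat = int(180 / 0.25) + 1  # latitude lines pole to pole: 721
n_lon = int(360 / 0.25)      # longitude lines around the globe: 1440
points = n_lat * n_lon
print(points)  # 1,038,240: "more than a million points"
```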

For each point, the AI was trained with data collected at two times, one current and the other six hours earlier. The data included dozens of variables from the Earth’s surface and atmosphere, such as temperature, humidity, and wind speed and direction at many different altitudes.

The training was computationally intensive and took a month to complete.

Once trained, however, the AI itself is highly efficient. It can produce a 10-day forecast with a single TPU in under a minute. Traditional methods using supercomputers take hours of computation, explained the team.

Ray of Light

To test its abilities, the team pitted GraphCast against the current gold standard for weather prediction.

The AI was more accurate nearly 90 percent of the time. It especially excelled when relying only on data from the troposphere—the layer of atmosphere closest to the Earth and critical for weather forecasting—beating the competition 99.7 percent of the time. GraphCast also outperformed Pangu-Weather, a top competing weather model that uses machine learning.

The team next tested GraphCast in several dangerous weather scenarios: tracking tropical cyclones, detecting atmospheric rivers, and predicting extreme heat and cold. Although not trained on specific “warning signs,” the AI raised the alarm earlier than traditional models.

The model also had help from classic meteorology. For example, the team added existing cyclone tracking software to GraphCast’s forecasts. The combination paid off. In September, the AI successfully predicted the trajectory of Hurricane Lee as it swept up the East Coast towards Nova Scotia. The system accurately predicted the storm’s landfall nine days in advance—three precious days faster than traditional forecasting methods.

GraphCast won’t replace traditional physics-based models. Rather, DeepMind hopes it can bolster them. The European Center for Medium-Range Weather Forecasts is already experimenting with the model to see how it could be integrated into their predictions. DeepMind is also working to improve the AI’s ability to handle uncertainty—a critical need given the weather’s increasingly unpredictable behavior.

GraphCast isn’t the only AI weatherman. DeepMind and Google researchers previously built two regional models that can accurately forecast short-term weather 90 minutes or 24 hours ahead. However, GraphCast can look further ahead. When used with standard weather software, the combination could influence decisions on weather emergencies or guide climate policies. At the least, we might feel more confident about the decision to bring that umbrella to work.

“We believe this marks a turning point in weather forecasting,” the authors wrote.

Image Credit: Google DeepMind
