Transhumanism

AI Agents Could Collaborate on Far Grander Scales Than Humans, Study Says

Singularity HUB - October 11, 2024 - 22:04

Humans are social animals, but there appear to be hard limits to the number of relationships we can maintain at once. New research suggests AI may be capable of collaborating in much larger groups.

In the 1990s, British anthropologist Robin Dunbar suggested that most humans can only maintain social groups of roughly 150 people. While there is considerable debate about the reliability of the methods Dunbar used to reach this number, it has become a popular benchmark for the optimal size of human groups in business management.

There is growing interest in using groups of AIs to solve tasks in various settings, which prompted researchers to ask whether today’s large language models (LLMs) are similarly constrained when it comes to the number of individuals that can effectively work together. They found the most capable models could cooperate in groups of at least 1,000, an order of magnitude more than humans.

“I was very surprised,” Giordano De Marzo at the University of Konstanz, Germany, told New Scientist. “Basically, with the computational resources we have and the money we have, we [were able to] simulate up to thousands of agents, and there was no sign at all of a breaking of the ability to form a community.”

To test the social capabilities of LLMs the researchers spun up many instances of the same model and assigned each one a random opinion. Then, one by one, the researchers showed each copy the opinions of all its peers and asked if it wanted to update its own opinion.
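The protocol described above can be sketched as a simple opinion-dynamics simulation. This is a minimal illustration, not the researchers' actual code: the `ask_model` function here is a hypothetical stand-in that adopts the peer majority, a crude proxy for prompting a real LLM instance with its peers' opinions.

```python
import random

def ask_model(own_opinion, peer_opinions):
    # Stand-in for querying one LLM instance: adopt the majority opinion
    # among peers. A real run would prompt the model with all peer
    # opinions and ask whether it wants to update its own.
    counts = {}
    for op in peer_opinions:
        counts[op] = counts.get(op, 0) + 1
    return max(counts, key=counts.get)

def simulate(n_agents=50, opinions=("A", "B"), max_rounds=20, seed=0):
    rng = random.Random(seed)
    # Assign each copy a random starting opinion.
    state = [rng.choice(opinions) for _ in range(n_agents)]
    for round_no in range(max_rounds):
        # One by one, each agent sees all its peers' opinions and updates.
        for i in range(n_agents):
            peers = state[:i] + state[i + 1:]
            state[i] = ask_model(state[i], peers)
        if len(set(state)) == 1:  # every agent holds the same opinion
            return round_no + 1, state[0]
    return None, None  # no consensus within max_rounds

rounds, winner = simulate()
```

With majority updating, consensus arrives almost immediately; the interesting question in the study is whether a real model's updates behave this coherently as the group grows.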

The team found that the likelihood of the group reaching consensus was directly related to the power of the underlying model. Smaller or older models, like Claude 3 Haiku and GPT-3.5 Turbo, were unable to come to agreement, while the 70-billion-parameter version of Llama 3 reached agreement if there were no more than 50 instances.

But for GPT-4 Turbo, the most powerful model the researchers tested, groups of up to 1,000 copies could achieve consensus. The researchers didn’t test larger groups due to limited computational resources.

The results suggest that larger AI models could potentially collaborate at scales far beyond humans, Dunbar told New Scientist. “It certainly looks promising that they could get together a group of different opinions and come to a consensus much faster than we could do, and with a bigger group of opinions,” he said.

The results add to a growing body of research into “multi-agent systems” that has found groups of AIs working together could do better at a variety of math and language tasks. However, even if these models can effectively operate in very large groups, the computational cost of running so many instances may make the idea impractical.

Also, agreeing on something doesn’t mean it’s right, Philip Feldman at the University of Maryland told New Scientist. It perhaps shouldn’t be surprising that identical copies of a model quickly form a consensus, and there’s a good chance that the solution they settle on won’t be optimal.

However, it does seem intuitive that AI agents are likely to be capable of larger scale collaboration than humans, as they are unconstrained by biological bottlenecks on speed and information bandwidth. Whether current models are smart enough to take advantage of that is unclear, but it seems entirely possible that future generations of the technology will be able to.

Image Credit: Ant Rozetsky / Unsplash

Category: Transhumanism

Conversations with the Future Symposium

Singularity Weblog - October 11, 2024 - 16:31
I am co-organizing an intimate in-person event where we’ll dive into old-school, thought-provoking discussions about shaping our future. It will be held on January 22nd, 2025, in Toronto’s South Etobicoke area. Please join us for an evening that includes insightful talks, interactive Q&A with the speakers, delicious bites, and networking opportunities. This in-person […]
Category: Transhumanism

You’ll Soon Be Able to Book a Room at the World’s First 3D-Printed Hotel

Singularity HUB - October 10, 2024 - 19:10

The first 3D-printed house in the US was unveiled just over six years ago. Since then, homes have been printed all over the country and the world, from Virginia to California and Mexico to Kenya. If you’re intrigued by the concept but not sure whether you’re ready to jump on the bandwagon, you’ll soon be able to take a 3D-printed dwelling for a test run—by staying in the world’s first 3D-printed hotel.

The hotel is under construction in the city of Marfa, in the far west of Texas. It’s an expansion of an existing hotel called El Cosmico, which until now has really been more of a campground, offering accommodations in trailers, yurts, and tents. According to the property’s website, “the vision has been to create a living laboratory for artistic, cultural, and community experimentation.” The project is a collaboration between Austin, Texas-based 3D printing construction company Icon, architecture firm Bjarke Ingels Group, and El Cosmico’s owner, Liz Lambert.

El Cosmico will gain 43 new rooms and 18 houses, which will be printed using Icon’s gantry-style Vulcan printer. Vulcan is 46.5 feet (14.2 meters) wide by 15.5 feet (4.7 meters) tall, and it weighs 4.75 tons. It builds homes by pouring a proprietary concrete mixture called Lavacrete into a pattern dictated by software, squeezing out one layer at a time as it moves around on an axis set on a track. Its software, BuildOS, can be operated from a tablet or smartphone.

Image Credit: Icon

One of the benefits of 3D-printed construction is that it’s much easier to diverge from conventional architecture and create curves and other shapes. The hotel project’s designers are taking full advantage of this; far from traditional boxy hotel rooms, they’re aiming to create unique architecture that’s aligned with its natural setting.

Image Credit: Icon

“By testing the geometric boundaries of Icon’s 3D-printed construction, we have imagined fluid, curvilinear structures that enjoy the freedom of form in the empty desert. By using the sand, soils, and colors of the terroir as our print medium, the circular forms seem to emerge from the very land on which they stand,” Bjarke Ingels, the founder and creative director of Bjarke Ingels Group, said in a press release.

Renderings of the completed project and photos of the initial construction show circular, neutral-toned structures that look like they might have sprouted up out of the ground. Don’t let that fool you, though—the interiors, while maybe not outright fancy, will be tastefully decorated and are quite comfortable-looking.

Image Credit: Icon

At first glance, Marfa seems like an odd choice for something as buzzy as a 3D-printed hotel. The town sits in the middle of the hot, dry Texas desert; it has a population of 1,700 people; and the closest airport is in El Paso, a three-hour drive away. But despite its relative isolation, Marfa is a hotspot for artists and art lovers and has a unique vibe all its own that draws flocks of tourists (according to Vogue, an estimated 49,000 people visited Marfa in 2019).

El Cosmico is not only expanding, it’s relocating to a 60-acre site on the outskirts of Marfa. Along with the 3D-printed accommodations, the site will have a restaurant, pool, spa, and communal facilities. Most of the trailers and tents from the existing property will be preserved and moved to the new site.

The project broke ground last month, and El Cosmico 2.0 is slated to open in 2026.

How much will it cost you to give 3D-printed construction a test run? Just as the market prices of commercial 3D-printed homes haven’t been dramatically lower than those of conventional houses, 3D-printed hotel rooms look set to cost about the same as regular hotel rooms, or maybe more: Reservations for the new rooms can’t yet be booked, but they’re predicted to cost between $200 and $450 per night.

Image Credit: Icon

Category: Transhumanism

In a First, Woman’s Type 1 Diabetes Reversed by a Stem Cell Transplant

Singularity HUB - October 9, 2024 - 16:00

The 25-year-old woman had suffered through decades of medical nightmares.

A native of Tianjin, a city roughly two hours southeast of Beijing, she was diagnosed with Type 1 diabetes 11 years ago. In this chronic autoimmune disease, the body’s immune system savagely attacks the cells in the pancreas that produce insulin, spiking blood sugar to deadly levels—even after a few nibbles of rice.

Following two liver transplants and a whole pancreas transplant, her blood sugar levels remained unstable. The new pancreas had to be removed after causing life-threatening blood clots. At the end of her rope, she signed up for a highly experimental procedure. Scientists would remove fatty cells from her body, convert them into functional insulin-producing tissues, and insert them into her belly.

In just three months, her body began producing insulin to the point she no longer relied on external insulin shots to manage blood sugar levels. The transplanted cells lasted at least one year—when the study ended—with no signs of waning efficacy and few side effects.

The transplants restored the woman’s ability to process sugar and carbohydrates to that of non-diabetic people, according to a measure for long-term blood sugar stability. In a sense, it reversed her condition completely.

“I can eat sugar now…I enjoy eating everything, especially hotpot,” she told Nature.

Induced pluripotent stem cells, made by reprogramming mature cells into a stem cell-like state, are at the heart of the study. Using a chemical soup, scientists can then nudge these cells to grow into different types of tissues or organs.

The study, published in Cell, is the latest effort to use the technology to tackle diabetes.

“They’ve completely reversed diabetes in the patient, who was requiring substantial amounts of insulin beforehand,” said James Shapiro at the University of Alberta, who was not involved in the study.

Insulin on Call

Whenever we eat a carbohydrate-heavy meal—say pasta, rice, bread—or a tasty dessert, the body breaks the carbohydrates down into sugar. These sugars swirl around the bloodstream and cause the pancreas to release insulin, a hormone that helps the sugars enter cells as fuel.

In both types of diabetes, the process goes awry. Type 2 diabetes may be more familiar. Here, like a broken thermostat, the body’s cells gradually stop responding to insulin, leaving blood sugar chronically elevated. This type of diabetes is often managed with diet, medication, and sometimes insulin shots.

Type 1 diabetes is more nefarious. Here, the body’s immune system attacks insulin-producing cells called islets. The disease often occurs in kids or in early adulthood, and there is no cure. Regardless of type, chronically high blood sugar levels can severely damage eyesight, nerves, and blood vessels over time.

If broken islets are the problem, why not replace them?

Enter induced pluripotent stem cells (iPSCs). Scientists make iPSCs by reprogramming mature cells into stem cells, which can theoretically grow into any other type of cell. These cells have a unique advantage: they come from the person’s own body, so when used in transplants, they reduce the chance of rejection by the immune system.

Normally, scientists engineer iPSCs by adding genes for four specific proteins. However, this team developed a recipe to reprogram cells using a chemical bath. Compared to genetic engineering, the process offers far more control. Like cooking, it’s easier to tweak the process on demand.

They first took fatty cells from the woman, reprogrammed them into iPSCs, and transformed these into functional islet cells. As a sanity check, some of the engineered islets were tested in hundreds of mice to see if they produced insulin and to monitor any side effects.

Cancer was one worry. Reversing adult cells back into stem cells carries the risk the cells might grow out of control. The mouse studies didn’t find cancer. The critters also had healthy hearts, livers, kidneys, lungs, and brains after the transplant, suggesting the approach was relatively safe.

In a simple 90-minute surgery, the team then injected the equivalent of 1.5 million islet cells into the woman’s belly muscles. The islets had been frozen, thawed, and given a health check. Being able to freeze tissues means they could be transported between facilities and hospitals, allowing for more flexibility.

Inserting the islets into the belly was a strategic move. Previous attempts at cell therapy injected cells into the liver. This is easy enough in lab experiments but harder to monitor in clinical use. The belly is more readily examined with ultrasound and other imaging methods.

A Way Forward

Two weeks after transplantation, the woman’s daily insulin needs had fallen dramatically. Seventy-five days later she no longer depended on daily insulin shots and remained completely independent of them for at least a year. Her blood sugar levels remained within a normal range, similar to those of people without diabetes. Multiple measures of sugar metabolism improved after the treatment. The graft had few side effects and showed no signs of turning cancerous.

Most of us know someone with diabetes. While controllable with diet, exercise, and insulin injections, the condition still increases the chances of stroke, dementia, and other diseases. Personalized stem cell therapy may offer a longer-term alternative.

The current work builds on an earlier study that treated a person with Type 2 diabetes. Though the reprogrammed cells did turn into insulin-pumping cells in that study, they couldn’t generate enough insulin to keep the person’s blood sugar under control. By fine-tuning the procedure, the new study advances a long-term solution for a disease that plagues millions of people.

There are caveats. The woman was already on immunosuppressant drugs for her liver transplants. This makes it difficult to determine if the iPSC-derived transplants can trigger an immune reaction, and if so, how severe.

Also, because the strategy is highly personalized, scaling it up may prove difficult. Another approach, championed by Vertex Pharmaceuticals, uses islets derived from embryonic stem cells directly injected into the liver. Patients produced insulin within three months of the implant.

The new results are part of an ongoing clinical trial for people with both Type 1 and 2 diabetes. Launched in 2021, the trial is expected to have safety results by the end of next year.

The results here are from just one person over one year. Whether they extend to elderly people, who’ve lived with the condition longer, or those with other conditions such as high blood pressure will have to wait until the end of the trial.

Still, overall, the data show the potential of “personalized cell therapy,” wrote the team.

Image Credit: Diabetesmagazijn.nl / Unsplash

Category: Transhumanism

Witness 1.8 Billion Years of Earth’s Tectonic Dance in a New Animation

Singularity HUB - October 8, 2024 - 16:00

Using information from inside the rocks on Earth’s surface, my colleagues and I have reconstructed the plate tectonics of the planet over the last 1.8 billion years.

It is the first time Earth’s geological record has been used like this, looking so far back in time. This has enabled us to make an attempt at mapping the planet over the last 40 percent of its history, which you can see in the animation below.

The work, led by Xianzhi Cao from the Ocean University in China, was recently published in the open-access journal Geoscience Frontiers.

A Beautiful Dance

Mapping our planet through its long history creates a beautiful continental dance—mesmerizing in itself and a work of natural art.

It starts with the map of the world familiar to everyone. Then India rapidly moves south, followed by parts of Southeast Asia as the past continent of Gondwana forms in the Southern Hemisphere.

Around 200 million years ago (Ma or mega-annum in the reconstruction), when the dinosaurs walked the earth, Gondwana linked with North America, Europe, and northern Asia to form a large supercontinent called Pangaea.

Then, the reconstruction carries on back through time. Pangaea and Gondwana were themselves formed from older plate collisions. As time rolls back, an earlier supercontinent called Rodinia appears. It doesn’t stop there. Rodinia, in turn, assembled from the fragments of an even older supercontinent called Nuna, which broke apart about 1.35 billion years ago.

Why Map Earth’s Past?

Among the planets in the solar system, Earth is unique for having plate tectonics. Its rocky surface is split into fragments (plates) that grind into each other and create mountains or split away and form chasms that are then filled with oceans.

Apart from causing earthquakes and volcanoes, plate tectonics also pushes up rocks from the deep earth into the heights of mountain ranges. This way, elements which were far underground can erode from the rocks and wash into rivers and oceans. From there, living things can make use of these elements.

Among these essential elements is phosphorus, which forms the framework of DNA molecules, and molybdenum, which is used by organisms to strip nitrogen out of the atmosphere and make proteins and amino acids—building blocks of life.

Plate tectonics also exposes rocks that react with carbon dioxide in the atmosphere. Rocks locking up carbon dioxide is the main control on Earth’s climate over long time scales—much, much longer than the tumultuous climate change we are responsible for today.

A Tool for Understanding Deep Time

Mapping the past plate tectonics of the planet is the first stage in being able to build a complete digital model of Earth through its history.

Such a model will allow us to test hypotheses about Earth’s past. For example, why Earth’s climate has gone through extreme “Snowball Earth” fluctuations or why oxygen built up in the atmosphere when it did.

Indeed, it will allow us to much better understand the feedback between the deep planet and the surface systems of Earth that support life as we know it.

So Much More to Learn

Modeling our planet’s past is essential if we’re to understand how nutrients became available to power evolution. The first evidence for complex cells with nuclei—like all animal and plant cells—dates to 1.65 billion years ago.

This is near the start of this reconstruction and close to the time the supercontinent Nuna formed. We aim to test whether the mountains that grew at the time of Nuna formation may have provided the elements to power complex cell evolution.

Much of Earth’s life photosynthesizes and liberates oxygen. This links plate tectonics with the chemistry of the atmosphere, and some of that oxygen dissolves into the oceans. In turn, a number of critical metals—like copper and cobalt—are more soluble in oxygen-rich water. In certain conditions, these metals are then precipitated out of the solution: In short, they form ore deposits.

Many metals form in the roots of volcanoes that occur along plate margins. By reconstructing where ancient plate boundaries lay through time, we can better understand the tectonic geography of the world and assist mineral explorers in finding ancient metal-rich rocks now buried under much younger mountains.

In this time of exploration of other worlds in the solar system and beyond, it is worth remembering there’s so much about our own planet we are only just beginning to glimpse.

There are 4.6 billion years of it to investigate, and the rocks we walk on contain the evidence for how Earth has changed over this time.

This first attempt at mapping the last 1.8 billion years of Earth’s history is a leap forward in the scientific grand challenge to map our world. But it is just that—a first attempt. The next years will see considerable improvement from the starting point we have now made.

The author would like to acknowledge this research was largely done by Xianzhi Cao, Sergei Pisarevsky, Nicolas Flament, Derrick Hasterok, Dietmar Muller and Sanzhong Li; as a co-author, he is just one cog in the research network. The author also acknowledges the many students and researchers from the Tectonics and Earth Systems Group at The University of Adelaide and national and international colleagues who did the fundamental geological work this research is based on.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio

Category: Transhumanism

DeepMind and BioNTech Bet AI Lab Assistants Will Accelerate Science

Singularity HUB - October 7, 2024 - 16:00

There has long been hope that AI could help accelerate scientific progress. Now, companies are betting the latest generation of chatbots could make useful research assistants.

Most efforts to accelerate scientific progress using AI have focused on solving fundamental conceptual problems, such as protein folding or the physics of weather modeling. But a big chunk of the scientific process is considerably more prosaic—deciding what experiments to do, coming up with experimental protocols, and analyzing data.

This can suck up an enormous amount of an academic’s time, distracting them from higher value work. That’s why both Google DeepMind and BioNTech are currently developing tools designed to automate many of these more mundane jobs, according to the Financial Times.

At a recent event, DeepMind CEO Demis Hassabis said his company was working on a science-focused large language model that could act as a research assistant, helping design experiments to tackle specific hypotheses and even predict the outcome. BioNTech also announced at an AI innovation day last week that it had used Meta’s open-source Llama 3.1 model to create an AI assistant called Laila with a “detailed knowledge of biology.”

“We see AI agents like Laila as a productivity accelerator that’s going to allow the scientists, the technicians, to spend their limited time on what really matters,” Karim Beguir, chief executive of the company’s AI subsidiary InstaDeep, told the Financial Times.

The bot showed off its capabilities in a live demonstration, where scientists used it to automate the analysis of DNA sequences and visualize results. According to Constellation Research, the model comes in various sizes and is integrated with InstaDeep’s DeepChain platform, which hosts various other AI models specializing in things like protein design or analyzing DNA sequences.

BioNTech and DeepMind aren’t the first to try turning the latest AI tech into an extra pair of helping hands around the lab. Last year, researchers showed that combining OpenAI’s GPT-4 model with tools for searching the web, executing code, and manipulating laboratory automation equipment could create a “Coscientist” that could design, plan, and execute complex chemistry experiments.

There’s also evidence that AI could help decide what research direction to take. Scientists used Anthropic’s Claude 3.5 model to generate thousands of new research ideas, which the model then ranked on originality. When human reviewers assessed the ideas on criteria like novelty, feasibility, and expected effectiveness, they found they were on average more original and exciting than those dreamed up by human participants.

However, there are likely limits to how much AI can contribute to the scientific process. A collaboration between academics and Tokyo-based startup Sakana AI made waves with an “AI scientist” focused on machine learning research. It was able to conduct literature reviews, formulate hypotheses, carry out experiments, and write up a paper. But the research produced was judged incremental at best, and other researchers suggested the output was likely unreliable due to the nature of large language models.

This highlights a central problem for using AI to accelerate science—simply churning out papers or research results is of little use if they’re not any good. As a case in point, when researchers dug into a collection of two million AI-generated crystals produced by DeepMind, they found almost none met the important criteria of “novelty, credibility, and utility.”

Academia is already blighted by paper mills that churn out large quantities of low-quality research, Karin Verspoor at the Royal Melbourne Institute of Technology in Australia writes in The Conversation. Without careful oversight, new AI tools could turbocharge this trend.

However, it would be unwise to ignore the potential of AI to improve the scientific process. The ability to automate much of science’s grunt work could prove invaluable, and as long as these tools are deployed in ways that augment humans rather than replacing them, their contribution could be significant.

Image Credit: Shrinath / Unsplash

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through October 5)

Singularity HUB - October 5, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

MIT Spinoff Liquid Debuts Non-Transformer AI Models and They’re Already State-of-the-Art
Carl Franzen | VentureBeat
“Unlike most others of the current generative AI wave, these models are not based around the transformer architecture outlined in the seminal 2017 paper ‘Attention Is All You Need.’ Instead, Liquid states that its goal ‘is to explore ways to build foundation models beyond Generative Pre-trained Transformers (GPTs)’ and with the new LFMs, specifically building from ‘first principles…the same way engineers built engines, cars, and airplanes.'”

BIOTECH

In Medical First, Woman’s Type 1 Diabetes Seemingly Cured by Stem Cells
Ed Cara | Gizmodo
“We might someday be able to have replacement insulin-making cells on demand. Scientists in China have presented early clinical trial data suggesting that a person’s stem cells can be turned into a steady supply of the pancreatic cells responsible for producing insulin. If truly successful, such a treatment would essentially cure Type 1 diabetes.”

ARTIFICIAL INTELLIGENCE

Meta Unveils Instant AI Video Generator That Adds Sounds
Cade Metz and Mike Isaac | The New York Times
“On Friday, the tech giant Meta unveiled a set of AI tools, called Meta Movie Gen, for automatically generating videos, instantly editing them, and synchronizing them with AI-generated sound effects, ambient noise, and background music. …The new system also let people upload photos of themselves and instantly weave these images to moving videos.”

ROBOTICS

AI-Generated Images Can Teach Robots How to Act
Rhiannon Williams | MIT Technology Review
“Researchers from Stephen James’s Robot Learning Lab in London are using image-generating AI models for a new purpose: creating training data for robots. They’ve developed a new system, called Genima, that fine-tunes the image-generating AI model Stable Diffusion to draw robots’ movements, helping guide them both in simulations and in the real world.”

ENERGY

Britain Shuts Down Last Coal Plant, ‘Turning Its Back on Coal Forever’
Somini Sengupta | The New York Times
“Britain, the nation that launched a global addiction to coal 150 years ago, is shutting down its last coal-burning power station on Monday. That makes Britain first among the world’s major, industrialized economies to wean itself off coal—all the more symbolic because it was also the first to burn tremendous amounts of it to fuel the Industrial Revolution, inspiring the rest of the world to follow suit.”

ROBOTICS

Four-Legged Robot Learns to Climb Ladders
Brian Heater | TechCrunch
“The proliferation of robots like Boston Dynamics’ Spot has showcased the versatility of quadrupeds. These systems have thrived at walking up stairs, traversing small obstacles, and navigating uneven terrain. Ladders, however, still present a big issue — especially given how ever present they are in factories and other industrial environments where the systems are deployed.”

FUTURE

Eight Scientists, a Billion Dollars, and the Moonshot Agency Trying to Make Britain Great Again
Matt Reynolds | Wired
“The whole point of ARIA is to push researchers beyond their comfort zones and towards ideas the typically risk-averse British science funding system would deem improbable or downright weird. …The plan should be just on the edge of impossible, Gur tells the room. Impactful enough that it’s worth a shot, but so ambitious that half of the scientists leave the workshop convinced it’ll never work.”

GADGETS

An ‘iPhone of AI’ Makes No Sense. What Is Jony Ive Really Building?
Sophie Charara | Wired
“LoveFrom is working with OpenAI to build AI devices that are less ‘socially disruptive’ than the iPhone. Is Ive looking for absolution or a new computing soul? …There is the sense…that LoveFrom has Apple-level talent, as close as it will get to Apple-level money—with plans to raise as much as $1 billion in funding by the end of this year—and, with Sam Altman involved, Apple-level ambitions.”

ART

Hidden ‘BopSpotter’ Microphone Is Constantly Surveilling San Francisco for Good Music
Jason Koebler | 404 Media
“Bop Spotter is a project by technologist Riley Walz in which he has hidden an Android phone in a box on a pole, rigged it to be solar powered, and has set it to record audio and periodically sends it to Shazam’s API to determine which songs people are playing in public. Walz describes it as ShotSpotter, but for music. ‘This is culture surveillance. No one notices, no one consents. But it’s not about catching criminals,’ Walz’s website reads. ‘It’s about catching vibes.'”

ETHICS

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars
Matt Burgess and Dhruv Mehrotra | Wired
“Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people’s personal political views and their homes can be recorded into vast databases that can be queried. ‘It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America,’ says Jay Stanley, a senior policy analyst at the American Civil Liberties Union.”

SPACE

Moon Time Is a Thing Now—Here’s Why It Matters
Rebecca Boyle | Atlas Obscura
“Moon time is a meaningful thing to understand, especially as countries and private companies are angling to return to the lunar surface this decade. To understand why moon time is so strange—and why scientists recently created a new and unique time zone just for the moon—we have to spend a moment with Einstein.”

Image Credit: Jigar Panchal / Unsplash

Category: Transhumanism

These Mini AI Models Match OpenAI With 1,000 Times Less Data

Singularity HUB - October 4, 2024 - 22:39

The artificial intelligence industry is obsessed with size. Bigger algorithms. More data. Sprawling data centers that could, in a few years, consume enough electricity to power whole cities.

This insatiable appetite is why OpenAI—which is on track to make $3.7 billion in revenue but lose $5 billion this year—just announced it’s raised $6.6 billion more in funding and opened a line of credit for another $4 billion.

Eye-popping numbers like these make it easy to forget size isn’t everything.

Some researchers, particularly those with fewer resources, are aiming to do more with less. AI scaling will continue, but those algorithms will also get far more efficient as they grow.

Last week, researchers at the Allen Institute for Artificial Intelligence (Ai2) released a new family of open-source multimodal models competitive with state-of-the-art models like OpenAI’s GPT-4o—but an order of magnitude smaller. Called Molmo, the models range from 1 billion to 72 billion parameters. GPT-4o, by comparison, is estimated to top a trillion parameters.

It’s All in the Data

Ai2 said it accomplished this feat by focusing on data quality over quantity.

Algorithms fed billions of examples, like GPT-4o, are impressively capable. But they also ingest a ton of low-quality information. All this noise consumes precious computing power.

To build their new multimodal models, Ai2 assembled a backbone of existing large language models and vision encoders. They then compiled a more focused, higher-quality dataset of around 700,000 images and 1.3 million captions to train new models with visual capabilities. That may sound like a lot, but it’s on the order of 1,000 times less data than what’s used in proprietary multimodal models.

Instead of writing captions, the team asked annotators to record 60- to 90-second verbal descriptions answering a list of questions about each image. They then transcribed the descriptions—which often stretched across several pages—and used other large language models to clean up, crunch down, and standardize them. They found that this simple switch, from written to verbal annotation, yielded far more detail with little extra effort.
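
The annotation flow described above—record a verbal description, transcribe it, then use models to clean and standardize the text—can be sketched as a simple pipeline. The function names and the rule-based cleanup below are hypothetical stand-ins for illustration only; the actual work used speech-to-text systems and LLM passes for these steps.

```python
def transcribe(audio_description: str) -> str:
    """Stand-in for speech-to-text: assume we already have raw text."""
    return audio_description.strip()

def clean_and_standardize(transcript: str, max_words: int = 60) -> str:
    """Stand-in for the LLM cleanup pass: drop filler words and
    truncate rambling transcripts to a consistent length."""
    fillers = {"um", "uh", "er"}
    words = [w for w in transcript.split() if w.strip(",.").lower() not in fillers]
    return " ".join(words[:max_words])

def build_example(image_id: str, audio_description: str) -> dict:
    """Assemble one (image, caption) training example."""
    caption = clean_and_standardize(transcribe(audio_description))
    return {"image_id": image_id, "caption": caption}

example = build_example("img_0001", "um, a red bicycle leaning, uh, against a brick wall")
print(example["caption"])  # "a red bicycle leaning, against a brick wall"
```

The point of the real pipeline is the same as this toy version: turn long, messy verbal detail into short, uniform captions before training.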

Tiny Models, Top Dogs

The results are impressive.

According to a technical paper describing the work, the team’s largest model, Molmo 72B, roughly matches or outperforms state-of-the-art closed models—including OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 1.5 Pro—across a range of 11 academic benchmarks as well as by user preference. Even the smaller Molmo models, a tenth the size of the largest, compare favorably to state-of-the-art models.

Molmo can also point to the things it identifies in images. This kind of skill might help developers build AI agents that identify buttons or fields on a webpage to handle tasks like making a reservation at a restaurant. Or it could help robots better identify and interact with objects in the real world.

Ai2 CEO Ali Farhadi acknowledged it’s debatable how much benchmarks can tell us. But we can use them to make a rough model-to-model comparison.
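
The simplest version of such a rough comparison is averaging per-benchmark scores. The benchmark names below are real vision-language benchmarks of the kind used in such evaluations, but the scores are invented placeholders, not the paper's actual numbers.

```python
benchmarks = ["DocVQA", "ChartQA", "TextVQA"]

# scores[model][i] corresponds to benchmarks[i]; values are invented
scores = {
    "Molmo-72B": [93.5, 87.3, 81.9],
    "GPT-4o":    [92.8, 85.7, 77.4],
}

def average_score(model: str) -> float:
    """Unweighted mean across benchmarks: a crude but common summary."""
    return sum(scores[model]) / len(scores[model])

for model, vals in scores.items():
    per_bench = ", ".join(f"{b}={v}" for b, v in zip(benchmarks, vals))
    print(f"{model}: avg={average_score(model):.1f} ({per_bench})")
```

As Farhadi notes, a single averaged number hides a lot, but it gives a quick sense of whether two models are "playing the same game."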

“There are a dozen different benchmarks that people evaluate on. I don’t like this game, scientifically… but I had to show people a number,” Farhadi said at a Seattle release event. “Our biggest model is a small model, 72B, it’s outperforming GPTs and Claudes and Geminis on those benchmarks. Again, take it with a grain of salt; does this mean that this is really better than them or not? I don’t know. But at least to us, it means that this is playing the same game.”

Open-Source AI

In addition to being smaller, Molmo is open-source. This matters because it means people now have a free alternative to proprietary models.

There are other open models beginning to compete with the top dogs on some benchmarks. Meta’s Llama 3.1 405B, for example, is the first open-weights large language model at this scale. But it’s not multimodal. (Meta released multimodal versions of its smaller Llama models last week. It may do the same for its biggest model in the months to come.)

Molmo is also more open than Llama. Meta’s models are best described as “open-weights” models, in that the company releases model weights but not the code or data used in training. The biggest Molmo model is based on Alibaba Cloud’s open-weights Qwen2 72B—which, like Llama, doesn’t include training data or code—but Ai2 did release the dataset and code they used to make their model multimodal.

Also, Meta limits commercial use to products with under 700 million users. In contrast, Molmo carries an Apache 2.0 license. This means developers can modify the models and commercialize products with few limitations.

“We’re targeting researchers, developers, app developers, people who don’t know how to deal with these [large] models. A key principle in targeting such a wide range of audience is the key principle that we’ve been pushing for a while, which is: make it more accessible,” Farhadi said.

Nipping at the Heels

There are a few things of note here. First, while the makers of proprietary models try to monetize their models, open-source alternatives with similar capabilities are arriving. These alternatives, as Molmo shows, are also smaller, meaning they can run locally, and more flexible. They’re legitimate competition for companies raising billions on the promise of AI products.

“Having an open source, multimodal model means that any startup or researcher that has an idea can try to do it,” Ofir Press, a post-doc at Princeton University, told Wired.

At the same time, working with images and text is old hat for OpenAI and Google. The companies are pulling ahead again by adding advanced voice capabilities, video generation, and reasoning skills. With billions in new investment and access to a growing hoard of quality data from deals with publishers, the next generation of models could raise the stakes again.

Still, Molmo suggests that even as the biggest companies plow billions into scaling the technology, open-source alternatives may not be far behind.

Image Credit: Resource Database / Unsplash

Category: Transhumanism

Groundbreaking Brain Map Reveals Fruit Fly Brain in Stunning Detail

Singularity HUB - October 3, 2024 - 21:05

With a brain the size of a sesame seed, the lowly fruit fly is often considered a kitchen pest. But to neuroscientists, the flies are a treasure trove of information detailing how the brain’s intricate connections guide thoughts, decisions, and memories—not just for the critters, but also for us.

Mapping these connections is the first step. With over 140,000 neurons and 54 million synapses—the connections between nerve cells—packed into such a tiny space, the fruit fly’s brain, however rudimentary compared to ours, is highly complex.

This week, in a tour de force, hundreds of scientists from the FlyWire consortium published the first complete map of an adult female fruit fly’s brain. A project roughly a decade in the making, the wiring diagram will be a rich scientific resource for years to come. The same techniques used to make the map—which heavily relied on artificial intelligence—could be used to chart more complex brains, such as zebrafish, mice, and perhaps even humans.

“Flies are important model systems…since their brains solve the same problems as we do,” said Mala Murthy at Princeton University in a press conference. Murthy co-led the project with Sebastian Seung, who has long championed mapping as a way to better understand the inner workings of our brains and potentially extract algorithms to power more flexible AI.

In one of nine articles on the project published by Nature, Clay Reid at the Allen Institute for Brain Science, who was not involved in the project, called the release a “huge deal.”

“It’s something that the world has been anxiously waiting for, for a long time,” he said.

The study’s data and images are freely available for anyone to explore. To Murthy, the project exemplifies the power of open science. The consortium welcomed help from both neuroscientists and citizen scientists, who don’t have formal training but are passionate about the brain.

This “openness drove the science forward,” resulting in the “first time we’ve had a complete map of any complex brain,” said Murthy.

A Brain Atlas

Why do we think, feel, remember, and forget? How do we make decisions, rethink biases, and empathize with others? Even simpler, what neural signals make my fingers type these words?

It’s all about wiring. Neurons connect with each other at specific points called synapses. These connections form the basis of circuits that control behaviors. Like tracing electrical wiring in a house, mapping the brain’s cables can help decipher which neural circuit controls what behaviors. Together, the entire brain wiring diagram is called the connectome.
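
In data terms, a connectome is a weighted directed graph: neurons are nodes, and each edge weight counts the synapses from one neuron onto another. A minimal sketch with toy entries (the neuron names are invented; this is not the FlyWire data format):

```python
from collections import defaultdict

class Connectome:
    def __init__(self):
        # synapses[pre][post] = number of synapses from neuron `pre` onto `post`
        self.synapses = defaultdict(lambda: defaultdict(int))

    def add_synapse(self, pre: str, post: str, count: int = 1):
        self.synapses[pre][post] += count

    def downstream(self, neuron: str) -> dict:
        """All neurons this neuron projects to, with synapse counts."""
        return dict(self.synapses[neuron])

c = Connectome()
c.add_synapse("ORN_1", "PN_7", count=12)   # sensory neuron -> projection neuron
c.add_synapse("PN_7", "KC_42", count=3)    # projection neuron -> Kenyon cell
print(c.downstream("ORN_1"))  # {'PN_7': 12}
```

Tracing a circuit then amounts to walking this graph from sensory inputs toward motor outputs.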

Previously, scientists had only fully mapped the connectome of a tiny worm with just over 300 neurons. Even so, the feat launched a revolution in neuroscience by highlighting the role of neural circuits, rather than individual cells, in steering behavior.

The fruit fly brain is bigger and far more complex. It’s densely packed with hundreds of thousands of neurons, each intricately connected to others. A single faulty reconstruction could derail our understanding of the brain’s original instructions: Rather than sending a signal down one neural highway, it could be interpreted as taking another road that leads nowhere.

The project began over a decade ago, when Davi Bock and colleagues at the Janelia Research Campus imaged the entire fly brain at nanoscale resolution. They “fossilized” the brain of a female fly using a chemical soup, froze it to preserve its delicate connections, and sliced it into wafers.

Using a high-resolution microscope, the scientists took images of every slice. Overall, the project produced roughly 21 million images from over 7,000 brain slices.

This wealth of data was a triumph, but also a problem. Usually, each image had to be manually examined for potential connections—an obvious headache when analyzing millions of images.

Here’s where AI comes in. Seung has long championed using AI to untangle neural wiring from individual images and 3D recreations. With AI becoming increasingly sophisticated, it’s easier for different models to learn how to identify a synapse or the branches of a neuron.

But initial AI systems were imperfect. Overlapping neural wires from two circuits could be interpreted as one: Imagine a satellite view of a tricky highway interchange that confuses your phone’s GPS system. A giant tangle of neural connections from multiple sources could be labeled as a single source, rather than a hub directing the flow of information.

Scientists in the consortium spent years manually proofreading AI-generated results. But they had help. Seung and colleagues enlisted crowd input. His earlier project, Eyewire, gamified the brain-mapping process by asking citizen scientists to detect neural connections critical for vision.

FlyWire built a similar online platform in 2022, allowing hundreds of people interested in the brain, but with no formal training, to proofread AI reconstructions and classify neurons based on their shape.

Done alone, the project would have taken a single person an estimated 33 years. By sharing data and recruiting citizen scientists, the team constructed the entire connectome in a fraction of that time. According to study author Gregory Jefferis at the University of Cambridge, the volunteers and scientists made more than three million edits to the AI’s initial results. They also annotated the maps—for example, labeling different cells—providing context for the viewer.

Throughout the process, the consortium released versions of its data so researchers could tap into the expanding dataset. Even without the entire map, scientists have already begun exploring ideas about how the fly’s brain works.

Brain Cartographer

The final map captures over 54 million synapses between roughly 140,000 neurons. It also includes over 8,000 different types of neuron—far more than anyone expected. Incredibly, nearly half were newly discovered for the species.
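
Those headline numbers imply dense wiring. A quick back-of-envelope calculation:

```python
# Rough density implied by the map's headline numbers:
# ~54 million synapses among ~140,000 neurons.
synapses = 54_000_000
neurons = 140_000

avg_per_neuron = synapses / neurons
print(f"~{avg_per_neuron:.0f} synapses per neuron on average")  # ~386
```

That average is only a crude summary; as the article notes, real connectivity is highly uneven across neuron types.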

To Seung, each new cell type poses “a question” about how it influences brain functions.

The fly’s brain was also interconnected to a surprising degree. Neurons that allowed the fly to see also received sound and touch cues, suggesting these senses are wired together.

The connectome data is already spurring new studies and ideas. One team made a digitized fly brain from all the mapped neurons and connections. They then activated artificial neurons that can detect honey or bitter flavors. The virtual brain responded by sticking out the fly’s “tongue” when it detected sweet flavors.
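
A cartoon of that "digital fly brain" experiment: activate sugar-sensing neurons, propagate activity along weighted connections, and check whether the motor population for proboscis ("tongue") extension crosses a firing threshold. The names, weights, and update rule below are invented for illustration and are not the study's actual model.

```python
weights = {
    # (pre, post): connection strength (invented values)
    ("sugar_sensor", "taste_interneuron"): 0.9,
    ("taste_interneuron", "proboscis_motor"): 0.8,
    ("bitter_sensor", "proboscis_motor"): -0.7,  # bitter suppresses extension
}

def propagate(inputs: dict, steps: int = 2) -> dict:
    """Spread activity through the network for a few synaptic hops.
    Activity simply accumulates each step: a cartoon, not a neuron model."""
    activity = dict(inputs)
    for _ in range(steps):
        nxt = dict(activity)
        for (pre, post), w in weights.items():
            nxt[post] = nxt.get(post, 0.0) + activity.get(pre, 0.0) * w
        activity = nxt
    return activity

out = propagate({"sugar_sensor": 1.0})
extends = out.get("proboscis_motor", 0.0) > 0.5  # firing threshold
print("proboscis extends:", extends)  # True: sugar input reaches the motor neurons
```

Running the same sketch with `{"bitter_sensor": 1.0}` drives the motor activity negative, so the "tongue" stays put—the qualitative behavior the virtual fly brain reproduced.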

“For decades, we haven’t known what the taste neurons in the brain are,” study author Anita Devineni at Emory University told Science. “And then, all of a sudden in a small amount of time … you can figure it out.”

Other studies using the map found neural circuits for walking, grooming, and feeding—all of which are essential to the fly’s (and our) everyday routine.

The connectome does have some limitations, though. It’s based on a single female fruit fly. Brains are highly individualized in their connections, especially across sexes and ages. The decade-long effort is just a snapshot of one brain at one moment in time.

However, the map could still help researchers discover fundamental ways the brain works, like how wiring between certain brain regions allows them to “talk” more efficiently.

The team is already looking to expand the work to a mouse brain with roughly 500 times more neurons than the fly. Similar efforts have already charted synapses in the mouse brain, but the new study’s technology could yield comprehensive maps of neural connections across the entire brain.

“This achievement is not just remarkable, it’s outstanding,” Moritz Helmstaedter at the Max Planck Institute for Brain Research, who was not involved in the project, told Science. “In the next decade, we’ll see tremendous progress, and possibly the first full whole mammalian brain connectome.”

Image Credit: Amy Sterling, Murthy and Seung labs, Princeton University, (Baker et al., Current Biology, 2022)

Category: Transhumanism

Meta Has Launched the World’s ‘Most Advanced’ Glasses. Will They Replace Smartphones?

Singularity HUB - October 1, 2024 - 18:29

Humans are increasingly engaging with wearable technology as it becomes more adaptable and interactive. One of the most intimate forms gaining acceptance is augmented reality glasses.

Last week, Meta debuted a prototype of the most recent version of its AR glasses—Orion. They look like reading glasses and use holographic projection to let users see graphics projected through transparent lenses into their field of view.

Meta chief Mark Zuckerberg called Orion “the most advanced glasses the world has ever seen.” He said they offer a “glimpse of the future” in which smart glasses will replace smartphones as the main mode of communication.

But is this true or just corporate hype? And will AR glasses actually benefit us in new ways?

Old Technology, Made New

The technology used to develop Orion glasses is not new.

In the 1960s, computer scientist Ivan Sutherland introduced the first augmented reality head-mounted display. Two decades later, Canadian engineer and inventor Stephen Mann developed the first glasses-like prototype.

Throughout the 1990s, researchers and technology companies developed the capability of this technology through head-worn displays and wearable computing devices. Like many technological developments, these were often initially focused on military and industry applications.

In 2013, after smartphone technology emerged, Google entered the AR glasses market. But consumers were uninterested, citing concerns about privacy, high cost, limited functionality, and a lack of a clear purpose.

This did not discourage other companies—such as Microsoft, Apple, and Meta—from developing similar technologies.

Looking Inside

Meta cites a range of reasons why Orion is the world’s most advanced pair of glasses, such as its miniaturized technology with large fields of view and holographic displays. It said these displays provide “compelling AR experiences, creating new human-computer interaction paradigms […] one of the most difficult challenges our industry has ever faced.”

Orion also has an inbuilt smart assistant (Meta AI) to help with tasks through voice commands, eye and hand tracking, and a wristband for swiping, clicking, and scrolling.

With these features, it is not difficult to agree that AR glasses are becoming more user-friendly for mass consumption. But gaining widespread consumer acceptance will be challenging.

A Set of Challenges

Meta will have to address four types of challenges:

  1. How easy it is to wear, use, and integrate AR glasses with other glasses
  2. Physiological aspects such as the heat the glasses generate, comfort, and potential vertigo
  3. Operational factors such as battery life, data security, and display quality
  4. Psychological factors such as social acceptance, trust in privacy, and accessibility

These factors are not unlike what we saw in the 2000s when smartphones gained acceptance. Just like then, there are early adopters who will see more benefits than risks in adopting AR glasses, creating a niche market that will gradually expand.

Similar to what Apple did with the iPhone, Meta will have to build a digital platform and ecosystem around Orion.

This will allow for broader applications in education (for example, virtual classrooms), remote work, and enhanced collaboration tools. Already, Orion’s holographic display allows users to overlay digital content on the real world, and because it is hands-free, communication will be more natural.

Creative Destruction

Smart glasses are already being used in many industrial settings, such as logistics and healthcare. Meta plans to launch Orion for the general public in 2027.

By that time, AI will have likely advanced to the point where virtual assistants will be able to see what we see and the physical, virtual, and artificial will co-exist. At this point, it is easy to see that the need for bulky smartphones may diminish and that through creative destruction, one industry may replace another.

This is supported by research indicating the virtual and augmented reality headset industry will be worth $370 billion by 2034.

The remaining question is whether this will actually benefit us.

There is already much debate about the effect of smartphone technology on productivity and wellbeing. Some argue that it has benefited us, mainly through increased connectivity, access to information, and productivity applications.

But others say it has just created more work, distractions, and mental fatigue.

If Meta has its way, AR glasses will solve this by enhancing productivity. Consulting firm Deloitte agrees, saying the technology will provide hands-free access to data and faster communication and collaboration through data sharing.

It also claims smart glasses will reduce human errors, enable data visualization, and monitor the wearer’s health and wellbeing. This will ensure a quality experience, social acceptance, and seamless integration with physical processes.

But whether or not that all comes true will depend on how well companies such as Meta address the many challenges associated with AR glasses.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Meta

Category: Transhumanism

This Biohybrid Robot Is Made of Human Cells and Controlled by a Machine ‘Mind’

Singularity HUB - September 30, 2024 - 23:55

In a tiny laboratory pond, a robotic stingray flaps its fins and swims around. Roughly the width of a dime, the bot dashes distances multiple times its body size. It easily navigates around corners and swims far longer than previous flapping microbots of a similar design.

Its secret? The robot is a biohybrid blend of living, human-derived neurons and muscle cells controlled by a programmable electronic “brain.” The cells cover a synthetic “skeleton” with fins and form dense connections like those that drive movement in our bodies.

Also onboard is a wireless electronic circuit with magnetic coils. The circuit controls the robot’s neurons—either amping up or damping their activity. In turn, the brain cells trigger muscle fibers. The robot can flap its fins separately or together with the flexibility of a stingray or a butterfly.

Watching the robot move is mesmerizing, but the study isn’t just about cool visuals.

Robots have long tapped into examples of movement in nature to increase their dexterity and reduce energy usage. For now, the biohybrid bots can only live and operate in a nutritious soup of chemicals. But unlike previous designs, the bots push the field into the “brain-to-motor frontier” and could lead to autonomous systems “capable of advanced adaptive motor control and learning,” wrote study author Su Ryon Shin at Harvard Medical School and colleagues.

The technology could be a boon for biomedicine. Because it’s often compatible with living bodies, “tissue-based biohybrid robotics offers additional interdisciplinary insights in human health, medicine, and fundamental research in biology,” wrote Nicole Xu at the University of Colorado Boulder, who was not involved in the research.

Nature’s Touch

Scientists have long sought to develop soft, agile, and flexible robots that can navigate different terrain while using minimal energy—a far cry from the rigid, mechanical Terminator.

Often, they look to nature for ideas.

Thanks to evolution, every species on Earth has a fine-tuned system of movement tailored to its survival. Although each system differs—the brain wiring behind a butterfly flapping its wings is hardly similar to that of a blue whale spreading its fins—one central concept connects them all.

Each species needs a system that connects movement to its environment and quickly responds to stimuli. While this comes naturally to living creatures, robots often stumble when faced with unexpected challenges.

“Animals typically have a higher performance—such as increased energy efficiency, agility, and damage tolerance—compared to their robotic counterparts because of evolutionary pressures driving biological adaptations,” wrote Xu.

It’s no wonder scientists look to nature to design bioinspired robots. Two favorites are rays and butterflies, both of which use very little energy to flap their fins or wings.

Last year, one team engineered a butterfly-like underwater robot with a synthetic hydrogel. Using light as a controller, it could flap its wings to swim upwards. Another mostly silicone minibot swam at high speeds with a “snapping” action, like a hair clip snapping shut.

Both bots used entirely engineered materials and needed actuators to sense stimuli, such as light or pressure, and alter the robot’s moving components. Though these designs worked, such actuators can often fail.

Brain Meets Machine

Enter biohybrid robots.

These bots use biological actuators to easily convert different types of energy used by the body—for example, automatically translating electricity or light into chemical energy.

The strategy has had successes, including ray-like robots that use muscle tissues to swim forward and turn using an external light source. Here, the light-controlled bots had a single layer of rat heart cells genetically engineered to respond to flashes of light. Compared to biobots built from purely synthetic materials, these could swim far longer.

The new study took this approach a step further by adding brain cells into the mix. Neurons form intricate connections with muscle cells to direct them when to flex.

The team used induced pluripotent stem cells (iPSCs) for their bot. Scientists make these cells by reverting skin cells into a stem cell-like state and then nudging them to form other cell types. In this case, they grew motor neurons, the brain cells that direct muscle movement, and muscle cells similar to those that keep the heart pumping. The cells linked up in a petri dish, allowing the neurons to control muscle contractions.

Living cells in hand, the team then assembled the robot’s two main components.

The first of these embeds neurons and muscle cells in a thin-film scaffold made of carbon nanotubes and gelatin—the main ingredient in Jello—and shaped into the robot’s body and fins.

The other is an “artificial brain” that controls the bot wirelessly using magnetic stimulation to change the electrical activity of the neurons, increasing or decreasing their activity.

Neuro-Bot

In several tests, the team showed they could control the biohybrid bot’s behavior as it navigated its pool. Using multiple frequencies, each activating neurons for either the left or right fin, they easily steered the bot in a direct line and made turns.

Depending on the input, the bot could also flap a single fin, both fins, or alternate fins. The latter increased its stamina for longer swims—a bit like alternating arms in kayaking.
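
The control scheme described above is frequency-multiplexed: each magnetic stimulation frequency targets one neuron population, which drives one fin. A minimal sketch of that mapping, with invented placeholder frequencies rather than the study's actual stimulation parameters:

```python
FIN_CHANNELS = {
    # stimulation frequency (Hz, invented) -> fin driven by the targeted neurons
    1.0: "left_fin",
    2.0: "right_fin",
}

def fins_activated(stimulus_frequencies: list[float]) -> set[str]:
    """Which fins flap for a given set of applied stimulation frequencies."""
    return {FIN_CHANNELS[f] for f in stimulus_frequencies if f in FIN_CHANNELS}

print(fins_activated([1.0]))       # {'left_fin'}: one fin flaps, so the bot turns
print(fins_activated([1.0, 2.0]))  # both fins: straight swimming
```

Alternating the two frequencies over time would give the fin-alternating gait the team used for longer swims.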

The bot’s neurons and muscle cells took the team by surprise by forming a type of connection that relies on electricity alone to transmit data. Normally, these connections, called synapses, need an additional chemical messenger to bridge communications, and they’re only one-way.

In contrast, the networks formed in the bot could transmit data in both directions, faster and for longer, controlling muscles for up to 150 seconds, roughly 7.5 times longer than standard chemical synapses. And compared to bio-inspired systems using only synthetic materials, the biohybrid bot slashed energy needs.

For now, the minibots can only survive in a nutrient-rich soup of chemicals. But they show living components can be seamlessly integrated with electronics and non-biological scaffolding. Living robots could form the next generation of organoids-on-a-chip for the study of diseases related to the brain and muscles or to test new drug treatments. Using purely electrical connections, which are easier to implement than standard chemical synapses, could help scale up the production of biohybrid bots.

“The advent of this bioelectronic neuromuscular robotic swimmer suggests a potential frontier [where we can] build autonomous biohybrid robotic systems that can achieve adaptive motor control, sensing, and learning,” wrote the team.

Image Credit: Hiroyuki Tetsuka

Category: Transhumanism

AI Matters. Character and Culture Matter More!

Singularity Weblog - September 30, 2024 - 17:11
The Stoics believed that character is fate. If that is true for individuals, then for organizations, culture is destiny. While AI is undeniably a transformative force shaping our civilization, it is our character and culture that will ultimately define its impact. AI matters, but character and culture matter more. The Role of Character and Culture […]
Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through September 28)

Singularity HUB - September 28, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

A Tiny New Open-Source AI Model Performs as Well as Powerful Big Ones
Melissa Heikkilä | MIT Technology Review
“[The Allen Institute for Artificial Intelligence (Ai2)] claims that its biggest Molmo model, which has 72 billion parameters, outperforms OpenAI’s GPT-4o, which is estimated to have over a trillion parameters, in tests that measure things like understanding images, charts, and documents. Meanwhile, Ai2 says a smaller Molmo model, with 7 billion parameters, comes close to OpenAI’s state-of-the-art model in performance, an achievement it ascribes to vastly more efficient data collection and training methods.”

AUGMENTED REALITY

Hands On With Orion, Meta’s First Pair of AR Glasses
Alex Heath | The Verge
“They look almost like a normal pair of glasses. That’s the first thing I notice as I walk into a conference room at Meta’s headquarters in Menlo Park, California. The black Clark Kent-esque frames sitting on the table in front of me look unassuming, but they represent CEO Mark Zuckerberg’s multibillion-dollar bet on the computers that come after smartphones. They’re called Orion, and they’re Meta’s first pair of augmented reality glasses.”

COMPUTING

Startup Says It Can Make a 100x Faster CPU
Dina Genkina | IEEE Spectrum
“Instead of trying to speed up computation by putting 16 identical CPU cores into, say, a laptop, a manufacturer could put 4 standard CPU cores and 64 of Flow Computing’s so-called parallel processing unit (PPU) cores into the same footprint, and achieve up to 100 times better performance.”

TECH

OpenAI to Become For-Profit Company
Deepa Seetharaman, Berber Jin, Tom Dotan | The Wall Street Journal
“OpenAI is planning to convert from a nonprofit organization to a for-profit company at the same time it is undergoing major personnel shifts including the abrupt resignation Wednesday of its chief technology officer, Mira Murati. Becoming a for-profit would be a seismic shift for OpenAI, which was founded in 2015 to develop AI technology ‘to benefit humanity as a whole, unconstrained by a need to generate financial return,’ according to a statement it published when it launched.”

ROBOTICS

Detachable Robotic Hand Crawls Around on Finger-Legs
Evan Ackerman | IEEE Spectrum
“One of the great things about robots is that they don’t have to be constrained by our constraints, and at ICRA@40 in Rotterdam this week, we saw a novel new Thing: a robotic hand that can detach from its arm and then crawl around to grasp objects that would be otherwise out of reach, designed by roboticists from EPFL in Switzerland.”

SECURITY

Remember That DNA You Gave 23andMe?
Kristen V. Brown | The Atlantic
“23andMe is not doing well. Its stock is on the verge of being delisted. It shut down its in-house drug-development unit last month, only the latest in several rounds of layoffs. Last week, the entire board of directors quit, save for Anne Wojcicki, a co-founder and the company’s CEO. Amid this downward spiral, Wojcicki has said she’ll consider selling 23andMe—which means the DNA of 23andMe’s 15 million customers would be up for sale, too.”

BIOTECH

An Ultrathin Graphene Brain Implant Was Just Tested in a Person
Emily Mullin | Wired
“Twenty years [after its discovery], graphene is finally making its way into batteries, sensors, semiconductors, air conditioners, and even headphones. And now, it’s being tested on people’s brains. This [week], surgeons at the University of Manchester temporarily placed a thin, Scotch-tape-like implant made of graphene on the patient’s cortex—the outermost layer of the brain. Made by Spanish company InBrain Neuroelectronics, the technology is a type of brain-computer interface, a device that collects and decodes brain signals.”

3D PRINTING

First 3D-Printed Hotel Ever Is Underway in Texas
Evan Garcia | Reuters
“It looks like any other 3D printer—except it’s the size of a crane and is, layer by layer, building a hotel in the Texan desert. El Cosmico, an existing hotel and campground on the outskirts of the city of Marfa, is expanding. It is building 43 new hotel units and 18 residential homes over 60 acres (24 hectares)—all with a 3D printer.”

COMPUTING

AI Bots Now Beat 100% of Those Traffic-Image CAPTCHAs
Kyle Orland | Ars Technica
“While there have been previous academic studies attempting to use image-recognition models to solve reCAPTCHAs, they were only able to succeed between 68 to 71 percent of the time. The rise to a 100 percent success rate ‘shows that we are now officially in the age beyond captchas,’ according to the new paper’s authors.”

Image Credit: Victor / Unsplash

Category: Transhumanism

AI and Scientists Face Off to See Who Can Come Up With the Best Ideas

Singularity HUB - September 28, 2024 - 00:11

Scientific breakthroughs rely on decades of diligent work and expertise, sprinkled with flashes of ingenuity and, sometimes, serendipity.

What if we could speed up this process?

Creativity is crucial when exploring new scientific ideas. It doesn’t come out of the blue: Scientists spend decades learning about their field. Each piece of information is like a puzzle piece that can be reshuffled into a new theory—for example, how different anti-aging treatments converge, or how the immune system influences dementia or cancer, to develop new therapies.

AI tools could accelerate this. In a preprint study, a team from Stanford pitted a large language model (LLM)—the type of algorithm behind ChatGPT—against human experts in the generation of novel ideas over a range of research topics in artificial intelligence. Each idea was evaluated by a panel of human experts who didn’t know if it came from AI or a human.

Overall, ideas generated by AI were more out-of-the-box than those from human experts. They were also rated less likely to be feasible. That’s not necessarily a problem. New ideas always come with risks. In a way, the AI reasoned like human scientists willing to pursue high-stakes, high-reward ideas: proposals grounded in previous research, just a bit more creative.

The study, almost a year long, is one of the biggest yet to vet LLMs for their research potential.

The AI Scientist

Large language models, the AI algorithms taking the world by storm, are galvanizing academic research.

These algorithms scrape data from the digital world, learn patterns in the data, and use these patterns to complete a variety of specialized tasks. Some algorithms are already aiding research scientists. Some can solve challenging math problems. Others are “dreaming up” new proteins to tackle some of our worst health problems, including Alzheimer’s and cancer.

Although helpful, these tools only assist in the later stages of research—that is, when scientists already have ideas in mind. What about having an AI come up with new ideas in the first place?

AI can already help draft scientific articles, generate code, and search scientific literature. These steps are akin to when scientists first begin gathering knowledge and form ideas based on what they’ve learned.

Some of these ideas are highly creative, in the sense that they could lead to out-of-the-box theories and applications. But creativity is subjective. One way to gauge potential impact and other factors for research ideas is to call in a human judge, blinded to the experiment.

“The best way for us to contextualize such capabilities is to have a head-to-head comparison” between AI and human experts, study author Chenglei Si told Nature.

The team recruited over 100 computer scientists with expertise in natural language processing to come up with ideas, act as judges, or both. These experts are especially well-versed in how computers can communicate with people using everyday language. The team pitted 49 participants against a state-of-the-art LLM based on Anthropic’s Claude 3.5. The scientists earned $300 per idea plus an additional $1,000 if their idea scored in the top 5 overall.

Creativity, especially when it comes to research ideas, is hard to evaluate. The team used two measures. First, they looked at the ideas themselves. Second, they asked AI and participants to produce writeups simply and clearly communicating the ideas—a bit like a school report.

They also tried to reduce AI “hallucinations”—when a bot strays from the factual and makes things up.

The team trained their AI on a vast catalog of research articles in the field and asked it to generate ideas in each of seven topics. To sift through the generated ideas and choose the best ones, the team engineered an automatic “idea ranker” based on review scores and publication acceptance data from a popular computer science conference.
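The paper’s actual ranker is tied to its review data, but the general shape of such a system can be sketched. The version below is a hypothetical illustration, not the study’s method: it ranks ideas by round-robin pairwise comparison, where `judge` stands in for an LLM prompted with review criteria (here replaced by a toy stand-in).

```python
from itertools import combinations

def rank_ideas(ideas, judge):
    """Rank ideas by round-robin pairwise comparison.

    `judge(a, b)` returns whichever idea it prefers. In a real system this
    would call an LLM prompted with review criteria; here it's a stand-in.
    """
    wins = {idea: 0 for idea in ideas}
    for a, b in combinations(ideas, 2):
        wins[judge(a, b)] += 1
    # More pairwise wins -> higher rank.
    return sorted(ideas, key=lambda idea: wins[idea], reverse=True)

# Toy judge for demonstration only: prefers the longer writeup.
def toy_judge(a, b):
    return a if len(a) >= len(b) else b

ideas = ["short pitch", "a much more detailed proposal", "medium idea"]
print(rank_ideas(ideas, toy_judge))
# -> ['a much more detailed proposal', 'short pitch', 'medium idea']
```

Pairwise comparison tends to be more reliable than asking a judge for absolute scores, which is why tournament-style setups are a common design choice for LLM-as-judge ranking.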

The Human Critic

To make it a fair test, the judges didn’t know which responses were from AI. To disguise them, the team translated submissions from humans and AI into a generic tone using another LLM. The judges evaluated ideas on novelty, excitement, and—most importantly—if they could work.

After aggregating reviews, the team found that, on average, ideas generated by human experts were rated less exciting than those by AI, but more feasible. As the AI generated more ideas, however, they became less novel, with the model increasingly producing duplicates. Digging through the AI’s nearly 4,000 ideas, the team found around 200 unique ones that warranted more exploration.

But many weren’t reliable. Part of the problem stems from the fact that the AI made unrealistic assumptions. It hallucinated ideas that were “ungrounded and independent of the data” it was trained on, wrote the authors. The LLM generated ideas that sounded new and exciting but weren’t necessarily practical for AI research, often because of latency or hardware problems.

“Our results indeed indicated some feasibility trade-offs of AI ideas,” wrote the team.

Novelty and creativity are also hard to judge. Though the study tried to reduce the likelihood that the judges could tell which submissions were AI and which were human by rewriting them with an LLM, changes in length or wording introduced by this game of telephone may have subtly influenced how the judges perceived submissions—especially when it comes to novelty. Also, the human experts asked to come up with ideas were given limited time to do so. They admitted their ideas were about average compared to their past work.

The team agrees there’s more to be done when it comes to evaluating AI generation of new research ideas. They also suggested AI tools carry risks worthy of attention.

“The integration of AI into research idea generation introduces a complex sociotechnical challenge,” they said. “Overreliance on AI could lead to a decline in original human thought, while the increasing use of LLMs for ideation might reduce opportunities for human collaboration, which is essential for refining and expanding ideas.”

That said, new forms of human-AI collaboration, including AI-generated ideas, could be useful for researchers as they investigate and choose new directions for their research.

Image Credit: Calculator Land / Pixabay


Scientists Say Net Zero Aviation Is Possible by 2050—If We Act Now

Singularity HUB - September 27, 2024 - 20:55

Aviation has proven to be one of the most stubbornly difficult industries to decarbonize. But a new roadmap outlined by University of Cambridge researchers says the sector could reach net zero by 2050 if urgent action is taken.

The biggest challenge when it comes to finding alternatives to fossil fuels in aviation is basic physics. Jet fuel is incredibly energy dense, which is crucial for a mode of transport where weight savings can dramatically impact range.

While efforts are underway to build planes powered by batteries, hydrogen, or methane, none can come close to matching kerosene, pound for pound, at present. Sustainable aviation fuel is another option, but so far, its uptake has been limited, and its green credentials are debatable.

Despite this, the authors of a new report from the University of Cambridge’s Aviation Impact Accelerator (AIA) say that with a concerted effort the industry can clean up its act. The report outlines four key sustainable aviation goals that, if implemented within the next five years, could help the sector become carbon neutral by the middle of the century.

“Too often the discussions about how to achieve sustainable aviation lurch between overly optimistic thinking about current industry efforts and doom-laden cataloging of the sector’s environmental evils,” Eliot Whittington, executive director at the Cambridge Institute for Sustainability Leadership, said in a press release.

“The Aviation Impact Accelerator modeling has drawn on the best available evidence to show that there are major challenges to be navigated if we’re to achieve net zero flying at scale, but that it is possible.”

The report notes that time is of the essence. Aviation is responsible for roughly 4 percent of global warming despite only 10 percent of the population flying, a figure that’s likely to rise as the world continues to develop. Despite global leaders pledging to make aviation net zero, current efforts to get there are not ambitious enough, the authors say.

After researching the interventions that could have the biggest impact, and following discussions at the inaugural meeting of the Transatlantic Sustainable Aviation Partnership at MIT last year, AIA came up with four focus areas that could put net zero within reach.

The first of these is to reduce contrails. While most of the focus is on emissions from burning jet fuel, the generation of persistent contrails can trap heat in the atmosphere and add significantly to warming.

Contrails can be avoided by adjusting an aircraft’s altitude in areas where they’re most likely to form, but the underlying science is poorly understood, as are potential strategies for adjusting air traffic. Therefore, the report suggests setting up several “living labs” in existing airspace to conduct data collection and experiments. These should be ready by the end of 2025, say the authors.

The second goal is to reduce the amount of fuel airplanes use by introducing new aircraft and engine designs, improving operational efficiency of the sector, or just getting aircraft to fly slower. To catalyze action, governments need to set clear policies, such as establishing fuel burn reduction targets, loan guarantees for new aircraft purchases, or incentives to scrap old airplanes.

The third goal is to ensure sustainable aviation fuel is actually sustainable, and its production is scalable. Most sustainable fuels rely on biomass, but limitations on production and competition from other sectors could mean they can’t deliver the hoped-for emissions reductions.

In the near term, the report suggests aviation will have to work with other industries to set best practices and limit total cross-sector emissions. And in the long run, the industry will have to make efforts to find alternative ways to develop synthetic sustainable fuels.

Lastly, the report argues the industry also needs to invest in “moonshot” technologies. By 2025, aviation should launch several high-risk, high-reward demonstration programs in technologies that could be truly transformative for the sector. These include the development of cryogenic hydrogen or methane fuels, hydrogen-electric propulsion technology, or the use of synthetic biology to dramatically lower the energy demands for sustainable fuel production.

The report’s authors stress that, although they are confident these interventions could have the desired impact, time is of the essence. History suggests that getting global leaders to take decisive action on climate issues is tricky, but at least they now have a concrete roadmap.

Image Credit: John McArthur / Unsplash


Cosmology Is at a Tipping Point—We May Be on the Verge of Discovering New Physics

Singularity HUB - September 24, 2024 - 20:57

For the past few years, a series of controversies have rocked the well-established field of cosmology. In a nutshell, the predictions of the standard model of the universe appear to be at odds with some recent observations.

There are heated debates about whether these observations are biased, or whether the cosmological model, which predicts the structure and evolution of the entire universe, may need a rethink. Some even claim that cosmology is in crisis. Right now, we do not know which side will win. But excitingly, we are on the brink of finding that out.

To be fair, controversies are just the normal course of the scientific method. And over many years, the standard cosmological model has had its share of them. This model suggests the universe is made up of 68.3 percent “dark energy” (an unknown substance that causes the universe’s expansion to accelerate), 26.8 percent dark matter (an unknown form of matter) and 4.9 percent ordinary atoms, very precisely measured from the cosmic microwave background—the afterglow of radiation from the Big Bang.

It very successfully explains a multitude of observations across both large and small scales of the universe. For example, it can explain things like the distribution of galaxies around us and the amount of helium and deuterium made in the universe’s first few minutes. Perhaps most importantly, it can also perfectly explain the cosmic microwave background.

This has led to it gaining a reputation as the “concordance model.” But a perfect storm of inconsistent measurements—or “tensions,” as they’re known in cosmology—is now calling the validity of this longstanding model into question.

Uncomfortable Tensions

The standard model makes particular assumptions about the nature of dark energy and dark matter. But despite decades of intense observation, we still seem no closer to working out what dark matter and dark energy are made of.

The litmus test is the so-called Hubble tension. This relates to the Hubble constant, which is the rate of expansion of the universe at the present time. When measured in our nearby, local universe, from the distance to pulsating stars in nearby galaxies, called Cepheids, its value is 73 km/s/megaparsec (Mpc is a unit of measure for distances in intergalactic space). However, when predicted theoretically, the value is 67.4 km/s/Mpc. The difference may not be large (only 8 percent), but it is statistically significant.
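As a quick sanity check on the figures above, the gap between the locally measured and theoretically predicted values works out as follows:

```python
# Hubble constant values quoted above, in km/s/Mpc.
h_local = 73.0   # measured from Cepheids in nearby galaxies
h_model = 67.4   # predicted from the standard cosmological model

gap_percent = (h_local - h_model) / h_model * 100
print(f"Fractional difference: {gap_percent:.1f}%")  # -> Fractional difference: 8.3%
```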

The Hubble tension became known about a decade ago. Back then, it was thought that the observations may have been biased. For example, the Cepheids, although very bright and easy to see, were crowded together with other stars, which could have made them appear even brighter. This could have made the Hubble constant higher by a few percent compared to the model prediction, thus artificially creating a tension.

With the advent of the James Webb Space Telescope (JWST), which can separate the stars individually, it was hoped that we would have an answer to this tension.

Frustratingly, this hasn’t yet happened. Astronomers now use two other types of stars besides the Cepheids: the tip of the red giant branch (TRGB) stars and the J-region asymptotic giant branch (JAGB) stars. But while one group has reported values from the JAGB and TRGB stars that are tantalizingly close to the value expected from the cosmological model, another group has claimed that they are still seeing inconsistencies in their observations. Meanwhile, the Cepheid measurements continue to show a Hubble tension.

It’s important to note that although these measurements are very precise, they may still be biased by effects uniquely associated with each type of measurement. This will affect the accuracy of the observations in a different way for each type of star. A precise but inaccurate measurement is like trying to have a conversation with a person who is always missing the point. To resolve disagreements between conflicting data, we need measurements that are both precise and accurate.

The good news is that the Hubble tension is now a rapidly developing story. Perhaps we will have the answer within the next year or so. Improving the accuracy of the data, for example by including stars from more distant galaxies, will help sort this out. Similarly, measurements of ripples in spacetime known as gravitational waves will also help us pin down the constant.

This may all vindicate the standard model. Or it may hint that there’s something missing from it. Perhaps the nature of dark matter or the way that gravity behaves on specific scales is different to what we believe now. But before discounting the model, one has to marvel at its unmatched precision. It only misses the mark by at most a few percent, while extrapolating over 13 billion years of evolution.

To put it into perspective, even the clockwork motions of planets in the solar system can only be computed reliably for less than a billion years, after which they become unpredictable. The standard cosmological model is an extraordinary machine.

The Hubble tension is not the only trouble for cosmology. Another one, known as the “S8 tension,” is also causing trouble, albeit not on the same scale. Here the model has a smoothness problem, by predicting that matter in the universe should be more clustered together than we actually observe—by about 10 percent. There are various ways to measure the “clumpiness” of matter, for example by analyzing the distortions in the light from galaxies produced by the assumed dark matter intervening along the line of sight.

Currently, there seems to be a consensus in the community that the uncertainties in the observations have to be teased out before ruling out the cosmological model. One possible way to alleviate this tension is to better understand the role of gaseous winds in galaxies, which can push out some of the matter, making it smoother.

Understanding how clumpiness measurements on small scales relate to those on larger scales would help. Observations might also suggest there is a need to change how we model dark matter. For example, if instead of being made entirely of cold, slow moving particles, as the standard model assumes, dark matter could be mixed up with some hot, fast-moving particles. This could slow down the growth of clumpiness at late cosmic times, which would ease the S8 tension.
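To make the “clumpiness” comparison concrete, cosmologists usually quote the parameter S8 = σ8·√(Ωm/0.3), which combines the amplitude of matter fluctuations (σ8) with the matter density (Ωm). The sketch below uses approximate, illustrative values (assumptions, not figures from this article) for a CMB-based prediction versus a lensing-style measurement:

```python
import math

# Illustrative values only (roughly Planck-like; not from the article).
sigma8 = 0.81     # amplitude of matter fluctuations
omega_m = 0.315   # matter density parameter

s8_predicted = sigma8 * math.sqrt(omega_m / 0.3)

# Roughly what weak-lensing surveys report (again, illustrative).
s8_measured = 0.76

print(f"S8 predicted: {s8_predicted:.3f}")
print(f"S8 measured:  {s8_measured:.3f}")
print(f"Prediction exceeds measurement by {(s8_predicted / s8_measured - 1) * 100:.0f}%")
```

With numbers in this ballpark, the model predicts roughly 10 percent more clustering than lensing surveys observe, which is the gap the article describes.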

JWST has highlighted other challenges to the standard model. One of them is that early galaxies appear to be much more massive than expected. Some galaxies may weigh as much as the Milky Way does today, even though they formed less than a billion years after the Big Bang, when the model suggests they should be far less massive.

A region of star formation seen by JWST and the Chandra telescope. Image credit: X-ray: NASA/CXO/SAO; Infrared: NASA/ESA/CSA/STScI; Image processing: NASA/CXC/SAO/L. Frattare, CC BY

However, the implications against the cosmological model are less clear in this case, as there may be other possible explanations for these surprising results. Improving the measurement of stellar masses in galaxies is key to solving this problem. Rather than measuring them directly, which is not possible, we infer these masses from the light emitted by galaxies.

This step involves some simplifying assumptions, which could translate into overestimating the mass. Recently, it has also been argued that some of the light attributed to stars in these galaxies is generated by powerful black holes. This would imply that these galaxies may not be as massive after all.

Alternative Theories

So, where do we stand now? While some tensions may soon be explained by more and better observations, it is not yet clear whether there will be a resolution to all of the challenges battering the cosmological model.

There has been no shortage of theoretical ideas of how to fix the model though—perhaps too many, in the range of a few hundred and counting. That’s a daunting task for any theorist who might wish to explore them all.

The possibilities are many. Perhaps we need to change our assumptions about the nature of dark energy. Perhaps it is a parameter that varies with time, as some recent measurements have suggested. Or maybe we need to add more dark energy to the model to boost the expansion of the universe at early times, or, on the contrary, at late times. Modifying how gravity behaves on large scales of the universe (in a different way than the models known as modified Newtonian dynamics, or MOND) may also be an option.

So far, however, none of these alternatives can explain the vast array of observations the standard model can. Even more worrisome, some of them may help with one tension but worsen others.

The door is now open to all sorts of ideas that challenge even the most basic tenets of cosmology. For example, we may need to abandon the assumption that the universe is “homogeneous and isotropic” on very large scales, meaning it looks the same in all directions to all observers and suggesting there are no special points in the universe. Others propose changes to the theory of general relativity.

Some even imagine a trickster universe, which participates with us in the act of observation, or which changes its appearance depending on whether we look at it or not—something we know happens in the quantum world of atoms and particles.

In time, many of these ideas will likely be relegated to the cabinet of curiosities of theorists. But in the meantime, they provide a fertile ground for testing the “new physics.”

This is a good thing. The answer to these tensions will no doubt come from more data. In the next few years, a powerful combination of observations from experiments such as JWST, the Dark Energy Spectroscopic Instrument (DESI), the Vera Rubin Observatory and Euclid, among many others, will help us find the long-sought answers.

Tipping Point

On one side, more accurate data and a better understanding of the systematic uncertainties in the measurements could return us to the reassuring comfort of the standard model. Out of its past troubles, the model may emerge not only vindicated, but also strengthened, and cosmology will be a science that is both precise and accurate.

But if the balance tips the other way, we will be ushered into uncharted territory, where new physics will have to be discovered. This could lead to a major paradigm shift in cosmology, akin to the discovery of the accelerated expansion of the universe in the late 1990s. But on this path we may have to reckon, once and for all, with the nature of dark energy and dark matter, two of the big unsolved mysteries of the universe.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA, ESA, CSA, STScI, Webb ERO Production Team


First Panda Stem Cells Made in the Lab Bring New Hope for These Beloved Bears

Singularity HUB - September 23, 2024 - 23:28

Roughly 20 years ago, at the dawn of YouTube, a video of a sneezing baby panda—and its mom’s gasp in surprise—captured the internet’s heart.

With their distinctive black and white fur, giant pandas are known for their calm nature, playfulness, and utter cuteness. The gentle beasts are native to China, but their charm has enthralled the world, including bridging international relationships through “panda diplomacy.” The bear has been the logo for the World Wildlife Fund since its founding in 1961.

Despite conservation attempts, these cuddly bears are still incredibly vulnerable.

As of now, only 2,000 pandas remain in the wild. The animals live in small populations scattered across a few mountainous regions in midwestern China. They mostly eat bamboo, but in recent decades, bamboo forests have been decimated by human activities, like roadbuilding, logging, and the conversion of natural environments into pasture.

As the number of pandas dwindles, so do their chances of survival. A recent census showed pandas living in 33 isolated populations across their preferred landscapes. Roughly half of these groups face a risk of extinction as high as 90 percent in the years ahead.

Unfortunately, it’s a tale as old as time. “This iconic species faces substantial threats to its survival due to various human activities in its habitat,” wrote Jing Liu and colleagues at the Chinese Academy of Sciences in a recent study. Saving its habitat is one way to keep the species alive and thriving. But economic incentives make it a hard legislative hill to climb.

What about a backup plan?

Last week, in their study, Liu and his team took a page out of the de-extinction playbook to propose a new way to conserve pandas: Convert their skin cells into stem cells. These, in theory, can then be turned into any cell type in the body—including reproductive cells for breeding.

It “is really a great breakthrough in the field of giant panda conservation,” Thomas Hildebrandt at the Free University of Berlin, who was not involved in the research, told Science News.

Panda Academy

Pandas thrive in several provinces of China, where forests are lush with bamboo, their preferred food. The bears, with their signature black and white coats, are remarkably distinct from grizzlies or black bears. Their forepaws are especially agile. Like people lounging on couches, they use a thumb-like structure to grab and bring bamboo directly into their mouths, while keeping their bodies relatively still on the ground.

Although they have large teeth and a strong jaw, pandas are generally pacifists with a jolly personality. In nature, mothers raise their cubs for up to two years before sending them into the wild under a watchful eye.

Panda numbers rapidly dwindled in the 1980s. Deforestation, poaching, and loss of bamboo forests slashed their population to near extinction. Thanks to the World Wildlife Fund, their numbers have recently rebounded. Greater awareness of their plight garnered support, and their numbers have slowly grown in captivity and in the wild.

But their small population still poses a genetic conundrum. Inbreeding among groups can lead to genetic disease, loss of genetic diversity, and potentially less resilience against infections.

Genetic Reprise

A potential way to combat these problems is to develop induced pluripotent stem cells (iPSCs) in pandas. The Nobel Prize-winning technology has taken the biomedical field by storm over the last two decades by showing skin cells can be reverted into a stem cell-like state.

The trick already works in human and mouse skin cells. Researchers use it to grow iPSCs into mini-brains, embryo-like structures, and early reproductive cells.

The technology has “shown promising outcomes in the conservation” of genes in multiple endangered species too, the authors wrote. Among these are the northern white rhinoceros, the Tasmanian devil, the Sumatran rhinoceros, and others.

But the recipe for making iPSCs differs between species. Reprogramming genes that work in mice and human cells doesn’t always work in other cell types or species.

“The recipe from the mouse is not necessarily directly applicable to other species, even within mammalian species,” the Smithsonian’s Pierre Comizzoli, who was not involved in the study, said in an interview with The Scientist.

Panda-monium

A few years back, researchers found a way to transform cells from the soft part of a panda’s cheek into little bulbs of a particular stem cell type. These could be coaxed into some types of skin and other cells, but they lacked the flexibility to generate any tissues.

The new study aimed to remedy this by reprogramming skin cells into iPSCs.

The team took skin samples from a male and female named Xingrong and Loubao. The procedure involved painlessly scraping off skin cells, a bit like a daily skincare routine.

After collecting the cells, the team bathed them in a chemical soup to help the cells grow and divide. A few additional genes transformed them into giant panda iPSCs.

“The clones were very beautiful. We were so excited,” Liu told The Scientist.

The engineered panda stem cells were close to those normally developed inside the body. Although not yet an exact mimic, the engineered cells form a foundation for how panda cells develop. The library of genetic changes, in turn, could help with their preservation.

The team also tested the engineered stem cells on a hallmark of development. Stem cells form three different layers of cells, each of which can develop into various tissues and organs. In petri dishes, the panda iPSCs mimicked the process, generating cells and protein signals that roughly mirrored the early stages of reproductive cell formation.

The results show how reprogramming cells could help us preserve and study endangered species. Adding panda iPSCs to our evolutionary library is another step toward conserving the lovable bears. With more work, the cells could potentially be turned into sperm and eggs in a lab, without harming any pandas in the process. The reprogrammed cells may also become a useful proxy scientists can use to test therapies that increase panda fertility.

But realizing those ideas is still off in the future.

“The most immediate applications are in regenerative medicine to treat sick pandas and to better understand the embryology or fetal development of these animals,” said Comizzoli.

Image Credit: Pascal Müller / Unsplash


This Week’s Awesome Tech Stories From Around the Web (Through September 21)

Singularity HUB - September 21, 2024 - 16:00
TECH

AI’s Hungry Maw Drives Massive $100B Investment Plan by Microsoft and BlackRock
Benj Edwards | Ars Technica
“The partnership initially aims to raise $30 billion in private equity capital, which could later turn into $100 billion in total investment when including debt financing. The group will invest in data centers and supporting power infrastructure for AI development. ‘The capital spending needed for AI infrastructure and the new energy to power it goes beyond what any single company or government can finance,’ Microsoft President Brad Smith said in a statement.”

COMPUTING

Challengers Are Coming for Nvidia’s Crown
Matthew S. Smith | IEEE Spectrum
“[Nvidia has] a deep, broad moat with which to defend its business, but that doesn’t mean it lacks competitors ready to storm the castle, and their tactics vary widely. While decades-old companies like Advanced Micro Devices (AMD) and Intel are looking to use their own GPUs to rival Nvidia, upstarts like Cerebras and SambaNova have developed radical chip architectures that drastically improve the efficiency of generative AI training and inference. These are the competitors most likely to challenge Nvidia.”

BIOTECH

Moderna’s ‘Off-the-Shelf’ Cancer Vaccine Shows Promise in Early Human Trial Data
Ed Cara | Gizmodo
“The future of cancer treatment is continuing to look bright. Over the weekend, researchers in the UK announced encouraging results from an early trial testing an mRNA vaccine against advanced solid cancers. The vaccine, developed by Moderna, is designed to help people’s immune systems better recognize and kill cancerous cells.”

ROBOTICS

1X Releases Generative World Models to Train Robots
Ben Dickson | VentureBeat
“1X’s new system is inspired by innovations such as OpenAI Sora and Runway, which have shown that with the right training data and techniques, generative models can learn some kind of world model and remain consistent through time. However, while those models are designed to generate videos from text, 1X’s new model is part of a trend of generative systems that can react to actions during the generation phase.”

GENE THERAPY

First Day of a ‘New Life’ for a Boy With Sickle Cell
Gina Kolata | The New York Times
“Kendric Cromer, 12, is among the first patients to be treated with gene therapy just approved by the FDA that many other patients face obstacles to receiving. …Last December, the Food and Drug Administration gave approval to two companies, Bluebird Bio of Somerville, Mass., and Vertex Pharmaceuticals of Boston, to sell the first gene therapies approved for sickle cell disease. After nine months, Kendric remains the first Bluebird patient to progress this far, with at least a few others advancing toward his pace.”

ARTIFICIAL INTELLIGENCE

OpenAI’s New Model Is Better at Reasoning and, Occasionally, Deceiving
Kylie Robison | The Verge
“While AI models have been able to ‘lie’ in the past, and chatbots frequently output false information, o1 had a unique capacity to ‘scheme’ or ‘fake alignment.’ That meant it could pretend it’s following the rules to complete a given task, but it isn’t actually. To the model, the rules could be too much of a burden, and it seems to have the ability to disregard them if it means it can more easily complete a task.”

GOVERNANCE

AI-Generated Content Doesn’t Seem to Have Swayed Recent European Elections
Melissa Heikkilä | MIT Technology Review
“Since the beginning of the generative-AI boom, there has been widespread fear that AI tools could boost bad actors’ ability to spread fake content with the potential to interfere with elections or even sway the results. Such worries were particularly heightened this year, when billions of people were expected to vote in over 70 countries. Those fears seem to have been unwarranted, says Sam Stockwell, the researcher at the Alan Turing Institute who conducted the study.”

ENERGY

Every Fusion Startup That Has Raised Over $300M
Tim De Chant | TechCrunch
“Over the last several years, fusion power has gone from the butt of jokes—always a decade away!—to an increasingly tangible and tantalizing technology that has drawn investors off the sidelines. …Founders have built on that momentum in recent years, pushing the private fusion industry forward at a rapid pace. Fusion startups have raised $7.1 billion to date, according to the Fusion Industry Association, with the majority of it going to a handful of companies.”

TECH

I Stared Into the AI Void With the SocialAI App
Lauren Goode | Wired
“Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement this week of the app read a little like a generative AI joke: ‘A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.’ But, no, SocialAI is real, if ‘real’ applies to an online universe in which every single person you interact with is a bot.”

AUGMENTED REALITY

Here’s What I Made of Snap’s New Augmented-Reality Spectacles
Mat Honan | The Guardian
“Before I get to Snap’s new Spectacles, a confession: I have a long history of putting goofy new things on my face and liking it. …[I spent] the better part of [a] year with Google’s ridiculous Glass on my face and thought it was the future. Microsoft HoloLens? Loved it. Google Cardboard? Totally normal. Apple Vision Pro? A breakthrough, baby. …I got to try [Snap’s new AR glasses] out a couple of weeks ago. They are pretty great! (But also: See above.)”

GOVERNANCE

There Are More Than 120 AI Bills in Congress Right Now
Scott J. Mulligan | MIT Technology Review
“US policymakers have an ‘everything everywhere all at once’ approach to regulating artificial intelligence, with bills that are as varied as the definitions of AI itself. …That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord.”

Image Credit: Shubham Dhage / Unsplash

Kategorie: Transhumanismus

DNA Computing Evolves: New System Stores Data, Plays Chess, and Solves Sudoku Puzzles

Singularity HUB - 20 September 2024 - 16:00

DNA is nature’s computing device.

Unlike data centers, DNA is incredibly compact. These molecules package an entire organism’s genetic blueprint into tiny but sophisticated structures inside each cell. Kept cold—say, inside a freezer or in the Siberian tundra—DNA and the data encoded within can last millennia.

But DNA is hardly just a storage device. Myriad molecules turn genes on and off—a bit like selectively running bits of code—to orchestrate everyday cellular functions. The body “reads” bits of the genetic code in a particular cell at a specific time and, together, compiles the data into a smoothly operating, healthy life.

Scientists have long eyed DNA as a computing device to complement everyday laptops. With the world’s data increasing at an exponential rate, silicon chips are struggling to meet the demands of data storage and computation. The rise of large language models and other modes of artificial intelligence is further pushing the need for alternative solutions.

But the problem with DNA storage is that the molecules are often destroyed when the data within is “read.”

Last month, a team from North Carolina State University and Johns Hopkins University found a workaround. They embedded DNA molecules, encoding multiple images, into a branched gel-like structure resembling a brain cell.

Dubbed “dendricolloids,” the structures stored DNA files far better than freeze-dried DNA alone. DNA within dendricolloids can be dried and rehydrated roughly 170 times without damaging the stored data. According to one estimate, each DNA strand could last over two million years at normal freezer temperatures.

Unlike previous DNA computers, the data can be erased and replaced like memory on classical computers to solve multiple problems—including a simple chess game and sudoku.

Until now, DNA was mainly viewed as a long-term storage device or single-use computer. Developing DNA technology that can store, read, “rewrite, reload, or compute specific data files” repeatedly seemed difficult or impossible, said study author Albert Keung in a press release.

However, “we’ve demonstrated that these DNA-based technologies are viable, because we’ve made one,” he said.

A Grain of Sand

This is hardly the first attempt to hijack the code of life to increase storage and computation.

The first steps were in data storage. Our computers run on binary bits of information encoded in zeros and ones. DNA, in contrast, uses four different molecules, typically represented by the letters A, T, C, and G. This means each DNA letter can encode a different pair of bits—00, 01, 10, or 11. Because of the way it’s packaged in cells, DNA can theoretically store far more data in less space than digital devices.

“You could put a thousand laptops’ worth of data into DNA-based storage that’s the same size as a pencil eraser,” said Keung.
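Since each base carries two bits, this encoding can be sketched in a few lines of Python. The particular 00/01/10/11-to-letter assignment below is an arbitrary illustration, not the codec used in the study; practical DNA storage codecs also add error correction and avoid troublesome sequences such as long runs of one base:

```python
# Illustrative two-bits-per-base codec. The letter assignment is an
# arbitrary choice for demonstration purposes only.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases (two bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases back to one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"DNA")
print(strand)          # each input byte becomes four bases
print(decode(strand))  # round-trips back to b"DNA"
```

The four-to-one packing, plus the physical density of the molecule itself, is where the “thousand laptops in a pencil eraser” figure comes from.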

With any computer, we need to be able to search and retrieve information. Our cells have evolved mechanisms that read specific parts of a DNA strand on demand—a sort of random access memory that extracts a particular piece of data. Previous studies have tapped into these systems to store and retrieve books, images, and GIFs inside DNA files. Scientists have also used microscopic glass beads with DNA “labels” as a kind of filing system for easy extraction.

But storing and extracting data is only half of the story. A computer needs to, well, compute.

Last year, a team developed a programmable DNA computer that can run billions of different circuits with minimal energy. Traditionally, these molecular machines work by allowing different strands to grab onto each other depending on calculation needs. Different pairs could signal “and,” “or,” and “not” logic gates—recapitulating the heart of today’s digital computers.
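As a software caricature of such circuits—the logic only, not the strand-displacement chemistry—one can treat each strand as a named signal and each gate as a rule that releases its output strand once its inputs are satisfied:

```python
# Toy model of strand-based logic: a signal is the presence of a named
# strand in the "solution," and each gate releases its output strand when
# its inputs are satisfied. Purely illustrative, not a chemical model.
def run_circuit(present: set[str], gates: list[tuple[str, list[str], str]]) -> set[str]:
    """Repeatedly fire gates until no new strands appear.

    Each gate is (kind, inputs, output) with kind in {"AND", "OR", "NOT"}.
    NOT is checked against the current pool, so it is only meaningful for
    inputs that no remaining gate can still produce.
    """
    solution = set(present)
    changed = True
    while changed:
        changed = False
        for kind, inputs, output in gates:
            if output in solution:
                continue
            fire = (kind == "AND" and all(i in solution for i in inputs)) or \
                   (kind == "OR" and any(i in solution for i in inputs)) or \
                   (kind == "NOT" and inputs[0] not in solution)
            if fire:
                solution.add(output)
                changed = True
    return solution

# An AND gate feeding an OR gate: "out" appears when (A and B) or C.
gates = [("AND", ["A", "B"], "ab"), ("OR", ["ab", "C"], "out")]
print("out" in run_circuit({"A", "B"}, gates))  # True
print("out" in run_circuit({"C"}, gates))       # True
print("out" in run_circuit({"A"}, gates))       # False
```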

But reading and computing often destroys the original DNA data, making most DNA-based systems single-use. Scientists have also developed another type of DNA computer, which monitors changes in the molecule’s structures. These can be rewritten. Similar to standard hard drives, they can encode multiple rounds of data, but they’re also harder to scale.

DNA Meets Data

The new study combined the best of both worlds. The team engineered a DNA computer that can store information, perform computations, and reset the system for another round.

The core of the system relies on a central dogma in biology. DNA sits in a small cage within cells. When genes are turned on, their data is transcribed into RNA, which is then translated into proteins. If DNA is safely stored, adding protein “switches” that turn genes up or down changes the genetic readout in RNA but keeps the original genetic sequences intact.

Because the original data doesn’t change, it’s possible to run multiple rounds of RNA-based calculations from a single DNA-encoded dataset—with improvements.

Based on these ideas, the team engineered a jelly-like structure with branches similar to a brain cell. Dubbed “dendricolloids,” the soft materials allowed each DNA strand to grab onto surrounding material “without sacrificing the data density that makes DNA attractive for data storage in the first place,” said study author Orlin Velev.

“We can copy DNA information directly from the material’s surface without harming the DNA. We can also erase targeted pieces of DNA and then rewrite to the same surface, like deleting and rewriting information stored on the hard drive,” said study author Kevin Lin.

To test out their system, the team embedded a synthetic DNA sequence of 200 letters into the material. After they added a molecular cocktail that transcribes DNA into RNA, the material generated RNA repeatedly over 10 rounds. In theory, the resulting RNA could encode 46 terabytes of data stored at normal fridge and freezer temperatures.

The dendricolloids could also absorb over 2,700 different DNA strands, each nearly 250 letters long, while protecting their data. In one test, the team encoded three different JPEG files into the structures, translating digital data into biological data. In simulations that mimicked accessing the DNA files, the team could reconstruct the data 10 times without losing it in the process.

Game On

The team next took inspiration from a biological “eraser” of sorts. These proteins eat away at RNA without damaging the DNA blueprint. This process controls how a cell performs its usual functions—for example, by destroying RNA strands detrimental to health.

As a proof of concept, the team developed 1,000 different DNA snippets to solve multiple puzzles. For a simple game of chess, each DNA molecule encoded nine potential positions. The molecules were pooled, with each representing a potential configuration. This data allowed the system to learn. For example, one gene, when turned on, could direct a move on the chessboard by replicating itself in RNA. Another could lower RNA levels detrimental to the game.

These DNA-to-RNA processes were controlled by an engineered protein whose job was to keep the final results in check. As a last step, all RNA strands violating the rules were destroyed, leaving behind only those representing the final, expected solution. In addition to chess, the team implemented this process to solve simple sudoku puzzles too.
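Computationally, this destroy-the-violators strategy is generate-and-filter: pool every candidate solution, then eliminate any that breaks a constraint. A minimal sketch, using a hypothetical 2×2 mini-sudoku (each row and column must contain both 1 and 2) to stand in for the molecular pool:

```python
from itertools import product

# Generate-and-filter, the computational pattern behind the RNA "eraser":
# enumerate every candidate (the molecular pool), then destroy candidates
# that violate a rule. The surviving candidates are the solutions.
def solve_mini_sudoku(fixed: dict[tuple[int, int], int]) -> list[tuple[int, ...]]:
    pool = product([1, 2], repeat=4)  # all 16 candidate grids, row-major
    def valid(g):
        rows_ok = {g[0], g[1]} == {1, 2} and {g[2], g[3]} == {1, 2}
        cols_ok = {g[0], g[2]} == {1, 2} and {g[1], g[3]} == {1, 2}
        clues_ok = all(g[r * 2 + c] == v for (r, c), v in fixed.items())
        return rows_ok and cols_ok and clues_ok
    return [g for g in pool if valid(g)]  # survivors are the solutions

# With the top-left cell fixed to 1, only one grid survives the filter.
print(solve_mini_sudoku({(0, 0): 1}))  # [(1, 2, 2, 1)]
```

The molecular version performs the filtering massively in parallel—every strand in the tube is tested at once—which is the main appeal of computing this way.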

The DNA computer is still in its infancy. But unlike previous generations, this one captures storage and compute in one system.

“There’s a lot of excitement about molecular data storage and computation, but there have been significant questions about how practical the field may be,” said Keung. “We wanted to develop something that would inspire the field of molecular computing.”

Image Credit: Luke Jones / Unsplash


There Are Now More Electric Vehicles Than Gas-Powered Cars in Norway

Singularity HUB - 19 September 2024 - 18:31

Norway’s sizable oil and gas deposits have made it one of the wealthiest countries in the world. That’s why it might come as a surprise that it’s the first country to have more electric vehicles than gasoline-powered ones.

Transportation is the single biggest contributor to climate change in the US—accounting for 28 percent of total greenhouse gas emissions, according to the Environmental Protection Agency. So, the rise of electric vehicles has been one of the biggest success stories in the effort to clean up the economy.

Slowing sales growth for battery-powered cars has some worried there might be a ceiling to the number of people willing to adopt the technology. But Norway shows that with the right incentives, the goal of a completely electrified road network is a tangible possibility.

Earlier this week, the Norwegian Road Federation (OFV) announced that of the 2.8 million private cars registered in the country, 754,303 are all-electric, compared to 753,905 that run on gasoline.

“This is historic. A milestone few saw coming 10 years ago,” OFV director Øyvind Solberg Thorsen told The Guardian. “The electrification of the fleet of passenger cars is going quickly, and Norway is thereby rapidly moving towards becoming the first country in the world with a passenger car fleet dominated by electric cars.”

This tipping point had been long anticipated, as electric vehicle sales in Norway have massively outpaced gasoline cars for some time. Roughly 85 percent of new vehicles registered in 2024 so far have been zero-emissions, which refers to fully battery-powered vehicles and excludes hybrids.

It’s no secret how the country got here. The Norwegian government has given generous subsidies to promote adoption, including tax rebates that bring the cost of electric vehicles down to similar levels as conventional vehicles, exemptions from some tolls, and an extensive public network of free chargers.

Despite overtaking gasoline-powered cars, electric vehicles still lag behind diesel ones, which account for more than a million of Norway’s existing stock. But the government has an ambitious goal to end the sale of new gasoline and diesel cars by next year, so it may not be long before they catch up.

How easily other countries can mimic Norway’s success remains to be seen, though—tax exemptions on electric vehicles cost 43 billion kroner ($4.1 billion) in 2023. Norway has been able to pay for this thanks to its massive $1.7 trillion sovereign wealth fund, which, ironically, was built using the profits from the country’s enormous fossil fuel reserves.

Electric vehicle sales have been highly concentrated in three main markets—Europe, the US, and China—accounting for roughly 95 percent of all purchases. In the US, new registrations grew 40 percent last year to hit 1.4 million, while Europe saw a 20 percent increase to 3.2 million.

However, sales have been flagging in recent months, even as production capacity continues to ramp up. This has some worried that concerns around pricing and charging infrastructure could cap consumers’ willingness to make the switch. A brewing trade war over electric vehicles between the West and China also threatens to further dent adoption.

While it might not come cheap, if we’re committed to decarbonizing our transportation system, other governments may need to follow Norway’s lead when it comes to incentivizing cleaner cars.

Image Credit: Emil Dosen / Unsplash
