Singularity HUB


The World’s Biggest Guaranteed Income Trial Will Launch in India This Year
Guaranteed income schemes are often dismissed as being too expensive to implement on a large scale, but several cities are trying them out among small subsets of their populations. Giving people even a small financial leg up can go a long way towards bridging the gap between surviving and thriving. The biggest guaranteed income pilot in the US is currently underway in Chicago, where 5,000 families are receiving $500 per month, with no strings attached, for 12 months.
A far bigger guaranteed income trial will be launching in India later this year. Announced last week by finance minister Palanivel Thiaga Rajan, the trial will take place in Tamil Nadu, the country’s southernmost state and its seventh-most populous with 81.5 million people.
Called Magalir Urimai Thogai, which in Tamil means “Women’s Right to Assistance,” the trial will give the female heads of eligible households 1,000 rupees per month. That’s somewhere between $12 and $13. It doesn’t sound like much, but the average annual per capita income in Tamil Nadu is around 225,000 rupees ($2,733). That breaks down to $52 a week, and it’s an average; the lowest-income families earn far less.
Specific eligibility guidelines for the program haven’t been finalized yet, but it’s geared towards families living below the poverty line. Recipients will be selected from the state’s TIPPS system (Tamil Nadu Integrated Poverty Portal Service), where data from income and population surveys is stored.
The state’s minister for social welfare and female empowerment, P Geetha Jeevan, said, “The income benefit aimed at supporting impoverished families will not cover the rich, government employees, and a few others. Approximately 80 to 90 lakh women are expected to avail themselves of this benefit.” (A lakh is a unit in the Indian numbering system equal to 100,000—so the pilot could benefit up to 9 million women).
While guaranteed income pilots in the US and other countries tend not to be gender-specific, the decision to give these payments exclusively to women in India was very intentional. In short, it’s part of an ongoing effort by the government to reduce long-standing, pronounced gender inequality in the country.
While gender norms won’t change overnight simply because female heads of households receive payments instead of males, this will help break down old stereotypes, as well as give women a sense of agency and an incentive to understand more about finance. Similarly, a program called Ujjwala was launched in 2016 to provide gas stoves and subsidized cooking gas to poor families—but only women could receive the payments.
Since then, the price of gas has shot up, making refills too expensive for many families (even with the subsidy) and causing some to return to traditional wood-fired stoves, which emit toxic fumes that are harmful to human health. Speaking about the guaranteed income program, Rajan said, “This will be of great help for women heads of families who have been affected adversely by the steep increase in cooking gas prices by the union government and the overall inflation.”
Details about how the results of the payment program will be monitored haven’t been released yet, but as with all such trials, the hope is that by giving families a small extra cushion of financial security, more of their basic needs will be covered, freeing up time and resources to devote to additional pursuits.
Tamil Nadu’s guaranteed income pilot is slated to launch in September of this year.
Image Credit: Joshuva Daniel on Unsplash
Mice With Two Dads Were Born From Eggs Made Out of Male Skin Cells
Seven mice just joined the pantheon of offspring created from same-sex parents—and opened the door to offspring born from a single parent.
In a study published in Nature, researchers described how they scraped skin cells from the tails of male mice and used them to create functional egg cells. When fertilized with sperm and transplanted into a surrogate, the embryos gave rise to healthy pups, which grew up and had babies of their own.
The study is the latest in a decade-long attempt to rewrite reproduction. Egg meets sperm remains the dogma. What’s at play is how the two halves are generated. Thanks to iPSC (induced pluripotent stem cell) technology, scientists have been able to bypass nature to engineer functional eggs, reconstruct artificial ovaries, and give rise to healthy mice from two mothers. Yet no one has been able to crack the recipe of healthy offspring born from two dads.
Enter Dr. Katsuhiko Hayashi at Kyushu University, who has led the ambitious effort to engineer gametes—sperm and egg—outside the body. His solution came from a clever hack. When grown inside petri dishes, iPSCs tend to lose bundles of their DNA, called chromosomes. Normally, this is a massive headache because it disrupts the cell’s genetic integrity.
Hayashi realized he could hijack the mechanism. Selecting for cells that shed the Y chromosome, the team nurtured the cells until they fully developed into mature egg cells. The resulting eggs—derived from male skin cells—gave rise to normal mice after fertilization with normal sperm.
“Murakami and co-workers’ protocol opens up new avenues in reproductive biology and fertility research,” said Drs. Jonathan Bayerl and Diana Laird at the University of California, San Francisco (UCSF), who were not involved in the study.
Whether the strategy will work in humans remains to be seen. The success rate in mice was very low, at just over one percent. Yet the study is a proof of concept that further pushes the boundaries of what’s possible in reproduction. And perhaps more immediately, the underlying technology could help tackle some of our most prevalent chromosomal disorders, such as Down syndrome.
“This is a very important breakthrough for the generation of eggs and sperm from stem cells,” said Dr. Rod Mitchell at the MRC Centre for Reproductive Health, University of Edinburgh, who was not involved in the study.
A Reproductive Revolution
Hayashi is a long-time veteran at transforming reproductive technologies. In 2020, his team described genetic alterations that help cells mature into egg cells inside a dish. A year later, they reconstructed ovary cells that nurtured fertilized eggs into healthy mouse pups.
At the core of these technologies are iPSCs. Using a chemical bath, scientists can transform mature cells, such as skin cells, back into a stem-cell-like state. iPSCs are basically biological playdough: with a soup of chemical “kneading,” they can be coaxed and fashioned into nearly any type of cell.
Because of their flexibility, iPSCs are also hard to control. Like most cells, they divide. But when kept inside a petri dish for too long, they rebel and either shed—or duplicate—some of their chromosomes. This teenage anarchy, called aneuploidy, is the bane of scientists’ work when trying to keep a uniform population of cells.
But as the new study shows, that molecular rebellion is a gift for generating eggs from male cells.
X Meets Y and…Meets O?
Let’s talk sex chromosomes.
Most people have either XX or XY. Both X and Y are chromosomes, which are large bundles of DNA—picture threads wrapped around a spool. Biologically, XX usually generates eggs, whereas XY normally produces sperm.
But here’s the thing: scientists have long known that both types of cells start from the same stock. Dubbed primordial germ cells, or PGCs, these cells don’t rely on either X or Y chromosomes, but rather on their surrounding chemical environment for their initial development, explained Bayerl and Laird.
In 2017, for example, Hayashi’s team transformed embryonic stem cells into PGCs, which when mixed with fetal ovary or testes cells matured into either artificial eggs or sperm.
Here, the team took on the harder task of transforming an XY cell into an XX one. They started with a group of embryonic stem cells from mice that shed their Y chromosomes—a rare and controversial resource. Using a glow-in-the-dark tag that grabs only onto X chromosomes, they could monitor how many copies there were inside a cell based on light intensity (remember, XX will shine brighter than XY).
After growing the cells for eight rounds inside petri dishes, the team found that roughly six percent of the cells sporadically lost their Y chromosome. Rather than XY, they now only harbored one X—like missing half of a chopstick pair. The team then selectively coaxed these cells, dubbed XO, to divide.
The reason? Cells duplicate their chromosomes before splitting into two new ones. Because the cells only have one X chromosome, after duplication some of the daughter cells will end up with XX—in other words, biologically female. Adding a drug called reversine helped the process along, increasing the number of XX cells.
The team then tapped into their previous work. They converted XX cells into PGC-like cells—the ones that can develop into egg or sperm—and then added fetal ovary cells to push the transformed male skin cells into mature eggs.
As the ultimate test, they injected sperm from a normal mouse into the lab-made eggs. With the help of a female surrogate, the blue-sky experiment produced over a half-dozen pups. Their weights were similar to mice born the traditional way, and their surrogate mom developed a healthy placenta. All of the pups grew into adulthood and had babies of their own.
Pushing Boundaries
The tech is still in its early days. For one, its success rate is extremely low: only 7 out of 630 transferred embryos lived to be full-grown adults. With a mere 1.1 percent chance of succeeding—and that’s in mice—it’s a tough sell for bringing the technology to male human couples. Although the baby mice seemed relatively normal in terms of weight and could reproduce, they could also harbor genetic or other deficiencies—something the team wants to further investigate.
“There are big differences between a mouse and the human,” said Hayashi at an earlier conference.
That said, reproduction aside, the study may immediately help us understand chromosomal disorders. Down syndrome, for example, is caused by an extra copy of chromosome 21. In the study, the team found that treating mouse embryonic stem cells harboring a similar defect with reversine—the drug that helps convert XY to XX cells—rid the cells of the extra copy without affecting other chromosomes. It’s far from being ready for human use. However, the technology could help other scientists hunt down preventative or screening measures for similar chromosomal disorders.
But perhaps what’s most intriguing is where the technology can take reproductive biology. In an audacious experiment, the team showed that cells from a single male iPSC line can birth offspring—pups that grew into adulthood.
With the help of surrogate mothers, “it also suggests that a single man could have a biological child…in the far future,” said Dr. Tetsuya Ishii, a bioethicist at Hokkaido University. The work could also propel bioconservation, propagating endangered mammals from just a single male.
Hayashi is well aware of the ethics and social implications of his work. But for now, his focus is on helping people and deciphering—and rewriting—the rules of reproduction.
The study marks “a milestone in reproductive biology,” said Bayerl and Laird.
Image Credit: Katsuhiko Hayashi, Osaka University
Cultured Chicken Is a Step Closer as a Second US Company Gets FDA Approval
At the end of last year, Israeli cultured meat company Believer Meats broke ground on a 200,000-square-foot factory outside Raleigh, North Carolina. The facility will be the biggest cultured meat factory in the world (well, unless a bigger one goes up before it’s done, which is unlikely).
However, the sale of cultured meat isn’t fully legal in the US yet (in fact, the only country where the meat can currently be sold is Singapore), so regulations are going to need to keep pace with production capacity to make such facilities worth building. Last week California-based Good Meat took a step in this direction, receiving a crucial FDA approval for sale of its cultured chicken in the US.
Cultured meat is made by taking muscle cells from a live animal (without harming it) and feeding those cells a mixture of nutrients and growth factors to make them multiply, differentiate, and grow to form muscle tissue. The harvested tissue then needs to be refined and shaped into a final product, which can involve extrusion cooking, molding, or 3D printing.
Good Meat was the first company in the world to start selling cultured meat, with its chicken hitting the Singaporean market in 2020. This past January the company hit another milestone when the Singapore Food Agency granted it approval to sell serum-free meat in Singapore (“serum-free” means the production process replaces fetal bovine serum—the animal-derived ingredient traditionally used to make cells proliferate—with synthetic alternatives).
Now Good Meat has made headway in what it hopes will be its biggest market, the US. They received an FDA approval called a No Questions letter, which states that after conducting a thorough evaluation of the company’s meat, the agency concluded it’s safe for consumers to eat. Besides meeting microbiological and purity standards (the press release notes that cultured chicken’s microbiological levels are “significantly cleaner” than conventional chicken), the evaluation found that Good Meat’s chicken contains “high protein content, a well-balanced amino acid profile, and is a rich source of minerals.”
Good Meat isn’t the first company to receive this approval in the US. Its competitor Upside Foods got a No Questions letter for its cultured chicken last November. Their 53,000-square-foot production center in the Bay Area will eventually be able to produce more than 400,000 pounds of meat, poultry, and seafood per year. Before becoming available in grocery stores, Upside’s chicken will be introduced to consumers in restaurants, starting with an upscale restaurant in San Francisco whose chef is Michelin-starred.
Similarly, Good Meat plans to launch its cultured chicken at a Washington DC restaurant owned by celebrity chef José Andrés. Before that can happen, though, the company has to work with the US Department of Agriculture to receive additional approvals for its production facilities and its product.
The company is building a demonstration plant in Singapore, and announced plans last year to build a large-scale facility in the US with an annual production capacity of 30 million pounds of meat (which means it will be bigger than the Believer Meats plant in North Carolina).
Good Meat will have its work cut out for it, as there are more than 80 other companies vying for a slice of the lab-grown meat market, which is projected to reach a value of $12.7 billion by 2030. Given that all of its competitors will have to go through the FDA and USDA approvals process, though, Good Meat has a leg up.
Image Credit: Good Meat
This Chipmaking Step Is Crucial to the Future of Computing—and Just Got 40x Faster Thanks to Nvidia
If computer chips make the modern world go around, then Nvidia and TSMC are flywheels keeping it spinning. It’s worth paying attention when the former says they’ve made a chipmaking breakthrough, and the latter confirms they’re about to put it into practice.
At Nvidia’s GTC developer conference this week, CEO Jensen Huang said Nvidia has developed software to make a chipmaking step, called inverse lithography, over 40 times faster. A process that usually takes weeks can now be completed overnight, and instead of requiring some 40,000 CPU servers and 35 megawatts of power, it should only need 500 Nvidia DGX H100 GPU-based systems and 5 megawatts.
“With cuLitho, TSMC can reduce prototype cycle time, increase throughput and reduce the carbon footprint of their manufacturing, and prepare for 2nm and beyond,” he said.
Nvidia partnered with some of the biggest names in the industry on the work. TSMC, the largest chip foundry in the world, plans to qualify the approach in production this summer. Meanwhile, chip designer Synopsys and equipment maker ASML said in a press release they will integrate cuLitho into their chip design and lithography software.
What Is Inverse Lithography?
To fabricate a modern computer chip, makers shine ultraviolet light through intricate “stencils” to etch billions of patterns—like wires and transistors—onto smooth silicon wafers at near-atomic resolutions. This step, called photolithography, is how every new chip design, from Nvidia to Apple to Intel, is manifested physically in silicon.
The machines that make it happen, built by ASML, cost hundreds of millions of dollars and can produce near-flawless works of nanoscale art on chips. The end product, an example of which is humming away near your fingertips as you read this, is probably the most complex commodity in history. (TSMC churns out a quintillion transistors every six months—for Apple alone.)
To make more powerful chips, with ever-more, ever-smaller transistors, engineers have had to get creative.
Remember that stencil mentioned above? It’s the weirdest stencil you’ve ever seen. Today’s transistors are smaller than the wavelength of light used to etch them. Chipmakers have to use some extremely clever tricks to design stencils—or technically, photomasks—that can bend light into interference patterns whose features are smaller than the light’s wavelength and perfectly match the chip’s design.
Whereas photomasks once had a more one-to-one shape—a rectangle projected a rectangle—they’ve necessarily become more and more complicated over the years. The most advanced masks these days are more like mandalas than simple polygons.
“Stencils” or photomasks have become more and more complicated as the patterns they etch have shrunk into the atomic realm. Image Credit: Nvidia
To design these advanced photomask patterns, engineers reverse the process.
They start with the design they want, then stuff it through a wicked mess of equations describing the physics involved to design a suitable pattern. This step is called inverse lithography, and as the gap between light wavelength and feature size has increased, it’s become increasingly crucial to the whole process. But as the complexity of photomasks increases, so too does the computing power, time, and cost required to design them.
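To make the idea concrete, here’s a minimal, hedged sketch of that inversion loop—a toy stand-in, not Nvidia’s cuLitho. It fakes the optics with a Gaussian blur and nudges a mask by gradient descent until the blurred “aerial image” matches a target pattern; every parameter here is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy inverse lithography: search for a mask whose blurred projection
# reproduces a target pattern. A Gaussian blur stands in for the far
# more complex optical physics of a real scanner. Purely illustrative.

SIGMA = 2.0   # blur radius: crude proxy for the light's wavelength
STEPS = 500   # gradient-descent iterations
LR = 5.0      # step size

def aerial_image(mask):
    """Simulated wafer exposure: the optics low-pass-filter the mask."""
    return gaussian_filter(mask, sigma=SIGMA)

def invert(target):
    logits = np.zeros_like(target)            # unconstrained mask params
    for _ in range(STEPS):
        mask = 1.0 / (1.0 + np.exp(-logits))  # squash to [0, 1] transmission
        error = aerial_image(mask) - target   # mismatch on the wafer
        # The adjoint of a (symmetric) Gaussian blur is another blur,
        # chained through the sigmoid's derivative mask * (1 - mask).
        grad = gaussian_filter(error, sigma=SIGMA) * mask * (1.0 - mask)
        logits -= LR * grad
    return (logits > 0).astype(float)         # binarize the final mask

# Target: a thin "wire" narrower than the blur could print directly.
target = np.zeros((64, 64))
target[30:34, 8:56] = 1.0
mask = invert(target)
print("mask pixels switched on:", int(mask.sum()))
```

Even in this toy, the optimized mask grows fringes and blobs that look nothing like the target—a small-scale echo of why real photomasks have drifted from simple rectangles toward mandalas.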
“Computational lithography is the largest computation workload in chip design and manufacturing, consuming tens of billions of CPU hours annually,” Huang said. “Massive data centers run 24/7 to create reticles used in lithography systems.”
In the broader category of computational lithography—the methods used to design photomasks—inverse lithography is one of the newer, more advanced approaches. Its advantages, including greater depth of field and resolution, should benefit the entire chip, but due to its heavy computational lift, it’s currently only used sparingly.
A Library in Parallel
Nvidia aims to reduce that lift by making the computation more amenable to graphics processing units, or GPUs. These powerful chips are used for tasks with lots of simple computations that can be completed in parallel, like video games and machine learning. So it isn’t just about running existing processes on GPUs, which only yields a modest improvement, but modifying those processes specifically for GPUs.
That’s what the new software, cuLitho, is designed to do. The product, developed over the last four years, is a library of algorithms for the basic operations used in inverse lithography. By breaking inverse lithography down into these smaller, more repetitive computations, the whole process can now be split and parallelized on GPUs. And that, according to Nvidia, significantly speeds everything up.
A new library of inverse lithography algorithms can speed up the process by breaking it down into smaller tasks and running them in parallel on GPUs. Image Credit: Nvidia
“If [inverse lithography] was sped up 40x, would many more people and companies use full-chip ILT on many more layers? I am sure of it,” said Vivek Singh, VP of Nvidia’s Advanced Technology Group, in a talk at GTC.
With a speedier, less computationally hungry process, makers can more rapidly iterate on experimental designs, tweak existing designs, make more photomasks per day, and generally, expand the use of inverse lithography to more of the chip, he said.
This last detail is critical. Wider use of inverse lithography should reduce print errors by sharpening the projected image—meaning chipmakers can churn out more working chips per silicon wafer—and be precise enough to make features at 2 nanometers and beyond.
It turns out making better chips isn’t all about the hardware. Software improvements, like cuLitho or the increased use of machine learning in design, can have a big impact too.
Image Credit: Nvidia
This Week’s Awesome Tech Stories From Around the Web (Through March 25)
OpenAI Connects ChatGPT to the Internet
Kyle Wiggers | TechCrunch
“[This week, OpenAI] launched plugins for ChatGPT, which extend the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web. Easily the most intriguing plugin is OpenAI’s first-party web-browsing plugin, which allows ChatGPT to draw data from around the web to answer the various questions posed to it.”
Nvidia Speeds Key Chipmaking Computation by 40x
Samuel K. Moore | IEEE Spectrum
“Called inverse lithography, it’s a key tool that allows chipmakers to print nanometer-scale features using light with a longer wavelength than the size of those features. Inverse lithography’s use has been limited by the massive size of the needed computation. Nvidia’s answer, cuLitho, is a set of algorithms designed for use with GPUs[; it] turns what has been two weeks of work into an overnight job.”
Epic’s New Motion-Capture Animation Tech Has to Be Seen to Be Believed
Kyle Orland | Ars Technica
“Epic’s upcoming MetaHuman facial animation tool looks set to revolutionize [the]…labor- and time-intensive workflow [of motion-capture]. In an impressive demonstration at Wednesday’s State of Unreal stage presentation, Epic showed off the new machine-learning-powered system, which needed just a few minutes to generate impressively real, uncanny-valley-leaping facial animation from a simple head-on video taken on an iPhone.”
United to Fly Electric Air Taxis to O’Hare Beginning in 2025
Stefano Esposito | Chicago Sun Times
“The trip between O’Hare and the Illinois Medical District is expected to take about 10 minutes, according to California-based Archer Aviation, which is partnering with United Airlines. …An Archer spokesman said they hope to make the fare competitive with Uber Black, a ride-hailing service that provides luxury vehicles and top-rated drivers to customers. On Thursday afternoon, an Uber Black ride from the vertiport to O’Hare was $101.”
These New Tools Let You See for Yourself How Biased AI Image Models Are
Melissa Heikkilä | MIT Technology Review
“Popular AI image-generating systems notoriously tend to amplify harmful biases and stereotypes. But just how big a problem is it? You can now see for yourself using interactive new online tools. (Spoiler alert: it’s big.) The tools, built by researchers at AI startup Hugging Face and Leipzig University and detailed in a non-peer-reviewed paper, allow people to examine biases in three popular AI image-generating models: DALL-E 2 and the two recent versions of Stable Diffusion.”
BMW’s New Factory Doesn’t Exist in Real Life, but It Will Still Change the Car Industry
Jesus Diaz | Fast Company
“Before construction on [a new car] factory begins, thousands of engineers draw millions of CAD drawings and meet for thousands of hours. Worse yet, they know that no amount of planning will prevent a long list of bugs once the factory finally opens, which can result in millions of dollars lost every day until the bugs are resolved. At least, that’s how it used to work. This is all about to change thanks to the world’s first virtual factory, a perfect digital twin of BMW’s future 400-hectare plant in Debrecen, Hungary, which will reportedly produce around 150,000 vehicles every year when it opens in 2025.”
Fusion Power Is Coming Back Into Fashion
Editorial Staff | The Economist
“[Forty-two companies] think they can succeed, where others failed, in taking fusion from the lab to the grid—and do so with machines far smaller and cheaper than the latest intergovernmental behemoth, ITER, now being built in the south of France at a cost estimated by America’s energy department to be $65bn. In some cases that optimism is based on the use of technologies and materials not available in the past; in others, on simpler designs.”
Plastic Paving: Egyptian Startup Turns Millions of Bags Into Tiles
Editorial Staff | Reuters
“An Egyptian startup is aiming to turn more than 5 billion plastic bags into tiles tougher than cement as it tackles the twin problems of tons of waste entering the Mediterranean Sea and high levels of building sector emissions. ‘So far, we have recycled more than 5 million plastic bags, but this is just the beginning,’ TileGreen co-founder Khaled Raafat told Reuters. ‘We aim that by 2025, we will have recycled more than 5 billion plastic bags.’ ”
Image Credit: BoliviaInteligente / Unsplash
AI Could Make More Work for Us, Instead of Simplifying Our Lives
There’s a common perception that artificial intelligence (AI) will help streamline our work. There are even fears that it could wipe out the need for some jobs altogether.
But in a study of science laboratories that I carried out with three colleagues at the University of Manchester, we found that the introduction of automated processes aiming to simplify work—and free up people’s time—can also make that work more complex, generating new tasks that many workers might perceive as mundane.
In the study, published in Research Policy, we looked at the work of scientists in a field called synthetic biology, or synbio for short. Synbio is concerned with redesigning organisms to have new abilities. It is involved in growing meat in the lab, in new ways of producing fertilizers, and in the discovery of new drugs.
Synbio experiments rely on advanced robotic platforms to repetitively move a large number of samples. They also use machine learning to analyze the results of large-scale experiments.
These, in turn, generate large amounts of digital data. This process is known as “digitalization,” where digital technologies are used to transform traditional methods and ways of working.
Some of the key objectives of automating and digitalizing scientific processes are to scale up the science that can be done while saving researchers time to focus on what they would consider more “valuable” work.
Paradoxical Result
However, in our study, scientists were not released from repetitive, manual, or boring tasks as one might expect. Instead, the use of robotic platforms amplified and diversified the kinds of tasks researchers had to perform. There are several reasons for this.
Among them is the fact that the number of hypotheses (the scientific term for a testable explanation for some observed phenomenon) and experiments that needed to be performed increased. With automated methods, the possibilities are amplified.
Scientists said automation allowed them to evaluate a greater number of hypotheses, and it multiplied the ways they could make subtle changes to the experimental set-up. Both had the effect of boosting the volume of data that needed checking, standardizing, and sharing.
Also, robots needed to be “trained” in performing experiments previously carried out manually. Humans, too, needed to develop new skills for preparing, repairing, and supervising robots. This was done to ensure there were no errors in the scientific process.
Scientific work is often judged on outputs such as peer-reviewed publications and grants. However, the time taken to clean, troubleshoot, and supervise automated systems competes with the tasks traditionally rewarded in science. These less valued tasks may also be largely invisible—particularly to managers, who are unaware of the mundane work because they don’t spend as much time in the lab.
The synbio scientists carrying out these responsibilities were not better paid or more autonomous than their managers. They also assessed their own workload as being higher than those above them in the job hierarchy.
Wider Lessons
It’s possible these lessons might apply to other areas of work too. ChatGPT is an AI-powered chatbot that “learns” from information available on the web. When prompted by questions from online users, the chatbot offers answers that appear well-crafted and convincing.
According to Time magazine, in order for ChatGPT to avoid returning answers that were racist, sexist, or offensive in other ways, workers in Kenya were hired to filter toxic content delivered by the bot.
There are many often invisible work practices needed for the development and maintenance of digital infrastructure. This phenomenon could be described as a “digitalization paradox.” It challenges the assumption that everyone involved or affected by digitalization becomes more productive or has more free time when parts of their workflow are automated.
Concerns over a decline in productivity are a key motivation behind organizational and political efforts to automate and digitalize everyday work. But we should not take promises of gains in productivity at face value.
Instead, we should challenge the ways we measure productivity by considering the invisible types of tasks humans can accomplish, beyond the more visible work that is usually rewarded.
We also need to consider how to design and manage these processes so that technology can more positively add to human capabilities.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Gerd Altmann from Pixabay
The First 3D-Printed Rocket Launch Is a Step Toward Even Greater Access to Space
Reducing the cost of space launches will be critical if we want humanity to have a more permanent presence beyond orbit. The partially successful launch of the first 3D-printed rocket could be a significant step in that direction.
Getting stuff into space is dramatically cheaper than it used to be thanks to a wave of innovation in the private space industry led by SpaceX. More affordable launches have brought on a rapid expansion in access to space and made a host of new space-based applications feasible. But costs are still a major barrier.
That’s largely because rockets are incredibly expensive and difficult to build. A promising way round this is to use 3D printing to simplify the design and manufacturing process. SpaceX has experimented with the idea for years, and the engines on Rocket Lab’s Electron launch vehicle are almost entirely 3D-printed.
But one company wants to take things even further. Relativity Space has built one of the largest metal 3D printers in the world and uses it to fabricate almost all of its Terran 1 rocket. The rocket blasted off for the first time yesterday, and while the launch vehicle didn’t quite make orbit, it survived max-q, or the part of flight when the rocket is subjected to maximum mechanical stress.
“Today is a huge win, with many historic firsts,” the company said in a tweet following the launch. “We successfully made it through max-q, the highest stress state on our printed structures. This is the biggest proof point for our novel additive manufacturing approach.”
This was the company’s third bite at the cherry after two previous launches were called off earlier in the month. The rocket lifted off from a launchpad at the US Space Force’s launch facility in Cape Canaveral, Florida at 8:25 pm (EST) and flew for about three minutes.
Shortly after making it through max-q and the successful separation of the second stage from the booster, the rocket’s engine cut out due to what the company cryptically referred to as “an anomaly,” though it promised to provide updates once flight data has been analyzed.
While that meant Terran 1 didn’t make it into orbit, the launch is nonetheless likely to be seen as a success. It’s fairly common for the first launch of a new rocket to go awry—SpaceX’s first three launches failed—so getting off the launch pad and passing key milestones like max-q and first stage separation are significant achievements.
This is particularly important for Relativity Space, which is taking a radically different approach to manufacturing its rockets compared to competitors. Prior to the launch, cofounder Tim Ellis said the company’s main goal was to prove the structural integrity of their 3D-printed design.
“We have already proven on the ground what we hope to prove in-flight—that when dynamic pressures and stresses on the vehicle are highest, 3D printed structures can withstand these forces,” he said in a tweet. “This will essentially prove the viability of using additive manufacturing tech to produce products that fly.”
There is a lot that is novel about Relativity’s design. At present, roughly 85 percent of the structure by mass is 3D-printed, but the company hopes to push that to 95 percent in future iterations. This has allowed Relativity to use 100 times fewer parts than traditional rockets and go from raw materials to a finished product in just 60 days.
The engines also run on a mixture of liquid methane and liquid oxygen, which is the same technology SpaceX is pursuing for its massive Starship rocket. This fuel mix is seen as the most promising for Mars exploration as it can be produced on the red planet itself, eliminating the need to carry fuel for the return journey.
But while the 110-foot-tall Terran 1 can carry up to 2,756 pounds to low-Earth orbit, and Relativity is selling rides on the rocket for around $12 million, it is really a test bed for a more advanced rocket. That rocket, the Terran R, will be 216 feet tall and able to carry 44,000 pounds when it makes it onto the launchpad as early as 2024.
Relativity isn’t the only company working hard to bring 3D printing to the space industry.
California startup Launcher has created a satellite platform called Orbiter that’s powered by 3D-printed rocket engines, and Colorado-based Ursa Major is 3D printing rocket engines it hopes others will use in their vehicles. At the same time, UK-based Orbex is using metal 3D printers from German manufacturer EOS to manufacture entire rockets.
Now that 3D-printed rockets have passed their first true test and made it into space, don’t be surprised to see more companies following in the footsteps of these early pioneers.
Image Credit: Relativity Space
AI Won’t Kill Our Jobs, It Will Kill Our Job Descriptions—and Leave Us Better Off
The hype around artificial intelligence has been building for years, and you could say it reached a crescendo with OpenAI’s recent release of ChatGPT (and now GPT-4). It only took two months for ChatGPT to reach 100 million users, making it the fastest-growing consumer application in history (it took Instagram two and a half years to gain the same user base, and TikTok nine months).
In Ian Beacraft’s opinion, we’re in an AI hype bubble, way above the top of the peak of inflated expectations on the Gartner Hype Cycle. But it may be justified, because the AI tools we’re seeing really do have the power to overhaul the way we work, learn, and create value.
Beacraft is the founder of the strategic foresight agency Signal & Cipher and co-owner of a production studio that designs virtual worlds. In a talk at South by Southwest last week, he shared his predictions of how AI will shape society in the years and decades to come.
A Revolution in Knowledge Work
Beacraft pointed out that with the Industrial Revolution we were able to take skills of human labor and amplify them far beyond what the human body is capable of. “Now we’re doing the same thing with knowledge work,” he said. “We’re able to do so much more, put so much more power behind it.” The Industrial Revolution mechanized skills, and today we’re digitizing skills. Digitized skills are programmable, composable, and upgradeable—and AI is taking it all to another level.
Say you want to write a novel in the style of a specific writer. You could prompt ChatGPT to do so, be it by the sentence, paragraph, or chapter, then tweak the language to your liking (whether that’s cheating or some form of plagiarism is another issue, and a pretty significant one); you’re programming the algorithm to extract years’ worth of study and knowledge—years that you don’t have to put in. Composable means you can stack skills on top of each other, and upgradeable means anytime an AI gets an upgrade, so do you. “You didn’t have to go back to school for it, but all of a sudden you have new skills that came from the upgrade,” Beacraft said.
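For a concrete sense of “programming” a digitized skill, here’s a hedged sketch using the OpenAI Python library’s pre-1.0 chat interface; the model name and prompt strings are placeholders, not a recommendation.

```python
import openai  # pip install "openai<1.0" for this older interface

# "Programming" a digitized skill: the prompt, not the code, selects
# the years of stylistic training we want to borrow. Placeholder strings.
openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You write fiction in the style of a hard-boiled "
                    "detective novelist."},
        {"role": "user",
         "content": "Write the opening paragraph of a novel set in a "
                    "rain-soaked spaceport."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Swap the system message and you’ve “stacked” a different skill—composability in Beacraft’s sense—without retraining anything yourself.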
The Era of the Generalist
Due to these features, he believes AI is going to turn us all into creative generalists. Right now we’re told to specialize from an early age and build expertise in one area—but what happens once AI can quickly outpace us in any domain? Will it still make sense to become an expert in a single field?
“Those who have expertise and depth in several domains, and interest and passion and curiosity across a broad swathe—those are the people who are going to dominate the next era,” Beacraft said. “When you have an understanding of how something works, you can now produce for it. You don’t have to have expertise in all the different layers to make that happen. You can know how the general territory or field operates, then have machines abstract the rest of the skills.”
For example, a graphic designer who draws a comic book could use AI-powered design tools to turn that comic book into a 3D production without knowing 3D modeling, camera movement, blending, or motion capture; AI now enables just one person to perform all of the virtual production elements. “This wouldn’t have been possible a couple years ago, and now—with some effort—it is,” Beacraft said. The video below was created entirely by one person using generative AI, including the imagery, sound, motion, and talk track.
Generative AI tools are also starting to learn how to use other tools themselves, and they’re only going to get better at it. ChatGPT, for example, isn’t very good at hard science, but it could pass those kinds of questions off to something like WolframAlpha and include the tool’s answer in its reply.
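The hand-off pattern is simple to sketch. Below is a hedged illustration with made-up helper functions (not OpenAI’s actual plugin API): a router spots math-like prompts, sends them to an external solver, and folds the result back into the model’s reply.

```python
import re

def looks_like_math(prompt: str) -> bool:
    """Crude heuristic: digits joined by an operator suggest arithmetic."""
    return bool(re.search(r"\d\s*[-+*/^]\s*\d", prompt))

def ask_solver(expression: str) -> str:
    """Stand-in for a call to a computational engine like WolframAlpha."""
    return f"[solver result for: {expression}]"

def ask_llm(prompt: str) -> str:
    """Stand-in for a chat-model completion call."""
    return f"[model reply to: {prompt}]"

def answer(prompt: str) -> str:
    # Route hard math to the tool; let the model handle everything else.
    if looks_like_math(prompt):
        result = ask_solver(prompt)
        return ask_llm(f"Explain this result to the user: {result}")
    return ask_llm(prompt)

print(answer("What is 3.2e8 / 7 + 14?"))
```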
This is not only going to change our work, Beacraft said, it’s going to change our relationship with work. Right now, organizations expect incremental employee improvement in narrowly defined roles. Job titles like designer, accountant, or project manager have key performance indicators that typically improve two to three percent per year. “But if employees only grow incrementally, how can organizations expect exponential growth?” Beacraft asked.
AI will take our traditional job roles and make them horizontal, giving us the ability to flex in any direction. As a result, we’ll have just-in-time skills and expertise on demand. “We will not lose our jobs, we will lose our job descriptions,” Beacraft said. “When organizations have teams of people working horizontally, all that new capability is net new, not incremental—and all of a sudden you have exponential growth.”
More Work, Not Less
That growth could do the opposite of what the predominant narrative tells us: that AI, robotics, and automation will take over various kinds of work and do away with our jobs. But AI could very well end up creating more work for us.
For example, teams of scientists using AI to help them run experiments more efficiently could increase the number of experiments they perform—but then they have more results, more data to analyze, and more work sifting through all this information to ultimately draw a conclusion or find what they’re looking for. But hey—AI is getting good at handling extra administrative work, too.
We may be in an AI hype bubble, but this technology is reaching more people than it ever has before. While there are certainly nefarious uses for generative AI—just look at all the students trying to turn in essays written by ChatGPT, or how deepfakes are becoming harder to detect—there are as many or more productive uses that will impact society, the economy, and our lives in positive ways.
“It’s not just about data and information, it’s about how these AIs can help us shape the world,” Beacraft said. “It’s about how we project what we want to create onto the world around us.”
Meta’s New ChatGPT-Like AI Is Fluent in the Language of Proteins—and Has Already Modeled 700 Million of Them
The race to solve every protein structure just welcomed another tech giant: Meta AI.
The team, a research offshoot of the company known for Facebook and Instagram, came onto the protein shape prediction scene with an ambitious goal: to decipher the “dark matter” of the protein universe. Often found in bacteria, viruses, and other microorganisms, these proteins lounge in our everyday environments but are complete mysteries to science.
“These are the structures we know the least about. These are incredibly mysterious proteins. I think they offer the potential for great insight into biology,” said senior author Dr. Alexander Rives to Nature.
In other words, they’re a treasure trove of inspiration for biotechnology. Hidden in their secretive shapes are keys for designing efficient biofuels, antibiotics, enzymes, or even entirely new organisms. In turn, the data from protein predictions could further train AI models.
At the heart of Meta’s new AI, dubbed ESMFold, is a large language model. It might sound familiar. These machine learning algorithms have taken the world by storm with the rockstar chatbot ChatGPT. Known for its ability to generate beautiful essays, poems, and lyrics from simple prompts, ChatGPT—and the recently launched GPT-4—are trained with millions of publicly available texts. Eventually the AI learns to predict letters and words, even writing entire paragraphs and, in the case of Bing’s similar chatbot, holding conversations that sometimes turn slightly unnerving.
The new study, published in Science, bridges the AI model with biology. Proteins are made of 20 “letters.” Thanks to evolution, the sequence of letters helps generate their ultimate shapes. If large language models can easily compose the 26 letters of the English alphabet into coherent messages, why can’t they also work for proteins?
Spoiler: they do. ESMFold blasted through roughly 600 million protein structure predictions in just two weeks using 2,000 graphics processing units (GPUs). Compared to previous attempts, the AI made the process up to 60 times faster. The authors put every structure into the ESM Metagenomic Atlas, which you can explore here.
To Dr. Alfonso Valencia at the Barcelona Supercomputing Center (BSC), who was not involved in the work, the beauty of using large language models is their “conceptual simplicity.” With further development, the AI can predict “the structure of non-natural proteins, expanding the known universe beyond what evolutionary processes have explored.”
Let’s Talk Evolution
ESMFold follows a simple guideline: sequence predicts structure.
Let’s backtrack. Proteins are made from 20 amino acids—each one a “letter”—strung up like spiky beads on a string. Our cells then shape them into delicate structures: some look like rumpled bed sheets, others like a swirly candy cane or loose ribbons. The proteins can then grab onto each other to form a multiplex—for example, a tunnel crossing the brain cell membrane that controls the cell’s actions, and in turn how we think and remember.
Scientists have long known that amino acid letters help shape the final structure of a protein. Similar to letters or characters in a language, only certain ones when strung together make sense. In the case of proteins, these sequences make them functional.
“The biological properties of a protein constrain the mutations to its sequence that are selected through evolution,” the authors said.
Similar to how different letters in the alphabet converge to create words, sentences, and paragraphs without sounding like complete gibberish, the protein letters do the same. There is an “evolutionary dictionary” of sorts that helps string up amino acids into structures the body can comprehend.
“The logic of the succession of amino acids in known proteins is the result of an evolutionary process that has led them to have the specific structure with which they perform a particular function,” said Valencia.
Mr. AI, Make Me a Protein
Life’s relatively limited dictionary is great news for large language models.
These AI models scour readily available texts to learn and build up predictions of the next word. The end results, as seen in GPT-3 and ChatGPT, are strikingly natural conversations and fantastical artistic images.
Meta AI used the same concept, but rewrote the playbook for protein structure predictions. Rather than feeding the algorithm with texts, they gave the program sequences of known proteins.
The AI model—called a transformer protein language model—learned the general architecture of proteins using up to 15 billion “settings.” It saw roughly 65 million different protein sequences overall.
In their next step the team hid certain letters from the AI, prompting it to fill in the blanks. In what amounts to autocomplete, the program eventually learned how different amino acids connect to (or repel) each other. In the end, the AI formed an intuitive understanding of evolutionary protein sequences—and how they work together to make functional proteins.
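That “fill in the blanks” step is a standard masked-language-model objective. Here’s a minimal, hedged PyTorch sketch of the idea over the 20 amino acid letters—a generic illustration with made-up sizes, not Meta’s ESM-2 code:

```python
import torch
import torch.nn as nn

# Toy masked-language-model training step over protein sequences.
# Generic illustration of the objective, not ESM-2 itself.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"           # the 20 protein "letters"
PAD, MASK = 20, 21                             # special token ids
VOCAB = 22

class TinyProteinLM(nn.Module):
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        enc = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, VOCAB)      # predict the hidden letter

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def mask_some(tokens, rate=0.15):
    """Hide a random 15% of positions; the model must fill them in."""
    masked = tokens.clone()
    hidden = torch.rand(tokens.shape) < rate
    masked[hidden] = MASK
    return masked, hidden

model = TinyProteinLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

seqs = torch.randint(0, 20, (8, 50))           # stand-in protein batch
inputs, hidden = mask_some(seqs)
logits = model(inputs)
loss = loss_fn(logits[hidden], seqs[hidden])   # grade only hidden letters
loss.backward(); opt.step()
print(f"masked-LM loss: {loss.item():.3f}")
```

Everything the model learns about which letters “belong” next to each other comes from being graded only on the positions it never saw.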
Into the Unknown
As a proof of concept, the team tested ESMFold using two well-known test sets. One, CAMEO, involved nearly 200 structures; the other, CASP14, had 51 publicly released protein shapes.
Overall, the AI “provides state-of-the-art structure prediction accuracy,” the team said, “matching AlphaFold2 performance on more than half the proteins.” It also reliably tackled large protein complexes—for example, the channels on neurons that control their actions.
The team then took their AI a step further, venturing into the world of metagenomics.
Metagenomes are what they sound like: a hodgepodge of DNA material. Normally these come from environmental sources such as the dirt under your feet, seawater, or even normally inhospitable thermal vents. Most of the microbes can’t be artificially grown in labs, yet some have superpowers such as resisting volcanic-level heat, making them a biological dark matter yet to be explored.
At the time the paper was published, the AI had predicted over 600 million of these proteins. The count is now up to over 700 million with the latest release. The predictions came fast and furious in roughly two weeks. In contrast, previous modeling attempts took up to 10 minutes for just a single protein.
Roughly a third of the protein predictions were of high confidence, with enough detail to zoom into the atomic-level scale. Because the protein predictions were based solely on their sequences, millions of “aliens” popped up—structures unlike anything in established databases or those previously tested.
“It’s interesting that more than 10 percent of the predictions are for proteins that bear no resemblance to other known proteins,” said Valencia. It might be due to the magic of language models, which are far more flexible at exploring—and potentially generating—previously unheard of sequences that make up functional proteins. “This is a new space for the design of proteins with new sequences and biochemical properties with applications in biotechnology and biomedicine,” he said.
As an example, ESMFold could potentially help suss out the consequences of single-letter changes in a protein. Called point mutations, these seemingly benign edits can wreak havoc in the body, causing devastating metabolic syndromes, sickle cell anemia, and cancer. A lean, mean, and relatively simple AI puts these analyses within reach of the average biomedical research lab, while its speed scales up protein shape prediction.
Biomedicine aside, another fascinating idea is that proteins may help train large language models in a way texts can’t. As Valencia explained, “On the one hand, protein sequences are more abundant than texts, have more defined sizes, and a higher degree of variability. On the other hand, proteins have a strong internal ‘meaning’—that is, a strong relationship between sequence and structure, a meaning or coherence that is much more diffuse in texts,” bridging the two fields into a virtuous feedback loop.
Image Credit: Meta AI
Solar Panels Floating on Reservoirs Could Provide a Third of the World’s Electricity
Solar power is going to play a major role in combating climate change, but it requires huge amounts of land. Floating solar panels on top of reservoirs could provide up to a third of the world’s electricity without taking up extra space, and also save trillions of gallons of water from evaporating.
So-called “floating photovoltaic” systems have a lot going for them. The surface of reservoirs can’t be used for much else, so it’s comparatively cheap real estate, and it also frees up land for other important purposes. And because these bodies of water are designed to service major urban centers, they’re normally close to where the power will be needed, making electricity distribution simpler.
By shielding the water from the sun, floating solar panels can also significantly reduce evaporation, which can be a major concern in the hot dry climates where solar works best. And what evaporation does occur can actually help to cool the panels, which operate more efficiently at lower temperatures and therefore squeeze out extra power.
Just how promising the approach could be has remained unclear, as analyses have so far been limited to individual countries or regions. A new study in Nature Sustainability now provides a comprehensive assessment of the global potential of floating solar power, finding that it could provide between a fifth and half of the world’s electricity needs while saving 26 trillion gallons of water from evaporating.
The new research was made possible by combining several databases mapping reservoirs around the world. This allowed the researchers to identify a total of 114,555 water bodies with a total area of 556,111 square kilometers (214,716 square miles).
They then used a model developed at the US Department of Energy’s Sandia National Laboratory that can simulate solar panel performance in different climatic conditions. Finally, they used regional hydrology simulations to predict how much the solar panels would reduce evaporation based on local climate data.
In their baseline scenario, the researchers assumed that solar panels would cover only 30 percent of a reservoir’s surface, or 30 square kilometers (11.6 square miles), whichever is lower. This was done to take into account the practical difficulties of building larger arrays as well as the potential ecological impact of completely covering the body of water.
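In code, that baseline rule is just a cap, as in this hedged sketch (the per-area yield figure is a made-up placeholder; the study’s hydrology and performance models are far more detailed):

```python
# Baseline coverage rule as described: panels cover 30% of a
# reservoir's surface or 30 km^2, whichever is smaller. The energy
# figure below uses an illustrative yield, not the paper's model.

ASSUMED_YIELD_GWH_PER_KM2 = 150  # hypothetical annual yield per km^2

def panel_area_km2(reservoir_km2: float, frac=0.30, cap_km2=30.0) -> float:
    """Allowed floating-panel area under the baseline scenario."""
    return min(frac * reservoir_km2, cap_km2)

def annual_gwh(reservoir_km2: float) -> float:
    return panel_area_km2(reservoir_km2) * ASSUMED_YIELD_GWH_PER_KM2

for area in (10, 100, 1000):   # small, medium, large reservoirs
    print(f"{area:>5} km^2 reservoir -> {panel_area_km2(area):5.1f} km^2 "
          f"of panels, ~{annual_gwh(area):7.0f} GWh/yr (illustrative)")
```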
Given these limitations, the researchers calculated that the global generating potential for floating solar panels was a massive 9,434 terawatt-hours a year, which is roughly 40 percent of the 22,848 terawatt-hours the world consumes yearly, according to the International Energy Agency’s latest figures.
If the total coverage was limited to a much more reasonable 10 percent, the researchers found floating solar power could still generate as much as 4,356 terawatt-hours a year. And if the largest reservoirs were allowed to have up to 50 square kilometers (19 square miles) of panels then the total capacity rose to 11,012 terawatt-hours, almost half of global electricity needs.
The authors note that this capacity isn’t evenly distributed, and some countries stand to gain more than others. With more than 25,000 reservoirs, the US has the most to gain and could generate 1,911 terawatt-hours a year, almost half its total consumption. China, India, and Brazil could also source a significant amount of their power this way.
But most interestingly, the analysis showed that as many as 6,256 cities could theoretically meet all of their electricity demands with floating solar power. Most have a population below 50,000, but as many as 150 are cities with more than a million people.
It’s important to note that this study was simply assessing the potential of the idea. Floating solar panels have been around for some time, but they are more expensive to deploy than land-based panels, and there are significant concerns about what kind of impact blocking out sunlight could have on reservoir ecosystems.
But given the need to rapidly scale up renewable energy generation, and the scarcity of land for large solar installations, turning our reservoirs into power stations could prove to be a smart idea.
OpenAI Says GPT-4 Is Better in Nearly Every Way. What Matters More Is Millions Will Use It
In 2020, artificial intelligence company OpenAI stunned the tech world with its GPT-3 machine learning algorithm. After ingesting a broad slice of the internet, GPT-3 could generate writing that was hard to distinguish from text authored by a person, do basic math, write code, and even whip up simple web pages.
OpenAI followed up GPT-3 with more specialized algorithms that could seed new products, like an AI called Codex to help developers write code and the wildly popular (and controversial) image-generator DALL-E 2. Then late last year, the company upgraded GPT-3 and dropped a viral chatbot called ChatGPT—by far, its biggest hit yet.
Now, a rush of competitors is battling it out in the nascent generative AI space, from new startups flush with cash to venerable tech giants like Google. Billions of dollars are flowing into the industry, including a $10-billion follow-up investment by Microsoft into OpenAI.
This week, after months of rather over-the-top speculation, OpenAI’s GPT-3 sequel, GPT-4, officially launched. In a blog post, interviews, and two reports (here and here), OpenAI said GPT-4 is better than GPT-3 in nearly every way.
More Than a Passing Grade
GPT-4 is multimodal, which is a fancy way of saying it was trained on both images and text and can identify, describe, and riff on what’s in an image using natural language. OpenAI said the algorithm’s output is higher quality, more accurate, and less prone to bizarre or toxic outbursts than prior versions. It also outperformed the upgraded GPT-3 (called GPT-3.5) on a slew of standardized tests, placing among the top 10 percent of human test-takers on the bar licensing exam for lawyers and scoring either a 4 or a 5 on 13 out of 15 college-level advanced placement (AP) exams for high school students.
To show off its multimodal abilities—which have yet to be offered more widely as the company evaluates them for misuse—OpenAI president Greg Brockman sketched a schematic of a website on a pad of paper during a developer demo. He took a photo and asked GPT-4 to create a webpage from the image. In seconds, the algorithm generated and implemented code for a working website. In another example, described by The New York Times, the algorithm suggested meals based on an image of food in a refrigerator.
The company also outlined its work to reduce risk inherent in models like GPT-4. Notably, the raw algorithm was complete last August. OpenAI spent eight months working to improve the model and rein in its excesses.
Much of this work was accomplished by teams of experts poking and prodding the algorithm and giving feedback, which was then used to refine the model with reinforcement learning. The version launched this week is an improvement on the raw version from last August, but OpenAI admits it still exhibits known weaknesses of large language models, including algorithmic bias and an unreliable grasp of the facts.
By this account, GPT-4 is a big improvement technically and makes progress mitigating, but not solving, familiar risks. In contrast to prior releases, however, we’ll largely have to take OpenAI’s word for it. Citing an increasingly “competitive landscape and the safety implications of large-scale models like GPT-4,” the company opted to withhold specifics about how GPT-4 was made, including model size and architecture, computing resources used in training, what was included in its training dataset, and how it was trained.
Ilya Sutskever, chief scientist and cofounder at OpenAI, told The Verge “it took pretty much all of OpenAI working together for a very long time to produce this thing” and lots of other companies “would like to do the same thing.” He went on to suggest that as the models grow more powerful, the potential for abuse and harm makes open-sourcing them a dangerous proposition. But this is hotly debated among experts in the field, and some pointed out the decision to withhold so much runs counter to OpenAI’s stated values when it was founded as a nonprofit. (OpenAI reorganized as a capped-profit company in 2019.)
The algorithm’s full capabilities and drawbacks may not become apparent until access widens further and more people test (and stress) it out. Before reining it in, Microsoft’s Bing chatbot caused an uproar as users pushed it into bizarre, unsettling exchanges.
Overall, the technology is quite impressive—like its predecessors—but also, despite the hype, more an iteration on GPT-3 than a leap beyond it. With the exception of its new image-analyzing skills, most abilities highlighted by OpenAI are improvements and refinements of older algorithms. Not even access to GPT-4 is novel. Microsoft revealed this week that it had secretly used GPT-4 to power its Bing chatbot, which had recorded some 45 million chats as of March 8.
AI for the Masses
While GPT-4 may not be the step change some predicted, the scale of its deployment almost certainly will be.
GPT-3 was a stunning research algorithm that wowed tech geeks and made headlines; GPT-4 is a far more polished algorithm that’s about to be rolled out to millions of people in familiar settings like search bars, Word docs, and LinkedIn profiles.
In addition to its Bing chatbot, Microsoft announced plans to offer services powered by GPT-4 in LinkedIn Premium and Office 365. These will be limited rollouts at first, but as each iteration is refined in response to feedback, Microsoft could offer them to the hundreds of millions of people using its products. (Earlier this year, the free version of ChatGPT hit 100 million users faster than any app in history.)
It’s not only Microsoft layering generative AI into widely used software.
Google said this week it plans to weave generative algorithms into its own productivity software—like Gmail and Google Docs, Slides, and Sheets—and will offer developers API access to PaLM, a GPT-4 competitor, so they can build their own apps on top of it. Other models are coming too. Facebook recently gave researchers access to its open-source LLaMA model—it was later leaked online—while a Google-backed startup, Anthropic, and China’s tech giant Baidu rolled out their own chatbots, Claude and Ernie, this week.
As models like GPT-4 make their way into products, they can be updated behind the scenes at will. OpenAI and Microsoft continually tweaked ChatGPT and Bing as feedback rolled in. ChatGPT Plus users (a $20/month subscription) were granted access to GPT-4 at launch.
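For developers, that access takes the form of an API call. Below is a minimal sketch of what this looked like with the era’s openai Python package (the v0.27-style interface); the prompt and key handling are placeholders, and the same call shape keeps working when the backend model is swapped or updated behind the scenes.

```python
# Minimal sketch of calling GPT-4 via the era's openai Python package
# (v0.27-style interface). The prompt and key handling are placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # a newer model can slot in here with no other changes
    messages=[{"role": "user", "content": "Summarize this week's AI news."}],
)
print(response["choices"][0]["message"]["content"])
```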
It’s easy to imagine GPT-5 and other future models slotting into the ecosystem being built now as simply, and invisibly, as a smartphone operating system that upgrades overnight.
Then What?
If there’s anything we’ve learned in recent years, it’s that scale reveals all.
It’s hard to predict how new tech will succeed or fail until it makes contact with a broad slice of society. The next months may bring more examples of algorithms revealing new abilities and breaking or being broken, as their makers scramble to keep pace.
“Safety is not a binary thing; it is a process,” Sutskever told MIT Technology Review. “Things get complicated any time you reach a level of new capabilities. A lot of these capabilities are now quite well understood, but I’m sure that some will still be surprising.”
Longer term, when the novelty wears off, bigger questions may loom.
The industry is throwing spaghetti at the wall to see what sticks. But it’s not clear generative AI is useful—or appropriate—in every instance. Chatbots in search, for example, may not outperform older approaches until they’ve proven to be far more reliable than they are today. And the cost of running generative AI, particularly at scale, is daunting. Can companies keep expenses under control, and will users find products compelling enough to vindicate the cost?
Also, the fact that GPT-4 makes progress on but hasn’t solved the best-known weaknesses of these models should give us pause. Some prominent AI experts believe these shortcomings are inherent to the current deep learning approach and won’t be solved without fundamental breakthroughs.
Factual missteps and biased or toxic responses in a fraction of interactions are less impactful when numbers are small. But on a scale of hundreds of millions or more, even less than a percent equates to a big number.
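To put the scale argument in concrete terms (the error rate below is an invented figure for illustration):

```python
# Illustrative arithmetic only; the 0.5% error rate is an assumption.
users = 100_000_000      # ChatGPT's reported early user base
error_rate = 0.005       # "less than a percent" of interactions going wrong
print(f"{int(users * error_rate):,} problematic interactions")  # 500,000
```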
“LLMs are best used when the errors and hallucinations are not high impact,” Matthew Lodge, the CEO of Diffblue, recently told IEEE Spectrum. Indeed, companies are appending disclaimers warning users not to rely on them too much—like keeping your hands on the steering wheel of that Tesla.
It’s clear the industry is eager to keep the experiment going though. And so, hands on the wheel (one hopes), millions of people may soon begin churning out presentation slides, emails, and websites in a jiffy, as the new crop of AI sidekicks arrives in force.
Image Credit: Luke Jones / Unsplash
This Week’s Awesome Tech Stories From Around the Web (Through March 18)
You Can Now Run a GPT-3-Level AI Model on Your Laptop, Phone, and Raspberry Pi
Benj Edwards | Ars Technica
“On Friday, a software developer named Georgi Gerganov created a tool called “llama.cpp” that can run Meta’s new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly). If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.”
A Gene Therapy Cure for Sickle Cell Is on the Horizon
Emily Mullin | Wired
“[Evie] Junior…is one of dozens of sickle cell patients in the US and Europe who have received gene therapies in clinical trials—some led by universities, others by biotech companies. Two such therapies, one from Bluebird Bio and the other from CRISPR Therapeutics and Vertex Pharmaceuticals, are the closest to coming to market. The companies are now seeking regulatory approval in the US and Europe. If successful, more patients could soon benefit from these therapies, although access and affordability could limit who gets them.”
This Couple Just Got Married in the Taco Bell Metaverse
Tanya Basu | MIT Technology Review
“The chapel at the company’s Taco Bell Cantina restaurant in Las Vegas has married 800 couples so far. There were copycat virtual weddings, too. ‘Taco Bell saw fans of the brand interact in the metaverse and decided to meet them quite literally where they were,’ a spokesperson said. That meant dancing hot sauce packets, a Taco Bell–themed dance floor, a turban for Mohnot, and the famous bell branding everywhere.”
Inside the Global Race to Turn Water Into Fuel
Max Bearak | The New York Times
“A consortium of energy companies led by BP plans to cover an expanse of land eight times as large as New York City with as many as 1,743 wind turbines, each nearly as tall as the Empire State Building, along with 10 million or so solar panels and more than a thousand miles of access roads to connect them all. But none of the 26 gigawatts of energy the site expects to produce, equivalent to a third of what Australia’s grid currently requires, will go toward public use. Instead, it will be used to manufacture a novel kind of industrial fuel: green hydrogen.”
Has the 3D Printing Revolution Finally Arrived?
Tim Lewis | The Guardian
“‘What happened 10 years ago, when there was this massive hype, was there was so much nonsense being written: “You’ll print anything with these machines! It’ll take over the world!”’ says Hague. ‘But it’s now becoming a really mature technology, it’s not an emerging technology really any more. It’s widely implemented by the likes of Rolls-Royce and General Electric, and we work with AstraZeneca, GSK, a whole bunch of different people. Printing things at home was never going to happen, but it’s developed into a multibillion-dollar industry.’”
AI-Imager Midjourney v5 Stuns With Photorealistic Images—and 5-Fingered Hands
Benj Edwards | Ars Technica
“Midjourney v5 is available now as an alpha test for customers who subscribe to the Midjourney service, which is available through Discord. ‘MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long,’ said Julie Wieland, a graphic designer who often shares her Midjourney creations on Twitter. ‘Suddenly you see everything in 4k, it feels weirdly overwhelming but also amazing.’”
AI-Generated Images From Text Can’t Be Copyrighted, US Government Rules
Kris Holt | Engadget
“That’s according to the US Copyright Office (USCO), which has equated such prompts to a buyer giving directions to a commissioned artist. ‘They identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output,’ the USCO wrote in new guidance it published to the Federal Register. ‘When an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user,’ the office stated.”
GPT-4 Has the Memory of a Goldfish
Jacob Stern | The Atlantic
“By this point, the many defects of AI-based language models have been analyzed to death—their incorrigible dishonesty, their capacity for bias and bigotry, their lack of common sense. …But large language models have another shortcoming that has so far gotten relatively little attention: their shoddy recall. These multibillion-dollar programs, which require several city blocks’ worth of energy to run, may now be able to code websites, plan vacations, and draft company-wide emails in the style of William Faulkner. But they have the memory of a goldfish.”
Microsoft Lays Off an Ethical AI Team as It Doubles Down on OpenAI
Rebecca Bellan | TechCrunch
“The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream. Microsoft still maintains its Office of Responsible AI (ORA), which sets rules for responsible AI through governance and public policy work. But employees told Platformer that the ethics and society team was responsible for ensuring Microsoft’s responsible AI principles are actually reflected in the design of products that ship.”
It’s Official: No More Crispr Babies—for Now
Grace Browne | Wired
“After several days of experts chewing on the scientific, ethical, and governance issues associated with human genome editing, the [Third International Summit on Human Genome Editing’s] organizing committee put out its closing statement. Heritable human genome editing—editing embryos that are then implanted to establish a pregnancy, which can pass on their edited DNA—‘remains unacceptable at this time,’ the committee concluded. ‘Public discussions and policy debates continue and are important for resolving whether this technology should be used.’”
Image Credit: Kenan Alboshi / Unsplash
A ‘Goldilocks’ Star Reveals a Previously Hidden Step in How Water Gets to Earth
Without water, life on Earth could not exist as it does today. Understanding the history of water in the universe is critical to understanding how planets like Earth come to be.
Astronomers typically refer to the journey water takes from its formation as individual molecules in space to its resting place on the surfaces of planets as “the water trail.” The trail starts in the interstellar medium with hydrogen and oxygen gas and ends with oceans and ice caps on planets, with icy moons orbiting gas giants and icy comets and asteroids that orbit stars. The beginnings and ends of this trail are easy to see, but the middle has remained a mystery.
I am an astronomer who studies the formation of stars and planets using observations from radio and infrared telescopes. In a new paper, my colleagues and I describe the first measurements ever made of this previously hidden middle part of the water trail and what these findings mean for the water found on planets like Earth.
Star and planet formation is an intertwined process that starts with a cloud of molecules in space. Image Credit: Bill Saxton, NRAO/AUI/NSF, CC BY
How Planets Are Formed
The formation of stars and planets is intertwined. The so-called “emptiness of space”—or the interstellar medium—in fact contains large amounts of gaseous hydrogen, smaller amounts of other gases, and grains of dust. Due to gravity, some pockets of the interstellar medium will become more dense as particles attract each other and form clouds. As the density of these clouds increases, atoms begin to collide more frequently and form larger molecules, including water that forms on dust grains and coats the dust in ice.
Stars begin to form when parts of the collapsing cloud reach a certain density and heat up enough to start fusing hydrogen atoms together. Since only a small fraction of the gas initially collapses into the newborn protostar, the rest of the gas and dust forms a flattened disk of material circling around the spinning, newborn star. Astronomers call this a proto-planetary disk.
As icy dust particles collide with each other inside a proto-planetary disk, they begin to clump together. The process continues and eventually forms the familiar objects of space like asteroids, comets, rocky planets like Earth and gas giants like Jupiter or Saturn.
Two Theories for the Source of Water
There are two potential pathways that water in our solar system could have taken. The first, called chemical inheritance, is when the water molecules originally formed in the interstellar medium are delivered to proto-planetary disks and all the bodies they create without going through any changes.
The second theory is called chemical reset. In this process, the heat from the formation of the proto-planetary disk and newborn star breaks apart water molecules, which then reform once the proto-planetary disk cools.
To test these theories, astronomers like me look at the ratio between normal water and a special kind of water called semi-heavy water. Water is normally made of two hydrogen atoms and one oxygen atom. Semi-heavy water is made of one oxygen atom, one hydrogen atom and one atom of deuterium—a heavier isotope of hydrogen with an extra neutron in its nucleus.
The ratio of semi-heavy to normal water is a guiding light on the water trail—measuring the ratio can tell astronomers a lot about the source of water. Chemical models and experiments have shown that about 1,000 times more semi-heavy water will be produced in the cold interstellar medium than in the conditions of a protoplanetary disk.
This difference means that by measuring the ratio of semi-heavy to normal water in a place, astronomers can tell whether that water went through the chemical inheritance or chemical reset pathway.
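In symbols, the diagnostic works roughly like this (a sketch of the logic, not the paper’s notation):

```latex
% R is the abundance ratio of semi-heavy (HDO) to normal water.
% Cold interstellar chemistry enriches HDO roughly 1,000-fold relative
% to water re-formed in a warm disk, so R separates the two pathways.
R = \frac{[\mathrm{HDO}]}{[\mathrm{H_2O}]}, \qquad
R_{\mathrm{inherited}} \approx R_{\mathrm{ISM}}
\gg
R_{\mathrm{reset}} \approx \frac{R_{\mathrm{ISM}}}{1000}
```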
V883 Orionis is a young star system with a rare star at its center that makes measuring water in the proto-planetary cloud, shown in the cutaway, possible. Image Credit: ALMA (ESO/NAOJ/NRAO), B. Saxton (NRAO/AUI/NSF), CC BY
Measuring Water During the Formation of a Planet
Comets have a ratio of semi-heavy to normal water almost perfectly in line with chemical inheritance, meaning the water hasn’t undergone a major chemical change since it was first created in space. Earth’s ratio sits somewhere in between the inheritance and reset ratio, making it unclear where the water came from.
To truly determine where the water on planets comes from, astronomers needed to find a goldilocks proto-planetary disk—one that is just the right temperature and size to allow observations of water. Doing so has proved to be incredibly difficult. It is possible to detect semi-heavy and normal water when water is a gas; unfortunately for astronomers, the vast majority of proto-planetary disks are very cold and contain mostly ice, and it is nearly impossible to measure water ratios from ice at interstellar distances.
A breakthrough came in 2016, when my colleagues and I were studying proto-planetary disks around a rare type of young star called FU Orionis stars. Most young stars consume matter from the proto-planetary disks around them. FU Orionis stars are unique because they consume matter about 100 times faster than typical young stars and, as a result, emit hundreds of times more energy. Due to this higher energy output, the proto-planetary disks around FU Orionis stars are heated to much higher temperatures, turning ice into water vapor out to large distances from the star.
Using the Atacama Large Millimeter/submillimeter Array, a powerful radio telescope in northern Chile, we discovered a large, warm proto-planetary disk around the sunlike young star V883 Ori, about 1,300 light years from Earth in the constellation Orion.
V883 Ori emits 200 times more energy than the sun, and my colleagues and I recognized that it was an ideal candidate to observe the semi-heavy to normal water ratio.
The proto-planetary disk around V883 Ori contains gaseous water, shown in the orange layer, allowing astronomers to measure the ratio of semi-heavy to normal water. Image Credit: ALMA (ESO/NAOJ/NRAO), J. Tobin, B. Saxton (NRAO/AUI/NSF), CC BY
Completing the Water Trail
In 2021, the Atacama Large Millimeter/submillimeter Array took measurements of V883 Ori for six hours. The data revealed a strong signature of semi-heavy and normal water coming from V883 Ori’s proto-planetary disk. We measured the ratio of semi-heavy to normal water and found that the ratio was very similar to ratios found in comets as well as the ratios found in younger protostar systems.
These results fill in the gap in the water trail, forging a direct link between water in the interstellar medium, protostars, proto-planetary disks, and planets like Earth through the process of inheritance, not chemical reset.
The new results show definitively that a substantial portion of the water on Earth most likely formed billions of years ago, before the sun had even ignited. Confirming this missing piece of water’s path through the universe offers clues to origins of water on Earth. Scientists have previously suggested that most water on Earth came from comets impacting the planet. The fact that Earth has less semi-heavy water than comets and V883 Ori, but more than chemical reset theory would produce, means that water on Earth likely came from more than one source.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: A. Angelich (NRAO/AUI/NSF)/ALMA (ESO/NAOJ/NRAO), CC BY
Could Brain-Computer Interfaces Lead to ‘Mind Control for Good’?
Of all the advanced technologies currently under development, one of the most fascinating and frightening is brain-computer interfaces. They’re fascinating because we still have so much to learn about the human brain, yet scientists are already able to tap into certain parts of it. And they’re frightening because of the sinister possibilities that come with being able to influence, read, or hijack people’s thoughts.
But the worst-case scenarios that have been played out in science fiction are just one side of the coin, and brain-computer interfaces could also be a tremendous boon to humanity—if we create, manage, and regulate them correctly. In a panel discussion at South by Southwest this week, four experts in the neuroscience and computing field discussed how to do this.
Panelists included Ben Hersh, a staff interaction designer at Google; Anna Wexler, an assistant professor of medical ethics and health policy at the University of Pennsylvania; Afshin Mehin, the founder of Card79, a creative studio that helps companies give form to the future; and Jacob Robinson, an associate professor in electrical and computer engineering at Rice University and co-founder of Motif Neurotech, a company creating minimally invasive electronic therapies for mental health.
“This is a field that has a lot of potential for good, and there’s a lot that we don’t know yet,” Hersh said. “It’s also an area that has a lot of expectations that we’ve absorbed from science fiction.” In his opinion, “mind control for good” is not only a possibility, it’s an imperative.
The Mysterious Brain
Of all the organs in our bodies, the brain is by far the most complex—and the one we know the least about. “Two people can perceive the same stimuli and have a very different subjective experience, and there are no real rules to help us understand what translates your experience of the world into your subjective reality,” Robinson said.
But, he added, if we zoom in on the fundamental aspect of what’s happening in our brains, it is governed by physical processes. Could it be possible to control aspects of the brain and our subjective experiences with the level of precision we have in fields like physics and engineering?
“Part of why we’ve struggled with treating mental health conditions is that we don’t have a fundamental understanding of what leads to these disorders,” Robinson said. “But we know that they are network-level problems…we’re beginning to interface with the networks that are underlying these types of conditions, and help to restore them.”
BCIs Today
Elon Musk’s Neuralink has brought BCIs into the public eye more than they’ve ever been before, but there’s been a consumer neurotechnology market since the mid-2000s. Electroencephalography (EEG) uses electrodes placed on the head to record basic measures of brain wave activity. Consumer brain stimulation devices are marketed for cognitive enhancement, such as improving focus, memory, or attention.
More advanced neural interfaces are being used as assistive technology for people with conditions like ALS or paralysis, helping them communicate or move in ways they otherwise wouldn’t be able to: translating thoughts into text, speech, or movement. One brain implant succeeded in alleviating treatment-resistant depression via small, targeted doses of electrical stimulation.
“Some of the things that are coming up are actually kind of extraordinary,” Hersh said. “People are working on therapies where electronics are implanted in the brain and can help deal with illnesses beyond the reach of modern medicine.”
Dystopian Possibilities
This sounds pretty great, so what could go wrong? Well, unfortunately, lots. The idea of someone tapping into your brain and being able to control it is terrifying, and we’re not just talking dramatic scenarios like The Matrix; what if you had a brain implant for a medical purpose, but someone was able to subtly influence your choices around products or services you purchase? What if a record of your emotional state was released to someone you didn’t want to have it, or your private thoughts were made public? (I know what you’re thinking: ‘Wait—isn’t that what Twitter’s for?’)
Even tools with a positive intent could have unwanted impacts. Mehin’s company created a series of video vignettes imagining what BCI tech could do in day-to-day life. “The scenarios we imagined were spread between horrifying—imagine having an AI chatbot living inside your head—to actually useful, like being able to share how you’re feeling with a friend so they can help you sort through a difficult time.”
He shared that upon showing the videos at a design conference where there were students in the audience, a teacher spoke up and said, “This is horrible, kids will never be able to communicate with each other.” But then a student got up and said “We already can’t communicate with each other, this would actually be really useful.”
Would you want to live in a world where we need brain implants to communicate our emotions to one another? Where you wouldn’t sit and have coffee with a friend to talk about your career stress or marital strife, you’d just let them tap straight into your thoughts?
No thanks.
BCI Utopia
A brain-computer interface utopia sounds like an oxymoron; the real utopia would be one where we’re healthy, productive, and happy without the need for invasive technology tapping into the networks that dictate our every thought, feeling, and action.
But the reality is that the state of mental health in the US is far from ideal. Millions of people suffer from conditions like PTSD, ADHD, anxiety, and depression, and pharmaceutical companies haven’t come up with a great cure for any of them. Pills like Adderall, Xanax, or Prozac come with unwanted side effects, and for some people they don’t work at all.
“One in ten people in the US suffer from a mental health disorder that’s not effectively treated by their drugs,” said Robinson. “Our hope is that BCIs could offer a 20-minute outpatient procedure that would provide therapeutic benefit for conditions like treatment-resistant depression, PTSD, or ADHD, and could last the rest of your life.”
He envisions a future where everyone has the ability to communicate rapidly and seamlessly, regardless of any disability, and where BCIs actually let us get back some of the humanity that has been stolen by social media and smartphones. “Maybe BCIs could help us rebalance the neural circuits we need to have control over our focus and our mood,” he said. “We would feel better, do better, and everyone could communicate.”
In the near term, the technology will continue to advance most in medical applications. Robinson believes we should keep moving BCIs forward despite the risks, because they can help people.
“There’s a risk that people see that vision of the dystopian future and decide to stop building these things because something bad could happen,” he said. “My hope is that we don’t do that. We should figure out how to go forward responsibly, because there’s a moral obligation to the people who need these things.”
Image Credit: Gerd Altmann from Pixabay
This Startup Says Nuclear Power Could Be Our Most Effective Climate Solution
From 1850 to 2019, human activity released 2.4 trillion tons of CO2 into the atmosphere. In 2022 alone, we released 37 billion more tons. While renewable energy is making a difference, it’s small: last year it offset a mere 230 million tons of emissions—less than one percent of the global total.
Energy demand is expected to triple by 2050. Amid calls for emissions reductions and net-zero targets, we need a reality check: how are we going to reverse climate change if energy is in everything we do, and energy itself contributes to the problem?
We need solutions that will help us pull trillions of tons of carbon from the air without adding more in the process—a tool far more powerful than solar panels or wind turbines. This tool already exists, and it’s nuclear power.
In a talk at South by Southwest this week, Bret Kugelmass, founder and CEO of Last Energy, explained how nuclear power has been misunderstood and devalued for decades, and the price we’ve paid as a result. “Infinitely abundant, carbon-free, always on, and incredibly energy-dense, nuclear energy could meet and exceed our energy needs,” he said.
Instead, this powerful technology has stagnated for decades, leaving us scrambling for other forms of energy that won’t keep pumping CO2 into the atmosphere. Kugelmass left a career in Silicon Valley with the sole purpose of finding a keystone technology to combat climate change. He visited 15 countries and all kinds of facilities to learn about nuclear power and compare it to other forms of energy. His conclusion was that if it’s done right, nuclear can enable continued growth—and a cleaner planet—in a way that no other power source can.
How Did We Get Here?
So why did a power source with so much potential stagnate? In 1963, then-President John F. Kennedy said nuclear power would account for half of all US energy production by the end of that decade. His administration drew up plans for rapid development of nuclear power production, and he had the Atomic Energy Commission conduct a study on the role civilian nuclear power could play in the US economy.
According to Kugelmass, the effort stalled not because of public perception or safety fears, but due to economic malfeasance. Rather than focusing on standardization, “We pursued ever-larger, ever more complex construction projects…from 1968 to 1970, we saw a 10-fold increase in the cost to build gigawatt-scale plants,” he said. Most of the cost of nuclear energy, he added, is in the interest accrued during the construction process. “It accounts for 60 percent of the delivered cost of energy,” he said.
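A toy calculation shows why build time is so punishing: interest compounds on capital tied up for the whole construction period. The capital and rate below are invented for illustration, not industry figures.

```python
# Toy model: all capital is committed up front and accrues interest
# until the plant delivers power. Both numbers are hypothetical.
capital = 10e9   # dollars committed (hypothetical)
rate = 0.08      # annual cost of capital (hypothetical)

for years in (2, 5, 10, 15):
    financed = capital * (1 + rate) ** years
    share = 1 - capital / financed
    print(f"{years:>2}-year build: interest is {share:.0%} of delivered cost")
```

On these made-up numbers, a build that drags past a decade pushes the interest share toward the 60 percent Kugelmass cites.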
The result, unsurprisingly, was that nuclear simply became too expensive to compete with other power sources. The US is now close to completing its first new nuclear project in decades—and at 10 years late and $20 billion over budget, it’s still not done.
If we had built out nuclear in a viable way starting in the 1960s, we’d live in a very different world today: less pollution, less panic about carbon emissions, more energy security, cheaper end prices for consumers. Is it too late to turn things around? “There is nothing broken with the nuclear technology we have today,” Kugelmass said. “What’s broken is the business model, and the delivery model. What nuclear needs to scale isn’t novel: productize, modularize, and mass-manufacture.”
Bringing Nuclear Back
Kugelmass founded a non-profit research organization called the Energy Impact Center (EIC), which in 2020 launched the OPEN100 project to provide open-source blueprints for the design, construction, and financing of a 100-megawatt nuclear reactor. EIC’s for-profit spinoff is Last Energy, which aims to connect private investors with opportunities to develop new nuclear projects around the world.
Rather than experimenting with newer technology, Last Energy’s sticking with tried-and-true pressurized water reactors (the kind used over the last several decades), but bringing their costs down by making the technology modular and standardized. They’re taking a play from the oil and gas industry, which can build entire power plants in a factory then deploy them to their final location.
“There’s a whole avenue of innovation related to constructability, rather than your underlying technology,” Kugelmass said. “If you deviate too much from the standard supply chain you’re going to see hidden costs everywhere.” He estimated, for example, that building a pump to move the salt for molten salt reactors, which use molten salt as a coolant instead of pressurized water, requires a billion dollars in research and development costs.
Building standardized small modular reactors, though, can be done for less than $1,000 per kilowatt. Making nuclear power affordable would mean it could be used for energy-intensive industrial applications that will become increasingly necessary in coming years, like water desalination and carbon removal.
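For scale, at that price the 100-megawatt OPEN100 design mentioned above would pencil out to roughly $100 million; the snippet below is simple arithmetic, not a quoted project budget.

```python
# Simple arithmetic on the "$1,000 per kilowatt" target; not a budget.
cost_per_kw = 1_000
plant_kw = 100 * 1_000                 # a 100-megawatt reactor in kilowatts
print(f"${cost_per_kw * plant_kw:,}")  # $100,000,000
```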
Time for a Revival?
Energy underlies everything we do, and it’s essential for modern societies to grow and thrive. It enables human well-being, entrepreneurship, geopolitical independence, security, and opportunity. Given our current geopolitical situation and the unsustainable energy costs in Europe, could now be the time for a nuclear revival?
Kugelmass is hopeful. “Every 10 to 15 years the industry thinks it’s going to have a renaissance, but then it falls flat,” he said. “Now global macro issues have granted nuclear the opportunity to have another shot.”
In fact, Last Energy is looking to launch in Europe, where the need for affordable energy is dire. The company has signed deals in Romania, Poland, and the UK, and its first set of reactors is slated to come online in the next two years. Kugelmass noted that negotiating with utilities and governments in these countries is far more straightforward than in the US. “Maybe we’ll come to US someday, but we could be selling hundreds of gigawatts in Europe before that happens,” he said.
There may be hope for the US yet: in 2020 the Department of Energy launched its Advanced Reactor Demonstration Program, investing $230 million in research and development for small modular reactors.
Kugelmass is focused on making a solid product, no matter where it ends up being used. “We are an American company and we build the reactors here in Texas,” he said. “What previously took decades to build and cost billions is now a scalable product that can be pre-fabricated and deployed in under two years.”
Image Credit: Albrecht Fietz / Pixabay
The First Complete Brain Map of an Insect May Reveal Secrets for Better AI
Breakthroughs don’t often happen in neuroscience, but we just had one. In a tour-de-force, an international team released the full brain connectivity map of the young fruit fly, described in a paper published last week in Science. Containing 3,016 neurons and 548,000 synapses, the map—called a connectome—is the most complex whole-brain wiring diagram to date.
“It’s a ‘wow,’” said Dr. Shinya Yamamoto at Baylor College of Medicine, who was not involved in the work.
Why care about a fruit fly? Far from an uninvited guest at the dinner table, Drosophila melanogaster is a neuroscience darling. Although its brain is smaller than a poppy seed—a far cry from the 100 billion neurons that power human brains—the fly’s nervous system shares many of the principles that underlie our own.
This makes flies excellent models for homing in on how our neural circuits wire up to encode memories, make difficult decisions, or navigate social situations like flirting with a potential partner or hanging out with a swarm of new friends.
As lead author Dr. Marta Zlatic, of the University of Cambridge, the MRC Laboratory of Molecular Biology, and Janelia Research Campus, put it: “All brains are similar—they are all networks of interconnected neurons—and all brains of all species have to perform many complex behaviors: they all need to process sensory information, learn, select actions, navigate their environments, choose food, escape from predators, etc.”
With the new connectome map, “we now have a reference brain,” she said.
A Behemoth Atlas
Connectomes are precious resources. Popularized by Sebastian Seung, the maps draw out neural connections within and across brain regions. Similar to tracing computer wires to reverse-engineer how different chips and processors fit together, the connectome is a valuable resource to crack the brain’s “neural code”—that is, the algorithms underlying its computations.
In other words, the connectome is essential to understanding the brain’s functions. It’s why similar work is underway in mice and humans, though at a much smaller scale or with far less detail.
Until now, scientists had mapped only three full-brain connectomes, all in worms—including the first animal to gain the honor, the nematode C. elegans. Though the worm has just over 300 neurons, the project took over a decade, with an update released for both sexes in 2019.
Drosophila represents a far larger challenge, with roughly ten times the number of neurons as C. elegans. But it’s also an ideal next candidate. For one, scientists have already sequenced its entire genome, making it possible to match genetic information to the fly’s neural wiring. This could especially come in handy for, say, deciphering how genes that contribute to Alzheimer’s disease alter neural circuits. For another, fruit fly larvae have transparent bodies, making them far easier to image under a microscope.
Not all brain-wiring maps are created equal. Here, the team went for the highest resolution: mapping the whole brain at the synapse level. Synapses are junctions between neurons where they connect: picture two mushroom-shaped structures hovering near each other with a gap. Although neurons are often touted as the basic component of computing, synapses are where the magic happens—their connectivity helps functionally wire up neural circuits.
Neuron connectivity in the brain. Each dot represents a neuron, and those with more similar connectivity are closer. The lines show how neurons connect. Image Credit: Benjamin Pedigo
Slice and Dice and…Robots?
To map out synapses, the team turned to the big guns of microscopy: the electron microscope. Compared to microscopes in high school biology, this hardware can capture images at the nanoscale—roughly a tenth the width of a human hair.
The whole process sounds a bit like a wild dinner recipe. The team first soaked a single six-hour-old larva’s brain in a solution packed with heavy metals, which marinated into the neurons’ membranes and the proteins inside synapses. The brain was then painstakingly sliced into ultra-thin sections with a diamond blade—imagine a deli-meat slicer—and put under the microscope.
The resulting images—all 21 million of them—were stitched together using software. The whole process took over a year and a half, with many hours spent manually checking the reconstructed neurons and synapses.
The final brain map didn’t just contain the location of neurons and their synapses—it also highlighted wiring quirks that could support highly efficient neural computations.
Winding Roads
The beauty of the new map is that it provides bird’s-eye information on brain connectivity, supercharged with the power of zoom-and-enhance.
“The most challenging aspect of this work was understanding and interpreting what we saw,” said Zlatic.
In one analysis, the team found that neurons can be grouped into 93 different types based on their connectivity, even if they share the same physical structure. It’s a drastic departure from the most common way of categorizing neurons. Rather than clustering them based on appearance or function, it may be more useful to focus on their connectivity “social network” instead.
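As a rough illustration of the idea, one could cluster neurons by the similarity of their connection profiles rather than by their shapes. The sketch below runs an off-the-shelf clustering method on a random toy network; it is not the paper’s actual pipeline, and every number in it is made up.

```python
# Toy sketch: group neurons by who they connect to, not how they look.
# The random adjacency matrix and cluster count are invented for demo.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n = 300                                    # toy network, not all 3,016 neurons
adj = (rng.random((n, n)) < 0.05).astype(float)

# A neuron's "social network": its outgoing (row) and incoming (column) links.
profiles = np.hstack([adj, adj.T])

labels = SpectralClustering(
    n_clusters=10, affinity="nearest_neighbors", random_state=0
).fit_predict(profiles)

print(np.bincount(labels))                 # neurons per connectivity-based group
```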
Digging down to synapses, the team ran into another surprise. Let me explain: neurons have two main types of branches. One is a long output cable—the axon—and the other is a tree-shaped input structure—the dendrite. Neurons usually “wire up” through synapses connecting one cell’s axon to another’s dendrite.
More recent studies, however, show that synapses on axons can connect with other synapses on axons; the same goes for dendrites. Analyzing the reconstructed brain, the team found evidence of these non-traditional connections.
“Now we need to reconsider them: we probably need to think about creating a new computational model of the nervous system,” said Dr. Chung-Chuang Lo at the National Tsing Hua University in Taiwan.
On a broader scale, the map showed that neurons are eager to chat with others half a world away. Almost 93 percent of neurons connected with a partner neuron in the other brain hemisphere, suggesting that long-range connections are incredibly common. Even more surprising was a peculiar population that didn’t reach out: dubbed Kenyon cells, these neurons mostly populate the fly’s learning and memory center. Why this happens is still unclear, but it illustrates the brain map’s ability to generate new insights and hypotheses.
Although the neurons and synapses are wired in a nicely compact “nested” multilayered structure, the connectome showed that some developed connections that jumped through layers—a shortcut that hooks up otherwise separate circuits.
Even more fascinating was how much the brain “talks” to itself. Nearly 41 percent of neurons received recurrent input—that is, feedback from other parts of the brain. Each region had its own feedback program. For example, information generally flows from sensory areas of the brain to motor regions, although the reverse also happens and creates a feedback loop.
But perhaps the most socially adept neurons are those that pump out dopamine. Well known for encoding reward and driving learning, these neurons also had some of the most complex recurrent wirings compared to other types.
From shortcuts to recurrent wirings, these biological hardware structures could increase the brain’s computational capacity and compensate for the limited number of neurons and their biological restraints.
“None of us expected this at all,” said study author Dr. Michael Winding.
From Fly to AI
The study isn’t the first to map the Drosophila brain. Previously, a team led by Dr. Davi Bock at the Janelia Research Campus mapped, with synapse-level detail, a small nub of the adult fruit fly brain responsible for learning and remembering smells. Zlatic’s team has also tracked a decision-making sensory circuit in fruit fly larvae by mapping only 138 neurons.
The full-brain connectome is a game-changer. For one, scientists now have a sophisticated reference brain to test out theories for neural computation. For another, the connectome map and its inferred computation resembles state-of-the-art machine learning.
“That’s really quite nice because we know that recurrent neural networks are pretty powerful in artificial intelligence,” said Zlatic. “By comparing this biological system, we can potentially also inspire better artificial networks.”
Image Credit: Michael Winding
Life on a Reforested Planet: How the World Will Look if We Plant a Trillion Trees
Many stories about the future are formed by imagining worst-case scenarios, then extracting lessons from them about what we should try to avoid. Much of the best science fiction takes this angle, and it makes for good reading (or watching or listening). But there can be as much value—if not more—in the opposite approach; what if we imagine a world where our efforts to fix today’s biggest problems have paid off, and both humanity and the planet are flourishing? Then we can take steps towards making that vision a reality.
In a discussion at South by Southwest this week titled Life on a Reforested Planet, the panelists took such a future retrospective point of view. What, they asked, will the world look like decades from now if we succeed in cleaning up the environment, bringing carbon emissions down, and restoring degraded forests? What opportunities are there around these scenarios? And how will we get there?
The discussion was led by Yee Lee, the VP of growth at a company called Terraformation whose mission is to accelerate natural carbon capture by resolving bottlenecks to forest restoration. Lee spoke with Jad Daley, president and CEO of American Forests, the oldest national nonprofit conservation organization in the US; Clara Rowe, CEO of a global network of restoration and conservation sites called Restor; and Josh Parrish, VP of carbon origination at Pachama, which uses remote sensing and AI to protect and restore natural carbon sinks.
There are about three trillion trees on Earth today. That’s more trees than there are stars in the Milky Way, but it’s only about half as many as there were at the dawn of human civilization. Scientists have estimated we can bring back one trillion trees on degraded lands we aren’t using for agriculture. If those trillion trees were to be planted all together, they’d cover the entire continental US—but every continent except Antarctica has reforestable lands. Furthermore, if we restore one trillion trees, they’d be able to sequester around 30 percent of the carbon we’ve put into the atmosphere since the industrial revolution.
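A quick back-of-envelope check ties those figures together, borrowing the 2.4 trillion tons of cumulative CO2 cited in the nuclear story above; the per-tree number it implies is an inference for illustration, not a figure from the panel.

```python
# Back-of-envelope: what "30 percent of emissions since the industrial
# revolution" implies per tree. The per-tree result is an inference.
cumulative_co2_tons = 2.4e12   # emitted 1850-2019, cited earlier in this issue
share_sequestered = 0.30       # fraction a trillion restored trees could absorb
trees = 1e12

print(share_sequestered * cumulative_co2_tons / trees)  # ~0.72 tons CO2 per tree
```

Roughly three-quarters of a ton of CO2 per tree over its lifetime is broadly consistent with common estimates of what a mature tree can absorb over decades, which is what makes the claim plausible.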
Planting a trillion trees is obviously no small task. It requires the right kind of seeds, well-trained forestry professionals, collaboration with local and national governments, and multiple levels of in-depth research and planning—not to mention a lot of time, space, and hard work. In outlining what the world will look like if we make it happen, the panelists highlighted current challenges that would be resolved as well as opportunities we’d encounter along the way. Here are a few of the changes we’ll see in our lives and the environment if we can make this vision a reality.
Nature Equity
We think of nature and trees as having blanket benefits across society: they’re beautiful, they clean the air, they provide shade and habitats for wildlife. But the unfortunate reality we’re living in has an unequal distribution of access to nature across populations. “Tree equity isn’t about trees, it’s about people,” Daley said. “In neighborhoods with a lot of trees, people are healthier—including mental health benefits—and there’s less crime. People relate to each other differently.” This isn’t because trees cause prosperity, but because prosperous communities are more likely to invest in landscaping and tree cover, and to have the funds to do so.
The opposite side of the coin shows the drawbacks that non-green areas experience, all of which are only slated to worsen in coming years. “Today in America, extreme heat kills more than 12,000 people per year,” Daley said. Research projects that number could rise to 110,000 people per year by the end of this century, with the hardest-hit being those who don’t have air conditioning, don’t have good healthcare—and don’t have trees in their neighborhoods.
“Trees have incredible cooling power and every neighborhood needs that, but especially places where people are already most at risk,” Daley said. He pointed out that tree distribution maps are often also maps of income and race, with the lowest-income neighborhoods having 40 percent less tree coverage than the wealthiest neighborhoods.
In a future where we’ve succeeded in planting a trillion trees, cities will have equitable tree cover. There are already steps in this direction: the US Congress invested $1.5 billion in tree cover for cities as part of the Inflation Reduction Act.
Incentives Align With the Needs of the Natural World
Capitalism likely won’t be replaced by another economic system anytime soon, but non-financial incentives will take on a larger role in influencing business and consumer decisions, and regulators will likely step in and change financial incentives too. Carbon credits are one early example of this (though there’s a lot of debate about their effectiveness), as are the subsidies around electric vehicles and solar and wind energy.
Could we implement similar subsidies or other means of incentive around reforestation? Some countries have already done so. Costa Rica, Rowe said, has been paying farmers to conserve and restore forests on their land for decades, making Costa Rica the first tropical country to reverse deforestation. “People are getting paid to do something that’s good for the Earth, and it has changed the relationship that a lot of the country has to nature,” she said. “So then it’s not just about the money; because we’ve created an economy that allows us to benefit from nature, we can love nature in a different way.”
A Shift in Consumerist Culture
Manufacturing—of everything from cars to cell phones to clothing—not only uses energy and creates emissions, it creates a lot of waste. When the newest iPhone comes out, millions of people tuck their old phone into the back of a drawer and go out and buy the new one, even though the old one still works perfectly. We give old clothes to Goodwill (or throw them away) and buy new ones long before the old clothes are unwearable or out of style. We trade in our 10-year-old cars for the new model, even though the car has 10 more years of drivability in it.
Having the newest things is a status symbol and a way to introduce some occasional novelty into our lives and routines. But what if we flipped that on its head, reversing what’s “cool” and high-status to align with the needs of the environment? What if we bragged about having an old car or phone or bike, and thereby not having contributed to the continuous manufacture and disposal of still-useful goods?
A shift to conscious consumerism has already begun, with people paying attention to the business practices of companies they buy from and seeking out brands that are more Earth-friendly. But this movement will need to grow far beyond its current state and include a much broader chunk of the population to really make a difference.
Rowe believes that in the not-too-distant future, products will have labeling with information about their supply chain and their impact on the local environment. “There are ways to weave forests into the daily fabric of our lives, and one of those is understanding what we consume,” she said. “Think about the cereal you had for breakfast. In 2050 the label will have information about the species of trees restored in the place where the wheat is grown, and the tons of carbon that were sequestered by the regenerative agriculture in this area.”
She envisions us gaining a completely new perspective on what we’re a part of and how we’re having impact. “We’re touching nature in every part of our lives, but we aren’t empowered to know it,” she added. “We don’t have the tools to take the action that we really want to take. In 2050, when we’ve reforested our planet, the way we have impact will be visible.”
Job Growth in Forestry and Related Industries
Planting a trillion trees—and making sure they’re healthy and growing—will require a massive mobilization of funds and people, and will spur creation of all sorts of jobs. Not to mention, reforestation will enable new industries to sprout where before there could be none. One example Lee gave: if you restore a mangrove forest, a shrimping industry can then be built there. “When we’re fostering a new forestry team, the lightbulb moment isn’t just about forests and trees,” he said. “There’s a whole economic livelihood that’s created. The blocker is often, how do we skill new communities and train them to have an entrepreneurial mindset?”
Parrish envisions the creation of “superhighways for nature,” an undertaking that would entail significant job creation in itself. “As the climate changes, as we get warmer, nature needs the ability to adapt and migrate and move around,” he said. “We need to create a network of connections with forests that provide for that and have a diverse ecological framework.” This would apply not only to primary forests, he said, but to suburban and even urban green spaces too.
Daley mentioned that his organization is seeing job creation on the front end of the reforestation pipeline, with one example being people who are employed to collect the seeds that’ll be used to plant trees. “We partner with the state of California and an organization called the Cone Corps,” he said. “People collect cones to collect seeds they’ll use to reforest the burned acres in California.”
A Reforested World
Will these visions become reality? We’re a long way from it right now, but planting a trillion trees isn’t impossible. In Daley’s opinion, the two variables that will most help the cause are innovation and mobilization, and both awareness and buy-in around reforestation are steadily growing. As more people feel empowered to take part, they’ll also find new ways to make a difference. “Hope comes from agency,” Daley said. To engage with a problem, “you need to feel like you can do something about it.”
Image Credit: Chris Lawton / Unsplash
The Limits of Computing: Why Even in the Age of AI, Some Problems Are Just Too Difficult
Empowered by artificial intelligence technologies, computers today can engage in convincing conversations with people, compose songs, paint paintings, play chess and Go, and diagnose diseases, to name just a few examples of their technological prowess.
These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.
There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms—basically sets of instructions—are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.
These hurdles include problems that are impossible for computers to solve and problems that are theoretically solvable but in practice beyond the capabilities of even the most powerful versions of today’s computers imaginable. Mathematicians and computer scientists attempt to determine whether a problem is solvable by trying it out on an imaginary machine.
An Imaginary Computing Machine
The modern notion of an algorithm, known as a Turing machine, was formulated in 1936 by British mathematician Alan Turing. It’s an imaginary device that imitates how arithmetic calculations are carried out with a pencil on paper. The Turing machine is the template all computers today are based on.
To accommodate computations that would need more paper if done manually, the supply of imaginary paper in a Turing machine is assumed to be unlimited. This is equivalent to an imaginary limitless ribbon, or “tape,” of squares, each of which is either blank or contains one symbol.
The machine is controlled by a finite set of rules and starts on an initial sequence of symbols on the tape. The operations the machine can carry out are moving to a neighboring square, erasing a symbol, and writing a symbol on a blank square. The machine computes by carrying out a sequence of these operations. When the machine finishes, or “halts,” the symbols remaining on the tape are the output or result.
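The device is simple enough to simulate in a few lines. Below is a minimal sketch following that description; the rule table is an invented toy (it flips the bits of its input, then halts at the first blank square), and the encoding is illustrative rather than standard.

```python
# A minimal Turing machine sketch following the description above.
# The rule table is an invented toy; the encoding is illustrative.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); move is -1 or +1."""
    cells = defaultdict(lambda: " ", enumerate(tape))  # unlimited blank tape
    for _ in range(max_steps):
        if state == "halt":
            break
        new_symbol, move, state = rules[(state, cells[head])]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).rstrip()

# Toy rules: walk right, flipping 0s and 1s, and halt at the first blank.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", " "): (" ", +1, "halt"),
}
print(run_turing_machine(rules, "0110"))  # -> 1001
```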
Computing is often about decisions with yes or no answers. By analogy, a medical test (the type of problem) checks if a patient’s specimen (an instance of the problem) has a certain disease indicator (the yes or no answer). The instance, represented in a Turing machine in digital form, is the initial sequence of symbols.
A problem is considered “solvable” if a Turing machine can be designed that halts for every instance whether positive or negative and correctly determines which answer the instance yields.
Not Every Problem Can Be Solved
Many problems are solvable using a Turing machine and therefore can be solved on a computer, while many others are not. For example, the domino problem, a variation of the tiling problem formulated by Chinese American mathematician Hao Wang in 1961, is not solvable.
The task is to use a set of dominoes to cover an entire grid while, following the rules of most dominoes games, matching the number of pips on the ends of abutting dominoes. It turns out there is no algorithm that can start with a set of dominoes and determine whether or not the set will completely cover the grid.
Keeping It Reasonable
A number of solvable problems can be solved by algorithms that halt in a reasonable amount of time. These “polynomial-time algorithms” are efficient algorithms, meaning it’s practical to use computers to solve instances of them.
Thousands of other solvable problems are not known to have polynomial-time algorithms, despite ongoing intensive efforts to find such algorithms. These include the traveling salesman problem.
The traveling salesman problem asks whether a set of points with some points directly connected, called a graph, has a path that starts from any point, goes through every other point exactly once, and comes back to the original point. (Strictly speaking, this yes-or-no version asks only whether a round trip exists; the full problem also asks for the shortest one.) Imagine a salesman who wants to find a route that passes all households in a neighborhood exactly once and returns to the starting point.
These problems, called NP-complete, were independently formulated and shown to exist in the early 1970s by two computer scientists, American-Canadian Stephen Cook and Ukrainian-American Leonid Levin. Cook, whose work came first, was awarded the 1982 Turing Award, the highest honor in computer science, for this work.
The Cost of Knowing Exactly
The best-known algorithms for NP-complete problems are essentially searching for a solution from all possible answers. The traveling salesman problem on a graph of a few hundred points would take years to run on a supercomputer. Such algorithms are inefficient, meaning there are no mathematical shortcuts.
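Here is what “searching for a solution from all possible answers” looks like for the round-trip question above: a brute-force sketch on an invented four-point graph. The loop runs (n-1)! times, which is why a few hundred points is hopeless.

```python
# Brute force on the round-trip question: try every ordering of the
# points. The four-point graph here is invented for the demo.
from itertools import permutations

def has_round_trip(points, edges):
    """True if some tour visits every point exactly once and returns home."""
    start, *rest = points
    for order in permutations(rest):              # (n-1)! candidate tours
        tour = (start, *order, start)
        if all((a, b) in edges or (b, a) in edges
               for a, b in zip(tour, tour[1:])):
            return True
    return False

points = ["A", "B", "C", "D"]
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
print(has_round_trip(points, edges))  # True, but the search grows as (n-1)!
```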
Practical algorithms that address these problems in the real world can only offer approximations, though the approximations are improving. Whether there are efficient polynomial-time algorithms that can solve NP-complete problems is among the seven millennium open problems posted by the Clay Mathematics Institute at the turn of the 21st century, each carrying a prize of a million dollars.
Beyond Turing
Could there be a new form of computation beyond Turing’s framework? In 1982, American physicist Richard Feynman, a Nobel laureate, put forward the idea of computation based on quantum mechanics.
In 1994, Peter Shor, an American applied mathematician, presented a quantum algorithm to factor integers in polynomial time—something mathematicians believe is impossible for polynomial-time algorithms in Turing’s framework. Factoring an integer means finding a smaller integer greater than one that can divide it. For example, the integer 688,826,081 is divisible by the smaller integer 25,253, because 688,826,081 = 25,253 x 27,277.
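The worked example checks out with simple trial division, the most naive classical approach; its cost blows up with the number of digits, which is exactly the gap a quantum factoring algorithm would close.

```python
# Verify the article's example by trial division. Fast for this n, but
# the search scales exponentially with the number of digits.
def smallest_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

n = 688_826_081
p = smallest_factor(n)
print(p, n // p)  # 25253 27277
```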
The RSA algorithm, widely used in securing network communications, is based on the computational difficulty of factoring large integers. Shor’s result suggests that quantum computing, should it become a reality, will change the landscape of cybersecurity.
Can a full-fledged quantum computer be built to factor integers and solve other problems? Some scientists believe it can be. Several groups of scientists around the world are working to build one, and some have already built small-scale quantum computers.
Nevertheless, as with every novel technology invented before it, quantum computation is almost certain to run into issues that impose new limits of its own.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Laura Ockel / Unsplash
This Week’s Awesome Tech Stories From Around the Web (Through March 11)
D-ID’s New Web App Gives a Face and Voice to OpenAI’s ChatGPT
Aisha Malik | TechCrunch
“When you open up the web app on a desktop or mobile device, you’ll be greeted by an avatar named ‘Alice.’ You can then choose to either type out a question or click the microphone icon to say your query out loud. D-ID notes that Alice can answer almost anything. You can ask Alice to simulate a job interview or even host your family’s trivia night. …In a few weeks, the web app will let users generate a character, such as Dumbledore from Harry Potter, and talk to them.”
Two Oddball Ideas for a Megaqubit Quantum Computer
Samuel K. Moore | IEEE Spectrum
“Experts say quantum computers might need at least a million qubits kept at near absolute zero to do anything computationally noteworthy. But connecting them all by coaxial cable to control and readout electronics, which work at room temperature, would be impossible. Computing giants such as IBM, Google, and Intel hope to solve that problem with cryogenic silicon chips that can operate close to the qubits themselves. But researchers have recently put forward some more exotic solutions that could quicken the pace.”
Room-Temperature Superconductor Discovery Meets With Resistance
Charlie Wood and Zack Savitsky | Quanta
“The results, published [this week] in Nature, appear to show that a conventional conductor—a solid composed of hydrogen, nitrogen and the rare-earth metal lutetium—was transformed into a flawless material capable of conducting electricity with perfect efficiency. While the announcement has been greeted with enthusiasm by some scientists, others are far more cautious, pointing to the research group’s controversial history of alleged research malfeasance.”
Sam Altman Invested $180 Million Into a Company Trying to Delay Death
Antonio Regalado | MIT Technology Review
“[Altman] says he’s emptied his bank account to fund two other very different but equally ambitious goals: limitless energy and extended life span. One of those bets is on the fusion power startup Helion Energy, into which he’s poured more than $375 million, he told CNBC in 2021. The other is Retro, to which Altman cut checks totaling $180 million the same year. ‘It’s a lot. I basically just took all my liquid net worth and put it into these two companies,’ Altman says.”
Meta’s Powerful AI Language Model Has Leaked Online—What Happens Now?
James Vincent | The Verge
“Meta did not release LLaMA as a public chatbot (though the Facebook owner is building those too) but as an open-source package that anyone in the AI community can request access to. …However, just one week after Meta started fielding requests to access LLaMA, the model was leaked online. On March 3rd, a downloadable torrent of the system was posted on 4chan and has since spread across various AI communities, sparking debate about the proper way to share cutting-edge research in a time of rapid technological change.”
Forget Designer Babies. Here’s How CRISPR Is Really Changing Lives
Antonio Regalado | MIT Technology Review
“…there are now more than 50 experimental studies underway that use gene editing in human volunteers to treat everything from cancer to HIV and blood diseases, according to a tally shared with MIT Technology Review by David Liu, a gene-editing specialist at Harvard University. Most of these studies—about 40 of them—involve CRISPR, the most versatile of the gene-editing methods, which was developed only 10 years ago.”
Could the Next Blockbuster Drug Be Lab-Rat Free?
Emily Anthes | The New York Times
“…momentum is building for non-animal approaches, which could ultimately help speed drug development, improve patient outcomes and reduce the burdens borne by lab animals, experts said. ‘Animals are simply a surrogate for predicting what’s going to happen in a human,’ said Nicole Kleinstreuer, director of the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods. ‘If we can get to a place where we actually have a fully human-relevant model,’ she added, ‘then we don’t need the black box of animals anymore.’”
This Geothermal Startup Showed Its Wells Can Be Used Like a Giant Underground Battery
James Temple | MIT Technology Review
“The results from the initial experiments…suggest Fervo can create flexible geothermal power plants, capable of ramping electricity output up or down as needed. Potentially more important, the system can store up energy for hours or even days and deliver it back over similar periods, effectively acting as a giant and very long-lasting battery. That means the plants could shut down production when solar and wind farms are cranking, and provide a rich stream of clean electricity when those sources flag.”
Detection Stays One Step Ahead of Deepfakes—For Now
Matthew Hutson | IEEE Spectrum
“…as computer scientists devise better methods for algorithmically generating video, audio, images, and text—typically for more constructive uses such as enabling artists to manifest their visions—they’re also creating counter-algorithms to detect such synthetic content. Recent research shows progress in making detection more robust, sometimes by looking beyond subtle signatures of particular generation tools and instead utilizing underlying physical and biological signals that are hard for AI to imitate.”
GPT-4 Might Just Be a Bloated, Pointless Mess
Jacob Stern | The Atlantic
“Will endless ‘scaling’ of our current language models really bring true machine intelligence? ...the scaling debate is representative of the broader AI discourse. It feels as though the vocal extremes have drowned out the majority. Either ChatGPT will completely reshape our world or it’s a glorified toaster. The boosters hawk their 100-proof hype, the detractors answer with leaden pessimism, and the rest of us sit quietly somewhere in the middle, trying to make sense of this strange new world.”
Image Credit: Laura Skinner / Unsplash
An AI Learned to Play Atari 6,000 Times Faster by Reading the Instructions
Despite impressive progress, today’s AI models are very inefficient learners, taking huge amounts of time and data to solve problems humans pick up almost instantaneously. A new approach could drastically speed things up by getting AI to read instruction manuals before attempting a challenge.
One of the most promising approaches to creating AI that can solve a diverse range of problems is reinforcement learning, which involves setting a goal and rewarding the AI for taking actions that work towards that goal. This is the approach behind most of the major breakthroughs in game-playing AI, such as DeepMind’s AlphaGo.
As powerful as the technique is, it essentially relies on trial and error to find an effective strategy. This means these algorithms can spend the equivalent of several years blundering through video and board games until they hit on a winning formula.
Thanks to the power of modern computers, this can be done in a fraction of the time it would take a human. But this poor “sample-efficiency” means researchers need access to large numbers of expensive specialized AI chips, which restricts who can work on these problems. It also seriously limits the application of reinforcement learning to real-world situations where doing millions of run-throughs simply isn’t feasible.
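To make that trial-and-error loop concrete, here is a minimal tabular Q-learning sketch in Python (a toy of ours, not the system described below): the agent starts out blundering randomly along a tiny track, and the lone reward at the goal gradually shapes its behavior.

```python
import random

# Toy setup: a six-cell track; the only reward is reaching the rightmost cell.
N_STATES, ACTIONS = 6, (-1, +1)           # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3         # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: mostly exploit current estimates, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy heads straight for the goal:
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
# -> [1, 1, 1, 1, 1]
```

Even on this six-cell toy, the agent wastes thousands of steps before the reward signal propagates back; scale that up to a video game with millions of states and the sample-efficiency problem becomes obvious.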
Now a team from Carnegie Mellon University has found a way to help reinforcement learning algorithms learn much faster by combining them with a language model that can read instruction manuals. Their approach, outlined in a pre-print published on arXiv, taught an AI to play a challenging Atari video game thousands of times faster than a state-of-the-art model developed by DeepMind.
“Our work is the first to demonstrate the possibility of a fully-automated reinforcement learning framework to benefit from an instruction manual for a widely studied game,” said Yue Wu, who led the research. “We have been conducting experiments on other more complicated games like Minecraft, and have seen promising results. We believe our approach should apply to more complex problems.”
Atari video games have been a popular benchmark for studying reinforcement learning thanks to the controlled environment and the fact that the games have a scoring system, which can act as a reward for the algorithms. To give their AI a head start, though, the researchers wanted to give it some extra pointers.
First, they trained a language model to extract and summarize key information from the game’s official instruction manual. This information was then used to pose questions about the game to a pre-trained language model similar in size and capability to GPT-3. For instance, in the game Pac-Man this might be, “Should you hit a ghost if you want to win the game?”, for which the answer is no.
These answers are then used to create additional rewards, beyond the game’s built-in scoring system, that are fed into a well-established reinforcement learning algorithm to help it learn the game faster. In the Pac-Man example, hitting a ghost would now incur a penalty of -5 points.
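In code, the idea looks roughly like this (a hedged sketch with illustrative names and numbers, not the authors’ implementation): the manual-derived answers become an auxiliary reward table whose entries are simply added to the game’s own score changes.

```python
# Auxiliary rewards distilled from the manual by the QA model; the event
# name and the -5 penalty mirror the Pac-Man example above (illustrative).
MANUAL_REWARDS = {"hit_ghost": -5.0}

def shaped_reward(score_delta, events):
    """Total reward = built-in score change + manual-derived bonuses."""
    return score_delta + sum(MANUAL_REWARDS.get(e, 0.0) for e in events)

# The agent ate pellets worth 10 points but also touched a ghost:
print(shaped_reward(10.0, ["hit_ghost"]))  # 5.0
```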
The researchers tested their approach on Skiing 6000, one of the hardest Atari games for AI to master. The 2D game requires players to slalom down a hill, navigating between poles and avoiding obstacles. That might sound easy enough, but the leading AI had to run through 80 billion frames of the game to achieve performance comparable to a human’s.
In contrast, the new approach required just 13 million frames to get the hang of the game, although its score was only about half as good as the leading technique’s. That means it’s not as good as even the average human, but it did considerably better than several other leading reinforcement learning approaches that couldn’t learn the game at all, including the well-established algorithm the new AI relies on.
The researchers say they have already begun testing their approach on more complex 3D games like Minecraft, with promising early results. But reinforcement learning has long struggled to make the leap from video games, where the computer has access to a complete model of the world, to the messy uncertainty of physical reality.
Wu says he is hopeful that rapidly improving capabilities in object detection and localization could soon put applications like autonomous driving or household automation within reach. Either way, the results suggest that rapid improvements in AI language models could act as a catalyst for progress elsewhere in the field.
Image Credit: Kreg Steppe / Flickr