Latest News

Computer Science

IEEE Spectrum

AI Is Being Built on Dated, Flawed Motion-Capture Data

Fri, 2024-03-01 20:30


Diversity of thought in industrial design is crucial: If no one thinks to design a technology for multiple body types, people can get hurt. The invention of seatbelts is an oft-cited example of this phenomenon, as they were designed based on crash dummies that had traditionally male proportions, reflecting the bodies of the team members working on them.

The same phenomenon is now at work in the field of motion-capture technology. Throughout history, scientists have endeavored to understand how the human body moves. But how do we define the human body? Decades ago, many studies assessed “healthy male” subjects; others used surprising models like dismembered cadavers. Even now, some modern studies used in the design of fall-detection technology rely on methods like hiring stunt actors who pretend to fall.

Over time, a variety of flawed assumptions have become codified into standards for motion-capture data that’s being used to design some AI-based technologies. These flaws mean that AI-based applications may not be as safe for people who don’t fit a preconceived “typical” body type, according to new work recently published as a preprint and set to be presented at the Conference on Human Factors in Computing Systems in May.

“We dug into these so-called gold standards being used for all kinds of studies and designs, and many of them had errors or were focused on a very particular type of body,” says Abigail Jacobs, co-author of the study and an assistant professor at the University of Michigan’s School of Information and Center for the Study of Complex Systems. “We want engineers to be aware of how these social aspects become coded into the technical—hidden in mathematical models that seem objective or infrastructural.”

It’s an important moment for AI-based systems, Jacobs says, as we may still have time to catch and avoid potentially dangerous assumptions from being codified into applications informed by AI.

Motion-capture systems create representations of bodies by collecting data from sensors placed on the subjects, logging how these bodies move through space. These schematics become part of the tools that researchers use, such as open-source libraries of movement data and measurement systems that are meant to provide baseline standards for how human bodies move. Developers are increasingly using these baselines to build all manner of AI-based applications: fall-detection algorithms for smartwatches and other wearables, self-driving vehicles that need to detect pedestrians, computer-generated imagery for movies and video games, manufacturing equipment that interacts safely with human workers, and more.

“Many researchers don’t have access to advanced motion-capture labs to collect data, so we’re increasingly relying on benchmarks and standards to build new tech,” Jacobs says. “But when these benchmarks don’t include representations of all bodies, especially those people who are likely to be involved in real-world use cases—like elderly people who may fall—these standards can be quite flawed.”

She hopes we can learn from past mistakes, such as cameras that didn’t accurately capture all skin tones and seatbelts and airbags that didn’t protect people of all shapes and sizes in car crashes.

The Cadaver in the Machine

Jacobs and her collaborators from Cornell University, Intel, and University of Virginia performed a systematic literature review of 278 motion-capture-related studies. In most cases, they concluded, motion-capture systems captured the motion of “those who are male, white, ‘able-bodied,’ and of unremarkable weight.”

And sometimes these white male bodies were dead. In reviewing works dating back to the 1930s and running through three historical eras of motion-capture science, the researchers studied projects that were influential in how scientists of the time understood the movement of body segments. A seminal 1955 study funded by the Air Force, for example, used overwhelmingly white, male, and slender or athletic bodies to create the optimal cockpit based on pilots’ range of motion. That study also gathered data from eight dismembered cadavers.

A full 20 years later, a study prepared for the National Highway Traffic Safety Administration used similar methods: Six dismembered male cadavers were used to inform the design of impact protection systems in vehicles.

In most of the 278 studies reviewed, motion-capture systems captured the motion of “those who are male, white, ‘able-bodied,’ and of unremarkable weight.”

Although those studies are many decades old, these assumptions became baked-in over time. Jacobs and her colleagues found many examples of these outdated inferences being passed down to later studies and ultimately still influencing modern motion-capture studies.

“If you look at technical documents of a modern system in production, they’ll explain the ‘traditional baseline standards’ they’re using,” Jacobs says. “By digging through that, you quickly start hopping through time: OK, that’s based on this prior study, which is based on this one, which is based on this one, and eventually we’re back to the Air Force study designing cockpits with frozen cadavers.”

The components that underpin technological best practices are “manmade—intentional emphasis on man, rather than human—often preserving biases and inaccuracies from the past,” says Kasia Chmielinski, project lead of the Data Nutrition Project and a fellow at Stanford University’s Digital Civil Society Lab. “Thus historical errors often inform the ‘neutral’ basis of our present-day technological systems. This can lead to software and hardware that does not work equally for all populations, experiences, or purposes.”

These problems may hinder engineers who want to make things right, Chmielinski says. “Since many of these issues are baked into the foundational elements of the system, teams innovating today may not have quick recourse to address bias or error, even if they want to,” she says. “If you’re building an application that uses third party sensors, and the sensors themselves have a bias in what they detect or do not detect, what is the appropriate recourse?”

Jacobs says that engineers must interrogate their sources of “ground truth” and confirm that the gold standards they measure against are, in fact, gold. Technicians must consider these social evaluations to be part of their jobs in order to design technologies for all.

“If you go in saying, ‘I know that human assumptions get built in and are often hidden or obscured,’ that will inform how you choose what’s in your dataset and how you report it in your work,” Jacobs says. “It’s socio-technical, and technologists need that lens to be able to say: My system does what I say it does, and it doesn’t create undue harm.”

Self-Destructing Circuits and More Security Schemes

Wed, 2024-02-28 18:46


Last week at the International Solid-State Circuits Conference (ISSCC), researchers introduced several technologies to fight even the sneakiest hack attacks. Engineers invented a way to detect a hacker placing a probe on the circuit board to attempt to read digital traffic in a computer. Other researchers invented new ways to obfuscate electromagnetic emissions radiating from an active processor that might reveal its secrets. Still other groups created new ways for chips to generate their own unique digital fingerprints, ensuring their authenticity. And if even those are compromised, one team came up with a chip-fingerprint self-destruct scheme.

A Probe-Attack Alarm

Some of the most difficult attacks to defend against occur when a hacker has physical access to a system’s circuit board and can put a probe at various points. A probe attack in the right place can not only steal critical information and monitor traffic, it can take over the whole system.

“It can be a starting point of some dangerous attacks,” Mao Li, a student in Mingoo Seok’s lab at Columbia University, told engineers at ISSCC.

The Columbia team, which included Intel director of circuit technology research Vivek De, invented a circuit that’s attached to the printed-circuit-board traces that link a processor to its memory. Called PACTOR, the circuit periodically scans for the tell-tale sign of a probe being touched to the interconnect—a change in capacitance that can be as small as 0.5 picofarads. If it picks up that signal, it engages what Li called a protection engine, logic that can guard against the attack by, for example, instructing the processor to encrypt its data traffic.

Triggering defenses rather than having those defenses constantly engaged could have benefits for a computer’s performance, Li contended. “In comparison to… always-on protection, the detection-driven protection incurs less delay and less energy overhead,” he said.

The initial circuit was sensitive to temperature, something a skilled attacker could exploit. At high temperatures, the circuit would raise false alarms, and below room temperature, it would miss real attacks. The team solved this by adding a temperature-sensing circuit that sets a different threshold for the probe-sensing circuit depending on which side of room temperature the system is on.
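In software terms, the detection-driven approach boils down to a cheap periodic check that gates a more expensive defense. The sketch below is a toy model of that logic, not the PACTOR hardware itself; the baseline capacitance and the exact thresholds are invented for illustration, and only the 0.5-picofarad sensitivity figure comes from the work described above.

```python
# Toy model of detection-driven probe protection (illustrative only; the real
# PACTOR circuit does this in analog hardware on the PCB traces).
BASELINE_CAPACITANCE_PF = 12.0   # assumed capacitance of an unprobed trace
ROOM_TEMP_C = 25.0

def probe_threshold_pf(temp_c: float) -> float:
    # A different threshold on either side of room temperature, mirroring the
    # temperature-compensation fix described above (values are made up).
    return 0.7 if temp_c > ROOM_TEMP_C else 0.5

def probe_detected(measured_pf: float, temp_c: float) -> bool:
    return (measured_pf - BASELINE_CAPACITANCE_PF) >= probe_threshold_pf(temp_c)

def periodic_scan(measured_pf: float, temp_c: float) -> None:
    if probe_detected(measured_pf, temp_c):
        print("Probe detected -> engage protection engine (e.g., encrypt bus traffic)")
    else:
        print("No probe -> run without the always-on protection overhead")

periodic_scan(12.6, temp_c=30.0)  # a 0.6 pF jump at elevated temperature trips the alarm
periodic_scan(12.1, temp_c=20.0)  # a 0.1 pF change stays below threshold
```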

Electromagnetic Assault

“Security-critical circuit modules may leak sensitive information through side-channels such as power and [electromagnetic] emission. And attackers may exploit these side-channels to gain access to sensitive information,” said Sirish Oruganti, a doctoral student at the University of Texas at Austin.

For example, hackers aware of the timing of a key computation, SMA, in the AES encryption process can glean secrets from a chip. Oruganti and colleagues at UT Austin and at Intel came up with a new way to counter that theft by obscuring those signals.

One innovation was to take SMA and break it into four parallel steps. Then the timing of each substep was shifted slightly, blurring the side-channel signals. Another was to insert what Oruganti called tunable replica circuits. These are designed to mimic the observable side-channel signal of the SMAs. The tunable replica circuits operate for a realistic but random amount of time, obscuring the real signal from any eavesdropping attackers.
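A rough software analogy of that idea (purely illustrative; the real countermeasure is built into the chip) is to give each of the four substeps a small random start offset and to add decoy activity of random duration, so that averaged power or electromagnetic traces no longer line up with the genuine computation:

```python
import random

# Toy illustration of timing-based side-channel obfuscation.
def activity_trace(num_slots: int = 20) -> list[int]:
    trace = [0] * num_slots
    for _ in range(4):                       # four parallel substeps
        start = 5 + random.randint(0, 3)     # slightly shifted start time
        trace[start] += 1
    decoy_start = random.randint(0, num_slots - 5)
    for t in range(decoy_start, decoy_start + random.randint(2, 4)):
        trace[t] += 1                        # "tunable replica" decoy activity
    return trace

# Each run produces a differently shaped trace, so averaging many of them
# blurs out where the sensitive operation actually happened.
print(activity_trace())
print(activity_trace())
```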

Using an electromagnetic scanner fine enough to discern signals from different parts of an IC, the Texas team, which included Intel engineers, was unable to crack the key in their test chip, even after 40 million attempts. It generally took only about 500 tries to grab the key from an unprotected version of the chip.

This Circuit Will Self-Destruct in…

Physically unclonable functions, or PUFs, exploit tiny differences in the electronic characteristics of individual transistors on a chip to create a unique code that can act like a digital fingerprint for each chip. A University of Vermont team led by Eric Hunt-Schroeder and involving Marvell Technology took their PUF a step further. If it’s somehow compromised, this PUF can actually destroy itself. It’s extra-thorough at it, too; the system uses not one but two methods of circuit suicide.

Both stem from pumping up the voltage in the lines connecting to the encryption key’s bit-generating circuits. One effect is to boost the current in the circuit’s longest interconnects. That leads to electromigration, a phenomenon where current in very narrow interconnects literally blows metal atoms out of place, leading to voids and open circuits.

The second method relies on the increased voltage’s effect on a transistor’s gate dielectric, a tiny piece of insulation crucial to the ability to turn transistors on and off. In the advanced chipmaking technology Hunt-Schroeder’s team uses, transistors are built to operate at less than 1 volt, but the self-destruct method subjects them to 2.5 V. Essentially, this accelerates an aging effect called time-dependent dielectric breakdown, which results in short circuits across the gate dielectric that kill the device.

Hunt-Schroeder was motivated to make these key-murdering circuits by reports that researchers had been able to clone SRAM-based PUFs using a scanning electron microscope, he said. Such a self-destruct system could also prevent counterfeit chips from entering the market, Hunt-Schroeder said. “When you’re done with a part, it’s destroyed in a way that renders it useless.”

Science Fiction Short: Hijack

Sat, 2024-02-24 21:30




Computers have grown more and more powerful over the decades by pushing the limits of how small their electronics can get. But just how big can a computer get? Could we turn a planet into a computer, and if so, what would we do with it?

In considering such questions, we go beyond normal technological projections and into the realm of outright speculation. So IEEE Spectrum is making one of its occasional forays into science fiction, with a short story by Karl Schroeder about the unexpected outcomes from building a computer out of planet Mercury. Because we’re going much farther into the future than a typical Spectrum article does, we’ve contextualized and annotated Schroeder’s story to show how it’s still grounded in real science and technology. This isn’t the first work of fiction to consider such possibilities. In “The Hitchhiker’s Guide to the Galaxy,” Douglas Adams famously imagined a world constructed to serve as a processor.

Real-world scientists are also intrigued by the idea. Jason Wright, director of the Penn State Extraterrestrial Intelligence Center, has given serious thought to how large a computer can get. A planet-scale computer, he notes, might feature in the search for extraterrestrial intelligence. “In SETI, we try to look for generic things any civilization might do, and computation feels pretty generic,” Wright says. “If that’s true, then someone’s got the biggest computer, and it’s interesting to think about how big it could be, and what limits they might hit.”

There are, of course, physical constraints on very large computers. For instance, a planet-scale computer probably could not consist of a solid ball like Earth. “It would just get too hot,” Wright says. Any computation generates waste heat. Today’s microchips and data centers “face huge problems with heat management.”

In addition, if too much of a planet-scale computer’s mass is concentrated in one place, “it could implode under its own weight,” says Anders Sandberg, a senior research fellow at the University of Oxford’s Future of Humanity Institute. “There are materials stronger than steel, but molecular bonds have a limit.”

Instead, creating a computer from a planet will likely involve spreading out a world’s worth of mass. This strategy would also make it easier to harvest solar energy. Rather than building a single object that would be subject to all kinds of mechanical stresses, it would be better to break the computer up into a globular flotilla of nodes, known as a Dyson swarm.

What uses might a planet-scale computer have? Hosting virtual realities for uploaded minds is one possibility, Sandberg notes. Quantum simulation of ecosystems is another, says Seth Lloyd, a quantum physicist at MIT.


Which brings us to our story…

Simon Okoro settled into a lawn chair in the Heaven runtime and watched as worlds were born.

“I suppose I should feel honored you chose to watch this with me,” said Martin as he sat down next to Simon. “Considering that you don’t believe I exist.”

“Can’t we just share a moment? It’s been years since we did anything together. And you worked toward this moment too. You deserve some recognition.”

A

Uploading is a hypothetical process in which brain scanning can help create emulations of human minds in computers. A large enough computer could potentially house a civilization. These uploads could then go on to live in computer-simulated virtual realities.


B

Chris Philpot

A typical satellite must orbit around a celestial object at a speed above a critical value to avoid being pulled into the surface of the object by gravity. A statite, a hypothetical form of satellite patented by physicist Robert L. Forward, uses a solar sail to help it hover above a star or planet, using radiation pressure from sunlight to balance the force of gravity.
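The balancing act works at any distance because sunlight intensity and solar gravity both fall off as the inverse square of distance, so the required sail loading (mass per unit area) is a constant of the sun alone. A back-of-the-envelope check, assuming a perfectly absorbing sail and standard solar values:

```python
from math import pi

# Statite force balance: L*A / (4*pi*r^2*c) = G*M*m / r^2, so r cancels and
# the maximum areal density is m/A = L / (4*pi*G*M*c).
L = 3.83e26     # solar luminosity, W
M = 1.99e30     # solar mass, kg
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

areal_density = L / (4 * pi * G * M * c)      # kg per square meter
print(f"{areal_density * 1000:.2f} g/m^2")    # about 0.8 g/m^2
# A perfectly reflective sail feels twice the force, so it can carry ~1.5 g/m^2.
```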


“Ah. They sent you to acknowledge the Uploaded, is that it?” Martin turned his long, sad-eyed face to the sky and the drama playing out above. A The Heaven runtime was a fully virtual world, so Simon had converted the sky into a vast screen on which to project what was happening in the real world. The magnified surface of the sun made a curving arc from horizon to horizon. Jets and coronas rippled over it, and high, high above its incandescent surface hung thousands of solar statites shaped like mirrored flowers B.


They did not orbit, instead floating over a particular spot by light pressure alone. They formed a diffuse cloud, dwindling to invisibility before reaching the horizon. This telescope view showed the closest statite cores scattering fiery specks like spores into the overwhelming light. The specks blazed with light and shot away from the sun, accelerating.

This moment was the pinnacle of Simon’s career, the apex of his life’s work. Each of those specks was a solar sail C, kilometers wide, carrying a terraforming package D. Launched so close to the sun and supplemented with lasers powered by the statites, they would be traveling at 20 percent light speed by the time they left the solar system. At their destinations, they’d sundive and then deliver terraforming seeds to lifeless planets around the nearest stars.

C

Chris Philpot

Light has no mass, but it can exert pressure as photons exchange momentum with a surface as they reflect off it. A mirror that is thin and reflective enough can therefore serve as a solar sail, harnessing sunlight to generate thrust. In 2010, Japan’s Ikaros probe to Venus demonstrated the use of a solar sail for interplanetary travel for the first time. Because solar pressure is measured in micronewtons per square meter, solar sails must have large areas relative to their payloads, although the pressure from sunlight can be augmented with a laser beam for propulsion.
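To put a number on that pressure, a perfect mirror facing the sun at Earth's distance feels roughly 9 micronewtons per square meter:

```python
# Radiation pressure on a perfectly reflective surface: P = 2 * I / c.
solar_flux_1au = 1361.0    # W/m^2, the solar constant at 1 astronomical unit
c = 2.998e8                # speed of light, m/s

pressure = 2 * solar_flux_1au / c
print(f"{pressure * 1e6:.1f} micronewtons per square meter")   # ~9.1 uN/m^2
```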


D

Terraforming is the hypothetical act of transforming a planet so as to resemble Earth, or at least make it suitable for life. Some terraforming proposals involve first seeding the planet with single-celled organisms that alter conditions to be more hospitable to multicellular life. This process would mimic the naturally occurring transformation of Earth that started about 2.3 billion years ago, when photosynthetic cyanobacteria created the oxygen-rich atmosphere we breathe today.


“So life takes hold in the galaxy,” said Simon. These were the first words of a speech he’d written and rehearsed long ago. He’d dreamed of saying them on a podium, with Martin standing with him. But Martin...well, Martin had been dead for 20 years now.

He remembered the rest of the speech, but there was no point in giving it when he was absolutely alone.

Martin sighed. “So this is all you’re going to do with my Heaven? A little gardening? And then what? An orderly shutdown of the Heaven runtime? Sell off the Paradise processor as scrap?”


“I knew this was a bad idea.” Simon raised his hand to exit the virtual world, but Martin quickly stood, looking sorry.

“It’s just hard,” Martin said. “Paradise was supposed to be the great project to unite humanity. Our triumph over death! Why did you let them hijack it for this?”

Simon watched the spores catch the light and flash away into interstellar space. “You know we won’t shut you down. Heaven will be kept running as long as Paradise exists. We built it together, Martin, and I’m proud of what we did.”

E

In a 2013 study, Sandberg and his colleague Stuart Armstrong suggested deploying automated self-replicating robots on Mercury to build a Dyson swarm. These robots would dismantle the planet to construct not only more of themselves but also the sunlight collectors making up the swarm. The more solar plants these robots built, the more energy they would have to mine Mercury and produce machines. Given this feedback loop, Sandberg and Armstrong argued, these robots could disassemble Mercury in a matter of decades. The solar plants making up this Dyson swarm could double as computers.
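What makes "a matter of decades" plausible is the arithmetic of repeated doubling. The seed mass and doubling time below are arbitrary placeholders, not figures from the Sandberg and Armstrong study, but they show how few doublings separate a modest starting factory from a Mercury's worth of machinery:

```python
from math import ceil, log2

mercury_mass_kg = 3.3e23      # approximate mass of Mercury
seed_factory_kg = 1_000.0     # assumed initial self-replicating machinery
doubling_time_days = 30       # assumed doubling time

doublings = ceil(log2(mercury_mass_kg / seed_factory_kg))
years = doublings * doubling_time_days / 365
print(f"{doublings} doublings, about {years:.1f} years of exponential growth")
# Roughly 69 doublings -- under six years in this toy model.
```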


F

Solar power is far more abundant at Mercury’s orbit than at Earth’s. At its orbital distance of 1 astronomical unit from the sun, Earth receives about 1.4 kilowatts per square meter from sunlight. Mercury receives between 6.2 and 14.4 kW/m2. The range reflects Mercury’s high eccentricity—that is, it has the most elliptical orbit of all the planets in the solar system.
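Those bounds follow from the inverse-square law applied to Mercury's perihelion (about 0.31 astronomical units) and aphelion (about 0.47 AU):

```python
# Solar flux scales as 1/d^2, with d in astronomical units.
flux_at_earth_kw = 1.361   # kW/m^2 at 1 AU

for label, d_au in (("perihelion", 0.307), ("aphelion", 0.467)):
    print(f"Mercury at {label}: {flux_at_earth_kw / d_au**2:.1f} kW/m^2")
# perihelion: ~14.4 kW/m^2, aphelion: ~6.2 kW/m^2 -- matching the range above
```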


G

Whereas classical computers switch transistors on and off to symbolize data as either 1s or 0s, quantum computers use quantum bits, or qubits, which can exist in a state where they are both 1 and 0 at the same time. This essentially lets each qubit perform two calculations at once. As more qubits are added to a quantum computer, its computational power grows exponentially.
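One way to feel that growth: merely storing the state of n qubits on a classical machine takes 2ⁿ complex amplitudes, so the memory needed doubles with every added qubit (16 bytes per double-precision complex amplitude assumed):

```python
# Classical memory needed to hold a full n-qubit state vector.
for n in (10, 30, 50):
    bytes_needed = (2 ** n) * 16          # 16 bytes per complex amplitude
    print(f"{n} qubits -> {bytes_needed:.3e} bytes")
# 10 qubits: ~16 kB; 30 qubits: ~17 GB; 50 qubits: ~18 petabytes
```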


The effort had been mind-bogglingly huge. They’d been able to do it only because millions of people believed that in dismantling Mercury E and turning it into a sun-powered F quantum computer G there would be enough computing power for every living person to upload their consciousness into it. The goal had been to achieve eternal life in a virtual afterlife: the Heaven runtime.

Simon knit his hands together, lowering his eyes to the virtual garden. “Science happened, Martin. How were we to know Enactivism H would answer the ‘hard problem’ of consciousness? You and I had barely even heard of extended consciousness when we proposed Heaven. It was an old idea from cognitive science. Nobody was even studying it anymore except a few AIs, and we were sucking up all the resources they might have used to experiment.” He glanced ruefully at Martin. “We were all blindsided when they proved it. Consciousness can’t be just abstracted from a brain.”

Martin’s response was quick; this was an old argument between them. “Nothing’s ever completely proven in science! There’s always room for doubt—but you agreed with those AIs when they said that simulated consciousness can’t have subjective experiences. Conveniently after I died but before I got rebooted here. I wasn’t here to fight you.”

Martin snorted. “And now you think I’m a zimboe I: a mindless simulation of the old Martin so accurate that I act exactly how he would if you told him he wasn’t self-aware. I deny it! Of course I do, like everyone else from that first wave of uploads.” He gestured, and throughout the simulated mountain valley, thousands of other human figures were briefly highlighted. “But what did it matter what I said, once I was in here? You’d already repurposed Paradise from humanity’s chance at immortality to just a simulator, using it to mimic billions of years of evolution on alien planets. All for this ridiculous scheme to plant ready-made, complete biospheres on them in advance of human colonization.” J

H

Enactivism was first mooted in the 1990s. In a nutshell, it explains the mind as emerging from a brain’s dynamic interactions with the larger world. Thus, there can be no such thing as a purely abstract consciousness, completely distinct from the world it is embedded in.


I

A “philosophical zombie” is a putative entity that behaves externally exactly like a being with consciousness but with no self-awareness, no “I”: It is a pure automaton, even though it might itself say otherwise.


J

Chris Philpot

Living organisms are tremendously complex systems. This diagram shows just the core metabolic pathways for an organism known as JCVI-SYN3A. Each red dot represents a different biomolecule, and the arrows indicate the directions in which chemical reactions can proceed.

JCVI-SYN3A is a synthetic life-form, a cell genetically engineered to have the simplest possible biology. Yet even its metabolism is difficult to simulate accurately with current computational resources. When Nobel laureate Richard Feynman first proposed the idea of quantum computers, he envisioned them modeling quantum systems such as molecules. One could imagine that a powerful enough quantum computer could go on to model cells, organisms, and ecosystems, Lloyd says.


“We’d already played God with the inner solar system,” Simon reminded him. “The only way we could justify that after the Enactivism results was to find an even higher purpose than you and I started out with.

“Martin, I’m sorry you died before we discovered the truth. I fought to keep this subsystem running our original Heaven sim, because you’re right—there’s always a chance that the Enactivists are wrong. However slim.”

Martin snorted again. “I appreciate that. But things got very, very weird during your Enactivist rebellion. If I didn’t know better, I’d call this project”—he nodded at the sky—“the weirdest thing of all. Things are about to heat up now, though, aren’t they?”

“This was a mistake.” Simon sighed and flipped out of the virtual world. Let the simulated Martin rage in his artificial heaven; the science was unequivocal. In truth, Simon had been speaking only to himself for the entire conversation.

He stood now in the real world near the podium in a giant stadium, inside a wheel-shaped habitat 200 kilometers across. Hundreds of similar mini-ringworlds were spaced around the rim of Paradise.



Paradise itself was a vast bowl-shaped object, more cloud than material, orbiting closer to the sun than Mercury had. Self-reproducing machines had eaten that planet in a matter of decades, transforming its usable elements into a solar-powered quantum computer tens of thousands of kilometers across. The bowl cupped a spherical cloud of iron that acted as a radiator for the waste heat emitted by Paradise’s quadrillions of computing modules. K

K

One design for planetary scale—and up!—computers is a Matrioshka brain. Proposed in 1997 by Robert Bradbury, it would consist of nested structures, like its namesake Russian doll. The outer layers would use the waste heat of the inner layers to power their computations, with the aim of making use of every bit of energy for processing. However, in a 2023 study, Wright suggests that this nested design may be unnecessary. “If you have multiple layers, shadows from the inner elements of the swarm, as well as collisions, could decrease efficiency,” he says. “The optimal design is likely the smallest possible sphere you can build given the mass you have.”


L

How much computation might a planet-size machine carry out? Earth has a mass of nearly 6 × 10²⁴ kilograms. In a 2000 paper, Lloyd calculated that 1 kilogram of matter occupying 1 liter could support a maximum of roughly 5.4 × 10⁵⁰ logical operations per second. However, at that rate, Lloyd noted, it would be operating at a temperature of 10⁹ kelvins, resembling a small piece of the big bang.
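Multiplying Lloyd's per-kilogram bound by Earth's mass gives a rough ceiling for an Earth-mass computer running at that physically extreme limit:

```python
earth_mass_kg = 6e24
lloyd_ops_per_s_per_kg = 5.4e50   # Lloyd's 2000 bound for 1 kg in 1 liter

print(f"{earth_mass_kg * lloyd_ops_per_s_per_kg:.1e} logical operations per second")
# about 3.2e75 ops/s -- a ceiling set by physics, not by any buildable architecture
```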


M

Top to bottom: Proxima Centauri b, Ross 128 b, GJ 1061 d, GJ 1061 c, Luyten b, Teegarden’s Star b, Teegarden’s Star c, Wolf 1061c, GJ 1002 b, GJ 1002 c, Gliese 229 Ac, Gliese 625 b, Gliese 667 Cc, Gliese 514 b, Gliese 433 d

Potentially habitable planets have been identified within 30 light-years of Earth. Another 16 or so are within 100 light-years, with likely more yet to be identified. Many of them have masses considerably greater than Earth’s, indicating very different environmental conditions than those under which terrestrial organisms evolved.


The leaders of the terraforming project were on stage, taking their bows. The thousands of launches happening today were the culmination of decades of work: evolution on fast-forward, ecosystem after ecosystem, with DNA and seed designs for millions of new species fitted to thousands of worlds L.

It had to be done. Humans had never found another inhabited planet. That fact made life the most precious thing in the universe, and spreading it throughout the galaxy seemed a better ambition for humanity than building a false heaven. M

Simon had reluctantly come to accept this. Martin was right, though. Things had gotten weird. Paradise was such a good simulator that you could ask it to devise a machine to do X, and it would evolve its design in seconds. Solutions found through diffusion and selection were superior to algorithmically or human-designed ones, but it was rare that they could be reverse-engineered or their working principles even understood. And Paradise had computing power to spare, so in recent years, human and AI designers across the solar system had been idled as Paradise replaced their function. This, it was said, was the Technological Maximum; it was impossible for any civilization to attain a level of technological advancement beyond the point where any possible system could be instantly evolved.

Simon walked to where he could look past the open roof of the stadium to the dark azure sky. The vast sweep of the ring rose before and behind; in its center, a vast canted mirror reflected sunlight; to the left of that, he could see the milky white surface of the Paradise bowl. Usually, to the right, there was only blackness.

Today, he could see a sullen red glow. That would be Paradise’s radiator, expelling heat from the calculation of all those alien ecosystems. Except...

He found a quiet spot and sat, then reentered the Heaven simulation. Martin was still there, gazing at the sky.

Simon sat beside him. “What did you mean when you said things are heating up?”

Martin’s grin was slow and satisfied. “So you noticed.”

“Paradise isn’t supposed to be doing anything right now. All the terraforming packages were completed and copied to the sails—most of them years ago. Now that they’re on their way, Paradise doesn’t have any duties, except maybe evolving better luxury yachts.”

Martin nodded. “Sure. And is it doing anything?”

Simon still had read-access to Paradise’s diagnostics systems. He summoned a board that showed what the planet-size computing system was doing.

Nothing. It was nearly idle.

“If the system is idle, why is the radiator approaching its working limit?”

Martin crossed his arms, grinning. Damn it, he was enjoying this! Or the real Martin would be enjoying it, if he were here.

“You remember when the first evolved machines started pouring out of the printers?” Martin said. “Each one was unique; each grown for one owner, one purpose, one place. You said they looked alien, and I laughed and said, ‘How would we even know if an alien invasion was happening, if no two things look or work the same anymore?’ ”

“That’s when it started getting weird,” admitted Simon. “Weirder, I mean, than building an artificial heaven by dismantling Mercury…” But Martin wasn’t laughing at his feeble joke. He was shaking his head.

N

Chris Philpot

In astrodynamics, unless an object is actively generating thrust, its trajectory will take the form of a conic section—that is, a circle, ellipse, parabola, or hyperbola. Even relatively few observations of an object anywhere along its trajectory can distinguish between these forms, with objects that are gravitationally bound following circular and elliptical trajectories. Objects on parabolic or hyperbolic trajectories, by contrast, are unbound. Therefore, any object found to be moving along a hyperbola relative to the sun must have come from interstellar space. This is how in 2017, astronomers identified ‘Oumuamua, a cigar-shaped object, as the first known interstellar visitor. It’s been estimated that each year, about seven interstellar objects pass through the inner solar system.
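In practice, the bound-versus-unbound test reduces to the sign of an object's specific orbital energy, v²/2 − μ/r, where μ is the sun's gravitational parameter: negative means a circular or elliptical orbit, positive means a hyperbolic escape trajectory. With approximate values:

```python
# Is an object gravitationally bound to the sun?
MU_SUN = 1.327e20    # GM of the sun, m^3/s^2
AU = 1.496e11        # one astronomical unit, m

def is_bound(speed_m_s: float, distance_m: float) -> bool:
    specific_energy = speed_m_s**2 / 2 - MU_SUN / distance_m
    return specific_energy < 0

print(is_bound(30_000, AU))   # ~30 km/s at 1 AU (Earth-like): True, bound
print(is_bound(50_000, AU))   # above the ~42 km/s escape speed at 1 AU: False, hyperbolic
```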


“No, that’s not when it got weird. It got weird when the telescopes we evolved to monitor the construction of Paradise noticed just how many objects pass through the solar system every year.”

“Interstellar wanderers? They’re just extrasolar comets,” said Simon. “You said yourself that rocks from other star systems must pass through ours all the time.” N

“Yes. But what I didn’t get to tell you—because I died—was that while we were building Paradise, several objects drifted from interstellar space into one side of the Paradise construction orbits...and didn’t come out the other side.”

Simon blinked. “Something arrived...and didn’t leave? Wouldn’t it have been eaten by the recycling planetoids?”

“You’d think. But there’s no record of it.”

“But what does this have to do with the radiator?”

Martin reached up and flicked through a few skies until he came to a view of the spherical iron cloud in the bowl of Paradise. “Remember why we even have a radiator?”

“Because there’s always excess energy left over from making a calculation. If it can’t be used for further calculations down the line, it’s literally meaningless; it has to be discarded.”

“Right. We designed Paradise in layers, so each layer would scavenge the waste from the previous one—optical computing on the sunward-facing skin, electronics further in. But inevitably, we ran out of architectures that could scavenge the excess. There is always an excess that is meaningless to the computing architecture at some point. So we built Paradise in the shape of a bowl, where all that extra heat would be absorbed by the iron cloud in its center. We couldn’t use that iron for transistors. The leftovers of Mercury were mostly a junk pile—but one we could use as a radiator.”

“But the radiator’s shedding heat like crazy! Where’s that coming from?” asked Simon.

“Let’s zoom in.” Martin put two fingers against the sky and pulled them apart. Whatever telescope he was linked to zoomed crazily; it felt like the whole world was getting yanked into the radiator. Simon was used to virtual worlds, so he just planted his feet and let the dizzying motion wash over him.

The radiator cloud filled the sky, at first just a dull red mist. But gradually Simon began to see structure to it: giant cells far brighter than the material around them. “Those look like...energy storage. Heat batteries. As if the radiator’s been storing some of the power coming through it. But why—”



Alerts from the real world suddenly blossomed in his visual field. He popped out of Martin’s virtual garden and into a confused roar inside the stadium.

The holographic image that filled the central space of the stadium showed the statite launchers hovering over the sun. One by one, they were folding in on themselves, falling silently into the incinerating heat below. The crowd was on its feet, people shouting in shock and fear. Now that the launchers had sent the terraforming systems, they were supposed to propel ships of colonists heading for the newly greened worlds. There were no more inner-solar-system resources left to build more.

O

Chris Philpot

“Mechanical computer” brings to mind the rotating cogwheels of Charles Babbage’s 19th-century Difference Engine, but other approaches exist. Here we show the heart of a logic gate made with moving rods. The green input rods can slide back and forth as desired, with a true value indicated by placing the rod into its forward position and false indicated by moving the rod into its back position. The blue output rod is blocked from advancing to its true position unless both input rods are set to true, so this represents an AND gate. Rod logic has been proposed as a mechanism for controlling nanotech-scale robots.
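Expressed in software, the gate described above reduces to a blocked-or-not check, which hints at why rod logic is attractive for very simple nanoscale controllers (this is just the truth-table behavior, not a mechanical simulation):

```python
# Rod-logic AND gate: the output rod can advance to "true" only if
# neither input rod is in the position that blocks it.
def rod_and(input_a: bool, input_b: bool) -> bool:
    output_blocked = (not input_a) or (not input_b)
    return not output_blocked

for a in (False, True):
    for b in (False, True):
        print(f"{a!s:>5} AND {b!s:>5} -> {rod_and(a, b)}")
```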

In space, one problem that a mechanical computer could face is a phenomenon called cold welding. That occurs when two flat, clean pieces of metal come in contact, and they fuse together. Cold welding is not usually seen in everyday life on Earth because metals are often coated in layers of oxides and other contaminants that keep them from fusing. But it has led to problems in space (cold welding has been implicated in the deployment failure of the main antenna of the Galileo probe to Jupiter, for example). Some of the oxygen or other elements found in a rocky world would have to be used in the coatings for components in an iron or other metal-based mechanical computer.


Simon jumped back into VR. Martin was standing calmly in the garden, smiling at the intricate depths of the red-hot radiator that filled the sky. Simon followed his gaze and saw...

“Gears?” The radiator was a cloud, but only now was it revealing itself to be a cloud of clockwork elements that, when thermal energy brought them together, spontaneously assembled into more complex arrangements. And those were spinning and meshing in an intricate dance that stretched away into amber depths in all directions. O

“It’s a dissipative system,” said Martin. “Sure, it radiates the heat our quantum computers can no longer use. But along the way, it’s using that energy to power an entirely different kind of computer. A Babbage engine the size of the moon.”

“But, Martin, the launchers—they’re all collapsing.”

Martin nodded. “Makes sense. The launchers accomplished their mission. Now they don’t want us following the seeds.”

“Not follow them? What do you mean?” An uneasy thought came to Simon; he tried to avoid it, but there was only one way this all made sense. “If the radiator was built to compute something, it must have been built with a way to output the result. This ‘they’ you’re talking about added a transmitter to the radiator. Then the radiator sent a virus or worm to the statites. The worm includes the radiator’s output. It hacked the statites’ security, and now that the seeds are in flight, it’s overwriting their code.”

Martin nodded.

“But why?” asked Simon.

Again, the answer was clear; Simon just didn’t want to admit it to himself. Martin waited patiently to hear Simon say it.

“They gave the terraformers new instructions.”

Martin nodded. “Think about it, Simon! We designed Paradise as a quantum computer that would be provably secure. We made it impossible to infect, and it is. Whatever arrived while we were building it didn’t bother to mess with it, where our attention was. It just built its own system where we wouldn’t even think to look. Made out of and using our garbage. Probably modified the maintenance robots tending the radiator into making radical changes.

“And what’s it been doing? I should think that was obvious. It’s been designing terraforming systems for the exoplanets, just like you have, but to make them habitable for an entirely different kind of colonist.”

Simon looked aghast at Martin. “And you knew?”

“Well.” Martin slouched, looked askance at Simon. “Not the details, until just now. But listen: You abandoned us—all who died and were uploaded before the Enactivist experiments ‘proved’ we aren’t real. All us zimboes, trapped here now for eternity. Even if I’m just a simulation of your friend Martin, how do you think he’d feel in this situation? He’d feel betrayed. Maybe he couldn’t escape this virtual purgatory, but if he knew something that you didn’t—that humanity’s new grand project had been hijacked by a virus from somewhere else—why would he tell you?”

No longer hiding his anger, Martin came up to Simon and jabbed a virtual finger at his chest. “Why would I tell you when I could just stand back and watch all of this unfold?” He spread his arms, as if to embrace the clockwork sky, and laughed.

On thousands of sterile exoplanets, throughout all the vast sphere of stars within a hundred light-years of the sun, life was about to blossom—life, or something else. Whatever it would be, humanity would never be welcome on those worlds. “If they had any interest in talking to us, they would have, wouldn’t they?” sighed Simon.

“I guess you’re not real to them, Simon. I wonder, how does that feel?”

Martin was still talking as Simon exited the virtual heaven where his best friend was trapped, and he knew he would never go back. Still, ringing in his ears as the stadium of confused, shouting people rose up around him were Martin’s last, vicious words:

“How does it feel to be left behind, Simon?

“How does it feel?”


Story by KARL SCHROEDER

Annotations by CHARLES Q. CHOI

Illustrations by ANDREW ARCHER

Edited by STEPHEN CASS


Perplexity.ai Revamps Google SEO Model For LLM Era

Sat, 2024-02-24 19:30


ChatGPT’s release on 30 November 2022 was met with much fanfare and plenty of pushback. It quickly became clear people wanted to ask AI the same questions they asked Google—and ChatGPT often couldn’t answer them.

The problems were numerous. ChatGPT’s replies were out of date, didn’t cite sources, and frequently hallucinated new and inaccurate details. Emily Bender, director of The University of Washington’s Computational Linguistics Laboratory, was quoted at the time as saying that AI search was “The Star Trek fantasy, where you have this all-knowing computer that you can ask questions.”

Perplexity initially hoped to build an AI-powered Text-to-SQL tool. But something different started brewing in the company’s Slack channels.

Founded in August 2022, Perplexity stumbled into—and then raced toward—building an AI-powered search engine that’s updated daily and responds to queries by citing multiple sources. It now has over 10 million monthly users and recently received an investment from Jeff Bezos.

“I think Google is one of the most complicated systems humanity has ever built. In terms of complexity, it’s probably even beyond flying to the moon,” says Perplexity.ai co-founder and CTO Denis Yarats.

In the beginning, it was a Slack bot

Perplexity initially hoped to build an AI-powered Text-to-SQL tool, Yarats says, to let developers query and code for SQL in natural language. But something different started brewing in the company’s Slack channels—a chatbot that combined search with OpenAI’s large language models (LLMs).

Then, in late November of 2022, ChatGPT went public and became the fastest-growing consumer application in history, hitting 100 million users within two months. People were asking ChatGPT all sorts of questions, many of which it couldn’t answer. But Yarats says Perplexity’s Slack bot could.

“Literally in two days, we created a simple website and hooked it up to our Slack bot’s backend infrastructure, and just released it as a fun demo,” says Yarats. “Honestly, it didn’t work super well. But given how many people liked it, we realized there’s something there.”

For a time, Perplexity continued to work on its Text-to-SQL tool. It also created a Twitter search tool, BirdSQL, that let users find hyper-specific tweets, like “Elon Musk’s tweets to Jeff Bezos.” But the AI-powered search engine stood out and, within a couple of months, became the company’s new—and daunting—mission.

How is AI-powered search possible?

This raises an obvious question: How did Perplexity, a company founded by four people (it has since grown to roughly 40) less than two years ago, cut through the problems that seemingly made AI terrible for search?

Two decades of failed Google competitors have proven “decent” isn’t good enough. That’s where AI offers a shortcut.

Retrieval-augmented generation, or RAG, is one pillar of the company’s efforts. Invented by researchers at Meta, University College London, and New York University, RAG pairs generative AI with a “retriever” that can find and then reference specific data from a vector database, which is passed to the “generator” to produce a response.
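In outline, a RAG loop can be surprisingly small. The sketch below is a generic illustration, not Perplexity's pipeline: the retriever is a toy word-overlap scorer standing in for a real vector database, and generate() is a placeholder where an actual LLM call would go.

```python
# Minimal retrieval-augmented generation loop (illustrative only).
documents = [
    "Perplexity was founded in August 2022.",
    "Retrieval-augmented generation pairs a retriever with a generator.",
    "PerplexityBot crawls the web and refreshes news domains every hour.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Stand-in for a vector-database lookup: score documents by word overlap.
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: -len(q_words & set(d.lower().split())))[:k]

def generate(prompt: str) -> str:
    # Placeholder for a call to a real large language model.
    return "[LLM answer grounded in, and citing, the retrieved sources]"

def answer(query: str) -> str:
    sources = retrieve(query)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return generate(f"Answer using only these sources:\n{context}\n\nQuestion: {query}")

print(answer("When was Perplexity founded?"))
```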

“I do agree RAG [is useful for search],” says Bob van Luijt, co-founder and CEO of AI infrastructure company Weaviate. “What [RAG] did was allow normal developers, not just people working at Google, to just build these kinds of AI native applications without too much hassle.” He points out that the resources for implementing RAG are freely available on the AI developer resource Hugging Face.

That’s led to widespread adoption. Weaviate uses RAG to help its clients ground the knowledge of AI agents on proprietary data. Nvidia uses RAG to reduce errors in ChipNeMo, an AI model built to aid chip designers. Latimer uses it to combat racial bias and amplify minority voices. And Perplexity turns RAG towards search.

But for RAG to be of any use at all, a model must have something to retrieve, and here Perplexity.ai adopts more traditional search techniques. The company uses a web crawler of its own design, known as PerplexityBot, to index the Internet.

“When trying to excel in up-to-date information, like news… we won’t be able to retrain a model every day, or every hour,” says Yarats. But crawling the web at Google’s scale also isn’t practical; Perplexity lacks the tech giant’s resources and infrastructure. To manage the load, Perplexity splits results into “domains” which are updated with more or less urgency. News sites are updated more than once every hour. Sites that are unlikely to change quickly, on the other hand, are updated once every few days.
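A stripped-down version of that scheduling policy might look like the following (the intervals are hypothetical stand-ins for "more than once every hour" and "once every few days"; this is not Perplexity's crawler code):

```python
import heapq
import time

# Toy crawl scheduler: each domain class has a refresh interval, and a min-heap
# always surfaces whichever page is due to be re-crawled soonest.
REFRESH_SECONDS = {
    "news": 30 * 60,          # news sites: more than once an hour
    "reference": 3 * 86400,   # slow-changing sites: every few days
}

queue: list[tuple[float, str, str]] = []   # (next_crawl_time, url, domain_class)

def schedule(url: str, domain_class: str, now: float) -> None:
    heapq.heappush(queue, (now + REFRESH_SECONDS[domain_class], url, domain_class))

def crawl_if_due(now: float) -> None:
    due_time, url, domain_class = queue[0]
    if due_time <= now:
        heapq.heappop(queue)
        print("crawling", url)
        schedule(url, domain_class, now)   # put it back with its next refresh time
    else:
        print("nothing due yet")

now = time.time()
schedule("https://example-news.com/front-page", "news", now)
schedule("https://example-reference.org/article", "reference", now)
crawl_if_due(now + 3600)   # an hour later: the news page is due, the reference page is not
```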

Perplexity also taps Bidirectional Encoder Representations from Transformers (BERT), an NLP model that researchers at Google created in 2018 and that Google itself used to better understand web pages. Google took BERT open-source, offering companies like Perplexity the opportunity to build on it. “It lets you get a simple ranking. It’s not going to be as good as Google, but it’s decent,” says Yarats.
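Conceptually, a "simple ranking" of that sort can be as little as embedding the query and each candidate page and sorting by cosine similarity. In the sketch below, toy_embed() is a hashed bag-of-words stand-in; a real system would substitute vectors produced by a BERT model.

```python
import math
from collections import Counter

def toy_embed(text: str, dims: int = 128) -> list[float]:
    # Stand-in for a BERT sentence embedding: a hashed bag-of-words vector.
    vec = [0.0] * dims
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dims] += count
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_pages(query: str, pages: list[str]) -> list[str]:
    q = toy_embed(query)
    return sorted(pages, key=lambda p: cosine(q, toy_embed(p)), reverse=True)

pages = ["A review of optical disc storage", "Perplexity builds an AI search engine"]
print(rank_pages("AI powered web search", pages))
```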

Keeping Google at bay

But two decades of failed Google competitors have proven “decent” isn’t good enough. That’s where AI offers a shortcut.

“For Google, there’s a lot of constraints. The biggest is ads. The real estate of the main page is very optimized.” —Denis Yarats, CTO, Perplexity.ai

LLMs are excellent at parsing text to find relevant information—indeed, finding patterns is kind of their whole thing. That allows an LLM to produce convincing text in response to a prompt, but it also lets an LLM efficiently parse and then present information it examines. You can try this yourself by uploading a PDF to ChatGPT, Google Gemini, or Claude.ai. The LLM can ingest the document within seconds, then answer questions about it.

Perplexity essentially does the same for the web and, in so doing, fundamentally alters how search works. It doesn’t attempt to rank web pages to place the best page at the top of a list of results, but instead analyzes the information available from an index of well-ranked pages to find what’s most relevant and generate an answer. That’s the secret sauce.

“You can think of it like the LLM does the final ranking task,” says Yarats. “[LLMs] don’t care about an [SEO] score. They just care about semantics and information. It’s more unbiased, because it’s based on the actual information gain rather than the signals Google engineers optimize for whatever reasons.”

Of course, this raises the question: Can’t Google do this, too?

Yarats says Perplexity is aware of the difficulty of facing down Google and, for that reason, is focused on “the head of the distribution” for search. Perplexity doesn’t offer image search, cache old web pages, let users narrow down results to a specific date or time, or include shopping results, to mention just a few Google features that are easy to take for granted. He also believes Google will face problems linked not to its technical execution but its existing, and highly profitable, ad business.

“For Google, there’s a lot of constraints,” he says. “The biggest is ads. The real estate of the main page is very optimized. You can’t just say, let’s remove this ad, and I’m going to show an answer instead. We don’t have that. We can experiment.”

DVD’s New Cousin Can Store More Than a Petabit

Fri, 2024-02-23 21:55


A novel disc the size of a DVD can hold more than 1 million gigabits—roughly as much as is transmitted per second over the entire world’s Internet—by storing data in three dimensions as opposed to two, a new study finds.

Optical discs such as CDs and DVDs encode data using a series of microscopic pits. These pits, and the islands between them, together represent the 0s and 1s of binary code that computers use to symbolize information. CD, DVD, and Blu-ray players use lasers to read the data encoded in these discs.

“The use of ultra-high density optical data storage technology in big data centers is now possible.” —Min Gu, University of Shanghai for Science and Technology

Although optical discs are low in cost and highly durable, they are limited in the amount of data they can hold, which is usually stored in a single layer. Previously, scientists investigated encoding data on optical discs in many layers in three dimensions to boost their capacity. However, a key barrier in prior research was that the optics used to read and write this data could not resolve features much smaller than the wavelength of the light they used.

Now scientists in China have developed a way to encode data on 100 layers in optical discs. In addition, the data is recorded using spots as small as 54 nanometers wide, roughly a tenth of the size of the wavelengths of visible light used to read and write the data.

All in all, a DVD-size version of the new disc has a capacity of up to 1.6 petabits—that is, 1.6 million gigabits. This is some 4,000 times greater data density than a Blu-ray disc and 24 times that of the currently most advanced hard disks. The researchers suggest their new optical disc can enable a data center capable of exabit storage—a billion gigabits—inside a room instead of a stadium-size space.
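The headline figures are consistent under simple unit conversion, assuming the comparison is to a dual-layer, 50-gigabyte Blu-ray disc:

```python
capacity_bits = 1.6e15              # 1.6 petabits
capacity_bytes = capacity_bits / 8  # bits to bytes
bluray_bytes = 50e9                 # dual-layer Blu-ray disc (assumed comparison point)

print(capacity_bytes / 1e12, "terabytes")                        # 200.0 terabytes
print(round(capacity_bytes / bluray_bytes), "x a Blu-ray disc")  # ~4,000x
```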

“The use of ultra-high density optical data storage technology in big data centers is now possible,” says Min Gu, professor of optical-electrical and computer engineering at the University of Shanghai for Science and Technology.

How to store a petabit on one disc

The strategy the researchers used to write the data relies on a pair of lasers. The first, green 515-nanometer laser triggers spot formation, whereas the second, red 639-nanometer laser switches off the writing process. By controlling the time between firing of the lasers, the scientists could produce spots smaller than the wavelengths of light used to create them.

The procedure used to create blank discs is compatible with conventional DVD mass production and can be completed within 6 minutes.

To read the data, the researchers again depended on a pair of lasers. The first, blue 480-nanometer beam can make spots fluoresce, while the second, orange 592-nanometer light switches off the fluorescence process. Precise control over the firing of these lasers can single out which specific nanometer-scale spot ends up fluorescing.

This new strategy depends on a novel light-sensitive material called AIE-DDPR that is capable of all these varied responses to different wavelengths of light. “It has been a 10-year effort searching for this kind of material,” Gu says. “The difficulty has been how the writing and reading processes affect each other in a given material—in particular, in a three-dimensional geometry.”

The scientists encoded data on layers each separated by 1 micron. They found the writing quality stayed comparable across all the layers. “Personally, I was surprised that nanoscale writing-recording and reading processes both work well in our newly invented material,” Gu says.

The researchers note the entire procedure used to create blank discs made using AIE-DDPR films is compatible with conventional DVD mass production and can be completed within 6 minutes. Gu says these new discs may therefore prove to be manufacturable at commercial scales.

Currently, he says, the new discs have a writing time of about 100 milliseconds and an energy consumption of microjoules to millijoules. These properties are similar to what is seen in DVD and Blu-ray technology, Gu says.

Still, Gu notes, the researchers would like to see their new discs used in big data centers. To that end, they’re working to improve the method’s writing speed and energy consumption. He suggests this may be possible using new, more energy-efficient recording materials. He says more layers in each disc may be possible in the future, using better lenses and fewer aberrations in their optics.

The scientists detailed their findings online 21 February in the journal Nature.

High-performance Data Acquisition for DFOS

Tue, 2024-02-20 23:04


Join us for an insightful webinar on high-speed data acquisition in the context of distributed fiber optic sensing (DFOS) and learn more about the critical role that high-performance digitizers play in maximizing the potential of DFOS across diverse applications. The webinar is co-hosted by Professor Aldo Minardo of the University of Campania Luigi Vanvitelli, who will speak about his phi-OTDR DAS system based on Teledyne SP Devices’ 14-bit ADQ7DC digitizer.

Register now for this free webinar!
