Latest News

Computer Science

IEEE Spectrum

Cybersecurity Gaps Could Put Astronauts at Grave Risk

Wed, 2023-05-31 20:23


This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

On 3 July 1996, Earth was facing all but absolute destruction from an alien force hovering above three of the world’s biggest cities. Hope of humanity’s survival dwindled after brute force failed to thwart the attackers. But a small piece of malicious computer code changed the course of history when it was uploaded to the aliens’ computer system the next day. The malware—spoiler alert—disabled the invading ships’ defenses and ultimately saved humanity.

At least, that’s what happened in the wildly speculative 1996 sci-fi film Independence Day.

Yet, for all the reality-defying situations the blockbuster depicted, the prospective reality of a malware attack wreaking havoc on a future crewed spacecraft mission has digital security experts very concerned. Gregory Falco, an assistant professor of civil and systems engineering at Johns Hopkins, explored the topic in a recent paper presented at the spring 2023 IEEE Aerospace Conference. Inspiration for the study, he says, came from his discovering a relative lack of cybersecurity features in the Artemis crew’s next-generation spacesuits.

“Maybe you might think about securing the communications link to your satellite, but the stuff in space all trusts the rest of stuff in space.”
—James Pavur, cybersecurity engineer

“The reality was that there was zero specification when they had their call for proposals [for new spacesuit designs] that had anything to do with cyber[security],” Falco says. “That was frustrating for me to see. This paper was not supposed to be groundbreaking. ... It was supposed to be kind of a call to say, ‘Hey, this is a problem.’”

As human spaceflight prepares to enter a new, modern era with NASA’s Artemis program, China’s Tiangong Space Station, and a growing number of fledgling space tourism companies, cybersecurity is at least as much of a persistent problem up there as it is down here. Its magnitude is only heightened by the fact that maliciously driven system failures—in the cold, unforgiving vacuum of space—can escalate to life-or-death situations with just a few inopportune missteps. Apollo-era and even Space Shuttle-era approaches to cybersecurity are overdue for an update, Falco says.

“Security by obscurity” no longer works

When the US and other space-faring nations, such as the then-Soviet Union, began to send humans to space in the 1960s, there was little to fear in the way of cybersecurity risks. Not only did massively interconnected systems like the Internet not yet exist, but the technology aboard these craft was so bespoke that it protected itself through a “security by obscurity” approach.

This meant that the technology was so complex that it effectively kept itself safe from tampering, says James Pavur, a cybersecurity researcher and lead cybersecurity software engineer at software company Istari Global.

A consequence of this security approach is that once you do manage to enter the craft’s internal systems—whether you’re a crew member or, perhaps in years to come, a space tourist—you’ll be granted full access to its onboard systems with essentially zero questions asked.

This security approach is not only insecure, says Pavur, but it is also vastly different from the zero-trust approach applied to many terrestrial technologies.

“Cybersecurity has been something that kind of stops on the ground,” he says. “Like maybe you might think about securing the communications link to your satellite, but the stuff in space all trusts the rest of stuff in space.”

NASA is no stranger to cybersecurity attacks on its terrestrial systems—nearly 2,000 “cyber incidents” were recorded in 2020, according to a 2021 NASA report. But the types of threats that could target crewed spacecraft missions would be much different from phishing emails, says Falco.

What are the cyber threats in outer space?

Cyber threats to crewed spacecraft may focus on proximity approaches, such as installing malware or ransomware into a craft’s internal computer. In their paper, Falco and co-author Nathaniel Gordon lay out four ways that crew members, including space tourists, may be used as part of these threats: crew as the attacker, crew as an attack vector, crew as collateral damage, and crew as the target.

“It’s almost akin to medical device security or things of that nature rather than opening email,” Falco says. “You don’t have the same kind of threats as you would have for an IT network.”

Among a host of troubling scenarios, proprietary secrets—both private and national—could be stolen, the crew could be put at risk as part of a ransomware attack, or crew members could even be deliberately targeted through an attack on safety-critical systems like air filters.

All of these types of attacks have taken place on Earth, say Falco and Gordon in their paper. But the high-profile nature of these missions, as well as the integrated nature of spacecraft—the close physical and network proximity of systems within a mission—could make cyberattacks on spacecraft particularly appealing. Again heightening the stakes, the harsh environment of outer (or lunar or planetary) space renders malicious cyber threats that much more perilous for crew members.

To date, deadly threats like these have thankfully not affected human spaceflight. Though if science fiction provides any over-the-horizon warning system for the shape of threats to come, consider sci-fi classics like 2001: A Space Odyssey or Alien—in which a non-human crew member is able to control the craft’s computers to change the ship’s route and even prevent a crew member from leaving the ship in an escape pod.

Right now, say Falco and Gordon, there is little to keep a bad actor or a manipulated crew member onboard a spacecraft from doing something similar. Luckily, the growing presence of humans in space also provides an opportunity to create meaningful hardware, software, and policy changes surrounding the cybersecurity of these missions.

Saadia Pekkanen is the founding director of the University of Washington’s Space Law, Data and Policy Program. In order to create a fertile environment for these innovations, she says, it will be important for space-dominant countries like the US and China to create new policies and legislation to dictate how to address their own nations’ cybersecurity risk.

While these changes won’t directly impact international policy, decisions made by these countries could steer how other countries address these problems as well.

“We’re hopeful that there continues to be dialogue at the international level, but a lot of the regulatory action is actually going to come, we think, at the national level,” Pekkanen says.

How can the problem be fixed?

Hope for a solution, Pavur says, could begin with the fact that another sector in aerospace—the satellite industry—has made recent strides toward greater and more robust cybersecurity of their telemetry and communications (as outlined in a 2019 review paper published in the journal IEEE Aerospace and Electronic Systems).

Falco points toward relevant terrestrial cybersecurity standards—including the zero-trust protocol—that require users to prove their identity to access the systems that keep safety-critical operations separate from all other onboard tasks.
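As a rough sketch of what that zero-trust principle could look like in code (all device names and keys here are hypothetical, and HMAC stands in for whatever authentication scheme a real mission would actually use), a system might refuse every command that does not carry a valid tag from a known device, even commands originating "inside" the craft's network:

```python
import hashlib
import hmac

# Hypothetical per-device secrets, provisioned before flight (assumption).
DEVICE_KEYS = {"suit-7": b"per-device-secret"}

def sign(device_id: str, command: bytes) -> bytes:
    """Tag a command with the sending device's key."""
    return hmac.new(DEVICE_KEYS[device_id], command, hashlib.sha256).digest()

def accept(device_id: str, command: bytes, tag: bytes) -> bool:
    """Zero trust: verify every command, regardless of where it came from."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown device gets no implicit trust
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison
```

A tampered command, or one from an unprovisioned device, fails verification before it ever reaches a safety-critical subsystem.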

Creating a security environment that’s more supportive of ethical hackers—the kind of hackers who break things to find security flaws in order to fix them instead of exploiting them—would provide another crucial step forward, Pavur says. However, he adds, this might be easier said than done.

“That’s very uncomfortable for the aerospace industry because it’s just not really how they historically thought about threat and risk management,” he says. “But I think it can be really transformative for companies and governments that are willing to take that risk.”

Falco also notes that space tourism flights could benefit from a space-faring equivalent of the TSA—to ensure that malware isn’t being smuggled onboard in a passenger’s digital devices. But perhaps most important, instead of “cutting and pasting” imperfect terrestrial solutions into space, Falco says that now is the time to reinvent how the world secures critical cyber infrastructure in Earth orbit and beyond.

“We should use this opportunity to come up with new or different paradigms for how we handle security of physical systems,” he says. “It’s a whitespace. Taking things that are half-assed and don’t work perfectly to begin with and popping them into this domain is not going to really serve anyone the way we need.”

TLA+ Helps Programmers Squash Bugs Before Coding

Wed, 2023-05-31 19:28


Design is an essential part of the development process for many software engineers. Like an architect sketching blueprints, a programmer devises algorithms to support their code and creates models to envision how the different elements of their systems will work together. But what if programmers could test those algorithms and models to uncover design flaws before they could turn into bugs in the written code?

That’s the goal with TLA+, an open-source, high-level language for modeling software programs and hardware systems. Its underlying logic is based on the temporal logic of actions (TLA), a mathematical way to reason about the correctness of concurrent algorithms. Both TLA and TLA+ were developed by Leslie Lamport, a distinguished scientist at Microsoft Research who is best known for inventing LaTeX, a document preparation system for scientific papers. Lamport also won the 2013 A.M. Turing Award from the Association for Computing Machinery for his work on clarifying the behavior of distributed systems.

Lamport is quick to note that TLA+ is not a programming language but a specification language. “It’s describing the program at a higher level of abstraction—what it is supposed to do and how it’s supposed to do it,” he says.

This makes TLA+ valuable for verifying that a program’s design or supporting algorithm is valid, a feature made possible by the TLA+ model checker. After creating specifications and writing models on TLA+, engineers can run everything through the model checker to find and fix design errors before they get implemented into code.
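A concrete (toy) specification makes Lamport's point about abstraction easier to see. The example below is not from the article; it models a counter that may only be incremented while below a bound, with an invariant the TLC model checker can verify over every reachable state:

```tla
---- MODULE Counter ----
EXTENDS Naturals

VARIABLE count

\* Initial state: the counter starts at zero.
Init == count = 0

\* The only action: increment, but never past 3.
Incr == count < 3 /\ count' = count + 1

Next == Incr

\* Invariant checked by TLC in every reachable state.
TypeOK == count \in 0..3

Spec == Init /\ [][Next]_count
====
```

If a design error is introduced (say, an action that can push the counter to 4), TLC reports the exact violating behavior before any implementation code is written.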

The language was first used in the industry to model hardware, particularly at Intel. “It was initially appealing to hardware engineers because [they] were used to the idea of describing things precisely above the circuit level,” says Lamport. “TLA+ gave them a language in which they could express their high-level circuit designs rigorously and check them.”

While it was first used in the hardware sector, TLA+ is specifically geared toward concurrent programs and distributed systems, with an emphasis on correctness rather than speed. “It’s not about how to compute something faster, but how to get the processes to interact with each other so that they are computing the right thing,” says Lamport. “As systems get larger, more and more engineers realize the importance of getting those high-level designs correct.”

Some of today’s tech giants have integrated TLA+ into their development processes. Amazon, for instance, has used TLA+ to test its cloud computing services, as well as search for hard-to-find yet fundamental algorithm flaws in its distributed systems and optimize performance without sacrificing correctness. Microsoft has applied a similar approach to its Azure cloud computing platform. Engineers at Oracle have, with the help of TLA+, ensured that their distributed systems designs are correct. Meanwhile, the group that developed Virtuoso, a real-time operating system used in the European Space Agency’s Rosetta spacecraft, created a model of the next operating system, OpenComRTOS, on TLA+. The resulting codebase is about one-tenth the size of its predecessor’s, according to Lamport.

Yet the language is not only for building large, complex systems. “If you’re a programmer writing a piece of concurrent code, you can use TLA+ for that particular algorithm to make sure it is correct and code it,” Lamport says.

The challenge, however, can be getting started. As TLA+ is math-based, it comes with a steep learning curve and might appear intimidating to software engineers. Lamport has created some resources that could help, but programmers may find it easier to begin with PlusCal, a programming language he developed for writing algorithms. A PlusCal algorithm is designed so that it can be translated into a TLA+ model, which can in turn be reviewed using the model checker.

Lamport has passed on the responsibility of maintaining TLA+ to the TLA+ Foundation, whose inaugural members include Amazon, the Linux Foundation, Microsoft, and Oracle. The Foundation aims to “promote adoption, provide education and training resources, fund research, develop tools, and build a community of TLA+ practitioners,” as well as “ensure the continuous improvement and evolution of the TLA+ language.”

Lamport believes that the current biggest need for the language is a tool that could translate a high-level TLA+ design directly into code. But for now, he hopes the language will help engineers in their day-to-day work of developing software or hardware. “It improves the way you can design a system,” he says. “It gets you thinking outside the box and changes the way you think.”

Keeping Moore’s Law Going is Getting Complicated

Wed, 2023-05-24 21:40


There was a time, decades really, when all it took to make a better computer chip were smaller transistors and narrower interconnects. That time’s long gone now, and although transistors will continue to get a bit smaller, simply making them so is no longer the point. The only way to keep up computing’s exponential pace now is a scheme called system technology cooptimization (STCO), argued researchers at ITF World 2023 last week in Antwerp, Belgium. It’s the ability to break chips up into their functional components, use the optimal transistor and interconnect technology for each function, and stitch them back together to create a lower power, better functioning whole.

“This leads us to a new paradigm for CMOS,” says Imec R&D manager Marie Garcia Bardon. CMOS 2.0, as the Belgium-based nanotech research organization is calling it, is a complicated vision. But it may be the most practical way forward, and parts of it are already evident in today’s most advanced chips.

How We Got Here

In a sense, the semiconductor industry was spoiled by the decades prior to about 2005, says Julien Ryckaert, R&D vice president at Imec. During that time, chemists and device physicists were able to regularly produce a smaller, lower power, faster transistor that could be used for every function on a chip and that would lead to a steady increase in computing capability. But the wheels began to come off that scheme not long thereafter. Device specialists could come up with excellent new transistors, but those transistors weren’t making better, smaller circuits such as the SRAM memory and standard logic cells that make up the bulk of CPUs. In response, chipmakers began to break down the barriers between standard cell design and transistor development. Called design technology cooptimization (DTCO), the new scheme led to devices designed specifically to make better standard cells and memory.

But DTCO isn’t enough to keep computing going. Limits of physics and economic realities conspired to put barriers in the path to progressing with a one-size-fits-all transistor. For example, physical limits have prevented CMOS operating voltages from decreasing below about 0.7 volts, slowing down progress in power consumption, explains Anabela Veloso, principal engineer at Imec. Moving to multicore processors helped ameliorate that issue for a time. Meanwhile, I/O limits meant it became more and more necessary to integrate the functions of multiple chips onto the processor. So in addition to a system-on-chip (SoC) having multiple instances of processor cores, they also integrate network, memory, and often specialized signal processing cores. Not only do these cores and functions have different power and other needs, but they also can’t be made smaller at the same rate. Even the CPU’s cache memory, SRAM, isn’t scaling down as quickly as the processor’s logic.

System technology cooptimization

Getting things unstuck is as much a philosophical shift as a collection of technologies. According to Ryckaert, STCO means looking at a system-on-chip as a collection of functions, such as power supply, I/O, and cache memory. “When you start reasoning about functions, you realize that an SoC is not this homogeneous system, just transistors and interconnect,” he says. “It is functions, which are optimized for different purposes.”

Ideally, you could build each function using the process technology best suited to it. In practice, that mostly means building each on its own sliver of silicon, or chiplet, and then binding those chiplets together using technologies such as advanced 3D stacking, so that all the functions act as if they were on the same piece of silicon.

Examples of this thinking are already present in advanced processors and AI accelerators. Intel’s high-performance computing accelerator Ponte Vecchio (now called Intel Data Center GPU Max) is made up of 47 chiplets built using multiple processes from both Intel and TSMC. AMD already uses different technologies for the I/O chiplet and compute chiplets in its CPUs, and it recently began separating out SRAM for the compute chiplet’s high-level cache memory.

Imec’s roadmap to CMOS 2.0 goes even further. It requires continuing to shrink transistors, moving power and possibly clock signals beneath a CPU’s silicon, and ever-tighter 3D chip integration. “We can use those technologies to recognize the different functions, to disintegrate the SoC, and reintegrate it to be very efficient,” says Ryckaert.

Transistors will change form over the coming decade, but so will the metal that connects them. Ultimately transistors could be stacked-up devices made of 2D semiconductors instead of silicon. And power delivery and other infrastructure could be layered beneath the transistors. [Image: Imec]

Continued transistor scaling

Major chipmakers are already transitioning from the FinFET transistors that powered the last decade of computers and smartphones to a new architecture, nanosheet transistors. [See “The Nanosheet Transistor is the Next Step in Moore’s Law”] Ultimately, two nanosheet transistors will be built atop each other to form the complementary FET, or CFET, which Veloso says “represents the ultimate in CMOS scaling.” [See “3D-stacked CMOS Takes Moore’s Law to New Heights”]

As these devices scale down and change shape, one of the main goals is to drive down the size of standard logic cells. That is typically measured in “track height”: basically, the number of metal interconnect lines that can fit within the cell. Advanced FinFETs and early nanosheet devices are 6-track cells. Moving to 5 tracks may require an intermediate design called a forksheet, which squeezes devices together more closely without necessarily making them smaller. CFETs will then reduce cells to 4 tracks or possibly fewer.

Leading-edge transistors are already transitioning from the fin field-effect transistor (FinFET) architecture to nanosheets. The ultimate goal is to stack two devices atop each other in a CFET configuration. The forksheet may be an intermediary step on the way. [Image: Imec]

According to Imec, chipmakers will be able to produce the finer features needed for this progression using ASML’s next generation of extreme-ultraviolet lithography. That tech, called high-numerical aperture EUV, is under construction at ASML now, and Imec is next in line for delivery. Increasing numerical aperture, an optics term related to the range of angles over which a system can gather light, leads to more precise images.

Backside power delivery networks

The basic idea in backside power delivery networks is to remove all the interconnects that send power—as opposed to data signals—from above the silicon surface and place them below it. This should allow for less power loss, because the power delivering interconnects can be larger and less resistant. It also frees up room above the transistor layer for signal-carrying interconnects, possibly leading to more compact designs. [See “Next-Gen Chips Will Be Powered From Below”.]

In the future, even more could be moved to the backside of the silicon. For example, so-called global interconnects, those that span (relatively) great distances to carry clock and other signals, could go beneath the silicon. Or engineers could add active power delivery devices—such as electrostatic discharge safety diodes.

3D Integration

There are several ways to do 3D integration, but the most advanced today are wafer-to-wafer and die-to-wafer hybrid bonding. [See “3 Ways 3D Chip Tech is Upending Computing”.] These two provide the highest density of interconnections between two silicon dies. But they require that the two dies be designed together, so their functions and interconnect points align, allowing them to act as a single chip, says Anne Jourdain, principal member of the technical staff at Imec. Imec R&D is on track to be able to produce millions of 3D connections per square millimeter in the near future.

Getting to CMOS 2.0

CMOS 2.0 would take disaggregation and heterogeneous integration to the extreme. Depending on which technologies make sense for the particular application, it could result in a 3D system that incorporates layers of embedded memory, I/O and power infrastructure, high-density logic, high drive-current logic, and huge amounts of cache memory.

Getting to that point will take not just technology development, but the tools and training to discern which technologies would actually improve a system. As Bardon points out, smartphones, servers, machine learning accelerators, and augmented- and virtual-reality systems all have very different requirements and constraints. What makes sense for one might be a dead end for another.

Could These Bills Endanger Encrypted Messaging?

Wed, 2023-05-24 18:30


Billions of people around the world use a messaging app equipped with end-to-end encryption, such as WhatsApp, Telegram, or Signal. In theory, end-to-end encryption means that only the sender and receiver hold the keys they need to decrypt their message. Not even an app’s owners can peek in.
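The core idea can be illustrated with a deliberately simplified sketch: a one-time pad, where sender and receiver share a random key and the relay server only ever sees ciphertext. (Real apps such as WhatsApp and Signal use the far more sophisticated Signal protocol; this toy stands in only for the “only the endpoints hold the key” property.)

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the matching key byte (one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared between endpoints out-of-band

ciphertext = xor_bytes(message, key)     # all the relay server ever sees
recovered = xor_bytes(ciphertext, key)   # only a key holder can undo it
assert recovered == message
```

Without the key, the ciphertext is statistically indistinguishable from random bytes, which is precisely what makes server-side content inspection impossible by design.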

In the eyes of some encryption proponents, this privacy tool now faces its greatest challenge yet—legislation in the name of a safer Internet. The latest example is the United Kingdom’s Online Safety Bill, which is expected to become law later this year. Proposed laws in other democratic countries echo the U.K.’s. These laws, according to their opponents, would necessarily undermine the privacy-preserving cornerstone of end-to-end encryption.

On its face, the bill isn’t about encryption; it aims to make the Internet less unpleasant. The bill would give the U.K.’s broadcasting and telecoms regulator, Ofcom, additional policing powers over messaging apps, social-media platforms, search engines, and other services. Ofcom could order providers to take down harmful content, such as hateful trolling, revenge porn, and child pornography, and fine those service providers for failing to comply.

The authorities are “looking for needles in a haystack....Why would they want to vastly increase the haystack by scanning one billion messages a month of everyday people?” —Joe Mullin, Electronic Frontier Foundation

The specific segment of the Online Safety Bill that worries encryption advocates is Clause 110, which entitles Ofcom to issue takedown orders for messages “whether communicated publicly or privately by means of the service.” To do this, the bill obliges services to monitor messages with “accredited technology” that has received Ofcom’s stamp of approval.

Observers believe that there is no way for service providers to comply with Clause 110 takedown orders without compromising encryption. Representatives from Meta (which owns WhatsApp), Signal (which pioneered the Signal encryption protocol that WhatsApp also uses), and five other firms signed an open letter in opposition to the bill:

“The Bill provides no explicit protection for encryption, and if implemented as written, could empower OFCOM to try to force the proactive scanning of private messages on end-to-end encrypted communication services, nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users.”

What does proactive scanning look like in practice? One example could be Microsoft’s PhotoDNA, which the company says was designed to crack down on images of child pornography. PhotoDNA assigns each image an irreversible hash; authorities can compare that hash to other hashes to find copies of an image without actually examining the image itself.
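A deliberately simplified sketch of that hash-matching idea follows; the flagged set here is hypothetical, and an ordinary cryptographic hash stands in for PhotoDNA’s proprietary perceptual hash (which, unlike this one, also matches resized or re-encoded copies):

```python
import hashlib

# Hypothetical set of fingerprints that authorities have flagged.
FLAGGED_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def fingerprint(image_bytes: bytes) -> str:
    """Return an irreversible fingerprint of an image.

    A cryptographic hash matches only byte-for-byte copies; PhotoDNA's
    perceptual hash is designed to survive resizing and re-encoding.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_flagged(image_bytes: bytes) -> bool:
    # The comparison examines only fingerprints, never the image content.
    return fingerprint(image_bytes) in FLAGGED_HASHES
```

The hash cannot be reversed into the image, which is why proponents describe such scanning as privacy-preserving; critics counter that running any such matcher on users’ devices still constitutes surveillance of private messages.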

According to Joe Mullin, a policy analyst at the Electronic Frontier Foundation (EFF), a nonprofit that opposes the bill, services could comply with Clause 110 by mandating that PhotoDNA or similar software run on their users’ devices. While this would leave encryption intact, it would also act as what Mullin calls a “backdoor,” allowing for an app’s owners or law-enforcement agencies to monitor encrypted messages.

In an app that has end-to-end encryption, such a system might work something like this: Software like PhotoDNA, running on a user’s device, might create a hash for each message or each media file a user can see. If the authorities flag a particular hash, an app’s owner could scan the sea of hashes to pinpoint groups or conversations that also hold that hash’s corresponding message. Then, whether voluntarily or under legal obligation, the owner might share that information with law enforcement.

While this method wouldn’t break encryption, Mullin and other privacy advocates still find the idea of client-side monitoring to be unacceptably intrusive.

“Another strong possibility is that to avoid the creation of such backdoors, services will be intimidated away from using encryption altogether,” Mullin believes.

The U.K.’s Department for Science, Innovation and Technology did not respond to a request for comment. However, earlier this month, a spokesperson of a different U.K. government office denied that the bill would require services to weaken encryption.

Privacy concerns everywhere

The U.K. bill isn’t the only one raising privacy advocates’ concerns.

Since 2020, U.S. lawmakers from both major parties have pushed the so-called EARN IT Act. In the name of cracking down on child pornography, the bill would open the (currently closed) door for lawsuits against Internet services that fail to remove such material. The bill does not mention encryption, and its elected backers have denied that the act would harm encryption. The bill’s opponents, however, fear that the threat of legal action might encourage services to create backdoors or discourage services from encrypting messages at all.

In the European Union, lawmakers have proposed the Regulation to Prevent and Combat Child Sexual Abuse. In its current form, the regulation would allow law enforcement to send “detection orders” to tech platforms, requiring them to scan messages, media, or other data. Critics believe that by mandating scanning, the regulation would undermine encryption.

EFF’s Mullin, for his part, believes that other methods—allowing users to report malicious posts within an app, analyzing suspicious metadata, even traditional police work—can crack down on child sexual abuse material better than scanning messages or creating backdoors to encrypted data.

The authorities are “looking for needles in a haystack,” Mullin says. “Why would they want to vastly increase the haystack by scanning one billion messages a month of everyday people?”

Elsewhere, Russia and China have laws that allow authorities to mandate that encryption software providers decrypt data, including messages, without a warrant. A 2018 Australian law gave law-enforcement agencies the power to execute warrants ordering Internet services to decrypt and share information with them. Amazon, Facebook, Google, and Twitter all opposed the law, but they could not prevent its passing.

Back in Westminster, the Online Safety Bill is just a few hurdles away from assent. But even the bill’s passing probably won’t mean the end of the saga. In March, WhatsApp’s boss Will Cathcart said the app would not comply with the bill’s requirements.

The Strange Story of the Teens Behind the Mirai Botnet

Tue, 2023-05-23 18:30


First-year college students are understandably frustrated when they can’t get into popular upper-level electives. But they usually just gripe. Paras Jha was an exception. Enraged that upper-class students were given priority to enroll in a computer-science elective at Rutgers, the State University of New Jersey, Paras decided to crash the registration website so that no one could enroll.

On Wednesday night, 19 November 2014, at 10:00 p.m. EST—as the registration period for first-year students in spring courses had just opened—Paras launched his first distributed denial-of-service (DDoS) attack. He had assembled an army of some 40,000 bots, primarily in Eastern Europe and China, and unleashed them on the Rutgers central authentication server. The botnet sent thousands of fraudulent requests to authenticate, overloading the server. Paras’s classmates could not get through to register.

The next semester Paras tried again. On 4 March 2015, he sent an email to the campus newspaper, The Daily Targum: “A while back you had an article that talked about the DDoS attacks on Rutgers. I’m the one who attacked the network.… I will be attacking the network once again at 8:15 pm EST.” Paras followed through on his threat, knocking the Rutgers network offline at precisely 8:15 p.m.



On 27 March, Paras unleashed another assault on Rutgers. This attack lasted four days and brought campus life to a standstill. Fifty thousand students, faculty, and staff had no computer access from campus.

On 29 April, Paras posted a message on Pastebin, a website popular with hackers for sending anonymous messages. “The Rutgers IT department is a joke,” he taunted. “This is the third time I have launched DDoS attacks against Rutgers, and every single time, the Rutgers infrastructure crumpled like a tin can under the heel of my boot.”

Paras was furious that Rutgers chose Incapsula, a small cybersecurity firm based in Massachusetts, as its DDoS-mitigation provider. He claimed that Rutgers chose the cheapest company. “Just to show you the poor quality of Incapsula’s network, I have gone ahead and decimated the Rutgers network (and parts of Incapsula), in the hopes that you will pick another provider that knows what they are doing.”

Paras’s fourth attack on the Rutgers network, taking place during finals, caused chaos and panic on campus. Paras reveled in his ability to shut down a major state university, but his ultimate objective was to force it to abandon Incapsula. Paras had started his own DDoS-mitigation service, ProTraf Solutions, and wanted Rutgers to pick ProTraf over Incapsula. And he wasn’t going to stop attacking his school until it switched.

A Hacker Forged in Minecraft

Paras Jha was born and raised in Fanwood, a leafy suburb in central New Jersey. When Paras was in the third grade, a teacher recommended that he be evaluated for attention deficit hyperactivity disorder, but his parents didn’t follow through.

As Paras progressed through elementary school, his struggles increased. Because he was so obviously intelligent, his teachers and parents attributed his lackluster performance to laziness and apathy. His perplexed parents pushed him even harder.

Paras sought refuge in computers. He taught himself how to code when he was 12 and was hooked. His parents happily indulged this passion, buying him a computer and providing him with unrestricted Internet access. But their indulgence led Paras to isolate himself further, as he spent all his time coding, gaming, and hanging out with his online friends.

Paras was particularly drawn to the online game Minecraft. In ninth grade, he graduated from playing Minecraft to hosting servers. It was in hosting game servers that he first encountered DDoS attacks.

Minecraft server administrators often hire DDoS services to knock rivals offline. As Paras learned more sophisticated DDoS attacks, he also studied DDoS defense. As he became proficient in mitigating attacks on Minecraft servers, he decided to create ProTraf Solutions.

Paras’s obsession with Minecraft attacks and defense, compounded by his untreated ADHD, led to an even greater retreat from family and school. His poor academic performance in high school frustrated and depressed him. His only solace was Japanese anime and the admiration he gained from the online community of Minecraft DDoS experts.

Paras’s struggles deteriorated into paralysis when he enrolled at Rutgers, studying for a B.S. in computer science. Without his mother’s help, he was unable to handle the normal demands of living on his own. He could not manage his sleep, his schedule, or his studies. Paras was also acutely lonely. So he immersed himself in hacking.

Paras and two hacker friends, Josiah White and Dalton Norman, decided to go after the kings of DDoS—a gang known as VDoS. The gang had been providing these services to the world for four years, which is an eternity in cybercrime. The decision to fight experienced cybercriminals may seem brave, but the trio were actually older than their rivals. The VDoS gang members had been only 14 years old when they started to offer DDoS services from Israel in 2012. These 19-year-old American teenagers would be going to battle against two 18-year-old Israeli teenagers. The war between the two teenage gangs would not only change the nature of malware. Their struggle for dominance in cyberspace would create a doomsday machine.

Bots for Tots

Here’s how three teenagers built a botnet that could take down the Internet.

The Mirai botnet, with all its devastating potential, was not the product of an organized-crime or nation-state hacking group—it was put together by three teenage boys. They rented out their botnet to paying customers to do mischief with and used it to attack chosen targets of their own. But the full extent of the danger became apparent only later, after this team made the source code for their malware public. Then others used it to do greater harm: crashing Germany’s largest Internet service provider; attacking Dyn’s Domain Name System servers, making the Internet unusable for millions; and taking down all of Liberia’s Internet—to name a few examples.

The Mirai botnet exploited vulnerable Internet of Things devices, such as Web-connected video cameras, ones that supported Telnet, an outdated system for logging in remotely. Owners of these devices rarely updated their passwords, so they could be easily guessed using a strategy called a dictionary attack.

The first step in assembling a botnet was to scan random IP addresses looking for vulnerable IoT devices, ones whose passwords could be guessed. Once identified, the addresses of these devices were passed to a “loader,” which would put the malware on the vulnerable device. Infected devices located all over the world could then be used for distributed denial-of-service attacks, orchestrated by a command-and-control (C2) server. When not attacking a target, these bots would be enlisted to scan for more vulnerable devices to infect.
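The scan–guess–load–report pipeline described above can be sketched in a few lines of Python. This is a simulation, not an implementation of Mirai itself: the addresses, the helper names (`scan`, `try_dictionary`, `loader`), and the three-entry credential dictionary are all invented for illustration (the real malware carried a hard-coded list of roughly 60 factory-default pairs), and no network traffic is involved.

```python
import random

# Stand-in for the malware's hard-coded credential dictionary (illustrative only).
DICTIONARY = [("admin", "admin"), ("root", "12345"), ("root", "default")]

# Simulated IoT devices: address -> the Telnet credentials they accept.
# Addresses are from the documentation ranges; credentials are invented.
DEVICES = {
    "203.0.113.7": ("root", "12345"),
    "198.51.100.4": ("admin", "s3cure!"),  # non-default password: survives
    "192.0.2.99": ("admin", "admin"),
}

def scan(n):
    """Pick n addresses to probe (here, sampled from our simulated pool)."""
    return random.sample(list(DEVICES), n)

def try_dictionary(addr):
    """Dictionary attack: try each default credential pair in turn."""
    for creds in DICTIONARY:
        if DEVICES[addr] == creds:
            return creds  # guessed the factory-default login
    return None

def loader(addr, creds):
    """Stand-in for the loader that would install the bot on the device."""
    return {"addr": addr, "creds": creds, "infected": True}

# The scan/guess/load loop; newly infected bots report to the C2's roster.
c2_bots = []
for addr in scan(3):
    creds = try_dictionary(addr)
    if creds:
        c2_bots.append(loader(addr, creds))

print(f"{len(c2_bots)} of {len(DEVICES)} devices fell to default passwords")
```

The sketch makes the article’s point concrete: the device with a non-default password is untouched, while every device still running factory credentials is captured, and the loop then repeats from each new bot.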

Botnet Madness

Botnet malware is useful for financially motivated crime because botmasters can tell the bots in their thrall to implant malware on vulnerable machines, send phishing emails, or engage in click fraud, in which botnets profit by directing bots to click pay-per-click ads. Botnets are also great DDoS weapons because they can be trained on a target and barrage it from all directions. One day in February 2000, for example, the hacker MafiaBoy knocked out Fifa.com, Amazon.com, Dell, E-Trade, eBay, CNN, as well as Yahoo, at the time the largest search engine on the Internet.

After taking so many major websites offline, MafiaBoy was deemed a national-security threat. President Clinton ordered a national manhunt to find him. In April 2000, MafiaBoy was arrested and charged, and in January 2001 he pled guilty to 58 charges of denial-of-service attacks. Law enforcement did not reveal MafiaBoy’s real name, as this national-security threat was 15 years old.

Both MafiaBoy and the VDoS crew were adolescent boys who crashed servers. But whereas MafiaBoy did it for the sport, VDoS did it for the money. Indeed, these teenage Israeli kids were pioneering tech entrepreneurs. They helped launch a new form of cybercrime: DDoS as a service. With it, anyone could now hack with the click of a button, no technical knowledge needed.

It might be surprising that DDoS providers could advertise openly on the Web. After all, DDoSing another website is illegal everywhere. To get around this, these “booter services” have long argued they perform a legitimate function: providing those who set up Web pages a means to stress test websites.

In theory, such services do play an important function. But only in theory. As a booter-service provider admitted to University of Cambridge researchers, “We do try to market these services towards a more legitimate user base, but we know where the money comes from.”

The Botnets of August

Paras dropped out of Rutgers in his sophomore year and, with his father’s encouragement, spent the next year focused on building ProTraf Solutions, his DDoS-mitigation business. And just like a mafia don running a protection racket, he had to make that protection needed. After launching four DDoS attacks his freshman year, he attacked Rutgers yet again in September 2015, still hoping that his former school would give up on Incapsula. Rutgers refused to budge.

ProTraf Solutions was failing, and Paras needed cash. In May 2016, Paras reached out to Josiah White. Like Paras, Josiah frequented Hack Forums. When he was 15, he developed major portions of Qbot, a botnet worm that at its height in 2014 had enslaved half a million computers. Now 18, Josiah switched sides and worked with his friend Paras at ProTraf doing DDoS mitigation.

The hacker’s command-and-control (C2) server orchestrates the actions of many geographically distributed bots (computers under its control). Those computers, which could be IoT devices like IP cameras, can be directed to overwhelm the victim’s servers with unwanted traffic, making them unable to respond to legitimate requests.

But Josiah soon returned to hacking and started working with Paras to take the Qbot malware, improve it, and build a bigger, more powerful DDoS botnet. Paras and Josiah then partnered with 19-year-old Dalton Norman. The trio turned into a well-oiled team: Dalton found the vulnerabilities; Josiah updated the botnet malware to exploit these vulnerabilities; and Paras wrote the C2—software for the command-and-control server—for controlling the botnet.

But the trio had competition. Two other DDoS gangs—Lizard Squad and VDoS—decided to band together to build a giant botnet. The collaboration, known as PoodleCorp, was successful. The amount of traffic that could be unleashed on a target from PoodleCorp’s botnet hit a record 400 gigabits per second, almost four times the rate that any previous botnet had achieved. They used their new weapon to attack banks in Brazil, U.S. government sites, and Minecraft servers. They achieved this firepower by hijacking 1,300 Web-connected cameras. Web cameras tend to have powerful processors and good connectivity, and they are rarely patched. So a botnet that harnesses video cameras has enormous cannons at its disposal.

While PoodleCorp was on the rise, Paras, Josiah, and Dalton worked on a new weapon. By the beginning of August 2016, the trio had completed the first version of their botnet malware. Paras called the new code Mirai, after the anime series Mirai Nikki.

When Mirai was released, it spread like wildfire. In its first 20 hours, it infected 65,000 devices, doubling in size every 76 minutes. And Mirai had an unwitting ally in the botnet war then raging.
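Those two figures are mutually consistent: 20 hours is about 1,200 minutes, or roughly 16 doubling periods of 76 minutes each. A quick back-of-the-envelope check (assuming, for illustration, a single seed infection) lands in the same range as the 65,000 devices reported:

```python
# Sanity check of Mirai's reported early growth:
# doubling every 76 minutes over its first 20 hours.
minutes = 20 * 60                  # 1,200 minutes
doublings = minutes / 76           # ~15.8 doubling periods
population = 1 * 2 ** doublings    # assumed single seed infection

print(f"{doublings:.1f} doublings -> ~{population:,.0f} devices")
# On the order of the 65,000 infections reported for the first 20 hours.
```

The agreement is approximate, as it should be: exponential growth from one seed at the stated doubling rate predicts tens of thousands of bots within the first day.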

Up in Anchorage, Alaska, the FBI cyber unit was building a case against VDoS. The FBI was unaware of Mirai or its war with VDoS. The agents did not regularly read online boards such as Hack Forums. They did not know that the target of their investigation was being decimated. The FBI also did not realize that Mirai was ready to step into the void.

The head investigator in Anchorage was Special Agent Elliott Peterson. A former U.S. Marine, Peterson is a calm and self-assured agent with a buzz cut of red hair. At the age of 33, Peterson had returned to his native state of Alaska to prosecute cybercrime.

On 8 September 2016, the FBI’s Anchorage and New Haven cyber units teamed up and served a search warrant in Connecticut on the member of PoodleCorp who ran the C2 that controlled all its botnets. On the same day, the Israeli police arrested the VDoS founders in Israel. Suddenly, PoodleCorp was no more.

The Mirai group waited a couple of days to assess the battlefield. As far as they could tell, they were the only botnet left standing. And they were ready to use their new power. Mirai won the war because Israeli and American law enforcement arrested the masterminds behind PoodleCorp. But Mirai would have triumphed anyway, as it was ruthlessly efficient in taking control of Internet of Things devices and excluding competing malware.

A few weeks after the arrests of those behind VDoS, Special Agent Peterson found his next target: the Mirai botnet. In the Mirai case, we do not know the exact steps that Peterson’s team took in their investigation: Court orders in this case are currently “under seal,” meaning that the court deems them secret. But from public reporting, we know that Peterson’s team got its break in the usual way—from a Mirai victim: Brian Krebs, a cybersecurity reporter whose blog was DDoSed by the Mirai botnet on 25 September.

The FBI uncovered the IP address of the C2 and loading servers but did not know who had opened the accounts. Peterson’s team likely subpoenaed the hosting companies to learn the names, emails, cellphones, and payment methods of the account holders. With this information, it would seek court orders and then search warrants to acquire the content of the conspirators’ conversations.

Still, the hunt for the authors of the Mirai malware must have been a difficult one, given how clever these hackers were. For example, to evade detection Josiah didn’t just use a VPN. He hacked the home computer of a teenage boy in France and used his computer as the “exit node.” The orders for the botnet, therefore, came from this computer. Unfortunately for the owner, he was a big fan of Japanese anime and thus fit the profile of the hacker. The FBI and the French police discovered their mistake after they raided the boy’s house.

Done and Done For

After wielding its power for two months, Paras dumped nearly the complete source code for Mirai on Hack Forums. “I made my money, there’s lots of eyes looking at IOT now, so it’s time to GTFO [Get The F*** Out],” Paras wrote. With that code dump, Paras had enabled anyone to build their own Mirai. And they did.

Dumping code is reckless, but not unusual. If the police find source code on a hacker’s devices, they can claim that they “downloaded it from the Internet.” Paras’s irresponsible disclosure was part of a false-flag operation meant to throw off the FBI, which had been gathering evidence indicating Paras’s involvement in Mirai and had contacted him to ask questions. Though he gave the agent a fabricated story, getting a text from the FBI probably terrified him.

Mirai had captured the attention of the cybersecurity community and of law enforcement. But not until after Mirai’s source code dropped would it capture the attention of the entire United States. The first attack after the dump was on 21 October, on Dyn, a company based in Manchester, N.H., that provides Domain Name System (DNS) resolution services for much of the East Coast of the United States.


It began at 7:07 a.m. EST with a series of 25-second attacks, thought to be tests of the botnet and Dyn’s infrastructure. Then came the sustained assaults: one lasting an hour, then one lasting five hours. Interestingly, Dyn was not the only target. Sony’s PlayStation video infrastructure was also hit. Because the torrents of traffic were so immense, many other websites were affected. Domains such as cnn.com, facebook.com, and nytimes.com wouldn’t load. For millions of users, the Internet became unusable. At 7:00 p.m., another 10-hour salvo hit Dyn and PlayStation.

Further investigations confirmed the point of the attack. Along with Dyn and PlayStation traffic, the botnet targeted Xbox Live and Nuclear Fallout game-hosting servers. Nation-states were not aiming to hack the upcoming U.S. elections. Someone was trying to boot players off their game servers. Once again—just like MafiaBoy, VDoS, Paras, Dalton, and Josiah—the attacker was a teenage boy, this time a 15-year-old in Northern Ireland named Aaron Sterritt.

Meanwhile, the Mirai trio left the DDoS business, just as Paras said. But Paras and Dalton did not give up on cybercrime. They just took up click fraud.

Click fraud was more lucrative than running a booter service. While Mirai was no longer as big as it had been, the botnet could nevertheless generate significant advertising revenue. Paras and Dalton earned as much money in one month from click fraud as they ever made with DDoS. By January 2017, they had earned over US $180,000, as opposed to a mere $14,000 from DDoSing.

Had Paras and his friends simply shut down their booter service and moved on to click fraud, the world would likely have forgotten about them. But by releasing the Mirai code, Paras created imitators. Dyn was the first major copycat attack, but many others followed. And due to the enormous damage these imitators wrought, law enforcement was intensely interested in the Mirai authors.

After collecting information tying Paras, Josiah, and Dalton to Mirai, the FBI quietly brought each up to Alaska. Peterson’s team showed the suspects its evidence and gave them the chance to cooperate. Given that the evidence was irrefutable, each folded.

Paras Jha was indicted twice, once in New Jersey for his attack on Rutgers, and once in Alaska for Mirai. Both indictments carried the same charge: one violation of the Computer Fraud and Abuse Act. Paras faced up to 10 years in federal prison for his actions. Josiah and Dalton, indicted only in Alaska, each faced up to five years in prison.

The trio pled guilty. At the sentencing hearing held on 18 September 2018, in Anchorage, each of the defendants expressed remorse for his actions. Josiah White’s lawyer conveyed his client’s realization that Mirai was “a tremendous lapse in judgment.”

Unlike Josiah, Paras spoke directly to Judge Timothy Burgess in the courtroom. Paras began by accepting full responsibility for his actions and expressed his deep regret for the trouble he’d caused his family. He also apologized for the harm he’d caused businesses and, in particular, Rutgers, the faculty, and his fellow students.

The Department of Justice made the unusual decision not to ask for jail time. In its sentencing memo, the government noted “the divide between [the defendants’] online personas, where they were significant, well-known, and malicious actors in the DDoS criminal milieu and their comparatively mundane ‘real lives’ where they present as socially immature young men living with their parents in relative obscurity.” It recommended five years of probation and 2,500 hours of community service.

The government had one more request: for that community service “to include continued work with the FBI on cybercrime and cybersecurity matters.” Even before sentencing, Paras, Josiah, and Dalton had logged close to 1,000 hours helping the FBI hunt and shut down Mirai copycats. They contributed to more than a dozen law enforcement and research efforts. In one instance, the trio assisted in stopping a nation-state hacking group. They also helped the FBI prevent DDoS attacks aimed at disrupting Christmas-holiday shopping. Judge Burgess accepted the government’s recommendation, and the trio escaped jail time.

The most poignant moments in the hearing were Paras’s and Dalton’s singling out for praise the very person who caught them. “Two years ago, when I first met Special Agent Elliott Peterson,” Paras told the court, “I was an arrogant fool believing that somehow I was untouchable. When I met him in person for the second time, he told me something I will never forget: ‘You’re in a hole right now. It’s time you stop digging.’ ” Paras finished his remarks by thanking “my family, my friends, and Agent Peterson for helping me through this.”
