Latest News

Computer Science

IEEE Spectrum

AI Goes To K Street: ChatGPT Turns Lobbyist

Tue, 2023-01-31 20:01

Concerns around how professional lobbyists distort the political process are nothing new. But new evidence suggests their efforts could soon be turbocharged by increasingly powerful language AI. A proof-of-concept from a Stanford University researcher shows that the technology behind internet sensation ChatGPT could help automate efforts to influence politicians.

Political lobbyists spend a lot of time scouring draft bills to assess whether they’re pertinent to their clients’ objectives, and then drafting talking points for speeches, media campaigns, and letters to Congress designed to influence the direction of the legislation. Given recent breakthroughs in the ability of AI-powered services like ChatGPT to analyze and generate text, John Nay, a fellow at the Stanford Center for Legal Informatics, wanted to investigate whether these models could take over some of that work.

In a matter of days, he was able to piece together a rudimentary AI lobbyist using OpenAI’s GPT-3 large language model (LLM), which is the brains behind ChatGPT. In a paper published on the arXiv preprint server, he showed that the model was able to predict whether a summary of a U.S. congressional bill was relevant to a specific company 75 percent of the time. What’s more, the AI was able to then draft a letter to the bill’s sponsor arguing for changes to the legislation.

“The law-making process is not ready for this,” says Nay. “This was just a simple proof-of-concept built over a few days. With more resources and more time spent on this, especially with more focus on building out the workflow and a user experience tied in with the day-to-day of human lobbyists, this could likely be built into something relatively sophisticated.”

Nay’s approach involved feeding the model with text prompts via OpenAI’s API. He provided the model with the title of the bill, a summary of the bill, the subjects of the bill as determined by the Congressional Research Service, the name of the concerned company and the business description the firm filed with the U.S. Securities and Exchange Commission.

Alongside this, he told the model to imagine that it was a lobbyist and use the provided information to work out if the bill was relevant to the company. The model was also asked to explain its reasoning and provide a confidence score out of 100. For bills deemed relevant, the model was then prompted to draft a letter to their sponsor(s) persuading them to make additions favorable to the company in question.
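The setup Nay describes can be sketched as a single prompt-assembly step. The prompt wording, field names, and example values below are hypothetical reconstructions, not taken from the paper; in practice the assembled prompt would be submitted to a GPT-3.5 model through OpenAI’s API.

```python
# Sketch of the prompting setup described above. All prompt wording and
# example values are illustrative assumptions, not taken from Nay's paper.

def build_relevance_prompt(bill_title, bill_summary, bill_subjects,
                           company_name, business_description):
    """Assemble one text prompt asking the model to act as a lobbyist."""
    return (
        "You are a lobbyist analyzing Congressional bills for their "
        f"potential relevance to your client, {company_name}.\n\n"
        f"Company SEC business description: {business_description}\n\n"
        f"Bill title: {bill_title}\n"
        f"Bill summary: {bill_summary}\n"
        f"Bill subjects: {', '.join(bill_subjects)}\n\n"
        "Is this bill relevant to the company? Answer YES or NO, explain "
        "your reasoning, and give a confidence score from 0 to 100."
    )

# Hypothetical bill and company, for illustration only.
prompt = build_relevance_prompt(
    bill_title="Clean Energy Manufacturing Act",
    bill_summary="A bill to provide tax credits for domestic solar production.",
    bill_subjects=["Energy", "Taxation"],
    company_name="Acme Solar Inc.",
    business_description="Acme manufactures photovoltaic panels in the U.S.",
)

# The prompt would then be sent to the model via OpenAI's API; only the
# assembly step is shown here.
print(prompt)
```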

Because most legislation doesn’t affect most companies, Nay found that his dataset of 485 bills was dominated by irrelevant pairings. When he tested the approach on an older version of GPT-3, released in March 2022, it predicted relevance with an accuracy of only 52.2 percent. But GPT-3.5, the model that powers ChatGPT and was only made public in November 2022, achieved 75.1 percent. On bills where the model’s confidence score was over 90, accuracy rose to 79 percent.
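The confidence-thresholding step can be illustrated in a few lines of code. The prediction records below are invented for illustration and are not Nay’s data.

```python
# Illustrative evaluation of relevance predictions at different confidence
# levels. The records are made up for this sketch.
records = [
    # (model_said_relevant, model_confidence, actually_relevant)
    (True, 95, True), (False, 92, False), (True, 97, False),
    (True, 60, True), (False, 55, True), (False, 88, False),
    (True, 45, False), (False, 91, False),
]

def accuracy(rows):
    """Fraction of rows where the model's call matches reality."""
    return sum(pred == actual for pred, _, actual in rows) / len(rows)

overall = accuracy(records)
high_conf = accuracy([r for r in records if r[1] > 90])

print(f"overall accuracy: {overall:.2f}")   # 5 of 8 correct -> 0.62
print(f"confidence > 90:  {high_conf:.2f}") # 3 of 4 correct -> 0.75
```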

The paper doesn’t attempt to assess how effective the drafted letters would be at influencing policy, and Nay makes clear the approach is still nowhere near being able to do the bulk of a lobbyist’s job. But he says the significant boost in prediction performance seen between models released just months apart is noteworthy. “There is a clear trend of quickly increasing capabilities,” he says.

That could potentially spell serious trouble for the legislative process, according to Nay. It could make mass influence campaigns significantly easier, particularly at the local level, and could lead to a flood of letter writing that overwhelms already stretched-thin congressional offices or distorts their perception of public opinion. Laws could also be a useful resource for helping future AI systems understand the values and goals of human society, says Nay, but not if AI lobbyists are distorting how those laws are made.

It’s impossible to predict how quickly AI will become sophisticated enough to effectively influence the lobbying process, says renowned security expert Bruce Schneier. He notes that letter writing is probably not the bottleneck here, and in fact learning how to understand political networks and develop strategies to influence them will probably be more important skills. But he says this research is a sign of things to come. “It’s just a baby step in that direction, but I think it is the direction that society is going,” he says.

And political lobbying is only one way in which AI is likely to warp society in the future, he adds. In a new book out next week called A Hacker’s Mind, Schneier outlines how the powerful hack everything from the legislative process to the tax code and market dynamics, and he says AI is likely to turbocharge these efforts. “There are a lot of possibilities and we really are just beginning to scratch them,” he says. “The law is completely not ready for this.”

Evolution and Impact of Wi-Fi Technology

Mon, 2023-01-30 18:30

Wi-Fi remains the most popular carrier of wireless IP traffic in the emerging era of IoT and 5G, and Wi-Fi standards are continually evolving to support next-generation applications such as positioning, human-computer interfacing, motion and gesture detection, and authentication and security. Wi-Fi is a go-to connectivity technology that is stable, proven, easy to deploy, and in demand across a host of diversified vertical markets, including medical, public safety, offender tracking, industrial, PDA, and security (monitoring) applications.

Register now for this free webinar!

This webinar will cover:

  • The evolution of the Wi-Fi technology, standards, and applications
  • Key aspects/features of Wi-Fi 6 and Wi-Fi 7
  • Wi-Fi market trends and outlook
  • Solutions that will help build robust Wi-Fi 6 and Wi-Fi 7 networks

This webinar is based on the presenter’s recent paper: Pahlavan, K. and Krishnamurthy, P., 2021. Evolution and impact of Wi-Fi technology and applications: a historical perspective. International Journal of Wireless Information Networks, 28(1), pp. 3–19.

Forecasting the Ice Loss of Greenland’s Glaciers With Viscoelastic Modeling

Fri, 2023-01-27 18:30

This sponsored article is brought to you by COMSOL.

To someone standing near a glacier, it may seem as stable and permanent as anything on Earth can be. However, Earth’s great ice sheets are always moving and evolving. In recent decades, this ceaseless motion has accelerated. In fact, ice in polar regions is proving to be not just mobile, but alarmingly mortal.

Rising air and sea temperatures are speeding up the discharge of glacial ice into the ocean, which contributes to global sea level rise. This ominous progression is happening even faster than anticipated. Existing models of glacier dynamics and ice discharge underestimate the actual rate of ice loss in recent decades. This makes the work of Angelika Humbert, a physicist studying Greenland’s Nioghalvfjerdsbræ outlet glacier, especially important — and urgent.

As the leader of the Modeling Group in the Section of Glaciology at the Alfred Wegener Institute (AWI) Helmholtz Centre for Polar and Marine Research in Bremerhaven, Germany, Humbert works to extract broader lessons from Nioghalvfjerdsbræ’s ongoing decline. Her research combines data from field observations with viscoelastic modeling of ice sheet behavior. Through improved modeling of elastic effects on glacial flow, Humbert and her team seek to better predict ice loss and the resulting impact on global sea levels.

She is acutely aware that time is short. “Nioghalvfjerdsbræ is one of the last three ‘floating tongue’ glaciers in Greenland,” explains Humbert. “Almost all of the other floating tongue formations have already disintegrated.”

One Glacier That Holds 1.1 Meters of Potential Global Sea Level Rise

The North Atlantic island of Greenland is covered with the world’s second largest ice pack after that of Antarctica. (Fig. 1) Greenland’s sparsely populated landscape may seem unspoiled, but climate change is actually tearing away at its icy mantle.

The ongoing discharge of ice into the ocean is a “fundamental process in the ice sheet mass-balance,” according to a 2021 article in Communications Earth & Environment by Humbert and her colleagues. (Ref. 1) The article notes that the entire Northeast Greenland Ice Stream contains enough ice to raise global sea levels by 1.1 meters. While the entire formation is not expected to vanish, Greenland’s overall ice cover has declined dramatically since 1990. This process of decay has not been linear or uniform across the island. Nioghalvfjerdsbræ, for example, is now Greenland’s largest outlet glacier. The nearby Petermann Glacier used to be larger, but has been shrinking even more quickly. (Ref. 2)

Existing Models Underestimate the Rate of Ice Loss

Greenland’s overall loss of ice mass is distinct from “calving”, which is the breaking off of icebergs from glaciers’ floating tongues. While calving does not directly raise sea levels, the calving process can quicken the movement of land-based ice toward the coast. Satellite imagery from the European Space Agency (Fig. 2) has captured a rapid and dramatic calving event in action. Between June 29 and July 24 of 2020, a 125 km2 floating portion of Nioghalvfjerdsbræ calved into many separate icebergs, which then drifted off to melt into the North Atlantic.

Direct observations of ice sheet behavior are valuable, but insufficient for predicting the trajectory of Greenland’s ice loss. Glaciologists have been building and refining ice sheet models for decades, yet, as Humbert says, “There is still a lot of uncertainty around this approach.” Starting in 2014, the team at AWI joined 14 other research groups to compare and refine their forecasts of potential ice loss through 2100. The project also compared projections for past years to ice losses that actually occurred. Ominously, the experts’ predictions were “far below the actually observed losses” since 2015, as stated by Martin Rückamp of AWI. (Ref. 3) He says, “The models for Greenland underestimate the current changes in the ice sheet due to climate change.”

Viscoelastic Modeling to Capture Fast-Acting Forces

Angelika Humbert has personally made numerous trips to Greenland and Antarctica to gather data and research samples, but she recognizes the limitations of the direct approach to glaciology. “Field operations are very costly and time consuming, and there is only so much we can see,” she says. “What we want to learn is hidden inside a system, and much of that system is buried beneath many tons of ice! We need modeling to tell us what behaviors are driving ice loss, and also to show us where to look for those behaviors.”

Since the 1980s, researchers have relied on numerical models to describe and predict how ice sheets evolve. “They found that you could capture the effects of temperature changes with models built around a viscous power law function,” Humbert explains. “If you are modeling stable, long-term behavior, and you get your viscous deformation and sliding right, your model can do a decent job. But if you are trying to capture loads that are changing on a short time scale, then you need a different approach.”

To better understand the Northeast Greenland Ice Stream glacial system and its discharge of ice into the ocean, researchers at the Alfred Wegener Institute have developed an improved viscoelastic model to capture how tides and subglacial topography contribute to glacial flow.

What drives short-term changes in the loads that affect ice sheet behavior? Humbert and the AWI team focus on two sources of these significant but poorly understood forces: oceanic tidal movement under floating ice tongues (such as the one shown in Fig. 2) and the ruggedly uneven landscape of Greenland itself. Both tidal movement and Greenland’s topography help determine how rapidly the island’s ice cover is moving toward the ocean.

To investigate the elastic deformation caused by these factors, Humbert and her team built a viscoelastic model of Nioghalvfjerdsbræ in the COMSOL Multiphysics software. The glacier model’s geometry is based on data from radar surveys. The model solved underlying equations for a viscoelastic Maxwell material across a 2D model domain consisting of a vertical cross section along the blue line shown in Fig. 3. The simulated results were then compared to actual field measurements of glacier flow obtained by four GPS stations, one of which is shown in Fig. 3.
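The article doesn’t give AWI’s constitutive equations, but a standard one-dimensional Maxwell element, the kind of viscoelastic material named above, adds the strain rates of an elastic spring and a viscous dashpot in series; in glaciology the viscous part typically follows Glen’s power law, the "viscous power law" Humbert mentions. A sketch with generic symbols (E is Young’s modulus, η an effective viscosity, and A and n ≈ 3 the Glen’s-law parameters) — the values AWI actually used are not stated in the article:

```latex
% 1D Maxwell element: elastic and viscous strain rates add in series
\dot{\varepsilon} \;=\;
  \underbrace{\frac{\dot{\sigma}}{E}}_{\text{elastic}}
  \;+\;
  \underbrace{\frac{\sigma}{2\eta}}_{\text{viscous}}

% For ice, the viscous part is stress-dependent (Glen's flow law),
% which reproduces the long-term creep behavior of a viscous power law:
\dot{\varepsilon}_{\mathrm{viscous}} = A\,\tau^{n}, \qquad n \approx 3
```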

How Cycling Tides Affect Glacier Movement

The tides around Greenland typically raise and lower the coastal water line between 1 and 4 meters per cycle. This action exerts tremendous force on outlet glaciers’ floating tongues, and these forces are transmitted into the land-based parts of the glacier as well. AWI’s viscoelastic model explores how these cyclical changes in stress distribution can affect the glacier’s flow toward the sea.

The charts in Figure 4 present the measured tide-induced stresses acting on Nioghalvfjerdsbræ at three locations, superimposed on stresses predicted by viscous and viscoelastic simulations. Chart a shows how tidal displacements have largely died away at a station 14 kilometers inland from the grounding line (GL). Chart b shows that cyclical tidal stresses lessen at GPS-hinge, located in a bending zone near the grounding line between land and sea. Chart c shows activity at the location called GPS-shelf, which is mounted on ice floating in the ocean; accordingly, it shows the most pronounced waveform of cyclical tidal stresses acting on the ice.

“The floating tongue is moving up and down, which produces elastic responses in the land-based portion of the glacier,” says Julia Christmann, a mathematician on the AWI team who plays a key role in constructing their simulation models. “There is also a subglacial hydrological system of liquid water between the inland ice and the ground. This basal water system is poorly known, though we can see evidence of its effects.” For example, chart a shows a spike in stresses below a lake sitting atop the glacier. “Lake water flows down through the ice, where it adds to the subglacial water layer and compounds its lubricating effect,” Christmann says.

The plotted trend lines highlight the greater accuracy of the team’s new viscoelastic simulations, as compared to purely viscous models. As Christmann explains, “The viscous model does not capture the full extent of changes in stress, and it does not show the correct amplitude. (See chart c in Fig. 4.) In the bending zone, we can see a phase shift in these forces due to elastic response.” Christmann continues, “You can only get an accurate model if you account for viscoelastic ‘spring’ action.”

Modeling Elastic Strains from Uneven Landscapes

The crevasses in Greenland’s glaciers reveal the unevenness of the underlying landscape. Crevasses also provide further evidence that glacial ice is not a purely viscous material. “You can watch a glacier over time and see that it creeps, as a viscous material would,” says Humbert. However, a purely viscous material would not form persistent cracks the way that ice sheets do. “From the beginning of glaciology, we have had to accept the reality of these crevasses,” she says. The team’s viscoelastic model provides a novel way to explore how the land beneath Nioghalvfjerdsbræ facilitates the emergence of crevasses and affects glacial sliding.

“When we did our simulations, we were surprised at the amount of elastic strain created by topography,” Christmann explains. “We saw these effects far inland, where they would have nothing to do with tidal changes.”

Figure 6 shows how vertical deformation in the glacier corresponds to the underlying landscape and helps researchers understand how localized elastic vertical motion affects the entire sheet’s horizontal movement. Shaded areas indicate the vertical velocity of each part of the glacier relative to its basal velocity. Blue zones are moving vertically more slowly than the ice directly beneath them at the bed, indicating that the ice is being compressed. Pink and purple zones are moving faster than the ice at the base, showing that the ice is being vertically stretched.

These simulation results suggest that the AWI team’s improved model could provide more accurate forecasts of glacial movements. “This was a ‘wow’ effect for us,” says Humbert. “Just as the up and down of the tides creates elastic strain that affects glacier flow, now we can capture the elastic part of the up and down over bedrock as well.”

Scaling Up as the Clock Runs Down

The improved viscoelastic model of Nioghalvfjerdsbræ is only the latest example of Humbert’s decades-long use of numerical simulation tools for glaciological research. “COMSOL is very well suited to our work,” she says. “It is a fantastic tool for trying out new ideas. The software makes it relatively easy to adjust settings and conduct new simulation experiments without having to write custom code.” Humbert’s university students frequently incorporate simulation into their research. Examples include Julia Christmann’s PhD work on the calving of ice shelves, and another degree project that modeled the evolution of the subglacial channels that carry meltwater from the surface to the ice base.

The AWI team is proud of their investigative work, but they are fully cognizant of just how much information about the world’s ice cover remains unknown — and that time is short. “We cannot afford Maxwell material simulations of all of Greenland,” Humbert concedes. “We could burn years of computational time and still not cover everything. But perhaps we can parameterize the localized elastic response effects of our model, and then implement it at a larger scale,” she says.

This scale defines the challenges faced by 21st-century glaciologists. The size of their research subjects is staggering, and so is the global significance of their work. Even as their knowledge is growing, it is imperative that they find more information, more quickly. Angelika Humbert would welcome input from people in other fields who study viscoelastic materials. “If other COMSOL users are dealing with fractures in Maxwell materials, they probably face some of the same difficulties that we have, even if their models have nothing to do with ice!” she says. “Maybe we can have an exchange and tackle these issues together.”

Perhaps, in this spirit, we who benefit from the work of glaciologists can help shoulder some of the vast and weighty challenges they bear.

  1. J. Christmann, V. Helm, S.A. Khan, A. Humbert, et al. “Elastic Deformation Plays a Non-Negligible Role in Greenland’s Outlet Glacier Flow“, Communications Earth & Environment, vol. 2, no. 232, 2021.
  2. European Space Agency, “Spalte Breaks Up“, September 2020.
  3. Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, “Model comparison: Experts calculate future ice loss and the extent to which Greenland and the Antarctic will contribute to sea-level rise“, September 2020.

Stacking Turns Organic Transistors Up

Wed, 2023-01-25 19:47

Organic electronics appear to be, as the name might imply, quite good at interacting with a biological body and brain. Now scientists have created record-breaking, high-performance organic electronic devices using a potentially cheap, easy, and scalable approach that adopts a vertical architecture instead of a flat one, according to a new study.

Modern electronics rely on transistors, which are essentially switches that flick on and off to encode data as ones and zeros. Most transistors are made of inorganic semiconductors, but organic electronics depend on organic compounds. Whereas organic field-effect transistors (OFETs) have ions that accumulate only on the surface of the organic material, organic electrochemical transistors (OECTs) rely on ions flowing in and out of organic semiconductors. This feature helps make OECTs efficient switches and powerful amplifiers.

“Our vertically stacked electrochemical transistor takes performance to a totally new level.”
—Tobin Marks, Northwestern University

Organic electronics have a number of advantages over their standard counterparts, such as flexibility, low weight, and easy, cheap fabrication. The way in which OECTs communicate—using ions, just as biology does—may also open up applications such as biomedical sensing, body-machine interfaces, and brain-imitating neuromorphic technology. In addition, previous research found that OECTs can possess exceptionally low driving voltages of less than 1 volt, low power consumption of less than 1 microwatt, and high transconductances—a measure of how well they can amplify signals—of more than 10 millisiemens.

However, previous research into OECTs was hindered by problems such as slow speeds and poor stability during operation. Until now, the best OECTs could achieve switching speeds of roughly 1 kilohertz and a stability on the order of 5,000 cycles of switching. In addition, manufacturing these devices often required complex, expensive fabrication techniques as well as channel lengths—the distance between the source and the drain electrodes—that were at least 10 micrometers long.

Now scientists have developed OECTs with switching speeds greater than 1 kHz, a stability across more than 50,000 cycles, channel lengths of less than 100 nanometers, as well as transconductances of 200 to 400 mS—figures that are the highest seen yet in OECTs. The key to this advance is a vertical architecture in which these devices are built like sandwiches, instead of the flat architecture seen with most previous OECTs and conventional transistors (in which they are laid out like street maps).

“Our vertically stacked electrochemical transistor takes performance to a totally new level,” says study cosenior author Tobin Marks, a materials chemist at Northwestern University in Evanston, Ill.

OECTs have three electrodes—a source and drain electrode connected by a thin film, or channel, of an organic semiconductor, plus a gate electrode connected to an electrolyte material that covers the channel. Applying a voltage to the gate electrode causes ions in the electrolyte to flow into the channel, altering the current passing between the source and drain electrodes.
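Transconductance, the amplification figure quoted above, is simply the slope of the drain current with respect to gate voltage. Here is a minimal sketch of how it could be estimated from a measured transfer curve; the data points below are invented for illustration and are not measurements from the Northwestern devices.

```python
import numpy as np

# Hypothetical OECT transfer-curve data (gate voltage vs. drain current).
# The numbers are illustrative only, not measurements from the devices
# described in the article.
v_gate = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])                 # volts
i_drain = np.array([0.5e-3, 2e-3, 10e-3, 35e-3, 70e-3, 95e-3])    # amperes

# Transconductance g_m = dI_D / dV_G, estimated by finite differences
# along the measured curve.
g_m = np.gradient(i_drain, v_gate)   # siemens

peak_gm_mS = g_m.max() * 1e3
print(f"Peak transconductance: {peak_gm_mS:.0f} mS")  # -> 300 mS
```

With these made-up numbers the peak lands at 300 mS, inside the 200-to-400 mS range the researchers report for their vertical OECTs.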

In the new study, the researchers sandwiched the channel between two gold electrodes—the source on the bottom and the drain on top, with neither electrode completely covering the channel. The channel was made of a semiconducting ion-permeable compound mixed with another polymer that helped make the channel structurally robust and more stable during operation. The electrolyte lay on top of both the channel and the drain electrode.

The scientists noted they could fabricate these vertical OECTs in a simple and scalable way using standard manufacturing techniques. The vertical architecture also means these devices can be stacked on top of each other to achieve high circuit density, they say. The gate can also readily be modified—say, with biomolecules designed to latch onto specific molecules—to help serve as a sensor, says study coauthor Jonathan Rivnay, a materials scientist and biomedical engineer at Northwestern University.

In addition, the scientists could make the channel using either an n-type semiconductor, which carries negative charges in the form of electrons, or a p-type semiconductor, which carries positive charges in the form of holes. Previously, high-performance n-type OECTs, which are crucial for sensors and logic circuits, have proven difficult to build. In the new study, the research team’s vertical n-type OECTs outperformed any previous n- and p-type OECTs when used in complementary logic circuits that use both n- and p-type OECTs. (This work also marked the first vertically stacked complementary OECT logic circuits.)

The researchers are now exploring how to modify the materials and fabrication techniques used to make the vertical OECTs to further boost their speed and stability, Marks says.

The scientists detailed their findings in the 19 January issue of the journal Nature.

Picosecond Accuracy in Multi-channel Data Acquisition

Fri, 2023-01-20 19:28

Timing accuracy is vital for multi-channel synchronized sampling at high speed. In this webinar, we explain challenges and solutions for clocking, triggering, and timestamping in Giga-sample-per-second data acquisition systems.

Learn more about phase-locked sampling, clock and trigger distribution, jitter reduction, trigger correction, record alignment, and more.

Register now to join this free webinar!

Date: Tuesday, February 28, 2023

Time: 10 AM PST | 1 PM EST
Duration: 30 minutes

Topics covered in this webinar:

  • Phase-locked sampling
  • Clock and trigger distribution
  • Trigger correction and record alignment
  • Daisy-chaining to achieve 50 ps trigger accuracy for 64 channels sampling at 5 GSPS per channel

Who should attend? Developers who want to learn how to optimize performance in high-performance multi-channel systems.

What will attendees learn? How to distribute clocks and triggers, triggering methods, synchronized sampling on multiple boards, and more.

Presenter: Thomas Elter, Senior Field Applications Engineer

The Lisa Was Apple’s Best Failure

Thu, 2023-01-19 23:00

Happy 40th Birthday to Lisa! The Apple Lisa computer, that is. In celebration of this milestone, the Computer History Museum has received permission from Apple to release the source code to the Lisa, including its system and applications software.

You can access the Lisa source code here.

What is the Apple Lisa computer, and why was its release on 19 January 1983 an important date in computer history? Apple’s Macintosh line of computers, known for bringing mouse-driven graphical user interfaces (GUIs) to the masses and transforming the way we use computers, owes its existence to its immediate predecessor, the Lisa. Without the Lisa, there would have been no Macintosh—at least in the form we have it today—and perhaps there would have been no Microsoft Windows either.

From DOS to the Graphical User Interface

There was a time when a majority of personal computer users interacted with their machines via command-line interfaces—that is, through text-based operating systems such as CP/M and MS-DOS, in which users had to type arcane commands to control their computers. The invention of the graphical user interface, or GUI, especially in the form of windows, icons, menus, and pointers (collectively known as WIMP), controlled by a mouse, occurred at Xerox PARC in the 1970s. Xerox’s Alto was a prototype computer with a bitmapped graphics display designed to be used by just one person—a “personal computer.” Key elements of the WIMP GUI paradigm, such as overlapping windows and popup menus, were invented by Alan Kay’s Learning Research Group for the children’s software development environment, Smalltalk.

In 1979, a delegation from Apple Computer, led by Steve Jobs, visited PARC and received a demonstration of Smalltalk on the Alto. Upon seeing the GUI, Jobs immediately grasped the potential of this new way of interacting with a computer and didn’t understand why Xerox wasn’t marketing the technology to the public. Jobs could see that all computers should work this way, and he wanted Apple to bring this technology out from the research lab to the masses.

From the Apple II to the Lisa

In its own R&D labs, Apple was already working on a successor to its best-selling, but command-line-based, Apple II personal computer. The machine was code-named “Lisa,” after Steve Jobs’ child with a former girlfriend. The code-name stuck, and a backronym, Local Integrated Systems Architecture, was invented to conceal the connection to Jobs’ daughter. Unlike the Apple II, which was aimed at the home computer market, the Lisa would be targeted at the business market, use the powerful Motorola 68000 microprocessor, and be paired with a hard drive.

After the PARC visit, Jobs and many of Lisa’s engineers, including Bill Atkinson, worked to incorporate a GUI into the Lisa. Atkinson developed the QuickDraw graphics library for the Lisa, and he collaborated with Larry Tesler, who left PARC to join Apple, on developing the Lisa’s user interface. Tesler created an object-oriented variant of Pascal, called Clascal, for the Lisa Toolkit application programming interfaces. Later, with the guidance of Pascal creator Niklaus Wirth, Clascal would evolve into the official Object Pascal.

A screenshot from the Apple Lisa 2 shows icons on the desktop and the menu bar with pulldown menus at the top of the screen. This interface is very similar to that of the original Macintosh. Credit: David T. Craig

A reorganization of the company in 1982, however, removed Jobs from having any direct influence on the Lisa project, which was subsequently managed by John Couch. Jobs then discovered the Macintosh project started by Jef Raskin. Jobs took over that project and moved it away from Raskin’s original appliance-like vision to one more like the Lisa—a mouse-driven, GUI-based computer but more affordable than the Lisa.

For a few years, the Lisa and Macintosh teams competed internally, although there was collaboration as well. Atkinson’s QuickDraw became part of the Macintosh, and Atkinson thus contributed to both projects. Lisa software manager Bruce Daniels worked on the Macintosh for a time, greatly influencing the direction of the Mac toward the Lisa’s GUI. Tesler’s work on the object-oriented Lisa Toolkit would later evolve into the MacApp frameworks, which used Object Pascal. Owen Densmore, who had been at Xerox, worked on printing for both the Lisa and the Macintosh.

The Lisa’s user interface underwent many versions before finally arriving at the icon-based desktop familiar to us from the Macintosh. The final Lisa Desktop Manager still had a few key differences from the Mac. The Lisa had a document-centric rather than application-centric model, for example. Each program on the Lisa featured a “stationery pad” that resided on the desktop, separate from the application icon. Users tore off a sheet from the pad to create a new document. Users rarely interacted with the application’s icon itself. The idea of centering the user’s world around documents rather than applications would reemerge in the 1990s, with technologies such as Apple’s OpenDoc and Microsoft’s OLE.

The Lisa was released to the public on 19 January 1983, at a price of US $9,995 (about $30,000 today). This was two years after Xerox had introduced its own GUI-based workstation, the Star, for $16,595, which was similarly targeted at office workers. The high price of both machines compared to the IBM PC, a command-line-based personal computer that retailed for $1,565, was enough to doom them both to failure.

But price wasn’t the Lisa’s only problem. Its sophisticated operating system, which allowed multiple programs to run at the same time, was too powerful even for its 68000 processor, and the machine thus ran sluggishly. Additionally, the Lisa shipped with a suite of applications, including word processing and charts, which discouraged third-party developers from writing software for it. The original Lisa included dual floppy drives, called Twiggy, that had been designed in-house and proved unreliable.

From the Lisa to the Macintosh

Meanwhile, the Macintosh project competed with Lisa internally for resources and had Jobs’ full attention. Announced in the famous Super Bowl ad, the Macintosh began shipping in January 1984 for $2,495. Unlike the Lisa, it had no hard drive, had greatly reduced memory, didn’t multitask, and lacked some other advanced features, and thus was much more affordable. An innovative marketing program created by Dan’l Lewin (now CHM’s CEO) sold Macintoshes at a discount to college students, contributing significantly to the Mac’s installed base.

The advent of PostScript laser printers like the Apple LaserWriter in 1985, combined with the page layout application PageMaker from Aldus, created a killer application for the Macintosh: desktop publishing. This new market would grow to a billion dollars by 1988. The Macintosh would become the first commercially successful computer with a graphical user interface, and its product line continues to this day.

The Lisa 2 series was announced in January 1984 alongside the Macintosh. Computer History Museum

The Lisa 2, whose two models were priced at $3,495 and $5,495, respectively, was announced alongside the Macintosh in January 1984. The original Lisa’s problematic Twiggy floppy drives were replaced with a single Sony 3.5-inch floppy drive, the same as that used on the Mac. A year later, the Lisa 2/10 was rebranded as the Macintosh XL with MacWorks, an emulator that allowed it to run Macintosh software. But despite improved sales, the product was killed off in April 1985 so that Apple could focus on the Mac, according to Owen Linzmayer’s 2004 book Apple Confidential 2.0.

From the Macintosh to the World

The release of the GUI-based Lisa and the Macintosh inspired several software companies to create software “shells” that would install GUI environments on top of MS-DOS command-line-based IBM PCs. The first of these was VisiOn, released in late 1983 by VisiCorp, publisher of the first spreadsheet program, VisiCalc. This was followed in 1985 by GEM from Digital Research, the company behind the command-line-based CP/M operating system. Microsoft came out with Windows later the same year, although Windows wouldn’t see wide use until the 1990s, when Windows 3.0 was released. Both GEM and Windows were influenced by the Mac’s user interface. Between Windows and the Macintosh, the GUI became the dominant user interface paradigm for personal computers.

Apple’s John Couch and his son demonstrate the “what you see is what you get” concept on a Lisa computer. Roger Ressmeyer/Corbis/Getty Images

Despite the Lisa’s failure in the marketplace, it holds a place in the history of computing as the first GUI-based computer to be released by a personal computer company. Though the Xerox Star 8010 beat the Lisa to market, the Star was competing with other workstations from Apollo and Sun. Perhaps more importantly, without the Lisa and its incorporation of the PARC-inspired GUI, the Macintosh itself would not have been based on the GUI. Both computers shared key technologies, such as the mouse and the QuickDraw graphics library. The Lisa was a key steppingstone to the Macintosh, and an important milestone in the history of graphical user interfaces and personal computers more generally.


Editor’s note: This post originally appeared on the blog of the Computer History Museum.

Optical AI Could Feed Voracious Data Needs

Wed, 2023-01-18 20:57

A brain-imitating neural network that employs photons instead of electrons could rapidly analyze vast amounts of data by running many computations simultaneously using thousands of wavelengths of light, a new study finds.

Artificial neural networks are increasingly finding use in applications such as analyzing medical scans and supporting autonomous vehicles. In these artificial intelligence systems, components (a.k.a. neurons) are fed data and cooperate to solve a problem, such as recognizing faces. A neural network is dubbed “deep” if it possesses multiple layers of neurons.

As neural networks grow in size and power, they are becoming more energy-hungry when run on conventional electronics, which is why some scientists have been investigating optical computing as a promising next-generation AI medium. This approach uses light instead of electricity to perform computations more quickly and with less power than its electronic counterparts.

For example, a diffractive optical neural network is composed of a stack of layers, each possessing thousands of pixels that can diffract, or scatter, light. These diffractive features serve as the neurons in a neural network. Deep learning is used to design each layer so that when input in the form of light shines on the stack, the output light encodes the results of complex tasks such as image classification or image reconstruction. All this computing “does not consume power, except for the illumination light,” says study senior author Aydogan Ozcan, an optical engineer at the University of California, Los Angeles.

Such diffractive networks could analyze large amounts of data at the speed of light to perform tasks such as identifying objects. For example, they could help autonomous vehicles instantly recognize pedestrians or traffic signs, or medical diagnostic systems quickly identify evidence of disease. Conventional electronics need to first image those items, then convert those signals to data, and finally run programs to figure out what those objects are. In contrast, diffractive networks only need to receive light reflected off or otherwise arriving from those items—they can identify an object because the light from it gets mostly diffracted toward a single pixel assigned to that kind of object.

Previously, Ozcan and his colleagues designed a monochromatic diffractive network using a series of thin 64-square-centimeter polymer wafers fabricated using 3D printing. When illuminated with a single wavelength or color of light, this diffractive network could implement a single matrix multiplication operation. These calculations, which involve multiplying grids of numbers known as matrices, are key to many computational tasks, including operating neural networks.
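The equivalence between a stack of diffractive layers and a single matrix multiplication can be illustrated numerically. The sketch below is a simplified conceptual model, not the researchers’ actual design: it treats each layer as a per-pixel phase modulation (a diagonal complex matrix) followed by free-space propagation (modeled here as an arbitrary fixed linear operator), so the whole cascade collapses into one matrix acting on the input light field. The layer sizes and random values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of diffractive "pixels" per layer (illustrative size)

# A diffractive layer applies a learned per-pixel phase shift (diagonal
# complex matrix), then the light propagates to the next layer (a fixed
# linear operator, stood in for here by a random complex matrix).
def layer_matrix(phases, propagation):
    return propagation @ np.diag(np.exp(1j * phases))

propagation = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
layers = [layer_matrix(rng.uniform(0, 2 * np.pi, n), propagation)
          for _ in range(3)]

# Because every layer is linear, the cascade collapses into a single
# matrix: the stack implements one matrix multiplication on the input.
A = layers[2] @ layers[1] @ layers[0]

x = rng.normal(size=n) + 1j * rng.normal(size=n)  # input light field
y_stack = layers[2] @ (layers[1] @ (layers[0] @ x))  # layer by layer
assert np.allclose(A @ x, y_stack)
```

The point of the model is that, once the phases are fixed by training, applying the stack to light is equivalent to multiplying the input field by one precomputed matrix, with the physics doing the arithmetic.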

Now the researchers have developed a broadband diffractive optical processor that can accept multiple input wavelengths of light at once for up to thousands of matrix multiplication operations “executed simultaneously at the speed of light,” Ozcan says.

In the new study, the scientists 3D-printed three diffractive layers, each with 14,400 diffractive features. Their experiments showed the diffractive network could successfully operate using two submillimeter-wavelength terahertz-frequency channels. Their computer models suggested these diffractive networks could accept up to roughly 2,000 wavelength channels simultaneously.
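Conceptually, wavelength multiplexing means the same physical stack presents a different effective transfer matrix to each wavelength, so every channel carries its own matrix multiplication in parallel. The toy simulation below is an assumption-laden sketch of that idea, with the channel count, matrix sizes, and random values purely illustrative; in software the channel-wise products are batched, while in the optics they all happen at once.

```python
import numpy as np

rng = np.random.default_rng(1)
channels, n = 8, 16  # wavelength channels and pixels (illustrative sizes)

# One effective transfer matrix per wavelength: the same diffractive
# stack scatters each wavelength differently.
A = rng.normal(size=(channels, n, n))
x = rng.normal(size=(channels, n))  # input light field on each channel

# Every channel's matrix-vector product runs "simultaneously" in the
# optics; here we batch them with a single einsum.
y = np.einsum('cij,cj->ci', A, x)

# Each channel's output matches its own standalone matrix multiplication.
assert np.allclose(y[0], A[0] @ x[0])
```

Scaling the channel axis from 8 to the roughly 2,000 wavelengths suggested by the researchers’ models is what yields thousands of simultaneous matrix multiplications.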

“We demonstrated the feasibility of massively parallel optical computing by employing a wavelength multiplexing scheme,” Ozcan says.

The scientists note it should prove possible to build diffractive networks that operate at visible and other frequencies of light beyond terahertz. Such optical neural nets can also be manufactured using a wide variety of materials and techniques.

All in all, they “may find applications in various fields, including, for example, biomedical imaging, remote sensing, analytical chemistry and material science,” Ozcan says.

The scientists detailed their findings 9 January in the journal Advanced Photonics.
