IEEE News

IEEE Spectrum

  • Why the Art of Invention Is Always Being Reinvented
    by Peter B. Meyer on 01. November 2024. at 14:00



    Every invention begins with a problem—and the creative act of seeing a problem where others might just see unchangeable reality. For one 5-year-old, the problem was simple: She liked to have her tummy rubbed as she fell asleep. But her mom, exhausted from working two jobs, often fell asleep herself while putting her daughter to bed. “So [the girl] invented a teddy bear that would rub her belly for her,” explains Stephanie Couch, executive director of the Lemelson-MIT Program. Its mission is to nurture the next generation of inventors and entrepreneurs.

    Anyone can learn to be an inventor, Couch says, given the right resources and encouragement. “Invention doesn’t come from some innate genius, it’s not something that only really special people get to do,” she says. Her program creates invention-themed curricula for U.S. classrooms, ranging from kindergarten to community college.

    This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

    We’re biased, but we hope that little girl grows up to be an engineer. By the time she comes of age, the act of invention may be something entirely new—reflecting the adoption of novel tools and the guiding forces of new social structures. Engineers, with their restless curiosity and determination to optimize the world around them, are continuously in the process of reinventing invention.

    In this special issue, we bring you stories of people who are in the thick of that reinvention today. IEEE Spectrum is marking 60 years of publication this year, and we’re celebrating by highlighting both the creative act and the grindingly hard engineering work required to turn an idea into something world changing. In these pages, we take you behind the scenes of some awe-inspiring projects to reveal how technology is being made—and remade—in our time.

    Inventors Are Everywhere

    Invention has long been a democratic process. The economist B. Zorina Khan of Bowdoin College has noted that the U.S. Patent and Trademark Office has always endeavored to allow essentially anyone to try their hand at invention. From the beginning, the patent examiners didn’t care who the applicants were—anyone with a novel and useful idea who could pay the filing fee was officially an inventor.

    This ethos continues today. It’s still possible for an individual to launch a tech startup from a garage or go on “Shark Tank” to score investors. The Swedish inventor Simone Giertz, for example, made a name for herself with YouTube videos showing off her hilariously bizarre contraptions, like an alarm clock with an arm that slapped her awake. The MIT innovation scholar Eric von Hippel has spotlighted today’s vital ecosystem of “user innovation,” in which inventors such as Giertz are motivated by their own needs and desires rather than ambitions of mass manufacturing.

    But that route to invention gets you only so far, and the limits of what an individual can achieve have become starker over time. To tackle some of the biggest problems facing humanity today, inventors need a deep-pocketed government sponsor or corporate largess to muster the equipment and collective human brainpower required.

    When we think about the challenges of scaling up, it’s helpful to remember Alexander Graham Bell and his collaborator Thomas Watson. “They invent this cool thing that allows them to talk between two rooms—so it’s a neat invention, but it’s basically a gadget,” says Eric Hintz, a historian of invention at the Smithsonian Institution. “To go from that to a transcontinental long-distance telephone system, they needed a lot more innovation on top of the original invention.” To scale their invention, Hintz says, Bell and his colleagues built the infrastructure that eventually evolved into Bell Labs, which became the standard-bearer for corporate R&D.

    In this issue, we see engineers grappling with challenges of scale in modern problems. Consider the semiconductor technology supported by the U.S. CHIPS and Science Act, a policy initiative aimed at bolstering domestic chip production. Beyond funding manufacturing, it also provides US $11 billion for R&D, including three national centers where companies can test and pilot new technologies. As one startup tells the tale, this infrastructure will drastically speed up the lab-to-fab process.

    And then there are atomic clocks, the epitome of precision timekeeping. When researchers decided to build a commercial version, they had to shift their perspective, taking a sprawling laboratory setup and reimagining it as a portable unit fit for mass production and the rigors of the real world. They had to stop optimizing for precision and instead choose the most robust laser, and the atom that would go along with it.

    These technology efforts benefit from infrastructure, brainpower, and cutting-edge new tools. One tool that may become ubiquitous across industries is artificial intelligence—and it’s a tool that could further expand access to the invention arena.

    What if you had a team of indefatigable assistants at your disposal, ready to scour the world’s technical literature for material that could spark an idea, or to iterate on a concept 100 times before breakfast? That’s the promise of today’s generative AI. The Swiss company Iprova is exploring whether its AI tools can automate “eureka” moments for its clients, corporations that are looking to beat their competitors to the next big idea. The serial entrepreneur Steve Blank similarly advises young startup founders to embrace AI’s potential to accelerate product development; he even imagines testing product ideas on digital twins of customers. Although it’s still early days, generative AI offers inventors tools that have never been available before.

    Measuring an Invention’s Impact

    If AI accelerates the discovery process, and many more patentable ideas come to light as a result, then what? As it is, more than a million patents are granted every year, and we struggle to identify the ones that will make a lasting impact. Bryan Kelly, an economist at the Yale School of Management, and his collaborators made an attempt to quantify the impact of patents by doing a technology-assisted deep dive into U.S. patent records dating back to 1840. Using natural language processing, they identified patents that introduced novel phrasing that was then repeated in subsequent patents—an indicator of radical breakthroughs. For example, Elias Howe Jr.’s 1846 patent for a sewing machine wasn’t closely related to anything that came before but quickly became the basis of future sewing-machine patents.
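    The core of that text-mining approach can be sketched in a few lines of code. The snippet below is a toy illustration, not Kelly and his collaborators’ actual pipeline: it scores a patent highly when its wording looks unlike earlier filings but is echoed in later ones, using TF-IDF vectors and cosine similarity over a tiny, made-up corpus.

    ```python
    # Minimal sketch of the "novel now, echoed later" idea behind patent-impact
    # scoring. Not the authors' actual method; the corpus and scoring are
    # hypothetical placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    patents = {
        1840: "improved hand loom for weaving cloth",
        1846: "sewing machine with eye-pointed needle and shuttle forming lockstitch",
        1850: "sewing machine feed mechanism with lockstitch shuttle",
        1855: "lockstitch sewing machine with automatic thread tension",
    }

    years = sorted(patents)
    vecs = TfidfVectorizer().fit_transform([patents[y] for y in years])
    sims = cosine_similarity(vecs)

    def importance(i):
        """High when a patent looks unlike earlier filings but like later ones."""
        backward = [sims[i, j] for j in range(len(years)) if years[j] < years[i]]
        forward = [sims[i, j] for j in range(len(years)) if years[j] > years[i]]
        novelty = 1 - (max(backward) if backward else 0)   # distance from the past
        influence = max(forward) if forward else 0         # echo in the future
        return novelty * influence

    for i, y in enumerate(years):
        print(y, round(importance(i), 2))
    ```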

    Another foundational patent was the one awarded to an English bricklayer in 1824 for the invention of Portland cement, which is still the key ingredient in most of the world’s concrete. As Ted C. Fishman describes in his fascinating inquiry into the state of concrete today, this seemingly stable industry is in upheaval because of its heavy carbon emissions. The AI boom is fueling a construction boom in data centers, and all those buildings require billions of tons of concrete. Fishman takes readers into labs and startups where researchers are experimenting with climate-friendly formulations of cement and concrete. Who knows which of those experiments will result in a patent that echoes down the ages?

    Some engineers start their invention process by thinking about the impact they want to make on the world. The eminent Indian technologist Raghunath Anant Mashelkar, who has popularized the idea of “Gandhian engineering,” advises inventors to work backward from “what we want to achieve for the betterment of humanity,” and to create problem-solving technologies that are affordable, durable, and not only for the elite.

    Durability matters: Invention isn’t just about creating something brand new. It’s also about coming up with clever ways to keep an existing thing going. Such is the case with the Hubble Space Telescope. Originally designed to last 15 years, it’s been in orbit for twice that long and has actually gotten better with age, because engineers designed the satellite to be fixable and upgradable in space.

    For all the invention activity around the globe—the World Intellectual Property Organization says that 3.5 million applications for patents were filed in 2022—it may be harder to invent something useful than it used to be. Not because “everything that can be invented has been invented,” as in the apocryphal quote attributed to the unfortunate head of the U.S. patent office in 1889. Rather, because so much education and experience are required before an inventor can even understand all the dimensions of the door they’re trying to crack open, much less come up with a strategy for doing so. Ben Jones, an economist at Northwestern’s Kellogg School of Management, has shown that the average age of great technological innovators rose by about six years over the course of the 20th century. “Great innovation is less and less the provenance of the young,” Jones concluded.

    Consider designing something as complex as a nuclear fusion reactor, as Tom Clynes describes in “An Off-the-Shelf Stellarator.” Fusion researchers have spent decades trying to crack the code of commercially viable fusion—it’s more akin to a calling than a career. If they succeed, they will unlock essentially limitless clean energy with no greenhouse gas emissions or meltdown danger. That’s the dream that the physicists in a lab in Princeton, N.J., are chasing. But before they even started, they first had to gain an intimate understanding of all the wrong ways to build a fusion reactor. Once the team was ready to proceed, what they created was an experimental reactor that accelerates the design-build-test cycle. With new AI tools and unprecedented computational power, they’re now searching for the best ways to create the magnetic fields that will confine the plasma within the reactor. Already, two startups have spun out of the Princeton lab, both seeking a path to commercial fusion.

    The stellarator story and many other articles in this issue showcase how one innovation leads to the next, and how one invention can enable many more. The legendary Dean Kamen, best known for mechanical devices like the Segway and the prosthetic “Luke” arm, is now trying to push forward the squishy world of biological manufacturing. In an interview, Kamen explains how his nonprofit is working on the infrastructure—bioreactors, sensors, and controls—that will enable companies to explore the possibilities of growing replacement organs. You could say that he’s inventing the launchpad so others can invent the rockets.

    Sometimes everyone in a research field knows where the breakthrough is needed, but that doesn’t make it any easier to achieve. Case in point: the quest for a household humanoid robot that can perform domestic chores, switching effortlessly from frying an egg to folding laundry. Roboticists need better learning software that will enable their bots to navigate the uncertainties of the real world, and they also need cheaper and lighter actuators. Major advances in these two areas would unleash a torrent of creativity and may finally bring robot butlers into our homes.

    And maybe the future roboticists who make those breakthroughs will have cause to thank Marina Umaschi Bers, a technologist at Boston College who cocreated the ScratchJr programming language and the KIBO robotics kit to teach kids the basics of coding and robotics in entertaining ways. She sees engineering as a playground, a place for children to explore and create, to be goofy or grandiose. If today’s kindergartners learn to think of themselves as inventors, who knows what they’ll create tomorrow?

  • Honor a Loved One With an IEEE Foundation Memorial Fund
    by IEEE Foundation on 31. October 2024. at 18:00



    As the philanthropic partner of IEEE, the IEEE Foundation expands the organization’s charitable body of work by inspiring philanthropic engagement that ignites a donor’s innermost interests and values.

    One way the Foundation does so is by partnering with IEEE units to create memorial funds, which pay tribute to members, family, friends, teachers, professors, students, and others. This type of giving honors someone special while also supporting future generations of engineers and celebrating innovation.

    Below are three recently created memorial funds that not only have made an impact on their beneficiaries and perpetuated the legacy of the namesake but also have a deep meaning for those who launched them.

    EPICS in IEEE Fischer Mertel Community of Projects

    The EPICS in IEEE Fischer Mertel Community of Projects was established to support projects “designed to inspire multidisciplinary teams of engineering students to collaborate and engineer solutions to address local community needs.”

    The fund was created by the children of Joe Fischer and Herb Mertel to honor their fathers’ passion for mentoring students. Longtime IEEE members, Fischer and Mertel were active with the IEEE Electromagnetic Compatibility Society. Fischer was the society’s 1972 president and served on its board of directors for six years. Mertel served on the society’s board from 1979 to 1983 and again from 1989 to 1993.

    “The EPICS in IEEE Fischer Mertel Community of Projects was established to inspire and support outstanding engineering ideas and efforts that help communities worldwide,” says Tina Mertel, Herb’s daughter. “Joe Fischer and my father had a lifelong friendship and excelled as engineering leaders and founders of their respective companies [Fischer Custom Communications and EMACO]. I think that my father would have been proud to know that their friendship and work are being honored in this way.”

    The nine projects supported thus far have the potential to impact more than 104,000 people because of the work and collaboration of 190 students worldwide. The projects funded are intended to represent at least two of the EPICS in IEEE’s focus categories: education and outreach; human services; environmental; and access and abilities.


    IEEE AESS Michael C. Wicks Radar Student Travel Grant

    The IEEE Michael C. Wicks Radar Student Travel Grant was established by IEEE Fellow Michael Wicks prior to his death in 2022. The grant provides travel support for graduate students who are the primary authors on a paper being presented at the annual IEEE Radar Conference. Wicks was an electronics engineer and a radar industry leader who was known for developing knowledge-based space-time adaptive processing. He believed in investing in the next generation, and he wanted to provide an opportunity for that to happen.

    Ten graduate students have been awarded the Wicks grant to date. This year two students from Region 8 (Africa, Europe, Middle East) and two students from Region 10 (Asia and Pacific) were able to travel to Denver to attend the IEEE Radar Conference and present their research. The papers they presented are “Target Shape Reconstruction From Multi-Perspective Shadows in Drone-Borne SAR Systems” and “Design of Convolutional Neural Networks for Classification of Ships from ISAR Images.”

    Life Fellow Fumio Koyama and IEEE Fellow Constance J. Chang-Hasnain proudly display their IEEE Nick Holonyak, Jr. Medal for Semiconductor Optoelectronic Technologies at this year’s IEEE Honors Ceremony. They are accompanied by IEEE President-Elect Kathleen Kramer and IEEE President Tom Coughlin. Robb Cohen

    IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies

    The IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies was created with a memorial fund supported by some of Holonyak’s former graduate students to honor his work as a professor and mentor. Presented on behalf of the IEEE Board of Directors, the medal recognizes outstanding contributions to semiconductor optoelectronic devices and systems including high-energy-efficiency semiconductor devices and electronics.

    Holonyak was a prolific inventor and longtime professor of electrical engineering and physics. In 1962, while working as a scientist at General Electric’s Advanced Semiconductor Laboratory in Syracuse, N.Y., he invented the first practical visible-spectrum LED and laser diode. His innovations are the basis of the devices now used in high-efficiency light bulbs and laser diodes. He left GE in 1963 to join the University of Illinois Urbana-Champaign as a professor of electrical engineering and physics at the invitation of John Bardeen, his Ph.D. advisor and a two-time Nobel Prize winner in physics. Holonyak retired from UIUC in 2013 but continued research collaborations at the university with young faculty members.

    “In addition to his remarkable technical contributions, he was an excellent teacher and mentor to graduate students and young electrical engineers,” says Russell Dupuis, one of his doctoral students. “The impact of his innovations has improved the lives of most people on the earth, and this impact will only increase with time. It was my great honor to be one of his students and to help create this important IEEE medal to ensure that his work will be remembered in the future.”

    The award was presented for the first time at this year’s IEEE Honors Ceremony, in Boston, to IEEE Fellow Constance Chang-Hasnain and Life Fellow Fumio Koyama for “pioneering contributions to vertical cavity surface-emitting laser (VCSEL) and VCSEL-based photonics for optical communications and sensing.”

    Establishing a memorial fund through the IEEE Foundation is a gratifying way to recognize someone who has touched your life while also advancing technology for humanity. If you are interested in learning more about memorial and tribute funds, reach out to the IEEE Foundation team: donate@ieee.org.

  • New Carrier Fluid Makes Hydrogen Way Easier to Transport
    by Willie D. Jones on 31. October 2024. at 12:00



    Imagine pulling up to a refueling station and filling your vehicle’s tank with liquid hydrogen, as safe and convenient to handle as gasoline or diesel, without the need for high-pressure tanks or cryogenic storage. This vision of a sustainable future could become a reality if a Calgary, Canada–based company, Ayrton Energy, can scale up its innovative method of hydrogen storage and distribution. Ayrton’s technology could make hydrogen a viable, one-to-one replacement for fossil fuels in existing infrastructure like pipelines, fuel tankers, rail cars, and trucks.

    The company’s approach is to use liquid organic hydrogen carriers (LOHCs) to make it easier to transport and store hydrogen. The method chemically bonds hydrogen to carrier molecules, which absorb hydrogen molecules and make them more stable—kind of like hydrogenating cooking oil to produce margarine.

    A researcher pours a sample of Ayrton’s LOHC fluid into a vial. Ayrton Energy

    The approach would allow liquid hydrogen to be transported and stored in ambient conditions, rather than in the high-pressure, cryogenic tanks (to hold it at temperatures below -252 °C) currently required for keeping hydrogen in liquid form. It would also be a big improvement on gaseous hydrogen, which is highly volatile and difficult to keep contained.

    Founded in 2021, Ayrton is one of several companies across the globe developing LOHCs, including Japan’s Chiyoda and Mitsubishi, Germany’s Covalion, and China’s Hynertech. But toxicity, energy density, and input energy issues have limited LOHCs as contenders for making liquid hydrogen feasible. Ayrton says its formulation eliminates these trade-offs.

    Safe, Efficient Hydrogen Fuel for Vehicles

    Conventional LOHC technologies used by most of the aforementioned companies rely on substances such as toluene, which forms methylcyclohexane when hydrogenated. These carriers pose safety risks due to their flammability and volatility. Hydrogenious LOHC Technologies in Erlangen, Germany, and other hydrogen fuel companies have shifted toward dibenzyltoluene, a more stable carrier that holds more hydrogen per unit volume than methylcyclohexane, though it requires higher temperatures (and thus more energy) to bind and release the hydrogen. Dibenzyltoluene hydrogenation occurs at between 3 and 10 megapascals (30 and 100 bar) and 200–300 °C, compared with 10 MPa (100 bar) and just under 200 °C for methylcyclohexane.

    Ayrton’s proprietary oil-based hydrogen carrier not only captures and releases hydrogen with less input energy than is required for other LOHCs, but also stores more hydrogen than methylcyclohexane can—55 kilograms per cubic meter compared with methylcyclohexane’s 50 kg/m³. Dibenzyltoluene holds more hydrogen per unit volume (up to 65 kg/m³), but Ayrton’s approach to infusing the carrier with hydrogen atoms promises to cost less. Hydrogenation or dehydrogenation with Ayrton’s carrier fluid occurs at 0.1 megapascal (1 bar) and about 100 °C, says founder and CEO Natasha Kostenuk. And as with the other LOHCs, after hydrogenation it can be transported and stored at ambient temperatures and pressures.
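    To put those volumetric figures in perspective, here is a rough back-of-the-envelope comparison. The 30-cubic-meter trailer volume and the 33.3 kilowatt-hour-per-kilogram energy content of hydrogen are illustrative assumptions, not numbers from Ayrton:

    ```python
    # Rough comparison of hydrogen delivered per tanker load for the carriers
    # mentioned in the article. The 30 m^3 trailer volume and the ~33.3 kWh/kg
    # lower heating value of hydrogen are illustrative assumptions.
    TRAILER_M3 = 30
    KWH_PER_KG_H2 = 33.3  # approximate lower heating value of hydrogen

    carriers_kg_per_m3 = {
        "methylcyclohexane": 50,
        "Ayrton carrier": 55,
        "dibenzyltoluene": 65,
    }

    for name, density in carriers_kg_per_m3.items():
        kg = density * TRAILER_M3
        mwh = kg * KWH_PER_KG_H2 / 1000
        print(f"{name:20s} {kg:5.0f} kg H2 per load  (~{mwh:.0f} MWh)")
    ```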

    Judges described [Ayrton's approach] as a critical technology for the deployment of hydrogen at large scale.” —Katie Richardson, National Renewable Energy Lab

    Ayrton’s LOHC fluid is as safe to handle as margarine, but it’s still a chemical, says Kostenuk. “I wouldn’t drink it. If you did, you wouldn’t feel very good. But it’s not lethal,” she says.

    Kostenuk and fellow Ayrton cofounder Brandy Kinkead (who serves as the company’s chief technical officer) were originally trying to bring hydrogen generators to market to fill gaps in the electrical grid. “We were looking for fuel cells and hydrogen storage. Fuel cells were easy to find, but we couldn’t find a hydrogen storage method or medium that would be safe and easy to transport to fuel our vision of what we were trying to do with hydrogen generators,” Kostenuk says. During the search, they came across LOHC technology but weren’t satisfied with the trade-offs demanded by existing liquid hydrogen carriers. “We had the idea that we could do it better,” she says. The duo pivoted, adjusting their focus from hydrogen generators to hydrogen storage solutions.

    “Everybody gets excited about hydrogen production and hydrogen end use, but they forget that you have to store and manage the hydrogen,” Kostenuk says. Incompatibility with current storage and distribution has been a barrier to adoption, she says. “We’re really excited about being able to reuse existing infrastructure that’s in place all over the world.” Ayrton’s hydrogenated liquid has fuel-cell-grade (99.999 percent) hydrogen purity, so there’s no advantage in using pure liquid hydrogen with its need for subzero temperatures, according to the company.

    The main challenge the company faces is the set of issues that come along with any technology scaling up from pilot-stage production to commercial manufacturing, says Kostenuk. “A crucial part of that is aligning ourselves with the right manufacturing partners along the way,” she notes.

    Asked about how Ayrton is dealing with some other challenges common to LOHCs, Kostenuk says Ayrton has managed to sidestep them. “We stayed away from materials that are expensive and hard to procure, which will help us avoid any supply chain issues,” she says. By performing the reactions at such low temperatures, Ayrton can get its carrier fluid to withstand 1,000 hydrogenation-dehydrogenation cycles before it no longer holds enough hydrogen to be useful. Conventional LOHCs are limited to a couple of hundred cycles before the high temperatures required for bonding and releasing the hydrogen break down the fluid and diminish its storage capacity, Kostenuk says.

    Breakthrough in Hydrogen Storage Technology

    In acknowledgement of what Ayrton’s nontoxic, oil-based carrier fluid could mean for the energy and transportation sectors, the U.S. National Renewable Energy Lab (NREL) at its annual Industry Growth Forum in May named Ayrton an “outstanding early-stage venture.” A selection committee of more than 180 climate tech and cleantech investors and industry experts chose Ayrton from a pool of more than 200 initial applicants, says Katie Richardson, group manager of NREL’s Innovation and Entrepreneurship Center, which organized the forum. The committee based its decision on the company’s innovation, market positioning, business model, team, next steps for funding, technology, capital use, and quality of pitch presentation. “Judges described Ayrton’s approach as a critical technology for the deployment of hydrogen at large scale,” Richardson says.

    As a next step toward enabling hydrogen to push gasoline and diesel aside, “we’re talking with hydrogen producers who are right now offering their customers cryogenic and compressed hydrogen,” says Kostenuk. “If they offered LOHC, it would enable them to deliver across longer distances, in larger volumes, in a multimodal way.” The company is also talking to some industrial site owners who could use the hydrogenated LOHC for buffer storage to hold onto some of the energy they’re getting from clean, intermittent sources like solar and wind. Another natural fit, she says, is energy service providers that are looking for a reliable method of seasonal storage beyond what batteries can offer. The goal is to eventually scale up enough to become the go-to alternative (or perhaps the standard) fuel for cars, trucks, trains, and ships.

  • The AI Boom Rests on Billions of Tonnes of Concrete
    by Ted C. Fishman on 30. October 2024. at 13:00



    Along the country road that leads to ATL4, a giant data center going up east of Atlanta, dozens of parked cars and pickups lean tenuously on the narrow dirt shoulders. The many out-of-state plates are typical of the phalanx of tradespeople who muster for these massive construction jobs. With tech giants, utilities, and governments budgeting upwards of US $1 trillion for capital expansion to join the global battle for AI dominance, data centers are the bunkers, factories, and skunkworks—and concrete and electricity are the fuel and ammunition.

    To the casual observer, the data industry can seem incorporeal, its products conjured out of weightless bits. But as I stand beside the busy construction site for DataBank’s ATL4, what impresses me most is the gargantuan amount of material—mostly concrete—that gives shape to the goliath that will house, secure, power, and cool the hardware of AI. Big data is big concrete. And that poses a big problem.

    This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

    Concrete is not just a major ingredient in data centers and the power plants being built to energize them. As the world’s most widely manufactured material, concrete—and especially the cement within it—is also a major contributor to climate change, accounting for around 6 percent of global greenhouse gas emissions. Data centers use so much concrete that the construction boom is wrecking tech giants’ commitments to eliminate their carbon emissions. Even though Google, Meta, and Microsoft have touted goals to be carbon neutral or negative by 2030, and Amazon by 2040, the industry is now moving in the wrong direction.

    Last year, Microsoft’s carbon emissions jumped by over 30 percent, primarily due to the materials in its new data centers. Google’s greenhouse emissions are up by nearly 50 percent over the past five years. As data centers proliferate worldwide, Morgan Stanley projects that data centers will release about 2.5 billion tonnes of CO2 each year by 2030—or about 40 percent of what the United States currently emits from all sources.

    But even as innovations in AI and the big-data construction boom are boosting emissions for the tech industry’s hyperscalers, the reinvention of concrete could also play a big part in solving the problem. Over the last decade, there’s been a wave of innovation, some of it profit-driven, some of it from academic labs, aimed at fixing concrete’s carbon problem. Pilot plants are being fielded to capture CO2 from cement plants and sock it safely away. Other projects are cooking up climate-friendlier recipes for cements. And AI and other computational tools are illuminating ways to drastically cut carbon by using less cement in concrete and less concrete in data centers, power plants, and other structures.

    Demand for green concrete is clearly growing. Amazon, Google, Meta, and Microsoft recently joined an initiative led by the Open Compute Project Foundation to accelerate testing and deployment of low-carbon concrete in data centers, for example. Supply is increasing, too—though it’s still minuscule compared to humanity’s enormous appetite for moldable rock. But if the green goals of big tech can jump-start innovation in low-carbon concrete and create a robust market for it as well, the boom in big data could eventually become a boon for the planet.

    Hyperscaler Data Centers: So Much Concrete

    At the construction site for ATL4, I’m met by Tony Qoori, the company’s big, friendly, straight-talking head of construction. He says that this giant building and four others DataBank has recently built or is planning in the Atlanta area will together add 133,000 square meters (1.44 million square feet) of floor space.

    They all follow a universal template that Qoori developed to optimize the construction of the company’s ever-larger centers. At each site, trucks haul in more than a thousand prefabricated concrete pieces: wall panels, columns, and other structural elements. Workers quickly assemble the precision-measured parts. Hundreds of electricians swarm the building to wire it up in just a few days. Speed is crucial when construction delays can mean losing ground in the AI battle.

    The ATL4 data center outside Atlanta is one of five being built by DataBank. Together they will add over 130,000 square meters of floor space. DataBank

    That battle can be measured in new data centers and floor space. The United States is home to more than 5,000 data centers today, and the Department of Commerce forecasts that number to grow by around 450 a year through 2030. Worldwide, the number of data centers now exceeds 10,000, and analysts project another 26.5 million m2 of floor space over the next five years. Here in metro Atlanta, developers broke ground last year on projects that will triple the region’s data-center capacity. Microsoft, for instance, is planning a 186,000-m2 complex big enough to house around 100,000 rack-mounted servers; it will consume 324 megawatts of electricity.

    The velocity of the data-center boom means that no one is pausing to await greener cement. For now, the industry’s mantra is “Build, baby, build.”

    “There’s no good substitute for concrete in these projects,” says Aaron Grubbs, a structural engineer at ATL4. The latest processors going on the racks are bigger, heavier, hotter, and far more power hungry than previous generations. As a result, “you add a lot of columns,” Grubbs says.

    1,000 Companies Working on Green Concrete

    Concrete may not seem an obvious star in the story of how electricity and electronics have permeated modern life. Other materials—copper and silicon, aluminum and lithium—get higher billing. But concrete provides the literal, indispensable foundation for the world’s electrical workings. It is the solid, stable, durable, fire-resistant stuff that makes power generation and distribution possible. It undergirds nearly all advanced manufacturing and telecommunications. What was true in the rapid build-out of the power industry a century ago remains true today for the data industry: Technological progress begets more growth—and more concrete. Although each generation of processor and memory squeezes more computing onto each chip, and advances in superconducting microcircuitry raise the tantalizing prospect of slashing the data center’s footprint, Qoori doesn’t think his buildings will shrink to the size of a shoebox anytime soon. “I’ve been through that kind of change before, and it seems the need for space just grows with it,” he says.

    By weight, concrete is not a particularly carbon-intensive material. Creating a kilogram of steel, for instance, releases about 2.4 times as much CO2 as a kilogram of cement does. But the global construction industry consumes about 35 billion tonnes of concrete a year. That’s about 4 tonnes for every person on the planet and twice as much as all other building materials combined. It’s that massive scale—and the associated cost and sheer number of producers—that creates both a threat to the climate and inertia that resists change.
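    A quick back-of-the-envelope calculation shows how that scale translates into climate impact. The cement fraction of concrete, the emission factor per tonne of cement, and the global emissions total below are rough literature values, not figures from this article:

    ```python
    # Back-of-the-envelope check that concrete's scale, not its per-kilogram
    # intensity, drives its climate impact. The cement fraction (~12% by mass),
    # cement emission factor (~0.65 t CO2 per t cement), and ~50 Gt CO2e of
    # global annual emissions are rough literature values, not article figures.
    concrete_t_per_year = 35e9
    population = 8e9
    cement_fraction = 0.12
    co2_per_t_cement = 0.65
    global_co2e = 50e9

    print(f"concrete per person: {concrete_t_per_year / population:.1f} t/yr")
    cement_t = concrete_t_per_year * cement_fraction
    co2_t = cement_t * co2_per_t_cement
    print(f"implied cement CO2: {co2_t / 1e9:.1f} Gt/yr "
          f"(~{100 * co2_t / global_co2e:.0f}% of global emissions)")
    ```

    The result, a few percent of global emissions, is in the same ballpark as the roughly 6 percent figure cited earlier.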

    At its Edmonton, Alberta, plant, Heidelberg Materials is adding systems to capture carbon dioxide produced by the manufacture of Portland cement. Heidelberg Materials North America

    Yet change is afoot. When I visited the innovation center operated by the Swiss materials giant Holcim, in Lyon, France, research executives told me about the database they’ve assembled of nearly 1,000 companies working to decarbonize cement and concrete. None yet has enough traction to measurably reduce global concrete emissions. But the innovators hope that the boom in data centers—and in associated infrastructure such as new nuclear reactors and offshore wind farms, where each turbine foundation can use up to 7,500 cubic meters of concrete—may finally push green cement and concrete beyond labs, startups, and pilot plants.

    Why cement production emits so much carbon

    Though the terms “cement” and “concrete” are often conflated, they are not the same thing. A popular analogy in the industry is that cement is the egg in the concrete cake. Here’s the basic recipe: Blend cement with larger amounts of sand and other aggregates. Then add water, to trigger a chemical reaction with the cement. Wait a while for the cement to form a matrix that pulls all the components together. Let sit as it cures into a rock-solid mass.

    Portland cement, the key binder in most of the world’s concrete, was serendipitously invented in England by William Aspdin, while he was tinkering with earlier mortars that his father, Joseph, had patented in 1824. More than a century of science has revealed the essential chemistry of how cement works in concrete, but new findings are still leading to important innovations, as well as insights into how concrete absorbs atmospheric carbon as it ages.

    As in the Aspdins’ day, the process to make Portland cement still begins with limestone, a sedimentary mineral made from crystalline forms of calcium carbonate. Most of the limestone quarried for cement originated hundreds of millions of years ago, when ocean creatures mineralized calcium and carbonate in seawater to make shells, bones, corals, and other hard bits.

    Cement producers often build their large plants next to limestone quarries that can supply decades’ worth of stone. The stone is crushed and then heated in stages as it is combined with lesser amounts of other minerals that typically include calcium, silicon, aluminum, and iron. What emerges from the mixing and cooking are small, hard nodules called clinker. A bit more processing, grinding, and mixing turns those pellets into powdered Portland cement, which accounts for about 90 percent of the CO2 emitted by the production of conventional concrete [see infographic, “Roads to Cleaner Concrete”].

    Karen Scrivener, shown in her lab at EPFL, has developed concrete recipes that reduce emissions by 30 to 40 percent. Stefan Wermuth/Bloomberg/Getty Images

    Decarbonizing Portland cement is often called heavy industry’s “hard problem” because of two processes fundamental to its manufacture. The first process is combustion: To coax limestone’s chemical transformation into clinker, large heaters and kilns must sustain temperatures around 1,500 °C. Currently that means burning coal, coke, fuel oil, or natural gas, often along with waste plastics and tires. The exhaust from those fires generates 35 to 50 percent of the cement industry’s emissions. Most of the remaining emissions result from gaseous CO2 liberated by the chemical transformation of the calcium carbonate (CaCO3) into calcium oxide (CaO), a process called calcination. That gas also usually heads straight into the atmosphere.
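    The calcination step alone sets a floor on those process emissions. A standard stoichiometric calculation (not a figure from the article) makes the point:

    \[
    \mathrm{CaCO_3} \;\xrightarrow{\;\approx 900\,^{\circ}\mathrm{C}\;}\; \mathrm{CaO} + \mathrm{CO_2}, \qquad
    \frac{M_{\mathrm{CO_2}}}{M_{\mathrm{CaCO_3}}} = \frac{44\ \mathrm{g/mol}}{100\ \mathrm{g/mol}} = 0.44
    \]

    So every tonne of limestone converted in the kiln releases roughly 0.44 tonnes of CO2 from the rock itself (about 0.79 tonnes per tonne of CaO produced), before any fuel is burned.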

    Concrete production, in contrast, is mainly a business of mixing cement powder with other ingredients and then delivering the slurry speedily to its destination before it sets. Most concrete in the United States is prepared to order at batch plants—souped-up materials depots where the ingredients are combined, dosed out from hoppers into special mixer trucks, and then driven to job sites. Because concrete grows too stiff to work after about 90 minutes, concrete production is highly local. There are more ready-mix batch plants in the United States than there are Burger King restaurants.

    Batch plants can offer thousands of potential mixes, customized to fit the demands of different jobs. Concrete in a hundred-story building differs from that in a swimming pool. With flexibility to vary the quality of sand and the size of the stone—and to add a wide variety of chemicals—batch plants have more tricks for lowering carbon emissions than any cement plant does.

    Cement plants that capture carbon

    China accounts for more than half of the concrete produced and used in the world, but companies there are hard to track. Outside of China, the top three multinational cement producers—Holcim, Heidelberg Materials in Germany, and Cemex in Mexico—have launched pilot programs to snare CO2 emissions before they escape and then bury the waste deep underground. To do that, they’re taking carbon capture and storage (CCS) technology already used in the oil and gas industry and bolting it onto their cement plants.

    These pilot programs will need to scale up without eating profits—something that eluded the coal industry when it tried CCS decades ago. Tough questions also remain about where exactly to store billions of tonnes of CO2 safely, year after year.

    The appeal of CCS for cement producers is that they can continue using existing plants while still making progress toward carbon neutrality, which trade associations have committed to reach by 2050. But with well over 3,000 plants around the world, adding CCS to all of them would take enormous investment. Currently less than 1 percent of the global supply is low-emission cement. Accenture, a consultancy, estimates that outfitting the whole industry for carbon capture could cost up to $900 billion.

    “The economics of carbon capture is a monster,” says Rick Chalaturnyk, a professor of geotechnical engineering at the University of Alberta, in Edmonton, Canada, who studies carbon capture in the petroleum and power industries. He sees incentives for the early movers on CCS, however. “If Heidelberg, for example, wins the race to the lowest carbon, it will be the first [cement] company able to supply those customers that demand low-carbon products”—customers such as hyperscalers.

    Though cement companies seem unlikely to invest their own billions in CCS, generous government subsidies have enticed several to begin pilot projects. Heidelberg has announced plans to start capturing CO2 from its Edmonton operations in late 2026, transforming it into what the company claims would be “the world’s first full-scale net-zero cement plant.” Exhaust gas will run through stations that purify the CO2 and compress it into a liquid, which will then be transported to chemical plants to turn it into products or to depleted oil and gas reservoirs for injection underground, where hopefully it will stay put for an epoch or two.

    Chalaturnyk says that the scale of the Edmonton plant, which aims to capture a million tonnes of CO2 a year, is big enough to give CCS technology a reasonable test. Proving the economics is another matter. Half the $1 billion cost for the Edmonton project is being paid by the governments of Canada and Alberta.

    ROADS TO CLEANER CONCRETE


    As the big-data construction boom boosts the tech industry’s emissions, the reinvention of concrete could play a major role in solving the problem.

    • CONCRETE TODAY Most of the greenhouse emissions from concrete come from the production of Portland cement, which requires high heat and releases carbon dioxide (CO2) directly into the air.

    • CONCRETE TOMORROW At each stage of cement and concrete production, advances in ingredients, energy supplies, and uses of concrete promise to reduce waste and pollution.


    The U.S. Department of Energy has similarly offered Heidelberg up to $500 million to help cover the cost of attaching CCS to its Mitchell, Ind., plant and burying up to 2 million tonnes of CO2 per year below the plant. And the European Union has gone even bigger, allocating nearly €1.5 billion ($1.6 billion) from its Innovation Fund to support carbon capture at cement plants in seven of its member nations.

    These tests are encouraging, but they are all happening in rich countries, where demand for concrete peaked decades ago. Even in China, concrete production has started to flatten. All the growth in global demand through 2040 is expected to come from less-affluent countries, where populations are still growing and quickly urbanizing. According to projections by the Rhodium Group, cement production in those regions is likely to rise from around 30 percent of the world’s supply today to 50 percent by 2050 and 80 percent before the end of the century.

    So will rich-world CCS technology translate to the rest of the world? I asked Juan Esteban Calle Restrepo, the CEO of Cementos Argos, the leading cement producer in Colombia, about that when I sat down with him recently at his office in Medellín. He was frank. “Carbon capture may work for the U.S. or Europe, but countries like ours cannot afford that,” he said.

    Better cement through chemistry

    As long as cement plants run limestone through fossil-fueled kilns, they will generate excessive amounts of carbon dioxide. But there may be ways to ditch the limestone—and the kilns. Labs and startups have been finding replacements for limestone, such as calcined kaolin clay and fly ash, that don’t release CO2 when heated. Kaolin clays are abundant around the world and have been used for centuries in Chinese porcelain and more recently in cosmetics and paper. Fly ash—a messy, toxic by-product of coal-fired power plants—is cheap and still widely available, even as coal power dwindles in many regions.

    At the Swiss Federal Institute of Technology Lausanne (EPFL), Karen Scrivener and colleagues developed cements that blend calcined kaolin clay and ground limestone with a small portion of clinker. Calcining clay can be done at temperatures low enough that electricity from renewable sources can do the job. Various studies have found that the blend, known as LC3, can reduce overall emissions by 30 to 40 percent compared to those of Portland cement.
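    A rough accounting shows where the savings come from. The blend proportions below are typical published LC3-50 values, and the relative CO2 cost of calcining the clay is an assumption, not EPFL's exact recipe:

    ```python
    # Rough estimate of LC3's CO2 saving relative to ordinary Portland cement
    # (OPC), assuming process emissions scale mainly with clinker content.
    # Blend fractions are typical published LC3-50 values; the relative CO2
    # cost of calcining clay is an assumption, not EPFL's exact recipe.
    opc_clinker = 0.95          # clinker fraction in OPC (rest is mostly gypsum)
    lc3_clinker = 0.50          # clinker fraction in LC3-50
    lc3_clay = 0.30             # calcined-clay fraction in LC3-50
    clay_vs_clinker_co2 = 0.3   # calcining clay emits roughly a third as much CO2 per tonne

    opc = opc_clinker * 1.0
    lc3 = lc3_clinker * 1.0 + lc3_clay * clay_vs_clinker_co2
    print(f"estimated reduction vs. OPC: {100 * (1 - lc3 / opc):.0f}%")
    ```

    Under those assumptions the estimate lands squarely in the 30 to 40 percent range the studies report.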

    LC3 is also cheaper to make than Portland cement and performs as well for nearly all common uses. As a result, calcined clay plants have popped up across Africa, Europe, and Latin America. In Colombia, Cementos Argos is already producing more than 2 million tonnes of the stuff annually. The World Economic Forum’s Centre for Energy and Materials counts LC3 among the best hopes for the decarbonization of concrete. Wide adoption by the cement industry, the centre reckons, “can help prevent up to 500 million tonnes of CO2 emissions by 2030.”

    In a win-win for the environment, fly ash can also be used as a building block for low- and even zero-emission concrete, and the high heat of processing neutralizes many of the toxins it contains. Ancient Romans used volcanic ash to make slow-setting but durable concrete: The Pantheon, built nearly two millennia ago with ash-based cement, is still in great shape.

    Coal fly ash is a cost-effective ingredient that has reactive properties similar to those of Roman cement and Portland cement. Many concrete plants already add fresh fly ash to their concrete mixes, replacing 15 to 35 percent of the cement. The ash improves the workability of the concrete, and though the resulting concrete is not as strong for the first few months, it grows stronger than regular concrete as it ages, like the Pantheon.

    University labs have tested concretes made entirely with fly ash and found that some actually outperform the standard variety. More than 15 years ago, researchers at Montana State University used concrete made with 100 percent fly ash in the floors and walls of a credit union and a transportation research center. But performance depends greatly on the chemical makeup of the ash, which varies from one coal plant to the next, and on following a tricky recipe. The decommissioning of coal-fired plants has also been making fresh fly ash scarcer and more expensive.

    At Sublime Systems’ pilot plant in Massachusetts, the company is using electrochemistry instead of heat to produce lime silicate cements that can replace Portland cement. Tony Luong

    That has spurred new methods to treat and use fly ash that’s been buried in landfills or dumped into ponds. Such industrial burial grounds hold enough fly ash to make concrete for decades, even after every coal plant shuts down. Utah-based Eco Material Technologies is now producing cements that include both fresh and recovered fly ash as ingredients. The company claims it can replace up to 60 percent of the Portland cement in concrete—and that a new variety, suitable for 3D printing, can substitute entirely for Portland cement.

    Hive 3D Builders, a Houston-based startup, has been feeding that low-emissions concrete into robots that are printing houses in several Texas developments. “We are 100 percent Portland cement–free,” says Timothy Lankau, Hive 3D’s CEO. “We want our homes to last 1,000 years.”

    Sublime Systems, a startup spun out of MIT by battery scientists, uses electrochemistry rather than heat to make low-carbon cement from rocks that don’t contain carbon. Similar to a battery, Sublime’s process uses a voltage between an anode and a cathode to create a pH gradient that isolates silicates and reactive calcium, in the form of lime (CaO). The company mixes those ingredients together to make a cement with no fugitive carbon, no kilns or furnaces, and binding power comparable to that of Portland cement. With the help of $87 million from the U.S. Department of Energy, Sublime is building a plant in Holyoke, Mass., that will be powered almost entirely by hydroelectricity. Recently the company was tapped to provide concrete for a major offshore wind farm planned off the coast of Martha’s Vineyard.

    Software takes on the hard problem of concrete

    It is unlikely that any one innovation will allow the cement industry to hit its target of carbon neutrality before 2050. New technologies take time to mature, scale up, and become cost-competitive. In the meantime, says Philippe Block, a structural engineer at ETH Zurich, smart engineering can reduce carbon emissions through the leaner use of materials.

    His research group has developed digital design tools that make clever use of geometry to maximize the strength of concrete structures while minimizing their mass. The team’s designs start with the soaring architectural elements of ancient temples, cathedrals, and mosques—in particular, vaults and arches—which they miniaturize and flatten and then 3D print or mold inside concrete floors and ceilings. The lightweight slabs, suitable for the upper stories of apartment and office buildings, use much less concrete and steel reinforcement and have a CO2 footprint that’s reduced by 80 percent.

    There’s hidden magic in such lean design. In multistory buildings, much of the mass of concrete is needed just to hold the weight of the material above it. The carbon savings of Block’s lighter slabs thus compound, because the size, cost, and emissions of a building’s conventional-concrete elements are slashed.
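    A toy model makes the compounding visible: size the columns at every level in proportion to the mass they must carry, then compare lightening the slabs alone with also shrinking the structure beneath them. All numbers are illustrative, not from Block's group or Vaulted.

    ```python
    # Toy model of why lighter floor slabs compound in a multistory building.
    # Columns at each level are sized, crudely, in proportion to the load they
    # carry. All numbers are illustrative.
    def carried_loads(n_floors, slab_mass):
        """Load above each level, roof first, if columns are resized for it."""
        loads, carried = [], 0.0
        for _ in range(n_floors):
            loads.append(carried)
            carried += slab_mass + 0.10 * carried   # slab plus columns at this level
        return loads

    def building_mass(n_floors, slab_mass, column_loads):
        """Total structural mass for given slab mass and column-sizing loads."""
        return sum(slab_mass + 0.10 * load for load in column_loads)

    heavy_loads = carried_loads(20, slab_mass=1.0)
    light_loads = carried_loads(20, slab_mass=0.2)

    baseline   = building_mass(20, 1.0, heavy_loads)
    slabs_only = building_mass(20, 0.2, heavy_loads)   # columns still sized for old loads
    compounded = building_mass(20, 0.2, light_loads)   # columns shrink with the slabs

    print(f"slabs lightened only: {100 * (1 - slabs_only / baseline):.0f}% of mass saved")
    print(f"with resized columns: {100 * (1 - compounded / baseline):.0f}% of mass saved")
    ```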

    Vaulted, a Swiss startup, uses digital design tools to minimize the concrete in floors and ceilings, cutting their CO2 footprint by 80 percent. Vaulted

    In Dübendorf, Switzerland, a wildly shaped experimental building has floors, roofs, and ceilings created by Block’s structural system. Vaulted, a startup spun out of ETH, is engineering and fabricating the lighter floors of a 10-story office building under construction in Zug, Switzerland.

    That country has also been a leader in smart ways to recycle and reuse concrete, rather than simply landfilling demolition rubble. This is easier said than done—concrete is tough stuff, riddled with rebar. But there’s an economic incentive: Raw materials such as sand and limestone are becoming scarcer and more costly. Some jurisdictions in Europe now require that new buildings be made from recycled and reused materials. The new addition to the Kunsthaus Zürich museum, a showcase of exquisite Modernist architecture, uses recycled material for all but 2 percent of its concrete.

    As new policies goose demand for recycled materials and threaten to restrict future use of Portland cement across Europe, Holcim has begun building recycling plants that can reclaim cement clinker from old concrete. It recently turned the demolition rubble from some 1960s apartment buildings outside Paris into part of a 220-unit housing complex—touted as the first building made from 100 percent recycled concrete. The company says it plans to build concrete recycling centers in every major metro area in Europe and, by 2030, to include 30 percent recycled material in all of its cement.

    Further innovations in low-carbon concrete are certain to come, particularly as the powers of machine learning are applied to the problem. Over the past decade, the number of research papers reporting on computational tools to explore the vast space of possible concrete mixes has grown exponentially. Much as AI is being used to accelerate drug discovery, the tools learn from huge databases of proven cement mixes and then apply their inferences to evaluate untested mixes.

    Researchers from the University of Illinois and Chicago-based Ozinga, one of the largest private concrete producers in the United States, recently worked with Meta to feed 1,030 known concrete mixes into an AI. The project yielded a novel mix that will be used for sections of a data-center complex in DeKalb, Ill. The AI-derived concrete has a carbon footprint 40 percent lower than the conventional concrete used on the rest of the site. Ryan Cialdella, Ozinga’s vice president of innovation, smiles as he notes the virtuous circle: AI systems that live in data centers can now help cut emissions from the concrete that houses them.
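    That general workflow (learn from a database of proven mixes, then screen untested candidates) can be sketched as a simple surrogate-model loop. The code below is a schematic illustration with synthetic data and made-up feature ranges, not the Illinois, Ozinga, and Meta pipeline:

    ```python
    # Schematic surrogate-model loop for concrete mix design: fit models that
    # predict strength and embodied CO2 from mix proportions, then screen
    # candidate mixes for the lowest-carbon option that meets a strength
    # target. Feature names, data, and thresholds are illustrative placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Columns: cement, fly ash, slag, water, fine agg, coarse agg (kg per m^3)
    low, high = [150, 0, 0, 140, 600, 900], [450, 200, 200, 200, 900, 1200]
    X = rng.uniform(low, high, size=(1030, 6))
    strength = (0.12 * X[:, 0] + 0.06 * X[:, 1] + 0.05 * X[:, 2]
                - 0.15 * X[:, 3] + rng.normal(0, 2, 1030))      # synthetic MPa
    co2 = 0.9 * X[:, 0] + 0.02 * X[:, 1] + 0.05 * X[:, 2]       # synthetic kg CO2/m^3

    strength_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, strength)
    co2_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, co2)

    # Screen a large batch of untested candidate mixes against a strength target.
    candidates = rng.uniform(low, high, size=(50_000, 6))
    ok = strength_model.predict(candidates) >= 35                # target: 35 MPa
    best = candidates[ok][np.argmin(co2_model.predict(candidates[ok]))]
    print("lowest-carbon feasible mix (kg/m^3):", np.round(best, 1))
    ```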

    A sustainable foundation for the information age

    Cheap, durable, and abundant yet unsustainable, concrete made with Portland cement has been one of modern technology’s Faustian bargains. The built world is on track to double in floor space by 2060, adding 230,000 km2, or more than half the area of California. Much of that will house the 2 billion more people we are likely to add to our numbers. As global transportation, telecom, energy, and computing networks grow, their new appendages will rest upon concrete. But if concrete doesn’t change, we will perversely be forced to produce even more concrete to protect ourselves from the coming climate chaos, with its rising seas, fires, and extreme weather.

    The AI-driven boom in data centers is a strange bargain of its own. In the future, AI may help us live even more prosperously, or it may undermine our freedoms, civilities, employment opportunities, and environment. But solutions to the bad climate bargain that AI’s data centers foist on the planet are at hand, if there’s a will to deploy them. Hyperscalers and governments are among the few organizations with the clout to rapidly change what kinds of cement and concrete the world uses, and how those are made. With a pivot to sustainability, concrete’s unique scale makes it one of the few materials that could do most to protect the world’s natural systems. We can’t live without concrete—but with some ambitious reinvention, we can thrive with it.

  • Teens Gain Experience at IEEE’s TryEngineering Summer Institute
    by Robert Schneider on 29. October 2024. at 19:00



    The future of engineering is bright, and it’s being shaped by the young minds at the TryEngineering Summer Institute (TESI), a program administered by IEEE Educational Activities. This year more than 300 students attended TESI to fuel their passion for engineering and prepare for higher education and careers. Sessions were held from 30 June through 2 August on the campuses of Rice University, the University of Pennsylvania, and the University of San Diego.

    The program is an immersive experience designed for students ages 13 to 17. It offers hands-on projects, interactive workshops, field trips, and insights into the profession from practicing engineers. Participants get to stay on a college campus, providing them with a preview of university life.

    Student turned instructor

    One future innovator is Natalie Ghannad, who participated in the program as a student in 2022 and was a member of this year’s instructional team in Houston at Rice University. Ghannad is in her second year as an electrical engineering student at the University of San Francisco. University students join forces with science and engineering teachers at each TESI location to serve as instructors.

    For many years, Ghannad wanted to follow in her mother’s footsteps and become a pediatric neurosurgeon. As a high school junior in Houston in 2022, however, she had a change of heart and decided to pursue engineering after participating in the TESI at Rice. She received a full scholarship from the IEEE Foundation TESI Scholarship Fund, supported by IEEE societies and councils.

    “I really liked that it was hands-on,” Ghannad says. “From the get-go, we were introduced to 3D printers and laser cutters.”

    The benefit of participating in the program, she says, was “having the opportunity to not just do the academic side of STEM but also to really get to play around, get your hands dirty, and figure out what you’re doing.”

    “Looking back,” she adds, “there are so many parallels between what I’ve actually had to do as a college student, and having that knowledge from the Summer Institute has really been great.”

    She was inspired to volunteer as a teaching assistant because, she says, “I know I definitely want to teach, have the opportunity to interact with kids, and also be part of the future of STEM.”

    More than 90 students attended the program at Rice. They visited Space Center Houston, where former astronauts talked to them about the history of space exploration.

    Participants also were treated to presentations by guest speakers including IEEE Senior Member Phil Bautista, the founder of Bull Creek Data, a consulting company that provides technical solutions; IEEE Senior Member Christopher Sanderson, chair of the IEEE Region 5 Houston Section; and James Burroughs, a standards manager for Siemens in Atlanta. Burroughs, who spoke at all three TESI events this year, provided insight on overcoming barriers to do the important work of an engineer.

    Learning about transit systems and careers

    The University of Pennsylvania, in Philadelphia, hosted the East Coast TESI event this year. Students were treated to a field trip to the Southeastern Pennsylvania Transportation Authority (SEPTA), one of the largest transit systems in the country. Engineers from AECOM, a global infrastructure consulting firm with offices in Philadelphia that worked closely with SEPTA on its most recent station renovation, collaborated with IEEE to host the trip.


    Participants also heard from guest speakers including Api Appulingam, chief development officer of the Philadelphia International Airport, who told the students the inspiring story of her career.

    Guest speakers from Google and Meta

    Students who attended the TESI camp at the University of San Diego visited Qualcomm. Hosted by the IEEE Region 6 director, Senior Member Kathy Herring Hayashi, they learned about cutting-edge technology and toured the Qualcomm Museum.

    Students also heard from guest speakers including IEEE Member Andrew Saad, an engineer at Google; Gautam Deryanni, a silicon validation engineer at Meta; Kathleen Kramer, 2025 IEEE president and a professor of electrical engineering at the University of San Diego; as well as Burroughs.

    “I enjoyed the opportunity to meet new, like-minded people and enjoy fun activities in the city, as well as get a sense of the dorm and college life,” one participant said.

    Hands-on projects

    In addition to field trips and guest speakers, participants at each location worked on several hands-on projects highlighting the engineering design process. In the toxic popcorn challenge, the students designed a process to safely remove harmful kernels. Students tackling the bridge challenge designed and built a span out of balsa wood and glue, then tested its strength by gradually adding weight until it failed. The glider challenge gave participants the tools and knowledge to build and test their aircraft designs.

    One participant applauded the hands-on activities, saying, “All of them gave me a lot of experience and helped me have a better idea of what engineering field I want to go in. I love that we got to participate in challenges and not just listen to lectures—which can be boring.”

    The students also worked on a weeklong sparking solutions challenge. Small teams identified a societal problem, such as a lack of clean water or limited mobility for senior citizens, then designed a solution to address it. On the last day of camp, they pitched their prototypes to a panel of IEEE members, who judged the projects on originality and feasibility. Each student on the winning team at each location was awarded a programmable Mech-5 robot.

    Twenty-nine scholarships were awarded with funding from the IEEE Foundation. IEEE societies that donated to the cause were the IEEE Computational Intelligence Society, the IEEE Computer Society, the IEEE Electronics Packaging Society, the IEEE Industry Applications Society, the IEEE Oceanic Engineering Society, the IEEE Power & Energy Society, the IEEE Power Electronics Society, the IEEE Signal Processing Society, and the IEEE Solid-State Circuits Society.

  • Principles of PID Controllers
    by Zurich Instruments on 29. October 2024. at 17:37



    Thanks to their ability to adjust the system’s output accurately and quickly without detailed knowledge about its dynamics, PID control loops stand as a powerful and widely used tool for maintaining a stable and predictable output in a variety of applications. In this paper, we review the fundamental principles and characteristics of these control systems, providing insight into their functioning, tuning strategies, advantages, and trade-offs.
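
    As a rough illustration of the general idea (a minimal sketch in Python, not Zurich Instruments’ implementation), a discrete-time PID loop accumulates the error for the integral term and differences it for the derivative term; the gains and the toy first-order plant below are arbitrary placeholders.

```python
# Minimal discrete-time PID loop: illustrative sketch only, with placeholder
# gains and a toy first-order plant (not any vendor's API).

def pid_step(error, state, kp, ki, kd, dt):
    """Return the controller output and the updated (integral, previous_error) state."""
    integral, prev_error = state
    integral += error * dt                      # accumulate the I term
    derivative = (error - prev_error) / dt      # finite-difference D term
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy closed loop: drive a first-order plant toward a setpoint of 1.0.
setpoint, y, state, dt = 1.0, 0.0, (0.0, 0.0), 0.01
for _ in range(1000):
    u, state = pid_step(setpoint - y, state, kp=2.0, ki=1.0, kd=0.05, dt=dt)
    y += dt * (-y + u)                          # plant model: dy/dt = -y + u
print(f"final output ~ {y:.3f}")                # settles near the setpoint
```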

    As a result of their integrated architecture, Zurich Instruments’ lock-in amplifiers allow users to make the most of all the advantages of digital PID control loops, so that their operation can be adapted to match the needs of different use cases.

  • The Unlikely Inventor of the Automatic Rice Cooker
    by Allison Marsh on 29. October 2024. at 14:00



    “Cover, bring to a boil, then reduce heat. Simmer for 20 minutes.” These directions seem simple enough, and yet I have messed up many, many pots of rice over the years. My sympathies to anyone who’s ever had to boil rice on a stovetop, cook it in a clay pot over a kerosene or charcoal burner, or prepare it in a cast-iron cauldron. All hail the 1955 invention of the automatic rice cooker!

    How the automatic rice cooker was invented

    It isn’t often that housewives get credit in the annals of invention, but in the story of the automatic rice cooker, a woman takes center stage. That happened only after the first attempts at electrifying rice cooking, starting in the 1920s, turned out to be utter failures. Matsushita, Mitsubishi, and Sony all experimented with variations of placing electric heating coils inside wooden tubs or aluminum pots, but none of these cookers automatically switched off when the rice was done. The human cook—almost always a wife or daughter—still had to pay attention to avoid burning the rice. These electric rice cookers didn’t save any real time or effort, and they sold poorly.

    But Shogo Yamada, the energetic development manager of the electric appliance division for Toshiba, became convinced that his company could do better. In post–World War II Japan, he was demonstrating and selling electric washing machines all over the country. When he took a break from his sales pitch and actually talked to women about their daily household labors, he discovered that cooking rice—not laundry—was their most challenging chore. Rice was a mainstay of the Japanese diet, and women had to prepare it up to three times a day. It took hours of work, starting with getting up by 5:00 am to fan the flames of a kamado, a traditional earthenware stove fueled by charcoal or wood on which the rice pot was heated. The inability to properly mind the flame could earn a woman the label of “failed housewife.”

    In 1951, Yamada became the cheerleader of the rice cooker within Toshiba, which was understandably skittish given the past failures of other companies. To develop the product, he turned to Yoshitada Minami, the manager of a small family factory that produced electric water heaters for Toshiba. The water-heater business wasn’t great, and the factory was on the brink of bankruptcy.

    How Sources Influence the Telling of History


    As someone who does a lot of research online, I often come across websites that tell very interesting histories, but without any citations. It takes only a little bit of digging before I find entire passages copied and pasted from one site to another, and so I spend a tremendous amount of time trying to track down the original source. Accounts of popular consumer products, such as the rice cooker, are particularly prone to this problem. That’s not to say that popular accounts are necessarily wrong; plus they are often much more engaging than boring academic pieces. This is just me offering a note of caution because every story offers a different perspective depending on its sources.

    For example, many popular blogs sing the praises of Fumiko Minami and her tireless contributions to the development of the rice maker. But in my research, I found no mention of Minami before Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” which itself was based on episode 42 of the Project X: Challengers documentary series that was produced by NHK and aired in 2002.

    If instead I had relied solely on the description of the rice cooker’s early development provided by the Toshiba Science Museum (here’s an archived page from 2007), this month’s column would have offered a detailed technical description of how uncooked rice has a crystalline structure, but as it cooks, it becomes a gelatinized starch. The museum’s website notes that few engineers had ever considered the nature of cooking rice before the rice-cooker project, and it refers simply to the “project team” that discovered the process. There’s no mention of Fumiko.

    Both stories are factually correct, but they emphasize different details. Sometimes it’s worth asking who is part of the “project team” because the answer might surprise you. —A.M.


    Although Minami understood the basic technical principles for an electric rice cooker, he didn’t know or appreciate the finer details of preparing perfect rice. And so Minami turned to his wife, Fumiko.

    Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

    But how would an automatic rice cooker know when the 20 minutes was up? A suggestion came from Toshiba engineers. A working model based on a double boiler (a pot within a pot for indirect heating) used evaporation to mark time. While the rice cooked in the inset pot, a bimetallic switch measured the temperature in the external pot. Boiling water would hold at a constant 100 °C, but once it had evaporated, the temperature would soar. When the internal temperature of the double boiler surpassed 100 °C, the switch would bend and cut the circuit. One cup of boiling water in the external pot took 20 minutes to evaporate. The same basic principle is still used in modern cookers.
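
    As a toy illustration of that timing trick (my own sketch with made-up numbers, not Toshiba’s design data), the logic amounts to holding the outer pot at 100 °C while any water remains and opening the circuit as soon as the temperature climbs past the boiling point:

```python
# Toy model of the double-boiler cutoff: illustrative only, with invented rates.
# While water remains in the outer pot, its temperature is pinned near 100 C;
# once it has boiled away, the temperature rises and the bimetallic switch opens.

def minutes_until_cutoff(water_g=150.0, boil_rate_g_per_min=7.5, dry_heat_c_per_min=30.0):
    temp_c, minutes, dt = 100.0, 0.0, 0.1       # dt in minutes
    while temp_c <= 100.0:                      # bimetallic switch trips above 100 C
        minutes += dt
        if water_g > 0:
            water_g -= boil_rate_g_per_min * dt # water evaporates, temperature holds
        else:
            temp_c += dry_heat_c_per_min * dt   # pot runs dry, temperature soars
    return minutes

print(f"circuit opened after ~{minutes_until_cutoff():.1f} minutes")  # ~20 minutes
```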


    Photo of three parts of a round kitchen appliance, including the outside container, an inner metal pot, and a lid.


    Yamada wanted to ensure that the rice cooker worked in all climates, so Fumiko tested various prototypes in extreme conditions: on her rooftop in cold winters and scorching summers and near steamy bathrooms to mimic high humidity. When Fumiko became ill from testing outside, her children pitched in to help. None of the aluminum and glass prototypes, it turned out, could maintain their internal temperature in cold weather. The final design drew inspiration from the Hokkaidō region, Japan’s northernmost prefecture. Yamada had seen insulated cooking pots there, so the Minami family tried covering the rice cooker with a triple-layered iron exterior. It worked.

    How Toshiba sold its automatic rice cooker

    Toshiba’s automatic rice cooker went on sale on 10 December 1955, but initially, sales were slow. It didn’t help that the rice cooker was priced at 3,200 yen, about a third of the average Japanese monthly salary. It took some salesmanship to convince women they needed the new appliance. This was Yamada’s time to shine. He demonstrated using the rice cooker to prepare takikomi gohan, a rice dish seasoned with dashi, soy sauce, and a selection of meats and vegetables. When the dish was cooked in a traditional kamado, the soy sauce often burned, making the rather simple dish difficult to master. Women who saw Yamada’s demo were impressed with the ease offered by the rice cooker.

    Another clever sales technique was to get electricity companies to serve as Toshiba distributors. At the time, Japan was facing a national power surplus stemming from the widespread replacement of carbon-filament lightbulbs with more efficient tungsten ones. The energy savings were so remarkable that operations at half of the country’s power plants had to be curtailed. But with utilities distributing Toshiba rice cookers, increased demand for electricity was baked in.

    Within a year, Toshiba was selling more than 200,000 rice cookers a month. Many of them came from the Minamis’ factory, which was rescued from near-bankruptcy in the process.

    How the automatic rice cooker conquered the world

    From there, the story becomes an international one with complex localization issues. Japanese sushi rice is not the same as Thai sticky rice, which is not the same as Persian tahdig, Indian basmati, Italian risotto, or Spanish paella. You see where I’m going with this. Every culture that has a unique rice dish almost always uses its own regional rice with its own preparation preferences. And so countries wanted their own type of automatic electric rice cooker (although some rejected automation in favor of traditional cooking methods).

    Yoshiko Nakano, a professor at the University of Hong Kong, wrote a book in 2009 about the localized/globalized nature of rice cookers. Where There Are Asians, There Are Rice Cookers traces the popularization of the rice cooker from Japan to China and then the world by way of Hong Kong. One of the key differences between the Japanese and Chinese rice cooker is that the latter has a glass lid, which Chinese cooks demanded so they could see when to add sausage. More innovation and diversification followed. Modern rice cookers have settings to give Iranians crispy rice at the bottom of the pot, one to let Thai customers cook noodles, one for perfect rice porridge, and one for steel-cut oats.


    A customer examines several shelves of round white appliances.


    My friend Hyungsub Choi, in his 2022 article “Before Localization: The Story of the Electric Rice Cooker in South Korea,” pushes back a bit on Nakano’s argument that countries were insistent on tailoring cookers to their tastes. From 1965, when the first domestic rice cooker appeared in South Korea, to the early 1990s, Korean manufacturers engaged in “conscious copying,” Choi argues. That is, they didn’t bother with either innovation or adaptation. As a result, most Koreans had to put up with inferior domestic models. Even after the Korean government made it a national goal to build a better rice cooker, manufacturers failed to deliver one, perhaps because none of the engineers involved knew how to cook rice. It’s a good reminder that the history of technology is not always the story of innovation and progress.

    Eventually, the Asian diaspora brought the rice cooker to all parts of the globe, including South Carolina, where I now live and which coincidentally has a long history of rice cultivation. I bought my first rice cooker on a whim, but not for its rice-cooking ability. I was intrigued by the yogurt-making function. Similar to rice, yogurt requires a constant temperature over a specific length of time. Although successful, my yogurt experiment was fleeting—store-bought was just too convenient. But the rice cooking blew my mind. Perfect rice. Every. Single. Time. I am never going back to overflowing pots of starchy water.

    Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

    An abridged version of this article appears in the November 2024 print issue as “The Automatic Rice Cooker’s Unlikely Inventor.”

    References


    Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” was a great resource in understanding the development of the Toshiba ER-4. The chapter appeared in The Historical Consumer: Consumption and Everyday Life in Japan, 1850-2000, edited by Penelope Francks and Janet Hunter (Palgrave Macmillan).

    Yoshiko Nakano’s book Where There are Asians, There are Rice Cookers (Hong Kong University Press, 2009) takes the story much further with her focus on the National (Panasonic) rice cooker and its adaptation and adoption around the world.

    The Toshiba Science Museum, in Kawasaki, Japan, where we sourced our main image of the original ER-4, closed to the public in June. I do not know what the future holds for its collections, but luckily some of its Web pages have been archived to continue to help researchers like me.

  • Multiband Antenna Simulation and Wireless KPI Extraction
    by Ansys on 29. October 2024. at 13:07



    In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems.

    Overview

    This webinar will explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform your approach to wireless system design and network efficiency.

    What Attendees will Learn

    • How to design interleaved multiband antenna systems using the latest capabilities in HFSS
    • How to extract network key performance indicators (KPIs)
    • How to run and extract RF channels for dynamic environments

    Who Should Attend

    This webinar is valuable to anyone involved in antenna design, R&D, product design, and wireless networks.

    Register now for this free webinar!

  • For this Stanford Engineer, Frugal Invention Is a Calling
    by Greg Uyeno on 29. October 2024. at 13:00



    Manu Prakash spoke with IEEE Spectrum shortly after returning to Stanford University from a month aboard a research vessel off the coast of California, where he was testing tools to monitor oceanic carbon sequestration. The associate professor conducts fieldwork around the world to better understand the problems he’s working on, as well as the communities that will be using his inventions.

    Prakash develops imaging instruments and diagnostic tools, often for use in global health and environmental sciences. His devices typically cost radically less than conventional equipment—he aims for reductions of two or more orders of magnitude. Whether he’s working on pocketable microscopes, mosquito or plankton monitors, or an autonomous malaria diagnostic platform, Prakash always includes cost and access as key aspects of his engineering. He calls this philosophy “frugal science.”

    Why should we think about science frugally?

    Manu Prakash: To me, when we are trying to ask and solve problems and puzzles, it becomes important: In whose hands are we putting these solutions? A frugal approach to solving the problem is the difference between 1 percent of the population or billions of people having access to that solution.

    Lack of access creates these kinds of barriers in people’s minds, where they think they can or cannot approach a kind of problem. It’s important that we as scientists or just citizens of this world create an environment that feels that anybody has a chance to make important inventions and discoveries if they put their heart to it. The entrance to all that is dependent on tools, but those tools are just inaccessible.

    How did you first encounter the idea of “frugal science”?

    Prakash: I grew up in India and lived with very little access to things. And I got my Ph.D. at MIT. I was thinking about this stark difference in worlds that I had seen and lived in, so when I started my lab, it was almost a commitment to [asking]: What does it mean when we make access one of the critical dimensions of exploration? So, I think a lot of the work I do is primarily driven by curiosity, but access brings another layer of intellectual curiosity.

    How do you identify a problem that might benefit from frugal science?

    Prakash: Frankly, it’s hard to find a problem that would not benefit from access. The question to ask is “Where are the neglected problems that we as a society have failed to tackle?” We do a lot of work in diagnostics. A lot [of our solutions] beat the conventional methods that are neither cost effective nor any good. It’s not about cutting corners; it’s about deeply understanding the problem—better solutions at a fraction of the cost. It does require invention. For that order of magnitude change, you really have to start fresh.

    Where does your involvement with an invention end?

    Prakash: Inventions are part of our soul. Your involvement never ends. I just designed the 415th version of Foldscope [a low-cost “origami” microscope]. People only know it as version 3. We created Foldscope a long time ago; then I realized that nobody was going to provide access to it. So we went back and invented the manufacturing process for Foldscope to scale it. We made the first 100,000 Foldscopes in the lab, which led to millions of Foldscopes being deployed.

    So it’s continuous. If people are scared of this, they should never invent anything [laughs], because once you invent something, it’s a lifelong project. You don’t put it aside; the project doesn’t put you aside. You can try to, but that’s not really possible if your heart is in it. You always see problems. Nothing is ever perfect. That can be ever consuming. It’s hard. I don’t want to minimize this process in any way or form.

  • The Patent Battle That Won’t Quit
    by Harry Goldstein on 28. October 2024. at 21:00



    Just before this special issue on invention went to press, I got a message from IEEE senior member and patent attorney George Macdonald. Nearly two decades after I first reported on Corliss Orville “Cob” Burandt’s struggle with the U.S. Patent and Trademark Office, the 77-year-old inventor’s patent case was being revived.

    From 1981 to 1990, Burandt had received a dozen U.S. patents for improvements to automotive engines, including a 1990 patent for variable valve-timing technology (U.S. Patent No. 4,961,406A). But he failed to convince any automakers to license his technology. What’s worse, he claims, some of the world’s major carmakers now use his inventions in their hybrid engines.

    Shortly after reading my piece in 2005, Macdonald stepped forward to represent Burandt. By then, the inventor had already lost his patents because he hadn’t paid the US $40,000 in maintenance fees to keep them active.

    Macdonald filed a petition to pay the maintenance fees late and another to revive a related child case. The maintenance fee petition was denied in 2006. While the petition to revive was still pending, Macdonald passed the maintenance fee baton to Hunton Andrews Kurth (HAK), which took the case pro bono. HAK attorneys argued that the USPTO should reinstate the 1990 parent patent.

    The timing was crucial: If the parent patent was reinstated before 2008, Burandt would have had the opportunity to compel infringing corporations to pay him licensing fees. Unfortunately, for reasons that remain unclear, the patent office tried to paper Burandt’s legal team to death, Macdonald says. HAK could go no further in the maintenance-fee case after the U.S. Supreme Court declined to hear it in 2009.

    Then, in 2010, the USPTO belatedly revived Burandt’s child continuation application. A continuation application lets an inventor add claims to their original patent application while maintaining the earlier filing date—1988 in this case.

    However, this revival came with its own set of challenges. Macdonald was informed in 2011 that the patent examiner would issue the patent but later discovered that the application was placed into a then-secret program called the Sensitive Application Warning System (SAWS) instead. While touted as a way to quash applications for things like perpetual-motion machines, the SAWS process effectively slowed action on Burandt’s case.

    After several more years of motions and rulings, Macdonald met IEEE Member Edward Pennington, who agreed to represent Burandt. Earlier this year, Pennington filed a complaint in the Eastern District of Virginia seeking the issuance of Burandt’s patent on the grounds that it was wrongfully denied.

    As of this writing, Burandt still hasn’t seen a dime from his inventions. He subsists on his social security benefits. And while his case raises important questions about fairness, transparency, and the rights of individual inventors, Pennington says his client isn’t interested in becoming a poster boy for poor inventors.

    “We’re not out to change policy at the patent office or to give Mr. Burandt a framed copy of the patent to say, ‘Look at me, I’m an inventor,’ ” says Pennington. “This is just to say, ‘Here’s a guy that would like to benefit from his idea.’ It just so happens that he’s pretty much in need. And even the slightest royalty would go a long ways for the guy.”

  • Explore Virtual Solutions for A&D
    by Ansys on 28. October 2024. at 19:37



    Prepare yourself for the challenges of creating cutting-edge A&D autonomous tech. Download the e-book to explore how autonomy is transforming the aerospace & defense industry.

    Download this free whitepaper now!

  • NYU Researchers Develop New Real-Time Deepfake Detection Method
    by Michael W. Richardson on 28. October 2024. at 17:06



    This sponsored article is brought to you by NYU Tandon School of Engineering.

    Deepfakes, hyper-realistic videos and audio created using artificial intelligence, present a growing threat in today’s digital world. By manipulating or fabricating content to make it appear authentic, deepfakes can be used to deceive viewers, spread disinformation, and tarnish reputations. Their misuse extends to political propaganda, social manipulation, identity theft, and cybercrime.

    As deepfake technology becomes more advanced and widely accessible, the risk of societal harm escalates. Studying deepfakes is crucial to developing detection methods, raising awareness, and establishing legal frameworks to mitigate the damage they can cause in personal, professional, and global spheres. Understanding the risks associated with deepfakes and their potential impact will be necessary for preserving trust in media and digital communication.

    That is where Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, comes in.

    A photo of a smiling man in glasses. Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, is developing challenge-response systems for detecting audio and video deepfakes. NYU Tandon

    “Broadly, I’m interested in AI safety in all of its forms. And when a technology like AI develops so rapidly, and gets good so quickly, it’s an area ripe for exploitation by people who would do harm,” Hegde said.

    A native of India, Hegde has lived in places around the world, including Houston, Texas, where he spent several years as a student at Rice University; Cambridge, Massachusetts, where he did post-doctoral work in MIT’s Theory of Computation (TOC) group; and Ames, Iowa, where he held a professorship in the Electrical and Computer Engineering Department at Iowa State University.

    Hegde, whose area of expertise is in data processing and machine learning, focuses his research on developing fast, robust, and certifiable algorithms for diverse data processing problems encountered in applications spanning imaging and computer vision, transportation, and materials design. At Tandon, he worked with Professor of Computer Science and Engineering Nasir Memon, who sparked his interest in deepfakes.

    “Even just six years ago, generative AI technology was very rudimentary. One time, one of my students came in and showed off how the model was able to make a white circle on a dark background, and we were all really impressed by that at the time. Now you have high definition fakes of Taylor Swift, Barack Obama, the Pope — it’s stunning how far this technology has come. My view is that it may well continue to improve from here,” he said.

    Hegde helped lead a research team from NYU Tandon School of Engineering that developed a new approach to combat the growing threat of real-time deepfakes (RTDFs) – sophisticated artificial-intelligence-generated fake audio and video that can convincingly mimic actual people in real-time video and voice calls.

    High-profile incidents of deepfake fraud are already occurring, including a recent $25 million scam using fake video, and the need for effective countermeasures is clear.

    In two separate papers, research teams show how “challenge-response” techniques can exploit the inherent limitations of current RTDF generation pipelines, causing degradations in the quality of the impersonations that reveal their deception.

    In a paper titled “GOTCHA: Real-Time Video Deepfake Detection via Challenge-Response,” the researchers developed a set of eight visual challenges designed to signal to users when they are not engaging with a real person.

    “Most people are familiar with CAPTCHA, the online challenge-response that verifies they’re an actual human being. Our approach mirrors that technology, essentially asking questions or making requests that RTDF cannot respond to appropriately,” said Hegde, who led the research on both papers.

    A series of images with people's faces in rows. Challenge frame of original and deepfake videos. Each row aligns outputs against the same instance of challenge, while each column aligns the same deepfake method. The green bars are a metaphor for the fidelity score, with taller bars suggesting higher fidelity. Missing bars imply the specific deepfake failed to do that specific challenge. NYU Tandon

    The video research team created a dataset of 56,247 videos from 47 participants, evaluating challenges such as head movements and deliberately obscuring or covering parts of the face. Human evaluators achieved about 89 percent Area Under the Curve (AUC) score in detecting deepfakes (over 80 percent is considered very good), while machine learning models reached about 73 percent.

    “Challenges like quickly moving a hand in front of your face, making dramatic facial expressions, or suddenly changing the lighting are simple for real humans to do, but very difficult for current deepfake systems to replicate convincingly when asked to do so in real-time,” said Hegde.
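
    A schematic of how such a challenge-response check might be sequenced is sketched below; the challenge list, the score_fidelity placeholder, and the 0.5 threshold are illustrative assumptions rather than the GOTCHA system itself.

```python
# Schematic challenge-response check for a live video call.
# The challenge set, scoring stub, and threshold are illustrative only.
import random

CHALLENGES = [
    "turn your head quickly to the left",
    "pass a hand in front of your face",
    "make an exaggerated facial expression",
    "change the lighting on your face",
]

def score_fidelity(video_clip, challenge):
    """Placeholder: a trained model (or a human reviewer) would rate how
    faithfully the clip performs the requested challenge, on a 0-1 scale."""
    raise NotImplementedError

def verify_caller(capture_clip, n_challenges=3, threshold=0.5):
    """Issue random challenges and flag the call if any response scores poorly,
    since real-time deepfake pipelines tend to degrade under such prompts."""
    for challenge in random.sample(CHALLENGES, n_challenges):
        clip = capture_clip(challenge)          # prompt the caller and record the response
        if score_fidelity(clip, challenge) < threshold:
            return "suspected deepfake"
    return "no anomaly detected"
```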

    Audio Challenges for Deepfake Detection

    In another paper called “AI-assisted Tagging of Deepfake Audio Calls using Challenge-Response,” researchers created a taxonomy of 22 audio challenges across various categories. Some of the most effective included whispering, speaking with a “cupped” hand over the mouth, talking in a high pitch, pronouncing foreign words, and speaking over background music or speech.

    “Even state-of-the-art voice cloning systems struggle to maintain quality when asked to perform these unusual vocal tasks on the fly,” said Hegde. “For instance, whispering or speaking in an unusually high pitch can significantly degrade the quality of audio deepfakes.”

    The audio study involved 100 participants and over 1.6 million deepfake audio samples. It employed three detection scenarios: humans alone, AI alone, and a human-AI collaborative approach. Human evaluators achieved about 72 percent accuracy in detecting fakes, while AI alone performed better with 85 percent accuracy.

    The collaborative approach, where humans made initial judgments and could revise their decisions after seeing AI predictions, achieved about 83 percent accuracy. This collaborative system also allowed AI to make final calls in cases where humans were uncertain.


    The researchers emphasize that their techniques are designed to be practical for real-world use, with most challenges taking only seconds to complete. A typical video challenge might involve a quick hand gesture or facial expression, while an audio challenge could be as simple as whispering a short sentence.

    “The key is that these tasks are easy and quick for real people but hard for AI to fake in real-time,” Hegde said. “We can also randomize the challenges and combine multiple tasks for extra security.”

    As deepfake technology continues to advance, the researchers plan to refine their challenge sets and explore ways to make detection even more robust. They’re particularly interested in developing “compound” challenges that combine multiple tasks simultaneously.

    “Our goal is to give people reliable tools to verify who they’re really talking to online, without disrupting normal conversations,” said Hegde. “As AI gets better at creating fakes, we need to get better at detecting them. These challenge-response systems are a promising step in that direction.”

  • Nuclear Fusion’s New Idea: An Off-the-Shelf Stellarator
    by Tom Clynes on 28. October 2024. at 13:00



    For a machine that’s designed to replicate a star, the world’s newest stellarator is a surprisingly humble-looking apparatus. The kitchen-table-size contraption sits atop stacks of bricks in a cinder-block room at the Princeton Plasma Physics Laboratory (PPPL) in Princeton, N.J., its parts hand-labeled in marker.

    The PPPL team invented this nuclear-fusion reactor, completed last year, using mainly off-the-shelf components. Its core is a glass vacuum chamber surrounded by a 3D-printed nylon shell that anchors 9,920 meticulously placed permanent rare-earth magnets. Sixteen copper-coil electromagnets resembling giant slices of pineapple wrap around the shell crosswise.

    This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

    The arrangement of magnets forms the defining feature of a stellarator: an entirely external magnetic field that directs charged particles along a spiral path to confine a superheated plasma. Within this enigmatic fourth state of matter, atoms that have been stripped of their electrons collide, their nuclei fusing and releasing energy in the same process that powers the sun and other stars. Researchers hope to capture this energy and use it to produce clean, zero-carbon electricity.

    PPPL’s new reactor is the first stellarator built at this government lab in 50 years. It’s also the world’s first stellarator to employ permanent magnets, rather than just electromagnets, to coax plasma into an optimal three-dimensional shape. Costing only US $640,000 and built in less than a year, the device stands in contrast to prominent stellarators like Germany’s Wendelstein 7-X, a massive, tentacled machine that took $1.1 billion and more than 20 years to construct.

    A tabletop machine with many wires coming from it in a research lab Sixteen copper-coil electromagnets resembling giant slices of pineapple wrap around the stellarator’s shell. Jayme Thornton

    PPPL researchers say their simpler machine demonstrates a way to build stellarators far more cheaply and quickly, allowing researchers to easily test new concepts for future fusion power plants. The team’s use of permanent magnets may not be the ticket to producing commercial-scale energy, but PPPL’s accelerated design-build-test strategy could crank out new insights on plasma behavior that could push the field forward more rapidly.

    Indeed, the team’s work has already spurred the formation of two stellarator startups that are testing their own PPPL-inspired designs, which their founders hope will lead to breakthroughs in the quest for fusion energy.

    Are Stellarators the Future of Nuclear Fusion?

    The pursuit of energy production through nuclear fusion is considered by many to be the holy grail of clean energy. And it’s become increasingly important as a rapidly warming climate and soaring electricity demand have made the need for stable, carbon-free power ever more acute. Fusion offers the prospect of a nearly limitless source of energy with no greenhouse gas emissions. And unlike conventional nuclear fission, fusion comes with no risk of meltdowns or weaponization, and no long-lived nuclear waste.

    Fusion reactions have powered the sun since it formed an estimated 4.6 billion years ago, but they have never served to produce usable energy on Earth, despite decades of effort. The problem isn’t whether fusion can work. Physics laboratories and even a few individuals have successfully fused the nuclei of hydrogen, liberating energy. But to produce more power than is consumed in the process, simply fusing atoms isn’t enough.

    A mosaic of square-shaped magnets inside a curved structure Fueled by free pizza, grad students meticulously placed 9,920 permanent rare-earth magnets inside the stellarator’s 3D-printed nylon shell. Jayme Thornton

    The past few years have brought eye-opening advances from government-funded fusion programs such as PPPL and the Joint European Torus, as well as private companies. Enabled by gains in high-speed computing, artificial intelligence, and materials science, nuclear physicists and engineers are toppling longstanding technical hurdles. And stellarators, a once-overlooked approach, are back in the spotlight.

    “Stellarators are one of the most active research areas now, with new papers coming out just about every week,” says Scott Hsu, the U.S. Department of Energy’s lead fusion coordinator. “We’re seeing new optimized designs that we weren’t capable of coming up with even 10 years ago. The other half of the story that’s just as exciting is that new superconductor technology and advanced manufacturing capabilities are making it more possible to actually realize these exquisite designs.”

    Why Is Plasma Containment Important in Fusion Energy?

    For atomic nuclei to fuse, the nuclei must overcome their natural electrostatic repulsion. Extremely high temperatures—in the millions of degrees—will get the particles moving fast enough to collide and fuse. Deuterium and tritium, isotopes of hydrogen with, respectively, one and two neutrons in their nuclei, are the preferred fuels for fusion because their nuclei can overcome the repulsive forces more easily than those of heavier atoms.
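
    For reference, the reaction being described is deuterium-tritium fusion, which releases about 17.6 MeV per event, most of it carried away by the neutron:

```latex
% Deuterium-tritium fusion: the alpha particle heats the plasma,
% while the neutron carries most of the released energy.
\[
  {}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \;\longrightarrow\;
  {}^{4}_{2}\mathrm{He}\,(3.5~\mathrm{MeV}) + \mathrm{n}\,(14.1~\mathrm{MeV}),
  \qquad Q \approx 17.6~\mathrm{MeV}.
\]
```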

    Heating these isotopes to the required temperatures strips electrons from the atomic nuclei, forming a plasma: a maelstrom of positively charged nuclei and negatively charged electrons. The trick is keeping that searingly hot plasma contained so that some of the nuclei fuse.

    Currently, there are two main approaches to containing plasma. Inertial confinement uses high-energy lasers or ion beams to rapidly compress and heat a small fuel pellet. Magnetic confinement uses powerful magnetic fields to guide the charged particles along magnetic-field lines, preventing these particles from drifting outward.

    Many magnetic-confinement designs—including the $24.5 billion ITER reactor under construction since 2010 in the hills of southern France—use an internal current flowing through the plasma to help to shape the magnetic field. But this current can create instabilities, and even small instabilities in the plasma can cause it to escape confinement, leading to energy losses and potential damage to the hardware.

    Stellarators like PPPL’s are a type of magnetic confinement, with a twist.

    How the Stellarator Was Born

    Located at the end of Stellarator Road and a roughly 5-kilometer drive from Princeton University’s leafy campus, PPPL is one of 17 U.S. Department of Energy labs, and it employs about 800 scientists, engineers, and other workers. Hanging in PPPL’s lobby is a black-and-white photo of the lab’s founder, physicist Lyman Spitzer, smiling as he shows off the fanciful-looking apparatus he invented and dubbed a stellarator, or “star generator.”

    According to the lab’s lore, Spitzer came up with the idea while riding a ski lift at Aspen Mountain in 1951. Enrico Fermi had observed that a simple toroidal, or doughnut-shaped, magnetic-confinement system wouldn’t be sufficient to contain plasma for nuclear fusion because the charged particles would drift outward and escape confinement.


    Spitzer determined that a figure-eight design with external magnets could create helical magnetic-field lines that would spiral around the plasma and more efficiently control and contain the energetic particles. That configuration, Spitzer reasoned, would be efficient enough that it wouldn’t require large currents running through the plasma, thus reducing the risk of instabilities and allowing for steady-state operation.

    “In many ways, Spitzer’s brilliant idea was the perfect answer” to the problems of plasma confinement, says Steven Cowley, PPPL’s director since 2018. “The stellarator offered something that other approaches to fusion energy couldn’t: a stable plasma field that can sustain itself without any internal current.”

    Spitzer’s stellarator quickly captured the imagination of midcentury nuclear physicists and engineers. But the invention was ahead of its time.

    Tokamaks vs. Stellarators

    The stellarator’s lack of toroidal symmetry made it challenging to build. The external magnetic coils needed to be precisely engineered into complex, three-dimensional shapes to generate the twisted magnetic fields required for stable plasma confinement. In the 1950s, researchers lacked the high-performance computers needed to design optimal three-dimensional magnetic fields and the engineering capability to build machines with the requisite precision.

    Meanwhile, physicists in the Soviet Union were testing a new configuration for magnetically confined nuclear fusion: a doughnut-shaped device called a tokamak—a Russian acronym that stands for “toroidal chamber with magnetic coils.” Tokamaks bend an externally applied magnetic field into a helical field inside by sending a current through the plasma. They seemed to be able to produce plasmas that were hotter and denser than those produced by stellarators. And compared with the outrageously complex geometry of stellarators, the symmetry of the tokamaks’ toroidal shape made them much easier to build.

    Black and white photo of a man standing in front of a table-top-sized machine Lyman Spitzer in the early 1950s built the first stellarator, using a figure-eight design and external magnets. PPPL

    Following the lead of other nations’ fusion programs, the DOE shifted most of its fusion resources to tokamak research. PPPL converted Spitzer’s Model C stellarator into a tokamak in 1969.

    Since then, tokamaks have dominated fusion-energy research. But by the late 1980s, the limitations of the approach were becoming more apparent. In particular, the currents that run through a tokamak’s plasma to stabilize and heat it are themselves a source of instabilities as the currents get stronger.

    To force the restive plasma into submission, the geometrically simple tokamaks need additional features that increase their complexity and cost. Advanced tokamaks—there are about 60 currently operating—have systems for heating and controlling the plasma and massive arrays of magnets to create the confining magnetic fields. They also have cryogenics to cool the magnets to superconducting temperatures a few meters away from a 150 million °C plasma.

    Tokamaks thus far have produced energy only in short pulses. “After 70 years, nobody really has even a good concept for how to make a steady-state tokamak,” notes Michael Zarnstorff, a staff research physicist at PPPL. “The longest pulse so far is just a few minutes. When we talk to electric utilities, that’s not actually what they want to buy.”

    Computational Power Revives the Stellarator

    With tokamaks gobbling up most of the world’s public fusion-energy funds, stellarator research lay mostly dormant until the 1980s. Then, some theorists started to put increasingly powerful computers to work to help them optimize the placement of magnetic coils to more precisely shape the magnetic fields.

    The effort got a boost in 1981, when then-PPPL physicist Allen Boozer invented a coordinate system—known in the physics community as Boozer coordinates—that helps scientists understand how different configurations of magnets affect magnetic fields and plasma confinement. They can then design better devices to maintain stable plasma conditions for fusion. Boozer coordinates can also reveal hidden symmetries in the three-dimensional magnetic-field structure, which aren’t easily visible in other coordinate systems. These symmetries can significantly improve plasma confinement, reduce energy losses, and make the fusion process more efficient.
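
    As a brief mathematical aside (a standard textbook statement, not drawn from the article): in Boozer coordinates (ψ, θ, ζ), the “hidden symmetries” are quasi-symmetries, in which the field strength depends on the two angles only through one fixed combination; quasi-axisymmetry, the property later used in NCSX and Muse, is the special case N = 0.

```latex
% Quasi-symmetry in Boozer coordinates (psi, theta, zeta): the field magnitude
% depends on the flux-surface label and a single helical angle.
\[
  |\mathbf{B}| = B\!\left(\psi,\; M\theta - N\zeta\right),
  \qquad \text{quasi-axisymmetry: } N = 0 \;\Rightarrow\; |\mathbf{B}| = B(\psi, \theta).
\]
```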


    “The accelerating computational power finally allowed researchers to challenge the so-called fatal flaw of stellarators: the lack of toroidal symmetry,” says Boozer, who is now a professor of applied physics at Columbia University.

    The new insights gave rise to stellarator designs that were far more complex than anything Spitzer could have imagined [see sidebar, “Trailblazing Stellarators”]. Japan’s Large Helical Device came online in 1998 after eight years of construction. The University of Wisconsin’s Helically Symmetric Experiment, whose magnetic-field coils featured an innovative quasi-helical symmetry, took nine years to build and began operation in 1999. And Germany’s Wendelstein 7-X—the largest and most advanced stellarator ever built—produced its first plasma in 2015, after more than 20 years of design and construction.

    Experiment Failure Leads to New Stellarator Design

    In the late 1990s, PPPL physicists and engineers began designing their own version, called the National Compact Stellarator Experiment (NCSX). Envisioned as the world’s most advanced stellarator, it employed a new magnetic-confinement concept called quasi-axisymmetry—a compromise that mimics the symmetry of a tokamak while retaining the stability and confinement benefits of a stellarator by using only externally generated magnetic fields.

    “We tapped into every supercomputer we could find,” says Zarnstorff, who led the NCSX design team, “performing simulations of hundreds of thousands of plasma configurations to optimize the physics properties.”

    Three Ways to Send Atoms on a Fantastical Helical Ride


    An illustration of three different types of stellarators.


    But the design was, like Spitzer’s original invention, ahead of its time. Engineers struggled to meet the precise tolerances, which allowed for a maximum variation from assigned dimensions of only 1.5 millimeters across the entire device. In 2008, with the project tens of millions of dollars over budget and years behind schedule, NCSX was canceled. “That was a very sad day around here,” says Zarnstorff. “We got to build all the pieces, but we never got to put it together.”

    Now, a segment of the NCSX vacuum vessel—a contorted hunk made from the superalloy Inconel—towers over a lonely corner of the C-Site Stellarator Building on PPPL’s campus. But if its presence is a reminder of failure, it is equally a reminder of the lessons learned from the $70 million project.

    For Zarnstorff, the most important insights came from the engineering postmortem. Engineers concluded that, even if they had managed to successfully build and operate NCSX, it was doomed by the lack of a viable way to take the machine apart for repairs or reconfigure the magnets and other components.

    With the experience gained from NCSX and PPPL physicists’ ongoing collaborations with the costly, delay-plagued Wendelstein 7-X program, the path forward became clearer. “Whatever we built next, we knew we needed to make it less expensively and more reliably,” says Zarnstorff. “And we knew we needed to build it in a way that would allow us to take the thing apart.”

    A Testbed for Fusion Energy

    In 2014, Zarnstorff began thinking about building a first-of-its-kind stellarator that would use permanent magnets, rather than electromagnets, to create its helical field, while retaining electromagnets to shape the toroidal field. (Electromagnets generate a magnetic field when an electric current flows through them and can be turned on or off, whereas permanent magnets produce a constant magnetic field without needing an external power source.)

    Even the strongest permanent magnets wouldn’t be capable of confining plasma robustly enough to produce commercial-scale fusion power. But they could be used to create a lower-cost experimental device that would be easier to build and maintain. And that, crucially, would allow researchers to easily adjust and test magnetic fields that could inform the path to a power-producing device.

    PPPL dubbed the device Muse. “Muse was envisioned as a testbed for innovative magnetic configurations and improving theoretical models,” says PPPL research physicist Kenneth Hammond, who is now leading the project. “Rather than immediate commercial application, it’s more focused on exploring fundamental aspects of stellarator design and plasma behavior.”

    The Muse team designed the reactor with two independent sets of magnets. To coax charged particles into a corkscrew-like trajectory, small permanent neodymium magnets are arranged in pairs and mounted to a dozen 3D-printed panels surrounding the glass vacuum chamber, which was custom-made by glass blowers. Adjacent rows of magnets are oriented in opposite directions, twisting the magnetic-field lines at the outside edges.

    Outside the shell, 16 electromagnets composed of circular copper coils generate the toroidal part of the magnetic field. These very coils were mass-produced by PPPL in the 1960s, and they have been a workhorse for rapid prototyping in numerous physics laboratories ever since.
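
    For a rough sense of scale (a textbook estimate, not a Muse specification), Ampère’s law gives the toroidal field produced at major radius R by N such coils, each carrying current I:

```latex
% Toroidal field from N planar coils carrying current I, evaluated at major radius R.
\[
  B_{\phi}(R) \;\approx\; \frac{\mu_{0} N I}{2\pi R}.
\]
```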

    “In terms of its ability to confine particles, Muse is two orders of magnitude better than any stellarator previously built,” says Hammond. “And because it’s the first working stellarator with quasi-axisymmetry, we will be able to test some of the theories we never got to test on NCSX.”

    The neodymium magnets are a little bigger than a button magnet that might be used to hold a photo to a refrigerator door. Despite their compactness, they pack a remarkable punch. During my visit to PPPL, I turned a pair of magnets in my hands, alternating their polarities, and found it difficult to push them together and pull them apart.

    Graduate students did the meticulous work of placing and securing the magnets. “This is a machine built on pizza, basically,” says Cowley, PPPL’s director. “You can get a lot out of graduate students if you give them pizza. There may have been beer too, but if there was, I don’t want to know about it.”

    The Muse project was financed by internal R&D funds and used mostly off-the-shelf components. “Having done it this way, I would never choose to do it any other way,” Zarnstorff says.

    Stellarex and Thea Energy Advance Stellarator Concepts

    Now that Muse has demonstrated that stellarators can be made quickly, cheaply, and highly accurately, companies founded by current and former PPPL researchers are moving forward with Muse-inspired designs.

    Zarnstorff recently cofounded a company called Stellarex. He says he sees stellarators as the best path to fusion energy, but he hasn’t landed on a magnet configuration for future machines. “It may be a combination of permanent and superconducting electromagnets, but we’re not religious about any one particular approach; we’re leaving those options open for now.” The company has secured some DOE research grants and is now focused on raising money from investors.

    Thea Energy, a startup led by David Gates, who until recently was the head of stellarator physics at PPPL, is further along with its power-plant concept. Like Muse, Thea focuses on simplified manufacture and maintenance. Unlike Muse, the Thea concept uses planar (flat) electromagnetic coils built of high-temperature superconductors.


    Thea Energy


    “The idea is to use hundreds of small electromagnets that behave a lot like permanent magnets, with each creating a dipole field that can be switched on and off,” says Gates. “By using so many individually actuated coils, we can get a high degree of control, and we can dynamically adjust and shape the magnetic fields in real time to optimize performance and adapt to different conditions.”

    The company has raised more than $23 million and is designing and prototyping its initial project, which it calls Eos, in Kearny, N.J. “At first, it will be focused on producing neutrons and isotopes like tritium,” says Gates. “The technology is designed to be a stepping stone toward a fusion power plant called Helios, with the potential for near-term commercialization.”

    Stellarator Startup Leverages Exascale Computing

    Of all the private stellarator startups, Type One Energy is the most well funded, having raised $82.5 million from investors that include Bill Gates’s Breakthrough Energy Ventures. Type One’s leaders contributed to the design and construction of both the University of Wisconsin’s Helically Symmetric Experiment and Germany’s Wendelstein 7-X stellarators.

    The Type One stellarator design utilizes a highly optimized magnetic-field configuration designed to improve plasma confinement. Optimization can relax the stringent construction tolerances typically required for stellarators, making them easier and more cost-effective to engineer and build.

    Type One’s design, like that of Thea Energy’s Eos, makes use of high-temperature superconducting magnets, which provide higher magnetic strength, require less cooling power, and could lower costs and allow for a more compact and efficient reactor. The magnets were designed for a tokamak, but Type One is modifying the coil structure to accommodate the intricate twists and turns of a stellarator.

    In a sign that stellarator research may be moving from mainly scientific experiments into the race to field the first commercially viable reactor, Type One recently announced that it will build “the world’s most advanced stellarator” at the Bull Run Fossil Plant in Clinton, Tenn. To construct what it’s calling Infinity One—expected to be operational by early 2029—Type One is teaming up with the Tennessee Valley Authority and the DOE’s Oak Ridge National Laboratory.

    “As an engineering testbed, Infinity One will not be producing energy,” says Type One CEO Chris Mowry. “Instead, it will allow us to retire any remaining risks and sign off on key features of the fusion pilot plant we are currently designing. Once the design validations are complete, we will begin the construction of our pilot plant to put fusion electrons on the grid.”

    To help optimize the magnetic-field configuration, Mowry and his colleagues are using Summit, one of Oak Ridge’s most powerful supercomputers. Summit is capable of performing more than 200 million times as many operations per second as the supercomputers of the early 1980s, when Wendelstein 7-X was first conceptualized.

    AI Boosts Fusion Reactor Efficiency

    Advances in computational power are already leading to faster design cycles, greater plasma stability, and better reactor designs. Ten years ago, an analysis of a million different configurations would have taken months; now a researcher can get answers in hours.

    And yet, there are an infinite number of ways to make any particular magnetic field. “To find our way to an optimum fusion machine, we may need to consider something like 10 billion configurations,” says PPPL’s Cowley. “If it takes months to make that analysis, even with high-performance computing, that’s still not a route to fusion in a short amount of time.”

    In the hope of shortcutting some of those steps, PPPL and other labs are investing in artificial intelligence and using surrogate models that can search and then rapidly home in on promising solutions. “Then, you start running progressively more precise models, which bring you closer and closer to the answer,” Cowley says. “That way we can converge on something in a useful amount of time.”

    But the biggest remaining hurdles for stellarators, and magnetic-confinement fusion in general, involve engineering challenges rather than physics challenges, say Cowley and other fusion experts. These include developing materials that can withstand extreme conditions, managing heat and power efficiently, advancing magnet technology, and integrating all these components into a functional and scalable reactor.

    Over the past half decade, the vibe at PPPL has grown increasingly optimistic, as new buildings go up and new researchers arrive on Stellarator Road to become part of what may be the grandest scientific challenge of the 21st century: enabling a world powered by safe, plentiful, carbon-free energy.

    PPPL recently broke ground on a new $110 million office and laboratory building that will house theoretical and computational scientists and support the work in artificial intelligence and high-performance computing that is increasingly propelling the quest for fusion. The new facility will also provide space for research supporting PPPL’s expanded mission into microelectronics, quantum sensors and devices, and sustainability sciences.

    PPPL researchers’ quest will take a lot of hard work and, probably, a fair bit of luck. Stellarator Road may be only a mile long, but the path to success in fusion energy will certainly stretch considerably farther.

    Trailblazing Stellarators


    In contrast to Muse’s relatively simple, low-cost approach, these pioneering stellarators are some of the most technically demanding machines ever built, with intricate electromagnetic coil systems and complex geometries that require precise engineering. The projects have provided valuable insights into plasma confinement, magnetic-field optimization, and the potential for steady-state operation, and moved the scientific community closer to achieving practical and sustainable fusion energy.

    Large Helical Device (LHD)




    Helically Symmetric Experiment (HSX)




    Wendelstein 7-X (W7-X)



    This article appears in the November 2024 print issue as “An Off-the- Shelf Stellarator.”

  • Wireless Innovator Gerard J. “Jerry” Foschini Remembered
    by Amanda Davis on 27. October 2024. at 13:00



    IEEE Life Fellow Gerard J. “Jerry” Foschini, a Bell Labs researcher for more than 50 years, died on 17 September 2023 at the age of 83.

    Foschini made groundbreaking contributions to the field of wireless communications that improved the quality of networks and paved the way for several important IEEE standards.

    In the early 1990s he helped to develop the multiple-input multiple-output (MIMO) method of using antennas to increase radio link capacity. A few years later he introduced the Bell Laboratories Layered Space-Time (BLAST) transceiver architecture, which advanced antenna systems by allowing multiple data streams to be transmitted on a single frequency.

    Foschini’s work is set to be honored in Los Angeles at the Italian American Museum’s “Creative Minds” exhibit, which is designed to spotlight inventors and innovators. The exhibit is scheduled to run at the museum from November 2024 until October 2025.

    Decades of innovation at Bell Labs

    Foschini received a bachelor’s degree in electrical engineering in 1961 from the New Jersey Institute of Technology, in Newark. He earned a master’s degree in EE in 1963 from New York University and went on to earn a Ph.D. in EE in 1967 from Stevens Institute of Technology, in Hoboken, N.J.

    He began his career in 1961 as a researcher at Bell Labs, in Holmdel, N.J. (Bell Labs headquarters moved to nearby Murray Hill in 1967, but the Wireless Communications Lab remained in Holmdel.)

    Gerard Foschini [bottom row, middle] and his colleagues Larry Greenstein [top row], Len Cimini [bottom row, left], and Isam Habbab at Bell Labs in Holmdel, N.J. Darlene Foschini-Field

    MIMO was one of his most well-known breakthroughs. Developed in the late 1980s, the technology became an essential element of wireless communication standards including IEEE 802.11n and IEEE 802.16 (known commercially as WiMAX). MIMO arrays can be found in many cellular and Wi-Fi systems.

    In the mid-1990s Foschini helped develop BLAST. He coauthored the seminal 1998 paper “V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel” with fellow Bell Labs researchers Glenn Golden, Reinaldo A. Valenzuela, and Peter Wolniansky. A simplified version known as V-BLAST is a multiantenna communication technique that detects the strongest signal, cancels its contribution from the received signal, and repeats the process for the remaining data streams, enhancing the data quality of wireless networks.
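
    For readers curious about the mechanics, here is a toy zero-forcing, successive-interference-cancellation detector in the spirit of V-BLAST, written in Python with NumPy. It is a simplified illustration of the detect-and-cancel idea rather than Bell Labs’ implementation, and the antenna counts and symbols are invented for the example.

        import numpy as np

        def vblast_sic(H, y):
            # Toy zero-forcing V-BLAST-style detector: estimate the strongest
            # remaining stream, subtract its contribution, and repeat.
            H = H.astype(complex)
            y = y.astype(complex)
            n_streams = H.shape[1]
            detected = np.zeros(n_streams, dtype=complex)
            remaining = list(range(n_streams))
            while remaining:
                G = np.linalg.pinv(H[:, remaining])            # zero-forcing filter
                k = int(np.argmin(np.linalg.norm(G, axis=1)))  # best post-filter SNR
                stream = remaining[k]
                symbol = np.sign((G[k] @ y).real) + 0j         # hard-decide BPSK symbol
                detected[stream] = symbol
                y = y - H[:, stream] * symbol                  # cancel its interference
                remaining.pop(k)
            return detected

        rng = np.random.default_rng(0)
        H = rng.standard_normal((4, 3))        # 4 receive antennas, 3 data streams
        x = rng.choice([-1.0, 1.0], size=3)    # BPSK symbols, one per stream
        print(vblast_sic(H, H @ x))            # recovers x in this noiseless toy case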

    Foschini retired in 2013.

    An often-cited researcher

    During his career, Foschini wrote more than 100 published works and was awarded 14 patents related to wireless communications technology. According to the Institute for Scientific Information (now part of Clarivate), Foschini ranked in the top half of 1 percent of publishing researchers. His works were cited more than 50,000 times.

    He was elected to the U.S. National Academy of Engineering in 2009 for “contributions to the science and technology of wireless communications with multiple antennas for transmission and receiving.” He was honored with the 2008 IEEE Alexander Graham Bell Medal and the 2006 IEEE Eric E. Sumner Award.

    A tribute published on the IEEE Communications Society website says:
    “Although Jerry was modest and unassuming, his brilliance and deep insight became apparent as soon as one engaged him in a technical conversation. His kindness and grace permeated all his interactions. A great mentor to all his colleagues, Jerry was particularly inspiring to young researchers, eager to hear about their work and provide them with guidance and encouragement.”

  • Video Friday: Swiss-Mile Robot vs. Humans
    by Evan Ackerman on 25. October 2024. at 21:15



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

    Enjoy today’s videos!

    Swiss-Mile’s robot (which is really any robot that meets the hardware requirement to run their software) is faster than “most humans.” So what does that mean, exactly?

    The winner here is Riccardo Rancan, who doesn’t look like he was trying especially hard—he’s the world champion in high-speed urban orienteering, which is a sport that I did not know existed but sounds pretty awesome.

    [ Swiss-Mile ]

    Thanks, Marko!

    Oh good, we’re building giant fruit fly robots now.

    But seriously, this is useful and important research because understanding the relationship between a nervous system and a bunch of legs can only be helpful as we ask more and more of legged robotic platforms.

    [ Paper ]

    Thanks, Clarus!

    Watching humanoids get up off the ground will never not be fascinating.

    [ Fourier ]

    The Kepler Forerunner K2 represents the Gen 5.0 robot model, showcasing a seamless integration of the humanoid robot’s cerebral, cerebellar, and high-load body functions.

    [ Kepler ]

    Diffusion Forcing combines the strength of full-sequence diffusion models (like Sora) and next-token models (like LLMs), acting as either one, or a mix of the two, at sampling time for different applications without retraining.

    [ MIT ]

    Testing robot arms for space is no joke.

    [ GITAI ]

    Welcome to the Modular Robotics Lab (ModLab), a subgroup of the GRASP Lab and the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania under the supervision of Prof. Mark Yim.

    [ ModLab ]

    This is much more amusing than it has any right to be.

    [ Westwood Robotics ]

    Let’s go for a walk with Adam at IROS’24!

    [ PNDbotics ]

    From Reachy 1 in 2023 to our newly launched Reachy 2, our grippers have been designed to enhance precision and dexterity in object manipulation. Some of the models featured in the video are prototypes used for various tests, showing the innovation behind the scenes.

    [ Pollen ]

    I’m not sure how else you’d efficiently spray the tops of trees? Drones seem like a no-brainer here.

    [ SUIND ]

    Presented at ICRA40 in Rotterdam, we show the challenges faced by mobile manipulation platforms in the field. We at CSIRO Robotics are working steadily towards a collaborative approach to tackle such challenging technical problems.

    [ CSIRO ]

    ABB is best known for arms, but it looks like they’re exploring AMRs (autonomous mobile robots) for warehouse operations now.

    [ ABB ]

    Howie Choset, Lu Li, and Victoria Webster-Wood of the Manufacturing Futures Institute explain their work to create specialized sensors that allow robots to “feel” the world around them.

    [ CMU ]

    Columbia Engineering Lecture Series in AI: “How Could Machines Reach Human-Level Intelligence?” by Yann LeCun.

    Animals and humans understand the physical world, have common sense, possess a persistent memory, can reason, and can plan complex sequences of subgoals and actions. These essential characteristics of intelligent behavior are still beyond the capabilities of today’s most powerful AI architectures, such as Auto-Regressive LLMs.
    I will present a cognitive architecture that may constitute a path towards human-level AI. The centerpiece of the architecture is a predictive world model that allows the system to predict the consequences of its actions and to plan sequences of actions that fulfill a set of objectives. The objectives may include guardrails that guarantee the system’s controllability and safety. The world model employs a Joint Embedding Predictive Architecture (JEPA) trained with self-supervised learning, largely by observation.

    [ Columbia ]

  • This Inventor Is Molding Tomorrow’s Inventors
    by Rina Diane Caballar on 25. October 2024. at 13:00



    Marina Umaschi Bers has long been at the forefront of technological innovation for kids. In the 2010s, while teaching at Tufts University, in Massachusetts, she codeveloped the ScratchJr programming language and KIBO robotics kits, both intended for young children in STEM programs. Now head of the DevTech research group at Boston College, she continues to design learning technologies that promote computational thinking and cultivate a culture of engineering in kids.

    What was the inspiration behind creating ScratchJr and the KIBO robot kits?

    Marina Umaschi Bers: We want little kids—as they learn how to read and write, which are traditional literacies—to learn new literacies, such as how to code. To make that happen, we need to create child-friendly interfaces that are developmentally appropriate for their age, so they learn how to express themselves through computer programming.

    How has the process of invention changed since you developed these technologies?

    Bers: Now, with the maker culture, it’s a lot cheaper and easier to prototype things. And there’s more understanding that kids can be our partners as researchers and user-testers. They are not passive entities but active in expressing their needs and helping develop inventions that fit their goals.

    What should people creating new technologies for kids keep in mind?

    Bers: Not all kids are the same. You really need to look at the age of the kids. Try to understand developmentally where these children are in terms of their cognitive, social, emotional development. So when you’re designing, you’re designing not just for a user, but you’re designing for a whole human being.

    The other thing is that in order to learn, children need to have fun. But they have fun by really being pushed to explore and create and make new things that are personally meaningful. So you need open-ended environments that allow children to explore and express themselves.

    The KIBO kits teach kids robotics coding in a playful and screen-free way. KinderLab Robotics

    How can coding and learning about robots bring out the inner inventors in kids?

    Bers: I use the words “coding playground.” In a playground, children are inventing games all the time. They are inventing situations, they’re doing pretend play, they’re making things. So if we’re thinking of that as a metaphor when children are coding, it’s a platform for them to create, to make characters, to create stories, to make anything they want. In this idea of the coding playground, creativity is welcome—not just “follow what the teacher says” but let children invent their own projects.

    What do you hope for in terms of the next generation of technologies for kids?

    Bers: I hope we would see a lot more technologies that are outside. Right now, one of our projects is called Smart Playground [a project that will incorporate motors, sensors, and other devices into playgrounds to bolster computational thinking through play]. Children are able to use their bodies and run around and interact with others. It’s kind of getting away from the one-on-one relationship with the screen. Instead, technology is really going to augment the possibilities of people to interact with other people, and use their whole bodies, much of their brains, and their hands. These technologies will allow children to explore a little bit more of what it means to be human and what’s unique about us.

    This article appears in the November 2024 print issue as “The Kids’ Inventor.”

  • A Picture Is Worth 4.6 Terabits
    by Dina Genkina on 24. October 2024. at 18:00



    Clark Johnson says he has wanted to be a scientist ever since he was 3. At age 8, he got bored with a telegraph-building kit he received as a gift and repurposed it into a telephone. By age 12, he set his sights on studying physics because he wanted to understand how things worked at the most basic level.

    “I thought, mistakenly at the time, that physicists were attuned to the left ear of God,” Johnson says.

    Clark Johnson

    Employer: Wave Domain
    Title: CFO
    Member grade: Life Fellow

    After graduating at age 19 with a bachelor’s degree in physics in 1950 from the University of Minnesota Twin Cities, he was planning to go to graduate school when he got a call from the head of the physics section at 3M’s R&D laboratory with a job offer. Tempted by the promise of doing things with his own hands, Johnson accepted the role of physicist at the company’s facility in St. Paul, Minn. Thus began his more than seven-decade-long career as an electrical engineer, inventor, and entrepreneur—which continues to this day.

    Johnson, an IEEE Life Fellow, is an active member of the IEEE Magnetics Society and served as its 1983–1984 president.

    He served as an IEEE Congressional Fellow with the science committee of the U.S. House of Representatives, and was later recruited by the Advanced Research Projects Agency (ARPA) and assigned to assist in MIT’s Research Program on Communications Policy, where he contributed to the development of HDTV.

    He went on to help found Wave Domain in Monson, Mass. Johnson and his Wave Domain collaborators have been granted six patents for their latest invention, a standing-wave storage (SWS) system that houses archival data in a low-energy-use, tamper-proof way using antiquated photography technology.

    3M, HDTV, and a career full of color

    3M turned out to be fertile ground for Johnson’s creativity.

    “You could spend 15 percent of your time working on things you liked,” he says. “The president of the company believed that new ideas sort of sprung out of nothing, and if you poked around, you might come across something that could be useful.”

    Johnson’s poking around led him to contribute to developing an audio tape cartridge and Scotchlite, the reflective film seen on roads, signs, and more.

    In 1989 he was tapped to be an IEEE Congressional Fellow. He chose to work with Rep. George Brown Jr., a Democrat representing the 42nd district in central California. Brown was a ranking member of the House committee on science, space, and technology, which oversees almost all non-defense and non-health related research.

    “It was probably the most exciting year of my entire life,” Johnson says.

    While on the science committee, he met Richard Jay Solomon, who was associate director of MIT’s Research Program on Communications Policy, testifying for the committee on video and telecom issues. Solomon’s background is diverse. He studied physics and electrical engineering in the early 1960s at Brooklyn Polytechnic and general science at New York University. Before becoming a research associate at MIT in 1969, he held a variety of positions. He ran a magazine about scientific photography, and he founded a business that provided consulting on urban planning and transportation. He authored four textbooks on transportation planning, three of which were published by the American Society of Civil Engineers. At the magazine, Solomon gained insights into arcane, long-forgotten 19th-century photographic processes that turned out to be useful in future inventions.

    Johnson and Solomon bonded over their shared interest in trains. Johnson’s refurbished Pullman car has traveled some 850,000 miles across the continental U.S. Clark Johnson

    Johnson and Solomon clicked over a shared interest in trains. At the time they met, Johnson owned a railway car that was parked in the District of Columbia’s Union Station, and he used it to move throughout North America, traveling some 850,000 miles before selling the car in 2019. Johnson and Solomon shared many trips aboard the refurbished Pullman car.

    Now they are collaborators on a new method to store big data in a tamperproof, zero-energy-cost medium.

    Conventional storage devices such as solid-state drives and hard disks take energy to maintain, and they might degrade over time, but Johnson says the technique he, Solomon, and collaborators developed requires virtually no energy and can remain intact for centuries under most conditions.

    Long before collaborating on their latest project, Johnson and Solomon teamed up on another high-profile endeavor: the development of HDTV. The project arose through their work on the congressional science committee.

    In the late 1980s, engineers in Japan were working on developing an analog high-definition television system.

    “My boss on the science committee said, ‘We really can’t let the Japanese do this. There’s all this digital technology and digital computers. We’ve got to do this digitally,’” Johnson says.

    That spawned a collaborative project funded by NASA and ARPA (the predecessor of modern-day DARPA). After Johnson’s tenure on the science committee ended, he and Solomon joined a team at MIT that participated in the collaboration. As they developed what would become the dominant TV technology, Johnson and Solomon became experts in optics. Working with Polaroid, IBM, and Philips in 1992, the team demonstrated the world’s first digital, progressive-scanned, high-definition camera at the annual National Association of Broadcasters conference.

    A serendipitous discovery

    Around 2000, Johnson and Solomon, along with a new colleague, Eric Rosenthal, began working as independent consultants to NASA and the U.S. Department of Defense. Rosenthal had been a vice president of research and development at Walt Disney Imagineering and general manager of audiovisual systems engineering at ABC television prior to joining forces with Johnson and Solomon.

    While working on one DARPA-funded project, Solomon stumbled upon a page in a century-old optics textbook that caught his eye. It described a method developed by noted physicist Gabriel Lippmann for producing color photographs. Instead of using film or dyes, Lippmann created photos by using a glass plate coated with a specially formulated silver halide emulsion.

    When the plate was exposed to a bright, sunlit scene, the full spectrum of light reflected off a mercury-based mirror coating on the back of the glass, creating standing waves of the detected colors inside the emulsion layer. The silver grains at the brightest parts of the standing waves became oxidized, as if remembering the precise colors they saw. (This was in stark contrast to traditional color photographs and television, which store only the red, green, and blue parts of the spectrum.) Then, chemical processing turned the oxidized silver halide grains black, leaving the light waves imprinted in the medium in a way that is nearly impossible to tamper with. Lippmann received the 1908 Nobel Prize in Physics for his work.

    Lippmann’s photography technique did not garner commercial success, because there was no practical way to duplicate the images or print them. And at the time, the emulsions needed the light to be extremely bright to be properly imprinted in the medium.

    Nevertheless, Solomon was impressed with the durability of the resulting image. He explained the process to his colleagues, who recognized the possibility of using the technique to store information for archival purposes. Johnson saw Lippmann’s old photographs at the Museum for Photography, in Lausanne, Switzerland, where he noticed that the colors appeared clear and intense despite being more than a century old.

    The silver halide method stuck with Solomon, and in 2013 he and Johnson returned to Lippmann’s emulsion photography technique.

    “We got to talking about how we could take all this information we knew about color and use it for something,” Johnson says.

    Data in space and on land

    While Rosenthal was visiting the International Space Station headquarters in Montgomery, Ala., in 2013, a top scientist said, “‘The data stored on the station gets erased every 24 hours by cosmic rays,’” Rosenthal recalls. “‘And we have to keep rewriting the data over and over and over again.’” Cosmic rays and solar flares can damage electronic components, causing errors or outright erasures on hard disks and other traditional data storage systems.

    Rosenthal, Johnson, and Solomon knew that properly processed silver halide photographs would be immune to such hazards, including electromagnetic pulses from nuclear explosions. The team examined Lippmann’s photographic emulsion anew.

    Solomon’s son, Brian Solomon, a professional photographer and a specialist in making photographic emulsions, also was concerned about the durability of conventional dye-based color photographs, which tend to start fading after a few decades.

    The team came up with an intriguing idea: Given how durable Lippmann’s photographs appeared to be, what if they could use a similar technique—not for making analog images but for storing digital data? Thus began their newest engineering endeavor: changing how archival data—data that doesn’t need to be overwritten but simply preserved and read occasionally—is stored.

    The standing wave storage technique works by shining bright LEDs onto a specially formulated emulsion of silver grains in gelatin. The light reflects off the substrate layer (which could be air), and forms standing waves in the emulsion. Standing waves oxidize the silver grains at their peaks, and a chemical process turns the oxidized silver grains black, imprinting the pattern of colors into the medium. Wave Domain

    Conventionally stored data sometimes is protected by making multiple copies or continuously rewriting it, Johnson says. The techniques require energy, though, and can be labor-intensive.

    The amount of data that needs to be stored on land is also growing by leaps and bounds. The market for data centers and other artificial intelligence infrastructure is growing at an annual rate of 44 percent, according to Data Bridge Market Research. Commonly used hard drives and solid-state drives consume some power, even when they are not in use. The drives’ standby power consumption varies between 0.05 and 2.5 watts per drive. And data centers contain an enormous number of drives requiring tremendous amounts of electricity to keep running.
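
    Some rough arithmetic shows why that adds up. The Python sketch below applies the per-drive standby figures quoted above to a hypothetical fleet of one million drives; the fleet size is an assumption chosen purely to illustrate the scale.

        HOURS_PER_YEAR = 8_760

        def standby_kwh_per_year(n_drives, watts_per_drive):
            # Annual standby energy for a fleet of drives, in kilowatt-hours.
            return n_drives * watts_per_drive * HOURS_PER_YEAR / 1_000

        # A hypothetical facility with one million drives:
        print(standby_kwh_per_year(1_000_000, 0.05))  # ~438,000 kWh/year at the low end
        print(standby_kwh_per_year(1_000_000, 2.5))   # ~21.9 million kWh/year at the high end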

    Johnson estimates that about 25 percent of the data held in today’s data centers is archival in nature, meaning it will not need to be overwritten.

    The ‘write once, read forever’ technology

    The technology Johnson, Solomon, and their collaborators have developed promises to overcome the energy requirements and vulnerabilities of traditional data storage for archival applications.

    The design builds off of Lippmann’s idea. Instead of taking an analog photograph, the team divided the medium into pixels. With the help of emulsion specialist Yves Gentet, they worked to improve Lippmann’s emulsion chemistry, making it more sensitive and capable of storing multiple wavelengths at each pixel location. The final emulsion is a combination of silver halide and extremely hardened gelatin. Their technique now can store up to four distinct narrow-band, superimposed colors in each pixel.

    The standing wave storage technique can store up to four colors out of a possible 32 at each pixel location. This adds up to an astounding storage capacity of 4.6 terabits (or roughly 300 movies) in the area of a single photograph. Wave Domain

    “The textbooks say that’s impossible,” Solomon says, “but we did it, so the textbooks are wrong.”

    For each pixel, they can choose up to four colors out of a possible 32 to store.

    That amounts to more than 40,000 possibilities. Thus, the technique can store more than 40,000 bits (although the format need not be binary) in each 10-square-micrometer pixel, or 4.6 terabits in a 10.16-by-12.7-centimeter modified Lippmann plate. That’s more than 300 movies’ worth of data stored in a single picture.
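
    As a quick check of that combinatorics claim, counting the ways to superimpose up to four distinct colors drawn from a 32-color palette does indeed exceed 40,000. The Python snippet below assumes the “up to four” reading described earlier; it is a sanity check of the arithmetic, not Wave Domain’s actual encoding scheme.

        from math import comb

        PALETTE = 32      # possible narrow-band colors
        MAX_COLORS = 4    # up to four distinct colors superimposed per pixel

        # Ways to choose 0, 1, 2, 3, or 4 distinct colors from the palette.
        states = sum(comb(PALETTE, k) for k in range(MAX_COLORS + 1))
        print(states)     # 41449 -> "more than 40,000 possibilities"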

    To write on the SWS medium, the plate—coated with a thin layer of the specially formulated emulsion—is exposed to light from an array of powerful color LEDs.

    That way, the entire plate is written simultaneously, greatly reducing the writing time per pixel.

    The plate then gets developed through a chemical process that blackens the exposed silver grains, memorizing the waves of color it was exposed to.

    Finally, a small charge-coupled-device (CCD) camera array, like those used in cellphones, reads out the information. The readout occurs for the entire plate at once, so the readout rate, like the writing rate, is fast.

    “The data that we read is coming off the plate at such a high bandwidth,” Solomon says. “There is no computer on the planet that can absorb it without some buffering.”

    The entire memory cell is a sandwich of the LED array, the photosensitive plate, and the CCD. All the elements use off-the-shelf parts.

    “We took a long time to figure out how to make this in a very inexpensive, reproducible, quick way,” Johnson says. “The idea is to use readily available parts.” The entire storage medium, along with its read/write infrastructure, is relatively inexpensive and portable.

    To test the durability of their storage method, the team sent their collaborators at NASA some 150 samples of their SWS devices to be hung by astronauts outside the International Space Station for nine months in 2019. They then tested the integrity of the stored data after the SWS plates were returned from space, compared with another 150 plates stored in Rosenthal’s lab on the ground.

    “There was absolutely zero degradation from nine months of exposure to cosmic rays,” Solomon says. Meanwhile, the plates on Rosenthal’s desk were crawling with bacteria, while the ISS plates were sterile. Silver is a known bactericide, though, so the colors were immune, Solomon says.

    Their most recent patent, granted earlier this year, describes a method of storing data that requires no power to maintain when not actively reading or writing data. Team members say the technique is incorruptible: It is immune to moisture, solar flares, cosmic rays, and other kinds of radiation. So, they argue, it can be used both in space and on land as a durable, low-cost archival data solution.

    Passing on the torch

    The new invention has massive potential applications. In addition to data centers and space applications, Johnson says, scientific enterprises such as the Rubin Observatory being built in Chile will produce massive amounts of archival data that could benefit from SWS technology.

    “It’s all reference data, and it’s an extraordinary amount of data that’s being generated every week that needs to be kept forever,” Johnson says.

    Johnson says, however, that he and his team will not be the ones to bring the technology to market: “I’m 94 years old, and my two partners are in their 70s and 80s. We’re not about to start a company.”

    He is ready to pass on the torch. The team is seeking a new chief executive to head up Wave Domain, which they hope will continue the development of SWS and bring it to mass adoption.

    Johnson says he has learned that people rarely know which new technologies will eventually have the most impact. Perhaps, though few people are aware of it now, storing big data using old photographic technology will become an unexpected success.

  • Gandhi Inspired a New Kind of Engineering
    by Edd Gent on 24. October 2024. at 13:00



    The teachings of Mahatma Gandhi were arguably India’s greatest contribution to the 20th century. Raghunath Anant Mashelkar has borrowed some of that wisdom to devise a frugal new form of innovation he calls “Gandhian engineering.” Coming from humble beginnings, Mashelkar is driven to ensure that the benefits of science and technology are shared more equally. He sums up his philosophy with the epigram “more from less for more.” This engineer has led India’s preeminent R&D organization, the Council of Scientific and Industrial Research, and he has advised successive governments.

    What was the inspiration for Gandhian engineering?

    Raghunath Anant Mashelkar: There are two quotes of Gandhi’s that were influential. The first was, “The world has enough for everyone’s need, but not enough for everyone’s greed.” He was saying that when resources are exhaustible, you should get more from less. He also said the benefits of science must reach all, even the poor. If you put them together, it becomes “more from less for more.”

    My own life experience inspired me, too. I was born to a very poor family, and my father died when I was six. My mother was illiterate and brought me to Mumbai in search of a job. Two meals a day was a challenge, and I walked barefoot until I was 12 and studied under streetlights. So it also came from my personal experience of suffering because of a lack of resources.

    How does Gandhian engineering differ from existing models of innovation?

    Mashelkar: Conventional engineering is market or curiosity driven, but Gandhian engineering is application and impact driven. We look at the end user and what we want to achieve for the betterment of humanity.

    Most engineering is about getting more from more. Take an iPhone: They keep creating better models and charging higher prices. For the poor it is less from less: Conventional engineering looks at removing features as the only way to reduce costs.

    In Gandhian engineering, the idea is not to create affordable [second-rate] products, but to make high technology work for the poor. So we reinvent the product from the ground up. While the standard approach aims for premium price and high margins, Gandhian engineering will always look at affordable price, but high volumes.

    The Jaipur foot is a light, durable, and affordable prosthetic. Gurinder Osan/AP

    What is your favorite example of Gandhian engineering?

    Mashelkar: My favorite is the Jaipur foot. Normally, a sophisticated prosthetic foot costs a few thousand dollars, but the Jaipur foot does it for [US] $20. And it’s very good technology; there is a video of a person wearing a Jaipur foot climbing a tree, and you can see the flexibility is like a normal foot. Then he runs one kilometer in 4 minutes, 30 seconds.

    What is required for Gandhian engineering to become more widespread?

    Mashelkar: In our young people, we see innovation and we see passion, but compassion is the key. We also need more soft funding [grants or zero-interest loans], because venture capital companies often turn out to be “vulture capital” in a way, because they want immediate returns.

    We need a shift in the mindset of businesses—they can make money not just from premium products for those at the top of the pyramid, but also products with affordable excellence designed for large numbers of people.

    This article appears in the November 2024 print issue as “The Gandhi Inspired Inventor.”

  • Google Is Now Watermarking Its AI-Generated Text
    by Eliza Strickland on 23. October 2024. at 15:00



    The chatbot revolution has left our world awash in AI-generated text: It has infiltrated our news feeds, term papers, and inboxes. It’s so absurdly abundant that industries have sprung up to provide moves and countermoves. Some companies offer services to identify AI-generated text by analyzing the material, while others say their tools will “humanize” your AI-generated text and make it undetectable. Both types of tools have questionable performance, and as chatbots get better and better, it will only get more difficult to tell whether words were strung together by a human or an algorithm.

    Here’s another approach: Adding some sort of watermark or content credential to text from the start, which lets people easily check whether the text was AI-generated. New research from Google DeepMind, described today in the journal Nature, offers a way to do just that. The system, called SynthID-Text, doesn’t compromise “the quality, accuracy, creativity, or speed of the text generation,” says Pushmeet Kohli, vice president of research at Google DeepMind and a coauthor of the paper. But the researchers acknowledge that their system is far from foolproof, and isn’t yet available to everyone—it’s more of a demonstration than a scalable solution.

    Google has already integrated this new watermarking system into its Gemini chatbot, the company announced today. It has also open-sourced the tool and made it available to developers and businesses, allowing them to use the tool to determine whether text outputs have come from their own large language models (LLMs), the AI systems that power chatbots. However, only Google and those developers currently have access to the detector that checks for the watermark. As Kohli says: “While SynthID isn’t a silver bullet for identifying AI-generated content, it is an important building block for developing more reliable AI identification tools.”

    The Rise of Content Credentials

    Content credentials have been a hot topic for images and video, and have been viewed as one way to combat the rise of deepfakes. Tech companies and major media outlets have joined together in an initiative called C2PA, which has worked out a system for attaching encrypted metadata to image and video files indicating if they’re real or AI-generated. But text is a much harder problem, since text can so easily be altered to obscure or eliminate a watermark. While SynthID-Text isn’t the first attempt at creating a watermarking system for text, it is the first one to be tested on 20 million prompts.

    Outside experts working on content credentials see the DeepMind research as a good step. It “holds promise for improving the use of durable content credentials from C2PA for documents and raw text,” says Andrew Jenks, Microsoft’s director of media provenance and executive chair of the C2PA. “This is a tough problem to solve, and it is nice to see some progress being made,” says Bruce MacCormack, a member of the C2PA steering committee.

    How Google’s Text Watermarks Work

    SynthID-Text works by discreetly interfering in the generation process: It alters some of the words that a chatbot outputs to the user in a way that’s invisible to humans but clear to a SynthID detector. “Such modifications introduce a statistical signature into the generated text,” the researchers write in the paper. “During the watermark detection phase, the signature can be measured to determine whether the text was indeed generated by the watermarked LLM.”

    The LLMs that power chatbots work by generating sentences word by word, looking at the context of what has come before to choose a likely next word. Essentially, SynthID-Text interferes by randomly assigning number scores to candidate words and having the LLM output words with higher scores. Later, a detector can take in a piece of text and calculate its overall score; watermarked text will have a higher score than non-watermarked text. The DeepMind team checked their system’s performance against other text watermarking tools that alter the generation process, and found that it did a better job of detecting watermarked text.
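
    To make the general idea concrete, here is a heavily simplified toy in Python: a keyed hash assigns each candidate next word a pseudo-random score, generation is nudged toward higher-scoring words, and a detector averages the scores over a text. It sketches the broad token-scoring approach only; it is not DeepMind’s production algorithm, and every name and parameter in it is hypothetical.

        import hashlib
        import random

        SECRET_KEY = "watermark-demo-key"  # hypothetical key shared by generator and detector

        def token_score(prev_word, candidate):
            # Keyed pseudo-random score in [0, 1) for a candidate next word.
            digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{candidate}".encode()).digest()
            return int.from_bytes(digest[:8], "big") / 2**64

        def pick_next(prev_word, candidates, bias=2.0):
            # Sample the next word, nudged toward candidates with higher keyed scores.
            weights = [1.0 + bias * token_score(prev_word, c) for c in candidates]
            return random.choices(candidates, weights=weights, k=1)[0]

        def detect(words):
            # Mean keyed score over a text; watermarked text scores higher on average.
            scores = [token_score(a, b) for a, b in zip(words, words[1:])]
            return sum(scores) / len(scores)

    In a scheme like this, the bias has to stay small enough to leave the output quality untouched, which is why a detector must average over many words before the statistical signature becomes visible.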

    However, the researchers acknowledge in their paper that it’s still easy to alter a Gemini-generated text and fool the detector. Even though users wouldn’t know which words to change, if they edit the text significantly or even ask another chatbot to summarize the text, the watermark would likely be obscured.

    Testing Text Watermarks at Scale

    To be sure that SynthID-Text truly didn’t make chatbots produce worse responses, the team tested it on 20 million prompts given to Gemini. Half of those prompts were routed to the SynthID-Text system and got a watermarked response, while the other half got the standard Gemini response. Judging by the “thumbs up” and “thumbs down” feedback from users, the watermarked responses were just as satisfactory to users as the standard ones.

    Which is great for Google and the developers building on Gemini. But tackling the full problem of identifying AI-generated text (which some call AI slop) will require many more AI companies to implement watermarking technologies—ideally, in an interoperable manner so that one detector could identify text from many different LLMs. And even in the unlikely event that all the major AI companies signed on to some agreement, there would still be the problem of open-source LLMs, which can easily be altered to remove any watermarking functionality.

    MacCormack of C2PA notes that detection is a particular problem when you start to think practically about implementation. “There are challenges with the review of text in the wild,” he says, “where you would have to know which watermarking model has been applied to know how and where to look for the signal.” Overall, he says, the researchers still have their work cut out for them. This effort “is not a dead end,” says MacCormack, “but it’s the first step on a long road.”

  • This Engineer Became a Star in Technology Publishing
    by Glenn Zorpette on 23. October 2024. at 14:00



    Donald Christiansen, who transformed IEEE Spectrum from a promising but erratic technology magazine into a repeat National Magazine Award winner, died on 2 October 2024, at the age of 97, in Huntington, N.Y.

    After growing up in Plainfield, N.J., Don joined the U.S. Navy during World War II as an 18-year-old. He served aboard the aircraft carrier San Jacinto, an experience that led many years later to a book, The Saga of the San Jac. After the war, in 1950, he received a bachelor’s degree in electrical engineering at Cornell University. From 1950 to 1962 he worked for CBS’s Electronics division, an arm of the broadcasting network headquartered in Danvers, Mass. It manufactured vacuum tubes for radios and televisions, and later, semiconductors.

    But Don wasn’t a typical engineer. He had a burning desire to write and had a knack for crafting deft, engaging stories. By 1959 he was a regular contributor to Electronics World, a popular newsstand magazine published by Hugo Gernsback.

    It was a modest start to what would be a rapid rise in publishing. A couple of years later, at age 35, he became a full-time editor at Electronic Design. In 1966, he moved to McGraw-Hill’s Electronics magazine, the kingpin publication of a thriving subsegment of the business press. And a few years after that, he was editor in chief. In those days, an issue of Electronics might have as many as 250 pages. The magazine had an editorial staff of about 50 people, with bureaus in Bonn, London, and Tokyo.

    IEEE Spectrum, meanwhile, was a fledgling magazine. Following the IEEE’s formation in 1963, Spectrum made its debut in January 1964. Those early issues of Spectrum were a funky hybrid of house organ and magazine. There was a “News of the IEEE” column, often illustrated with posed pictures of conference organizers holding printed programs and smiling resolutely. A “People” segment noted career milestones of IEEE members, illustrated with yet more resolute smiles.

    In April 1993, IEEE Spectrum won a National Magazine Award for its reporting on Iraq’s effort to build an atomic bomb. The staffers who worked on the report were John A. Adam [center] and Glenn Zorpette [right]. Editor in Chief Donald Christiansen is at left. IEEE Spectrum

    The features were a varied mix, generally illustrated with graphs, charts, and tables. Some articles were only marginally more readable than technical papers, while others were sprawling, thinky pieces of actual or imagined social relevance. For example, the second issue of Spectrum featured an article titled “Graduate Education: A Basic National Resource.” During this era, mathematical equations sporadically swarmed into the feature well like ants at a picnic.

    After about seven years of this, Donald G. Fink, the IEEE’s executive director (then called a “general manager”), decided it was time for Spectrum to have a full-time professional editor in chief. By then, Fink had grown weary of fielding the question “Who’s really running this magazine?” Fink also knew who he wanted for the role: Christiansen. Like Christiansen, Fink had been the top editor at Electronics.

    Fink asked Christiansen to write a proposal describing what he would do with Spectrum if he were the editor. Christiansen’s plan was straightforward: Ban mathematical equations; publish shorter, tightly edited feature articles; and include more staff-written features. And he insisted on being not just the editor but also the publisher of Spectrum. Fink submitted Christiansen’s proposal to the IEEE board of directors, which agreed to all the conditions.

    As editor in chief, Don showed an enduring interest in ethical conflicts experienced by engineers. In 2014, he told me how this preoccupation began. In the late 1950s, CBS was competing with RCA for a big contract from Motorola to produce tubes, including cathode-ray tubes, for color televisions. A group of Motorola executives wanted to visit CBS’s production facilities to see the CRTs being produced. The problem was that at the time, CBS had only six working CRTs and was experiencing problems with the manufacturing line. So they basically orchestrated a phony demonstration, making it appear as though the line was completing the CRTs in real time, before the visitors’ eyes.

    The ruse worked. CBS landed the contract and soon fixed the problems with the manufacturing line. But Don never forgot that experience.

    Spectrum’s November 1979 issue, containing a special report on the accident at the Three Mile Island nuclear plant, won a National Magazine Award. IEEE Spectrum

    Don hired me to be a Spectrum staff editor in 1984. In those days, the IEEE occupied a couple of floors in a building on the northwest corner of 47th Street and First Ave., in the tony Turtle Bay area of Manhattan. Don’s office, on the 11th floor, was in the southeast corner of the building, overlooking the United Nations rose garden and, beyond that, the East River and Queens. The immense office, flooded with natural light, was like a museum, decked out with various certificates, diplomas, recognitions, and awards Don had won in connection with Spectrum or one of his other ventures, including McGraw-Hill’s Standard Handbook of Electronic Engineering, a cash cow for many years.

    Don, it might be said, was not a gregarious man. During a typical day, he mostly kept to his office, his solitude gently but firmly protected by his assistant, the late Nancy Hantman. Still, he occasionally surprised us staffers. One day, out of the blue, he announced a photography contest and then submitted some entries of his own. These included a couple of very slickly lit portraits of fashion models wearing leotards.

    Don’s rigorously top-down managerial style was very much a product of his time. He had an eye for talent, and he believed in giving people plenty of room to maneuver. It led to many great stories—and journalism awards. In 1979, Spectrum explained to the world exactly what caused the partial meltdown in a reactor core at the Three Mile Island nuclear plant in Pennsylvania. In 1982, just after the war in the Falklands, the magazine made a wide-ranging assessment of rapidly advancing military technologies. In 1985, we unraveled the chain of events that led inexorably to the breakup of AT&T and correctly predicted what it would mean for the future of communications. And in 1992, we detailed how Iraq tried to build an atomic bomb, and how the discovery of that clandestine effort led to new ideas about safeguarding nuclear weaponry. All four of those investigations won National Magazine Awards, putting Spectrum among the very few—count ’em on one hand—association magazines ever to win the awards repeatedly.

    For many years after retiring from the IEEE, Don wrote a popular column called “Backscatter” for Today’s Engineer, a publication of IEEE-USA, the IEEE’s advocacy group for U.S. engineers. He wrote about pretty much whatever he wanted, but many columns drew on his firsthand exposure to some of the great events and people during an amazing time in technology. He never lost his passion for professional concerns: For several years he organized a seminar on engineering ethics for the Long Island, N.Y., IEEE section, of which he was an active member.

    Don straddled the worlds of engineering and publishing in a way that few others ever did, before or after him. In doing so, he left an indelible mark on IEEE Spectrum, which still bears traces of his editorship. He also showed many of us how expansive an engineering magazine could be.

  • Hardware Startups Navigate Global Trends with Hax’s Help
    by Matthew S. Smith on 23. October 2024. at 13:00



    Duncan Turner is the managing director at Hax, a startup accelerator that specializes in “hard tech”—innovations in physical science and engineering. Hax offers up to US $500,000 in funding alongside resources that include chemical, mechanical, and electronics labs, and access to a global team of engineers and scientists. Turner’s group has worked with more than 300 hard tech companies with the goal of accelerating their pace of innovation to match that of software companies.

    The pandemic, and the supply-chain issues that followed, were a hurdle for start-ups. Do these issues continue to challenge inventors?

    Duncan Turner: [Pre-pandemic] investors were realizing that with climate issues, you need to start investing in the hardware that makes a difference. That interest and capital were met by supply-chain challenges. It was felt by our later-stage companies in the consumer sector, who found it hard to get parts. The good news is the supply-chain challenges have died down. We’ve seen an incredible uptick in interest and investors in hard tech who had previously gone into software.

    Why does Hax have a presence in India and China?

    Turner: There are areas with national incentives to do things within borders, but in general you need a global supply chain. [In Shenzhen, China] we had a presence, then pulled it back and changed it. We had moved towards deeper tech, the size of which had grown beyond even what could fit in a [shipping] container, so we asked, What is the point of coming over to China to do this? But we realized for electrical engineering and for manufacturing of PCBs on a quick turnaround, there’s just no other option. And when companies like Apple put manufacturing in India, you get an ecosystem of suppliers. We wanted two equal supply chains to source from.

    Have geopolitical trade tensions changed how you can support innovators?

    Turner: A lot of the [U.S.] Inflation Reduction Act is centered around technologies we’re investing in, but there’s a theme of fully “made in America.” We’re not there yet. I think it’s going to take a decade, but we want to be a part of that. That doesn’t mean we’re abandoning a global approach. But when we see a company doing something that was done offshore, onshore in the United States, and it’s helping with the environment, we want to dig in.

    Artificial intelligence is a massive trend. How are you helping inventors navigate it?

    Turner: AI is focusing investment into areas investors had been hesitant about. Between a third and a half of our portfolio is in robotics. Investors understood the opportunity of robotics but were stuck on the machine learning aspects. Now they’re seeing the potential. We’re also looking at what we can do with materials in the energy sector, and to decarbonize manufacturing. You’ll see AI used to discover materials that meet these goals.

    Going into 2025, what are the big themes innovators need to think about?

    Turner: Corporations are responsible for positive changes in how their products impact [greenhouse gas] emissions. The commitment will vary, but it won’t disappear. Another theme is infrastructure and reindustrialization. I think there’s so much opportunity for innovators to come with a fresh approach and say, “Look, we can disrupt this one area.” Any way you can bring manufacturing onshore and make it sustainable is a wonderful place to be.

  • Why Simone Giertz, the Queen of Useless Robots, Got Serious
    by Stephen Cass on 22. October 2024. at 14:00



    Simone Giertz came to fame in the 2010s by becoming the self-proclaimed “queen of shitty robots.” On YouTube she demonstrated a hilarious series of self-built mechanized devices that worked perfectly for ridiculous applications, such as a headboard-mounted alarm clock with a rubber hand to slap the user awake.

    But Giertz has parlayed her Internet renown into Yetch, a design company that makes commercial consumer products. (The company name comes from how Giertz’s Swedish name is properly pronounced.) Her first release, a daily habit-tracking calendar, was picked up by prestigious outlets such as the Museum of Modern Art design store in New York City. She has continued to make commercial products since, as well as one-off strange inventions for her online audience.

    Where did the motivation for your useless robots come from?

    Simone Giertz: I just thought that robots that failed were really funny. It was also a way for me to get out of creating from a place of performance anxiety and perfection. Because if you set out to do something that fails, that gives you a lot of creative freedom.


    You built up a big online following. A lot of people would be happy with that level of success. But you moved into inventing commercial products. Why?

    Giertz: I like torturing myself, I guess! I’d been creating things for YouTube and for social media for a long time. I wanted to try something new and also find longevity in my career. I’m not super motivated to constantly try to get people to give me attention. That doesn’t feel like a very good value to strive for. So I was like, “Okay, what do I want to do for the rest of my career?” And developing products is something that I’ve always been really, really interested in. And yeah, it is tough, but I’m so happy to be doing it. I’m enjoying it thoroughly, as much as there’s a lot of face-palm moments.

    Giertz’s every day goal calendar was picked up by the Museum of Modern Art’s design store. Yetch

    What role does failure play in your invention process?

    Giertz: I think it’s inevitable. Before, obviously, I wanted something that failed in the most unexpected or fun way possible. And now when I’m developing products, it’s still a part of it. You make so many different versions of something and each one fails because of something. But then, hopefully, what happens is that you get smaller and smaller failures. Product development feels like you’re going in circles, but you’re actually going in a spiral because the circles are taking you somewhere.

    What advice do you have for aspiring inventors?

    Giertz: Make things that you want. A lot of people make things that they think that other people want, but the main target audience, at least for myself, is me. I trust that if I find something interesting, there are probably other people who do too. And then just find good people to work with and collaborate with. There is no such thing as the lonely genius, I think. I’ve worked with a lot of different people and some people made me really nervous and anxious. And some people, it just went easy and we had a great time. You’re just like, “Oh, what if we do this? What if we do this?” Find those people.

    This article appears in the November 2024 print issue as “The Queen of Useless Robots.”

  • This Startup Shows Why the U.S. CHIPS Act Is Needed
    by Samuel K. Moore on 21. October 2024. at 13:00



    There’s a certain sameness to spaces meant for tech startups: flexible cubicle arrangements, glass-encased executive offices, whiteboard walls awaiting equations and ideas, basement laboratories for the noisier and more dangerous parts of the process. In some ways the home of Ideal Semiconductor on the campus of Lehigh University, in Bethlehem, Penn., is just like that. The most noticeable difference is a life-size statue of 18th-century inventor and electricity enthusiast Benjamin Franklin seated on the bench outside.

    Ideal cofounder and CEO Mark Granahan admits to having had a quiet moment or two with ole Benny Kite-and-Key, but it takes a lot more than inspiration from a founder of your home country to turn a clever idea into a valuable semiconductor company. Navigating from lightbulb moment to laboratory demo and finally to manufactured reality has always been the defining struggle of hardware startups. But Ideal’s journey is particularly illustrative of the state of invention in the U.S. semiconductor industry today and, in particular, how the CHIPS and Science Act, a law the startup’s founders personally and exhaustively advocated for, might change things for the better.

    This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

    That law, passed in 2022, is best known for pumping tens of billions of dollars into the construction of new leading-edge CMOS fabs in the United States, a country that had exactly zero such facilities at the time. But there’s another side to the effort, one that’s intended to speed the critical lab-to-fab process for new technologies and lead to more and better semiconductor-based inventions that can be manufactured (mostly) in the United States.

    And it’s this side that Ideal’s founders think will make the biggest difference for semiconductor startups. How big? While the CHIPS Act comes for the most part too late for Ideal’s first product, its executives think that if the law had been around and implemented, the company’s seven-year journey to a marketed product would have been done in half the time and maybe 60 percent of the cost. If it could do that for one startup, imagine the effect on the industrial and innovation ecosystem of a hundred such accelerated startups. Or a thousand.

    “If you’ve got time and money, it solves a lot of things,” says Granahan. “But as a startup, time and money—those are the two things you don’t have enough of, ever.” The hope is that the CHIPS Act and similar efforts in Europe and elsewhere can save startups a bit of both.

    Ideal’s Big Idea

    To understand Ideal’s path and how the CHIPS Act could have changed it, you first need to know what invention Ideal was built around. It’s not some new kind of AI processor, exotic memory device, or cryogenic quantum interface chip. In fact, it’s just about as humble-seeming as it gets in the semiconductor space—a discrete silicon metal-oxide-semiconductor field-effect transistor designed for power-delivery circuits.

    Similar devices are employed everywhere you look to convert one voltage to another. The dimmer switch on your wall has at least one; cars have hundreds; a humanoid robot probably needs more than 60 to drive the motors in its joints; you’re almost certainly within 10 meters of one right now. Such discrete devices made up a US $34 billion market in 2022 that’s predicted to grow to $50 billion by 2030, according to the Semiconductor Industry Association 2023 Factbook.

    The ideal power transistor blocks high voltages when it’s off, conducts current with no resistance when it’s on, and switches between states rapidly with no loss of power. No device is truly ideal, but Granahan and the company’s other cofounders, David Jauregui and Michael Burns, thought they could get a lot closer to it than today’s market-leading silicon devices could.

    To see how, you have to start with the transistor architecture that is now a generation behind the leading silicon performers. Called the HEXFET and first developed at International Rectifier, it changed the game by turning the transistor from a device built primarily in the plane of the silicon into one with a vertical structure.

    That structure evolved to become a layer cake that gets more complex as you move from the bottom to the top. Starting at the bottom is a region of silicon that has been chemically doped to contain a high concentration of excess mobile electrons, making it n-type silicon. This is the device’s drain. Above that is a thicker region with a lower concentration of excess electrons. And atop this is the more complex layer. Here the device’s source, a region of n-type silicon, is vertically separated from the rest of the device by the channel, a region of silicon with an excess of mobile positive charge (holes), making it p-type. Embedded at the center of the channel is the transistor’s gate, which is electrically separated from everything else by a narrow layer of insulation.

    Positive voltage at the gate shoves the positive charge in the p-type silicon aside, creating a conductive path from the source to the drain, switching the device on. Real HEXFETs are made up of many such vertical devices in parallel.

    HEXFET was a great leap forward, but higher voltages are its Achilles’ heel. If you design it to block more voltage—by making the middle layer thicker, say—the resistance of the device when it’s supposed to be conducting current shoots up, increasing faster than the square of the voltage you’re trying to block. Higher-voltage operation is important, because it leads to less loss in transmission, even across fairly short distances such as those inside electric cars and computers.
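
    (As a rough point of reference, and a textbook figure rather than a number from Ideal: the ideal drift-region resistance of a conventional vertical silicon device is usually quoted as scaling roughly with the blocking voltage raised to the 2.5 power, which is one concrete way to read “faster than the square.”)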

    The solution, and the leading architecture for silicon power transistors today, is called RESURF Superjunction. It allows the blocking of higher voltages in a less resistive structure by replacing part of the middle n-type layer with p-type material. The result is a structure with a balance of charge, which blocks high voltages. But this solution effectively cuts the device’s conductive area in half, meaning it’s difficult to improve performance by reducing resistance.

    Ideal’s big idea is a way to have your silicon layer cake and eat it too. Called SuperQ, it restores the HEXFET’s conductive area while keeping the RESURF’s ability to block high voltages. Instead of blocking voltage by devoting a large volume of p-type silicon to balancing the device’s internal charges, SuperQ gets the same effect using a nanometers-thin proprietary film formed within narrow, deep trenches. Thus, the transistor regains its wide, low-resistance structure while still handling high voltage.

    But this win-win needed some chipmaking techniques not found in the world of silicon power devices—namely, the ability to etch a deep, narrow (high-aspect ratio) trench and the tools to lay down material one atomic layer at a time. Both are common in advanced CMOS and memory-chip fabrication, but getting hold of them in a manufacturing environment for discrete devices was a major roadblock for Ideal.

    An Idea and Its Environment

    In 2014, Granahan had recently retired after selling his previous startup, Ciclon, to Texas Instruments. “I took some time off to basically relax and think,” he says. For Granahan, relaxing and thinking involved reading IEEE publications and other technical journals.

    And there, he saw the glimmerings of a way past the limitations of the silicon power MOSFET. In particular, he noted experimental work attempting to execute a charge balancing act in photovoltaic cells. It relied on two things. The first were high-k dielectrics—alumina, hafnia, and other insulators that are good at holding back charge while at the same time transmitting the charge’s electric field. These had come into use barely five years earlier in Intel CPUs. The second was a method of building nanometers-thin films of these insulators. This technique is called atomic layer deposition, or ALD.

    Purchasing time at Pennsylvania State University’s Nanofabrication Laboratory, Granahan got to work trying out different combinations of dielectrics and processing recipes, finally proving that the SuperQ concept could work but that it would need some advanced processing equipment to get there.

    The fruit of Ideal Semiconductor’s labor is a power transistor based on its SuperQ technology. Jayme Thornton

    “There wasn’t this aha moment,” he says of the initial part of the invention process. “But there was this learning process that I had to go through to get us to the starting point.”

    That starting point might have been an ending point, as it is for so many potentially transformative ideas. The big early hurdle was the usual one: money.

    U.S. venture capital was generally not interested in semiconductor startups at the time, according to Granahan and one of those venture capitalists, Celesta Capital’s Nic Brathwaite. Brathwaite had spent decades in semiconductor-technology development and chip packaging before cofounding his first fund in 2008 and then Celesta in 2013. At the time “nobody was a VC in semiconductors,” he says.

    Nevertheless, there was a ready source of cash out there, says Granahan—China-based or Chinese-backed funds. But Granahan and his partners were reluctant to accept funding from China, for a couple of reasons. It usually came with strings attached, such as requiring that devices be manufactured in the country and that intellectual property be transferred there. Also, Granahan and his colleagues had been burned before. His previous startup’s secrets had somehow escaped the fab they were using in Singapore and turned up in competing devices in China.

    “We lost our IP in very short order,” he says. So they were determined not just to avoid Chinese funding but to develop and ultimately manufacture the devices domestically.

    “We needed a partner to go off and develop the device architecture and the process technology that went with that,” he explains. What Ideal’s founders were looking for was a U.S.-based foundry that had specialized equipment and a willingness to help them develop a new process using it. Unfortunately, in 2017, such a creature did not exist.

    Determined to find a domestic partner, Ideal’s executives decided to settle on a “suboptimal solution.” They found a small manufacturer in California (which the executives decline to name) that was not up to snuff in terms of its capabilities and the pace at which it could help Ideal develop SuperQ devices. Ideal even had to invest in equipment for this company, so it could do the job.

    The NSTC Opens to Members


    The National Semiconductor Technology Center (NSTC) is the part of the CHIPS Act meant to give the United States a durable position in the semiconductor industry by providing access to the tools and training to develop new generations of chips. A public-private partnership, the entity will run as a membership organization. Natcast, its nonprofit operator, recently opened up membership in the NSTC with the aim of ultimately giving everyone from materials startups to gigantic cloud-computing providers access.

    Natcast wants “to make sure that membership is accessible to all,” says Susan Feindt, senior vice president of ecosystem development at Natcast. “We need broad representation from all the stakeholders in the semiconductor ecosystem.” Membership is on a sliding scale according to the size and nature of the organization involved—as little as US $1,600 for the smallest startup to $660,000 for an Nvidia-scale entity.

    But such a diversity of members means not everyone will want the same things out of the NSTC. Feindt anticipates that startups will likely take advantage of some of the NSTC’s earliest offerings. One is access to advanced electronic design automation (EDA) tools, through what the NSTC is calling a design enablement gateway. Another is the arrangement of multiproject wafers, which are opportunities to aggregate chips from a number of organizations to fill a wafer in a fab, cutting down on development costs.

    Eventually, Natcast will be running a design center, an advanced facility equipped with extreme-ultraviolet lithography, and a pilot line for new semiconductor and packaging tech. And NSTC’s members will be directing the organization’s R&D priorities.

    “The investment we’re making in manufacturing is going to clearly change the trajectory” of the U.S. semiconductor industry, says Feindt. “But the investment in R&D should ensure that it’s enduring.”

    The experience of getting to that point revealed some things about the U.S. semiconductor industry that Ideal’s founders found quite alarming. The most critical of them was the extreme concentration of chip manufacturing in Asia in general and Taiwan in particular. In 2018, most of the biggest names in advanced semiconductors were so-called fabless companies headquartered in the United States. That is, they designed chips and then hired a foundry, such as Taiwan Semiconductor Manufacturing Co. (TSMC) or Samsung, to make them. Then typically a third company tested and packaged the chips, also in Asia, and shipped them back to the designer.

    All this is still true. It’s standard operating procedure for U.S.-based tech titans like AMD, Apple, Google, Nvidia, Qualcomm, and many others.

    By 2018, the ability to manufacture cutting-edge logic in the United States had atrophied and was nearing death. Intel, which at the time made its own chips and is only now becoming a proper foundry, stumbled badly in its development of new process technology, falling behind TSMC for the first time. And Malta, N.Y.–based GlobalFoundries, the third-largest foundry, abruptly abandoned its development of advanced-process technologies, because continuing on would have sent the company into a financial doom loop.

    The situation was so skewed that 100 percent of advanced logic manufacturing was being done in Asia at the time, and by itself, TSMC did 92 percent of that. (Things weren’t that much different for less advanced chips—77 percent were made in Asia, with China making up 30 percent of that.)

    “Asia had a pocket veto on semiconductor development in the United States,” Granahan concluded. “The U.S. had lost its startup semiconductor ecosystem.”

    Mr. Burns Goes to Washington

    Concerned and frustrated, Granahan, with cofounder and executive chairman Mike Burns, did something positive: They took their experiences to the government. “Mike and myself, but Mike in particular, spent a lot of time in D.C. talking to people in the House and Senate—staff, [Republicans, Democrats], anyone who would listen to us,” he relates. Burns reckons they had as many as 75 meetings. The response, he says, was generally “a lot of disbelief.” Many of the political powers they spoke to simply didn’t believe that the United States had fallen so far behind in semiconductor production.

    But there were certain sectors of the U.S. government that were already concerned, seeing semiconductors as an issue of national security. Taiwan and South Korea are, after all, geographically cheek by jowl with the United States’ rival China. So by late 2019, the seeds of a future CHIPS Act that would seek to onshore advanced semiconductor manufacturing and more were beginning to germinate in D.C. And although there was some bipartisan support in both houses of Congress, it wasn’t a priority.

    Then came COVID-19.

    Supply-Chain Focus

    Remember the crash course in supply-chain logistics that came with the terrifying global pandemic in 2020? For many of the things consumers wanted but couldn’t get in that first year of contagion-fueled confusion, the reason for the unavailability was, either directly or indirectly, a shortage of semiconductors.

    “When COVID hit, all of a sudden…the phone started ringing off the hook,” says Granahan. “The CHIPS bill predates the pandemic, but the pandemic really exposed why we need this bill,” says Greg Yeric, formerly CTO of a semiconductor startup, and now director of research at the U.S. Commerce Department office that executes the CHIPS Act.

    Momentum started to swing behind a legislative fix, and in early January 2021 Congress overrode a presidential veto to pass a defense bill that included the framework of what would become the CHIPS and Science Act. The later bill, signed into law in August 2022, promises $52 billion for the project—$39 billion to fund new manufacturing, $2 billion for semiconductors for the defense sector, and $11 billion for R&D. The R&D allocation includes funding for a concept Burns and his colleagues had been pushing for, called the National Semiconductor Technology Center (NSTC).

    From a startup’s point of view, the purpose of the NSTC is to bridge the lab-to-fab doldrums that Ideal found itself stuck in for so many years by providing a place to test and pilot new technology. In the strategy paper laying out the plan for the NSTC, the government says it is meant to “expand access to design and manufacturing resources” and “reduce the time and cost of bringing technologies to market.”

    Orion Kress-Sanfilippo, an applications engineer at Ideal Semiconductor, tests the performance of a SuperQ device in a power supply. Jayme Thornton

    Some of the details of how the NSTC is going to do that have begun to emerge. The center will be operated by a public-private partnership called Natcast, and Synopsys’ former chief security officer, Deirdre Hanford, was recently chosen as its CEO. And in July, the government settled on the formation of three main NSTC facilities—a prototyping and advanced-packaging pilot plant, an administrative and design site, and a center built around extreme-ultraviolet lithography. (EUV lithography is the $100-million-plus linchpin technology for cutting-edge CMOS development.) The administration intends for the NSTC design facility to be operational next year, followed by the EUV center in 2026 and the prototyping and packaging facility in 2028.

    “If we would have had access to this NSTC-type function, then I think that that would have fulfilled that gap area,” says Granahan.

    Manufacturing the Future

    Today, after seven years, Ideal is nearing commercial release of its first SuperQ device. The startup has also found a manufacturer, Bloomington, Minn.–based Polar Semiconductor. In late September, Polar became the first company to be awarded funds from the CHIPS Act—$123 million to help expand and modernize its fab with the aim of doubling U.S. production and turning itself into a foundry.

    The NSTC’s prototyping facility might come too late for Ideal, but it might be just in time for a fresh crop of hardware startups. And R&D pushed by Yeric’s branch of the CHIPS office is intended to help chip startups in the next generation after that to move even faster.

    But just as important, the CHIPS Act is scaling up the domestic manufacturing environment in ways that can also help startups. About $36 billion is in some stage of commitment to some 27 manufacturing and technology development projects around the country as of late September. “If your design is limited by what a fab can do, then it limits, to some extent, some of your innovation capabilities,” says Celesta Capital’s Brathwaite. “The hope is that if you have U.S.-based foundry services you’ll get better support for U.S.-based startups.”

    This article appears in the November 2024 print issue as “Will the U.S. CHIPS Act Speed the Lab-to-Fab Transition?”

  • Build a Sci-Fi Aerial Display
    by Markus Mierse on 20. October 2024. at 13:00



    On a star base far, far away, a dashing hero presses a button on a control panel and a schematic appears in midair. Deftly touching her fingers to the ethereal display, the hero shuts down an energy shield and moves on with her secret mission. If you’ve watched any science fiction, you’re probably familiar with this kind of scenario. But what you may not know is that while star bases and energy shields are still beyond us, floating displays are not.

    By this I mean displays that produce two-dimensional images that truly float in empty air and can be interacted with, not displays based on the Pepper’s ghost illusion, where an image is projected onto a transparent surface that has to be kept away from prying fingers. The optical principles to make floating images are well understood, and since the pandemic stoked interest in touch-free controls of all kinds, a number of companies such as Toppan and Kyocera have attempted to commercialize such aerial displays. However, rollouts have been slow, and the intended applications—elevator controls and the like—are not exactly cool.

    I decided to build my own aerial display, one that would honor the sci-fi awesomeness of the concept.

    I’m no stranger to building offbeat displays. In 2022, I presented my color electromechanical display in IEEE Spectrum’s Hands On; it harked back to the very first days of television. This time, as I was going for something almost from the future, I decided to style my system after the kind of props seen in Star Wars movies. But first, I needed to get the optics working.

    Major components of the aerial display. The heart of the aerial display is a bright flat screen [top] powered by a single-board Intel-based computer [bottom left]. Detecting fingertips is the job of an Arduino Nano and three distance sensors [bottom right]. James Provost

    How Do Aerial Displays Work?

    A little optical refresher: Normally, rays from a light source, such as a display, spread out from the source as distance increases. If these diverging rays are, say, reflected by a mirror, the eye perceives the display as being located behind the mirror. This is known as a virtual image. But if you can get the light rays that are emanating from the display to converge at some point in space before spreading out again, the eye perceives the display as if it were located at the point of convergence, even if it’s in midair. This is known as a real image.

    The key to making this convergence happen in midair is to use a retroreflective material. Normal reflectors follow the familiar rule that the angle of incidence equals the angle of reflection—that is, a light ray coming into a mirror at a shallow angle from the left will bounce off at the same shallow angle and continue traveling toward the right. But a retroreflector bounces incident light straight back on itself. So, if you mounted a retroreflector directly in front of a screen, all the diverging rays would be reflected back along their own paths, creating a real image as they converge at the surface of the screen. Obviously, this is completely pointless in itself, so we need to introduce another optical element—a semireflector, or beam splitter.

    This material reflects about half the incident light falling on it and lets the other half pass straight through. And here’s the clever bit: The screen and retroreflector are mounted at 90 degrees to each other, and the semireflector is placed opposite that right angle, putting it at 45 degrees to both the screen and the retroreflector. Now let’s follow the light: The diverging rays emitted from the screen hit the beam splitter, and half are reflected toward the retroreflector, which bounces them back toward the beam splitter. The semireflector allows half of those now-converging rays to pass through. As they finally converge in the air above the display, the rays form a real image.
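
    A quick back-of-the-envelope check, assuming an ideal 50/50 splitter and a lossless retroreflector: half the screen’s light survives the first encounter with the beam splitter, and half of what the retroreflector sends back survives the second pass, so at most 0.5 × 0.5 = 25 percent of the original light ends up in the floating image.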

    Clearly, this optical legerdemain is inefficient, with most of the original light being lost to the system. But it wasn’t hard to find a small, modern flat-screen panel bright enough to produce a passable aerial image, at least under indoor (or star-base) lighting conditions. To drive this 7-inch display, I used a LattePanda 3, which is an Intel-based single-board computer capable of running Windows or Linux and supporting multiple displays. (A full bill of materials is available on my project page on hackster.io).

    The display creates an image in midair by bouncing the diverging rays from a bright screen off a beam splitter, which reflects half the rays toward a retroreflector. Unlike a mirror, which would make the rays diverge even further, the retroreflector sends converging rays back toward the beam splitter, which lets half of them through to form a real, if dim, floating image. James Provost

    Finding the Right Retroreflector

    My biggest obstacle was finding a suitable retroreflector material. I eventually settled on a foil that I could cut to the dimensions I desired, that produced a sharp image, and that wasn’t too expensive. This was Oralite 3010 prismatic photoelectric sheeting, and I was able to buy a 77-centimeter-by-1-meter roll (the shortest available) for about US $90.

    The next step was to make the display interactive. After some experimentation, I settled on a $5 laser-based, time-of-flight sensor that reports distance measurements along a narrow cone. I mounted three of these sensors to cover three columns in the plane of the aerial display and connected them to an Arduino Nano via I2C. When a user’s fingertip enters a sensor’s detection cone, the Nano looks to see if the fingertip’s distance from the sensor falls into one of three predefined ranges. With three sensors and three segments per sensor, the aerial display has nine areas that can react to fingers. The area being activated is reported back to the LattePanda via USB.
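
    To make the mapping concrete, here is a minimal Python sketch of the logic the Nano performs (the real firmware is Arduino C++ and isn’t shown here; the distance bands below are placeholder values, not the ones from my build):

        # Map (sensor index, distance) readings to one of nine touch zones.
        # The distance bands are placeholders, not the values from the actual build.
        RANGES_MM = [(30, 80), (80, 130), (130, 180)]  # three bands per sensor

        def zone_for(sensor_index, distance_mm):
            """Return a zone ID from 0 to 8, or None if no band matches."""
            for band, (near, far) in enumerate(RANGES_MM):
                if near <= distance_mm < far:
                    return sensor_index * len(RANGES_MM) + band
            return None

        # A fingertip 95 mm from sensor 1 lands in that column's middle band.
        print(zone_for(1, 95))  # -> zone 4

    The Nano then only has to send the resulting zone number over USB, which is all the LattePanda needs in order to react.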

    The optical components and computer were all mounted in a 33 x 25 x 24-centimeter frame made out of aluminum extrusion bars. I also mounted a small touchscreen on the front that lets me control what the LattePanda shows on the aerial display. I added side panels to the frame and attached metallized 3D-printed strips and other adornments that made it look like something that wouldn’t be out of place on the set of a sci-fi show.

    The result works beautifully and is as futuristic as I’d hoped, yet also demonstrates that this tech is within reach of most makers today—no hyperdrives required!

  • U.S. Engineers’ Salaries Up in 2023
    by Kathy Pretz on 18. October 2024. at 18:00



    There’s good earnings news for U.S. members: Salaries are rising. Base salaries increased by about 5 percent from 2022 to 2023, according to the IEEE-USA 2024 Salary and Benefits Survey Report.

    Last year’s report showed that inflation had outpaced earnings growth, but that’s not the case this year.

    In current dollars, the median income of U.S. engineers and other tech professionals who are IEEE members was US $174,161 last year, up about 5 percent from $169,000 in 2022, excluding overtime pay, profit sharing, and other supplemental earnings. Unemployment fell to 1.2 percent in this year’s survey, down from 1.4 percent in the previous year.

    As with prior surveys, earned income is measured for the year preceding the survey’s date of record—so the 2024 survey reports income earned in 2023.

    To calculate the median salary, IEEE-USA considered only respondents who were tech professionals working full time in their primary area of competence—a sample of 4,192 people.

    Circuits and device engineers earn the most

    Those specializing in circuits and devices earned the highest median income, $196,614, followed by those working in communications ($190,000) and computers/software technology ($181,000).

    Specific lucrative subspecialties include broadcast technology ($226,000), image/video ($219,015), and hardware design or hardware support ($215,000).

    Engineers in the energy and power engineering field earned the lowest median income: $155,000.

    Higher education affects how well one is paid. On average, those with a Ph.D. earned the highest median income: $193,636. Members with a master’s degree in electrical engineering or computer engineering reported a salary of $182,500. Those with a bachelor’s degree in electrical engineering or computer engineering earned a median income of $159,000.

    Earning potential also depends on geography within the United States. Respondents in IEEE Region 6 (Western U.S.) fared substantially better than those in Region 4 (Central U.S.), earning nearly $48,500 more on average. However, the report notes, the cost of living in the western part of the country is significantly higher than elsewhere.

    The top earners live in California, Maryland, and Oregon, while those earning the least live in Arkansas, Nebraska, and South Carolina.

    Academics are among the lowest earners

    Full professors earned an average salary of $190,000, associate professors earned $118,000, and assistant professors earned $104,500.

    Almost 38 percent of the academics surveyed are full professors, 16.6 percent are associate professors, and 11.6 percent are assistant professors. About 10 percent of respondents hold a nonteaching research appointment. Nearly half (46.8 percent) are tenured, and 10.7 percent are on a tenure track.

    Gender and ethnic gaps widen

    The gap between women’s and men’s salaries increased. Even after accounting for experience levels, women earned $30,515 less than their male counterparts.

    The median primary income is highest among Asian/Pacific Islander technical professionals, at $178,500, followed by White engineers ($176,500), Hispanic engineers ($152,178), African-American engineers ($150,000), and Native American/Alaskan Native engineers ($148,000). The gap between the median salary of African-American engineers and the overall average is $3,500 wider than in last year’s report.

    Asians and Pacific Islanders are the largest minority group, at 14.4 percent. Only 5 percent of members are Hispanic, 2.6 percent are African Americans, and American Indians/Alaskan Natives account for 0.9 percent of the respondents.

    More job satisfaction

    According to the report, overall job satisfaction is higher than at any time in the past 10 years. Members reported that their work was technically challenging and meaningful to their company. On the whole, they weren’t satisfied with advancement opportunities or their current compensation, however.

    The 60-page report is available for purchase at the member price of US $125. Nonmembers pay $225.

  • Video Friday: Mobile Robot Upgrades
    by Evan Ackerman on 18. October 2024. at 15:30



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    ROSCon 2024: 21–23 October 2024, ODENSE, DENMARK
    ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
    Cybathlon 2024: 25–27 October 2024, ZURICH
    Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

    Enjoy today’s videos!

    One of the most venerable (and recognizable) mobile robots ever made, the Husky, has just gotten a major upgrade.

    Shipping early next year.

    [ Clearpath Robotics ]

    MAB Robotics is developing legged robots for the inspection and maintenance of industrial infrastructure. One of the initial areas for deploying this technology is underground infrastructure, such as water and sewer canals. In these environments, resistance to factors like high humidity and working underwater is essential. To address these challenges, the MAB team has built a walking robot capable of operating fully submerged, based on exceptional self-developed robotics actuators. This innovation overcomes the limitations of current technologies, offering MAB’s first clients a unique service for trenchless inspection and maintenance tasks.

    [ MAB Robotics ]

    Thanks, Jakub!

    The G1 robot can perform a standing long jump of up to 1.4 meters, possibly the longest jump ever achieved by a humanoid robot of its size in the world, standing only 1.32 meters tall.

    [ Unitree Robotics ]

    Apparently, you can print out a functional four-fingered hand on an inkjet.

    [ UC Berkeley ]

    We present SDS (“See it. Do it. Sorted.”), a novel pipeline for intuitive quadrupedal skill learning from a single demonstration video leveraging the visual capabilities of GPT-4o. We validate our method on the Unitree Go1 robot, demonstrating its ability to execute variable skills such as trotting, bounding, pacing, and hopping, achieving high imitation fidelity and locomotion stability.

    [ Robot Perception Lab, University College London ]

    You had me at “3D desk octopus.”

    [ UIST 2024 ACM Symposium on User Interface Software and Technology ]

    Top-notch swag from Dusty Robotics.

    [ Dusty Robotics ]

    I’m not sure how serious this shoes-versus-no-shoes test is, but it’s an interesting result nonetheless.

    [ Robot Era ]

    Thanks, Ni Tao!

    Introducing TRON 1, the first multimodal biped robot! With its innovative “Three-in-One” modular design, TRON 1 can easily switch among Point-Foot, Sole, and Wheeled foot ends.

    [ LimX Dynamics ]

    Recent works in the robot-learning community have successfully introduced generalist models capable of controlling various robot embodiments across a wide range of tasks, such as navigation and locomotion. However, achieving agile control, which pushes the limits of robotic performance, still relies on specialist models that require extensive parameter tuning. To leverage generalist-model adaptability and flexibility while achieving specialist-level agility, we propose AnyCar, a transformer-based generalist dynamics model designed for agile control of various wheeled robots.

    [ AnyCar ]

    Discover the future of aerial manipulation with our untethered soft robotic platform with onboard perception stack! Presented at the 2024 Conference on Robot Learning, in Munich, this platform introduces autonomous aerial manipulation that works in both indoor and outdoor environments—without relying on costly off-board tracking systems.

    [ Paper ] via [ ETH Zurich Soft Robotics Laboratory ]

    Deploying perception modules for human-robot handovers is challenging because they require a high degree of reactivity, generalizability, and robustness to work reliably for diverse cases. Here, we show hardware handover experiments using our efficient and object-agnostic real-time tracking framework, specifically designed for human-to-robot handover tasks with legged manipulators.

    [ Paper ] via [ ETH Zurich Robotic Systems Lab ]

    Azi and Ameca are killing time, but Azi struggles with being the new kid around. Engineered Arts desktop robots feature 32 actuators, 27 for facial control alone and 5 for the neck. They include conversational AI, with GPT-4o support, which makes them great robotic companions, even to each other. The robots are following a script for this video, using one of their many voices.

    [ Engineered Arts ]

    Plato automates carrying and transporting, giving your staff more time to focus on what really matters, improving their quality of life. With a straightforward setup that requires no markers or additional hardware, Plato is incredibly intuitive to use—no programming skills needed.

    [ Aldebaran ]

    This UPenn GRASP Lab seminar is from Antonio Loquercio, on “Simulation: What made us intelligent will make our robots intelligent.”

    Simulation-to-reality transfer is an emerging approach that enables robots to develop skills in simulated environments before applying them in the real world. This method has catalyzed numerous advancements in robotic learning, from locomotion to agile flight. In this talk, I will explore simulation-to-reality transfer through the lens of evolutionary biology, drawing intriguing parallels with the function of the mammalian neocortex. By reframing this technique in the context of biological evolution, we can uncover novel research questions and explore how simulation-to-reality transfer can evolve from an empirically driven process to a scientific discipline.

    [ University of Pennsylvania ]

  • Peek into the Future of A&D with Ansys
    by Ansys on 17. October 2024. at 15:26



    Across industries, autonomous technology is driving innovation at a rapid pace. This is especially true in the aerospace and defense (A&D) industry, where autonomous technology can potentially be used for everything from conducting search and rescue missions in dangerous conditions via unmanned aerial vehicles (UAVs) to transporting passengers in busy urban areas with electric vertical takeoff and landing (eVTOL) vehicles.

    Interested in learning more about this exciting technology and how simulation software can help engineers and researchers gain a strategic advantage?

    In this e-book, you’ll learn:

    • How autonomous technology impacts the A&D industry both today and in the future
    • The core concepts behind today’s autonomous technology and future advancements
    • What challenges innovators are facing in this space
    • A quick look at the autonomous system development process
    • What the future of the autonomous technology market may look like

    Download the e-book to get an overview of autonomous technology in the A&D industry and discover how autonomous technology will rapidly push boundaries in the coming years.


  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 09. February 2022. at 15:31



    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
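
    This isn’t Landing AI’s actual tooling, but here is a minimal Python sketch of the idea: record each annotator’s label per image, flag the images where the annotators disagree, and group the disagreements by the labels in conflict so they can be relabeled in one targeted pass (the image IDs and labels here are made up):

        from collections import defaultdict

        # Hypothetical annotations: image ID -> labels from different annotators.
        annotations = {
            "img_001": ["scratch", "scratch", "scratch"],
            "img_002": ["pit_mark", "dent", "pit_mark"],   # annotators disagree
            "img_003": ["discoloration", "discoloration"],
        }

        def inconsistent_images(annotations):
            """Return the image IDs whose annotators did not all agree on one label."""
            return [img for img, labels in annotations.items() if len(set(labels)) > 1]

        # Group disagreements by the label pair involved, so a reviewer can settle
        # a whole confusing class boundary (say, pit_mark vs. dent) at once.
        by_conflict = defaultdict(list)
        for img in inconsistent_images(annotations):
            by_conflict[tuple(sorted(set(annotations[img])))].append(img)

        print(dict(by_conflict))  # {('dent', 'pit_mark'): ['img_002']}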

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
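
    Here is a minimal sketch of that kind of slice-based error analysis, with made-up tags and outcomes: group evaluation examples by a metadata tag (here, the type of background noise), compute each slice’s error rate, and let the worst slice tell you where collecting more data would pay off:

        from collections import defaultdict

        # Hypothetical evaluation records: (background-noise tag, prediction correct?).
        results = [
            ("quiet", True), ("quiet", True), ("quiet", False),
            ("car_noise", False), ("car_noise", False), ("car_noise", True),
            ("cafe", True), ("cafe", True),
        ]

        def error_rate_by_slice(results):
            """Compute the error rate for each metadata slice."""
            totals, errors = defaultdict(int), defaultdict(int)
            for tag, correct in results:
                totals[tag] += 1
                errors[tag] += 0 if correct else 1
            return {tag: errors[tag] / totals[tag] for tag in totals}

        rates = error_rate_by_slice(results)
        worst = max(rates, key=rates.get)
        print(rates, "-> collect more data for:", worst)  # worst slice: car_noise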

    What about using synthetic data? Is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 08. February 2022. at 14:00



    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Heather Gorr. MathWorks

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
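
    As a toy illustration of that workflow (this is not MathWorks’ tooling; the “expensive” model below is just a stand-in function), you can sample the costly simulation at a handful of design points, fit a cheap surrogate to those samples, and then run the big sweep on the surrogate:

        import numpy as np

        def expensive_simulation(x):
            """Stand-in for a slow physics-based model of some figure of merit."""
            return np.sin(3 * x) + 0.1 * x**2

        # 1) Run the costly model at only a handful of design points.
        x_train = np.linspace(0.0, 2.0, 8)
        y_train = expensive_simulation(x_train)

        # 2) Fit a cheap surrogate (here, a simple polynomial) to those samples.
        coeffs = np.polyfit(x_train, y_train, deg=5)

        def surrogate(x):
            return np.polyval(coeffs, x)

        # 3) Do the large parameter sweep on the surrogate instead of the real model.
        rng = np.random.default_rng(0)
        samples = rng.uniform(0.0, 2.0, size=100_000)
        estimates = surrogate(samples)
        print("best design point found by the sweep:", samples[estimates.argmax()])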

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
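    The synchronization, resampling, and frequency-domain steps Gorr mentions might look roughly like the following; the sensor names, sample rates, and signals are invented for illustration.

    ```python
    # Hedged sketch: aligning two sensors sampled at different rates, then taking
    # a quick look at the dominant frequency of one channel.
    import numpy as np
    import pandas as pd

    # Two hypothetical sensor streams with different sample rates.
    t_fast = pd.date_range("2024-01-01", periods=10_000, freq="1ms")   # 1 kHz
    t_slow = pd.date_range("2024-01-01", periods=1_000, freq="10ms")   # 100 Hz
    fast = pd.Series(np.sin(2 * np.pi * 50 * np.arange(10_000) / 1000), index=t_fast)
    slow = pd.Series(np.random.normal(size=1_000), index=t_slow)

    # Resample both onto a common 1-millisecond grid so they line up sample for sample.
    common = pd.DataFrame({
        "vibration": fast,
        "pressure": slow.resample("1ms").interpolate(),
    }).dropna()

    # Frequency-domain check: dominant frequency of the vibration channel.
    spectrum = np.abs(np.fft.rfft(common["vibration"].to_numpy()))
    freqs = np.fft.rfftfreq(len(common), d=1e-3)
    print(f"Dominant vibration frequency: {freqs[spectrum.argmax()]:.1f} Hz")
    ```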

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process understands and applies it. Communication and the involvement of people of all skill levels are going to be really important. We’re going to see fewer of those superprecise predictions and more transparency and sharing of information, along with that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 07. February 2022. at 16:12



    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

    IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

    Now researchers at MIT have been able to both reduce the size of the qubits and do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

    “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

    Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

    Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Photo: Nathan Fiske/MIT]

    In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

    As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
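    A rough back-of-envelope calculation shows why a parallel-plate geometry with an ultrathin, low-defect dielectric shrinks the footprint so dramatically. The numbers below (a target capacitance of about 100 femtofarads, an hBN relative permittivity of about 3, and a dielectric a few nanometers thick) are assumptions for illustration only, not values reported by the MIT team.

    ```python
    # Back-of-envelope comparison of capacitor footprints. All input values are
    # assumed for illustration; none come from the article.
    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    C_target = 100e-15        # assumed qubit shunt capacitance, F
    eps_r = 3.0               # assumed out-of-plane relative permittivity of hBN
    d = 5e-9                  # assumed hBN stack thickness, m

    # Parallel-plate formula: C = eps0 * eps_r * A / d  ->  A = C * d / (eps0 * eps_r)
    area_parallel = C_target * d / (EPS0 * eps_r)      # m^2
    area_coplanar = (100e-6) ** 2                      # one 100 µm x 100 µm plate, m^2

    print(f"hBN parallel-plate area: {area_parallel * 1e12:.1f} µm^2")
    print(f"coplanar plate area:     {area_coplanar * 1e12:.0f} µm^2")
    print(f"footprint ratio:         ~{area_coplanar / area_parallel:.0f}x smaller")
    ```

    Under these assumptions the parallel-plate capacitor needs only tens of square micrometers, a few hundred times less area than a coplanar plate, which is broadly consistent with the factor-of-100 gain in qubit density described above.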

    In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

    On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

    “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

    This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

    “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

    Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.