IEEE News

IEEE Spectrum IEEE Spectrum

  • The Inside Story of Google’s Quiet Nuclear Quest
    by Ross Koningstein on 25. November 2024. at 13:00



    In 2014 I went to my managers with an audacious proposal: Let’s create a nuclear energy research and development group at Google. I didn’t get laughed out of the room, maybe because Google has a storied history of supporting exploratory research. While I did not propose that Google build a nuclear lab, I felt certain that we could contribute in other ways.

    I had some credibility within the company. I joined Google in 2000 as its first director of engineering, and helped make the company profitable with the pay-per-click advertising system AdWords, in which companies bid to place ads on our search-results page. In subsequent years I got interested in energy and was part of the design team for Google’s first energy-efficient data center. Then, in 2009, I was recruited into Google’s effort to make renewable energy cheaper than coal (an initiative we called RE<C).

    While that last project didn’t pan out as hoped, I learned a lot from it. A Google-McKinsey study conducted as part of the project drove home the point that the intermittent power sources, solar and wind, need reliable backup. Therefore, efforts to decarbonize the grid affordably depend on what happens with always-on or always-available hydro, geothermal, and nuclear power plants.

    I grew up in Ontario, Canada, which achieved a climate-friendly electric grid in the 1970s by deploying nuclear power plants. It seemed to me that recent improvements in reactor designs gave nuclear plants even more potential to deeply decarbonize societies at reasonable cost, while operating safely and dealing with nuclear waste in a responsible way. In 2012, after RE<C wound down, I got involved in the making of the documentary Pandora’s Promise, in which environmentalists argued that nuclear power could help us transition away from fossil fuels while lifting people in developing countries out of poverty. I came away from this filmmaking experience with a handful of solid contacts and a determination to get Google involved in advancing nuclear.

    The proposed plan for the nuclear energy R&D group (affectionately known as NERD) was based on input from similarly minded colleagues. The problems we could address were determined by who we could work with externally, as well as Google’s usual strengths: people, tools, capabilities, and reputation. I proposed a three-pronged effort consisting of immediately impactful fusion research, a long shot focusing on an “out there” goal, and innovation advocacy in Washington, D.C. Some years later, we added sponsored research into the cutting-edge field of nuclear excitation. The NERD effort, started 10 years ago, is still bearing fruit today.

    These programs all came from a question that I asked anybody who would listen: What can Google do to accelerate the future of nuclear energy?

    Google’s Work on Fusion

    The first research effort came from a proposal by my colleague Ted Baltz, a senior Google engineer, who wanted to bring the company’s computer-science expertise to fusion experiments at TAE Technologies in Foothill Ranch, Calif. He believed machine learning could improve plasma performance for fusion.

    In 2014, TAE was experimenting with a warehouse-size plasma machine called C-2U. This machine heated hydrogen gas to over a million degrees Celsius and created two rings of plasma, which were slammed together at a speed of more than 960,000 kilometers per hour. Powerful magnets compressed the combined plasma rings, with the goal of fusing the hydrogen and producing energy. The challenge for TAE, as for all other companies trying to build commercial fusion reactors, was how to heat, contain, and control the plasma long enough to achieve real energy output, without damaging its machine.

    [Diagram: TAE’s C-2U plasma machine] Google collaborated with the fusion company TAE Technologies to improve the performance of the plasma within its C-2U machine. The goal was to keep the plasma stable and drive it to fusion conditions. TAE Technologies

    The TAE reactor could fire a “shot” about every 10 minutes, each of which lasted about 10 milliseconds and produced a treasure trove of data. There were more than 100 settings that could be adjusted between shots, including parameters like the timing and energy of plasma-formation pulses and how the magnets were controlled. Baltz realized that TAE’s researchers had an engineering-optimization problem: Which knobs and switches should they fiddle with to learn, as quickly as possible, the best ways to keep their plasma steady and drive it to fusion conditions?

    To contain, squeeze, and shape the plasma, TAE developed a special way of using magnetic fields, called a field-reversed configuration. This implementation was predicted to become more stable as the energy went up—an advantage over other methods, in which plasmas get harder to control as you heat them. But TAE needed to do the experiments to confirm that those predictions were correct.

    To help them figure out which settings to try for each new shot, Baltz and his team developed the optometrist algorithm. Just like when you’re at the eye doctor and the optometrist flips lenses, saying, “Can you see more clearly with A or B?,” the algorithm presents a human operator with a pair of recent experimental outcomes. That human, who is an expert plasma physicist, then chooses which experiment to riff on with further parameter tweaks.
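
    To make the division of labor concrete, here is a minimal Python sketch of one round of an optometrist-style loop, with hypothetical helper names standing in for the real experiment and the human expert; it is illustrative only, not TAE’s or Google’s actual code.

```python
import random

def optometrist_step(settings_a, settings_b, run_experiment, human_prefers):
    """One round of an optometrist-style loop (illustrative only).

    settings_a / settings_b : dicts of numeric machine parameters for two shots
    run_experiment          : callable that fires a shot and returns its outcome
    human_prefers           : callable standing in for the plasma physicist who
                              answers "A or B?" for a pair of outcomes
    """
    outcome_a = run_experiment(settings_a)
    outcome_b = run_experiment(settings_b)

    # The expert, not the algorithm, makes the call between the two outcomes.
    chosen = settings_a if human_prefers(outcome_a, outcome_b) else settings_b

    # The algorithm then proposes a small random tweak of the preferred
    # settings to try on a later shot.
    candidate = dict(chosen)
    knob = random.choice(list(candidate))
    candidate[knob] *= 1.0 + random.uniform(-0.05, 0.05)
    return chosen, candidate
```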

    This was machine learning and human expertise at their best. The algorithm searched through thousands of options, and humans made the call. With the help of the optometrist algorithm, TAE achieved the longest-lived plasmas of that experimental campaign. The algorithm also identified a set of parameters that surprised physicists by causing plasma temperatures to rise after the initial blast.

    [Photo: the plasma-formation section of TAE’s Norman reactor] With the help of Google’s algorithms, TAE’s Norman machine achieved higher plasma temperatures than expected: 75 million °C. Erik Lucero

    The collaboration continued with TAE’s next machine, Norman, which achieved even higher plasma temperatures than TAE’s original goal. The Google team also created algorithms to infer the evolving shape of the plasma over time from multiple indirect measurements, helping TAE understand how the plasma changed over the life of a shot. TAE is now building a new and bigger machine called Copernicus, with a goal of achieving energy breakeven: the point at which the energy released from a fusion reaction is equal to the amount of energy needed to heat the plasma.

    A nice side benefit from our multiyear collaboration with TAE was that people within the company—engineers and executives—became knowledgeable about fusion. And that resulted in Alphabet investing in two fusion companies in 2021, TAE and Commonwealth Fusion Systems. By then, my colleagues at Google DeepMind were also using deep reinforcement learning for plasma control within tokamak fusion reactors.

    Low-Energy Nuclear Reactions

    NERD’s out-there pursuit was low-energy nuclear reactions (LENR)—still popularly known as cold fusion. This research field was so thoroughly lambasted in the early 1990s that it was effectively off-limits for decades.

    The saga of cold fusion goes back to 1989, when electrochemists Martin Fleischmann and B. Stanley Pons claimed that electrochemical cells operating near room temperature were producing excess heat that they said could only be explained by “cold fusion”—reactions that didn’t require the enormous temperatures and high pressures of typical fusion reactions. Their rushed announcement created a media circus, and when hasty attempts to replicate their results were unsuccessful, the discrediting of their claims was rapid and vehement. Decades later, there had been no confirmations in credible peer-reviewed journals. So, case closed.

    Or perhaps not. In the early 2010s, an Italian entrepreneur named Andrea Rossi was getting some press for a low-energy nuclear device he called an energy catalyzer, or E-Cat. Googlers tend to be curious, and a few of us took skeptical interest in this development. I’d already been discussing LENR with Matt Trevithick, a venture capitalist whom I’d met at the premiere of Pandora’s Promise, in 2013. He had an interesting idea: What would happen if a fresh group of reputable scientists investigated the circumstances under which cold fusion had been hypothesized to exist? Google could provide the necessary resources and creative freedom for teams of external experts to do objective research and could also provide cover. Trevithick’s proposal was the second pillar of NERD.

    [Photo] During Google-sponsored work on low-energy nuclear reactions, one group used pulsed plasma to drive hydrogen ions toward a palladium wire target. The researchers didn’t detect the fusion by-products they were looking for. Thomas Schenkel

    Trevithick had been scouting for scientists who were open to the idea that unusual states of solid matter could lead to cold fusion. Google greenlit the program and recruited Trevithick to lead it, and we ended up funding about 12 projects that involved some 30 researchers. During these investigations, we hoped the researchers might find credible evidence of an anomaly, such as distinct and unexplainable thermal spikes or evidence of nuclear activity beyond the error bars of the measurement apparatus. The stretch goal was to develop a reference experiment: an experimental protocol that could consistently reproduce the anomaly. Our commitment to publish whatever we learned, including findings that supported simpler non-nuclear explanations, established an expectation of scientific rigor that motivated our academic collaborators.

    The group had great morale and communication, with quarterly in-person check-ins for the principal investigators to compare notes, and annual retreats for the academic research teams. This was some of the most fun I’ve ever had with a scientific group. The principal investigators and students were smart and inquisitive, their labs had expertise in building things, and everyone was genuinely curious about the experiments being designed and performed.

    [Photo] Google’s sponsorship of research on low-energy nuclear reactions has led to continued work in the field. At Lawrence Berkeley National Laboratory, researchers are still experimenting with pulsed plasma and palladium wires. Marilyn Chung/Lawrence Berkeley National Laboratory

    During the four-year duration of the program (from 2015 to 2018), our sponsored researchers did not find credible evidence of anomalies associated with cold fusion. However, everyone involved had a positive experience with the work and the rigorous way in which it was done. The program yielded 28 peer-reviewed publications, the crown jewel of which was “Revisiting the Cold Case of Cold Fusion,” in 2019. In this Nature article, we described our program’s motivations and results and showed that solid scientific research in this area can yield peer-reviewed papers.

    The project ratified a longstanding belief of mine: that credible scientists should not be discouraged from doing research on unfashionable topics, because good science deepens our understanding of the world and can lead to unanticipated applications. For example, Google-funded experiments performed at the University of British Columbia later led to the discovery of a new way to make deuterated drugs, in which one or more hydrogen atoms is replaced with the heavier hydrogen isotope deuterium. Such drugs can be effective at lower doses, potentially with reduced side effects.

    Despite not obtaining reliable evidence for cold fusion, we consider the project a success. In October 2021, Trevithick was invited to present at a workshop on low-energy nuclear reactions hosted by the Advanced Research Projects Agency–Energy. In September 2022, ARPA-E announced that it would spend up to US $10 million to investigate LENR as an exploratory topic. The ARPA-E announcement mentioned that it was building on recent advances in “LENR-relevant state-of-the-art capabilities and methodologies,” including those sponsored by Google and published in Nature.

    Nuclear Advocacy in Washington

    A challenge as large as creating a new nuclear energy industry is beyond what any single company can do; a supportive policy environment is critical. Could Google help make that happen? We set out to answer that question as the third NERD effort. A year after meeting at the premiere of Pandora’s Promise, climate philanthropist Rachel Pritzker, venture capitalist Ray Rothrock, and some Googlers gathered at Google to discuss next steps. Pritzker suggested that we partner with Third Way, a think tank based in Washington, D.C., to see if there was a feasible path to policy that would accelerate innovation in advanced nuclear energy. By advanced nuclear, we were primarily talking about new reactor designs that differ from today’s typical water-cooled fission reactors.

    Advanced reactors can offer improvements in safety, efficiency, waste management, and proliferation resistance—but because they’re new, they’re unlikely to succeed commercially without supportive government policies. Third Way’s analysts had found that, even in these highly partisan times, advanced nuclear was nonpartisan, and they believed that an opportunity existed to push for new legislation.

    At the time, the only framework that the U.S. Nuclear Regulatory Commission (NRC) had for approving commercial reactor designs was based on light-water reactors, technology dating from the 1950s. This was exasperating for innovators and investors and created unnecessary hurdles before new technologies could get to market. For advanced nuclear energy to move forward, policy change was needed.

    Third Way helped organize a meeting at the White House Executive Office Building in June 2015 on the topic of advanced nuclear energy. This meeting was an amazing gathering of about 60 representatives from the Department of Energy, National Nuclear Security Administration, NRC, National Security Agency, State Department, and Senate. Many spoke passionately about their concern that the United States had ceded leadership in advanced nuclear. People in many branches of the U.S. government wanted to change this situation through new policy. We listened.

    In 2015, Google supported Third Way and another advocacy organization, the Clean Air Task Force, to start working with legislators to craft bills that promoted innovation in nuclear energy. That same year, the Gateway for Accelerated Innovation in Nuclear (GAIN) initiative was launched, connecting nuclear developers with the U.S. national labs and their vast R&D capabilities. The initial two groups were soon joined by another advocacy group, ClearPath; eventually more than a dozen organizations were involved, representing the entire spectrum of political ideologies. They in turn engaged with industrial labor unions, advanced nuclear developers, and potential electricity purchasers like Amazon, Dow Chemical, and Microsoft. As an advisor to Third Way, I got invited to meetings in D.C., where people appreciated hearing my outsider and Silicon Valley perspective on innovation.

    This advanced nuclear policy campaign shows how the U.S. government became a partner in enabling private-sector innovation in nuclear technology; it also cemented nuclear innovation as one of the most nonpartisan issues in Washington. Starting in 2015, seven bills were signed into law by three presidents, including bills to fund the demonstration of new reactor designs and to compel the NRC to modernize its licensing procedures. In one welcome development, the NRC ruled that new fusion reactors will be regulated under different statutes than today’s fission reactors.

    Today, the U.S. federal government is providing more than $2.5 billion to help developers build the first advanced reactors, and $2.7 billion to produce the new forms of nuclear fuel required by most advanced reactors. Many advanced nuclear companies have benefited, and recently Google signed the world’s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs), to be developed by Kairos Power.

    Contrary to what you might see in the press about stalemates in D.C., my brush with policy left me optimistic. I found people on both sides of the aisle who cared about the issue and worked to create meaningful positive change.

    The Possibility of Designer Nuclear Reactions

    In 2018, Google’s funding of cold fusion was winding down. My manager, John Platt, asked me: What should we do next? I wondered if it might be possible to create designer nuclear reactions—ones that affected only specific atoms, extracting energy and creating only harmless by-products. As I surveyed the cutting edge of nuclear science, I saw that advances in nuclear excitation might offer such a possibility.

    Nuclear excitation is the phenomenon in which the nucleus inside an atom transitions to a different energy state, changing the possibilities for its decay. I was intrigued by a brand-new paper from Argonne National Laboratory, in Illinois, about experimental observation of nuclear excitation by electron capture, which the researchers achieved by slamming molybdenum atoms into lead at high speed. Soon after that, scientists at EPFL in Switzerland proposed a scientifically provocative approach to achieving nuclear excitation with a tabletop laser and electron accelerator setup that, under the right circumstances, might also allow exact control of the end products. I wanted to find out what could be done with this type of excitation technology.

    After speaking with researchers at those institutions, I met with Lee Bernstein, the head of the nuclear data group at the University of California, Berkeley. He offered an idea for a related experiment that had been sitting on the shelf for 20 years. He wanted to see if he could use high-energy electrons to excite the nucleus of the radioactive element americium, a component of nuclear waste, potentially transmuting it into something more benign. I was deeply intrigued. These conversations suggested two complementary paths to achieving nuclear excitation, and Google is funding academic research on both.

    [Illustration: a vortex electron beam aimed at an atomic nucleus] EPFL’s Fabrizio Carbone is exploring the low-energy path to nuclear excitation. His group plans to use vortex beams of electrons to excite nuclei and release energy. Simone Gargiulo/EPFL

    EPFL’s Fabrizio Carbone is exploring the low-energy path. His approach uses an ultrafast laser and precisely tailored electron pulses to excite specific nuclei, which should then undergo a desired transition. Carbone’s team first worked on the theoretical foundation for this work with Adriana Pálffy-Buß, now at the University of Würzburg, and then performed initial baseline experiments. The next experiments aim to excite gold nuclei using vortex beams of electrons, something not found in nature. This technique might be a route to compact power generation with designer nuclear reactions.

    Bernstein is exploring the high-energy path, where high-energy electrons excite the nuclei of americium atoms, which should cause them to decay much faster and turn into less toxic end products. Bernstein’s original plan was to custom-build an apparatus, but during the COVID-19 pandemic he switched to a simpler approach using Lawrence Berkeley National Laboratory’s BELLA laser facility. The flexibility of Google’s research funding allowed Bernstein’s team to pivot.

    Still, it turns out you can’t easily get a sample of nuclear waste like americium; you have to work up to it. Bernstein’s first experiment showed that high-energy electrons and photons excited the nuclei of bromine atoms and created long-lived excited nuclear states, making the case for using americium-242 in the next experiment. In 2025, we should know if this approach offers a way to convert waste into a useful product, such as fuel for the nuclear generators used in space missions. If successful, this process could deal with the americium that is the most dangerous and long-lived component of spent reactor fuel.

    Solid science can have good side effects. Bernstein’s work attracted the attention of DARPA, which is now funding his lab to apply his excitation technique for a different application: creating actinium-225, a rare and short-lived radioactive isotope used in highly targeted cancer therapy.

    Nuclear Energy Could Be a Big Win for Climate

    When it comes to tackling climate change, some people advocate for putting all our resources into technologies that are fairly mature today. This strategy of “playing not to lose” makes sense if you have a good chance of winning. But this strategy doesn’t work in climate, because the odds of winning with today’s technologies are not in our favor. The Intergovernmental Panel on Climate Change (IPCC) has reported that business-as-usual emissions put our planet on a path to more than 2 °C of warming. In climate, humankind needs to use the strategy of “playing to win.” Humanity needs to place many big and audacious bets on game-changing technologies—ones that decrease energy costs so much that in the long run, their adoption is economically and politically sustainable.

    I’m proud of Google for placing bets across the near-term and long-term spectrum, including those made through our NERD program, which showed how the company could help advance nuclear energy R&D. Our projects addressed these questions: why this research, why these people, why now, and why Google? I’m grateful to my managers in Google’s energy research division for their support of exploratory research and innovation-friendly policy advocacy, and I appreciate my colleagues in the larger Google ecosystem who are working toward similar goals. With luck, hard work, and allies, the program’s successes have been more than we expected. In one form or another, these efforts have grown and strengthened through other people’s ongoing work and through diversified funding.

    I never would have guessed that a couple of chance discussions at the premiere of Pandora’s Promise would have delivered 10 of the most energizing years of my career. The hard work and dedication I’ve observed give me confidence that better energy sources will be developed that can pull a billion people out of energy poverty and help our energy systems decarbonize. And one big win in nuclear energy could make all the difference.

  • Robot Photographer Takes the Perfect Picture
    by Kohava Mendelsohn on 23. November 2024. at 14:00



    Finding it hard to get the perfect angle for your shot? PhotoBot can take the picture for you. Tell it what you want the photo to look like, and your robot photographer will present you with references to mimic. Pick your favorite, and PhotoBot—a robot arm with a camera—will adjust its position to match the reference and your picture. Chances are, you’ll like it better than your own photography.

    “It was a really fun project,” says Oliver Limoyo, one of the creators of PhotoBot. He enjoyed working at the intersection of several fields; human-robot interaction, large language models, and classical computer vision were all necessary to create the robot.

    Limoyo worked on PhotoBot while at Samsung, with his manager Jimmy Li. They were working on a project to have a robot take photographs but were struggling to find a good metric for aesthetics. Then they saw the Getty Image Challenge, where people recreated famous artwork at home during the COVID lockdown. The challenge gave Limoyo and Li the idea to have the robot select a reference image to inspire the photograph.

    To get PhotoBot working, Limoyo and Li had to figure out two things: how best to find reference images of the kind of photo you want and how to adjust the camera to match that reference.

    Suggesting a Reference Photograph

    To start using PhotoBot, first you have to provide it with a written description of the photo you want. (For example, you could type “a picture of me looking happy”.) Then PhotoBot scans the environment around you, identifying the people and objects it can see. It next finds a set of similar photos from a database of labeled images that have those same objects.

    Next an LLM compares your description and the objects in the environment with that smaller set of labeled images, providing the closest matches to use as reference images. The LLM can be programmed to return any number of reference photographs.
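
    A rough sketch of that retrieval step might look like the following Python, where detect_objects, ask_llm, and the layout of image_db are hypothetical stand-ins rather than PhotoBot’s actual components.

```python
def suggest_references(user_request, scene_image, image_db,
                       detect_objects, ask_llm, k=3):
    """Illustrative sketch of PhotoBot-style reference retrieval.

    detect_objects : object detector returning a set of labels seen in the scene
    image_db       : list of (reference_image, object_label_set, caption) tuples
    ask_llm        : callable that returns the indices of the best captions
    """
    scene_objects = detect_objects(scene_image)

    # Keep only gallery images that share at least one object with the scene.
    candidates = [entry for entry in image_db if scene_objects & entry[1]]

    # The language model compares the user's description and the visible
    # objects with each candidate's caption and picks the closest matches.
    prompt = (f"User wants: {user_request}\n"
              f"Objects in scene: {sorted(scene_objects)}\n"
              f"Candidate captions: {[c[2] for c in candidates]}\n"
              f"Return the indices of the {k} best candidates.")
    best_indices = ask_llm(prompt)

    return [candidates[i][0] for i in best_indices]
```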

    For example, when asked for “a picture of me looking grumpy,” it might identify a person, glasses, a jersey, and a cup in the environment. PhotoBot would then deliver, among other choices, a reference image of a frazzled man holding a mug in front of his face.

    After the user selects the reference photograph they want their picture to mimic, PhotoBot moves its robot arm to correctly position the camera to take a similar picture.

    Adjusting the Camera to Fit a Reference

    To move the camera to the perfect position, PhotoBot starts by identifying features that are the same in both images, for example, someone’s chin, or the top of a shoulder. It then solves a “perspective-n-point” (PnP) problem, which involves taking a camera’s 2D view and matching it to a 3D position in space. Once PhotoBot has located itself in space, it then solves how to move the robot’s arm to transform its view to look like the reference image. It repeats this process a few times, making incremental adjustments as it gets closer to the correct pose.
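
    As an illustration of the PnP step, the sketch below uses OpenCV’s solvePnP to recover the camera pose from matched features; the feature matching itself and the conversion of the recovered pose into an arm motion are assumed to happen elsewhere, and this is not the team’s actual implementation.

```python
import cv2
import numpy as np

def estimate_camera_pose(feature_points_3d, feature_points_2d, camera_matrix):
    """Recover the camera pose from matched features (e.g. a chin, the top of
    a shoulder) whose 3D positions and current 2D pixel locations are known.
    Feature matching and the arm motion that follows are assumed to happen
    elsewhere."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(feature_points_3d, dtype=np.float64),  # Nx3 scene points
        np.asarray(feature_points_2d, dtype=np.float64),  # Nx2 pixel locations
        camera_matrix,                                     # 3x3 intrinsics
        None,                                              # assume no lens distortion
    )
    if not ok:
        raise RuntimeError("PnP needs at least 4 well-spread correspondences")
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return rotation, tvec              # pose of the scene in the camera frame
```

    Repeating this estimate after each small arm adjustment gives the incremental refinement described above.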

    Then PhotoBot takes your picture.

    [Photo collage] PhotoBot’s developers compared portraits with and without their system. Samsung/IEEE

    To test whether images taken by PhotoBot were more appealing than amateur human photography, Limoyo’s team had eight people use the robot’s arm and camera to take photographs of themselves and then use PhotoBot to take a robot-assisted photograph. They then asked 20 new people to evaluate the two photographs, judging which was more aesthetically pleasing while addressing the user’s specifications (happy, excited, surprised, etc.). Overall, PhotoBot was the preferred photographer in 242 of 360 comparisons, or 67 percent of the time.

    PhotoBot was presented on 16 October at the IEEE/RSJ International Conference on Intelligent Robots and Systems.

    Although the project is no longer in development, Li thinks someone should create an app based on the underlying programming, enabling friends to take better photos of each other. “Imagine right on your phone, you see a reference photo. But you also see what the phone is seeing right now, and then that allows you to move around and align.”

  • Video Friday: Cobot Proxie
    by Evan Ackerman on 22. November 2024. at 18:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

    Enjoy today’s videos!

    Proxie represents the future of automation, combining advanced AI, mobility, and modular manipulation systems with refined situational awareness to support seamless human-robot collaboration. The first of its kind, highly adaptable, collaborative robot takes on the demanding material handling tasks that keep the world moving. Cobot is incredibly proud to count as some of its first customers industry leaders Maersk, Mayo Clinic, Moderna, Owens & Minor, and Tampa General Hospital.

    [ Cobot ]

    It’s the world’s first successful completion of a full marathon (42.195km) by a quadruped robot, and RaiLab KAIST has helpfully uploaded all 4 hours 20 minutes of it.

    [ RaiLab KAIST ]

    Figure 02 has been keeping busy.

    I’m obligated to point out that without more context, there are some things that are not clear in this video. For example, “reliability increased 7x” doesn’t mean anything when we don’t know what the baseline was. There’s also a jump cut right before the robot finishes the task. Which may not mean anything, but, you know, it’s a robot video, so we always have to be careful.

    [ Figure ]

    We conducted a 6-hour continuous demonstration and testing of HECTOR in the Mojave Desert, battling unusually strong gusts and low temperatures. For fair testing, we purposely avoided using any protective weather covers on HECTOR, leaving its semi-exposed leg transmission design vulnerable to dirt and sand infiltrating the body and transmission systems. Remarkably, it exhibited no signs of mechanical malfunction—at least until the harsh weather became too unbearable for us humans to continue!

    [ USC ]

    A banked turn is a common flight maneuver observed in birds and aircraft. To initiate the turn, whereas traditional aircraft rely on the wing ailerons, most birds use a variety of asymmetric wing-morphing control techniques to roll their bodies and thus redirect the lift vector to the direction of the turn. Here, we developed and used a raptor-inspired feathered drone to find that the proximity of the tail to the wings causes asymmetric wing-induced flows over the twisted tail and thus lift asymmetry, resulting in both roll and yaw moments sufficient to coordinate banked turns.

    [ Paper ] via [ EPFLLIS ]

    A futuristic NASA mission concept envisions a swarm of dozens of self-propelled, cellphone-size robots exploring the oceans beneath the icy shells of moons like Jupiter’s Europa and Saturn’s Enceladus, looking for chemical and temperature signals that could point to life. A series of prototypes for the concept, called SWIM (Sensing With Independent Micro-swimmers), braved the waters of a competition swim pool at Caltech in Pasadena, California, for testing in 2024.

    [ NASA ]

    The Stanford Robotics Center brings together cross-disciplinary world-class researchers with a shared vision of robotics’ future. Stanford’s robotics researchers, once dispersed in labs across campus, now have a unified, state-of-the-art space for groundbreaking research, education, and collaboration.

    [ Stanford ]

    Agility Robotics’ Chief Technology Officer, Pras Velagapudi, explains what happens when we use natural language voice commands and tools like an LLM to get Digit to do work.

    [ Agility ]

    Agriculture, fisheries and aquaculture are important global contributors to the production of food from land and sea for human consumption. Unmanned underwater vehicles (UUVs) have become indispensable tools for inspection, maintenance, and repair (IMR) operations in aquaculture domain. The major focus and novelty of this work is collision-free autonomous navigation of UUVs in dynamically changing environments.

    [ Paper ] via [ SINTEF ]

    Thanks, Eleni!

    —O_o—

    [ Reachy ]

    Nima Fazeli, assistant professor of robotics, was awarded the National Science Foundation’s Faculty Early Career Development (CAREER) grant for a project “to realize intelligent and dexterous robots that seamlessly integrate vision and touch.”

    [ MMint Lab ]

    This video demonstrates the process of sealing a fire door using a sealant application. In cases of radioactive material leakage at nuclear facilities or toxic gas leaks at chemical plants, field operators often face the risk of directly approaching the leakage site to block it. This video showcases the use of a robot to safely seal doors or walls in the event of hazardous material leakage accidents at nuclear power plants, chemical plants, and similar facilities.

    [ KAERI ]

    How is this thing still so cool?

    [ OLogic ]

    Drag your mouse or move your phone to explore this 360-degree panorama provided by NASA’s Curiosity Mars rover. This view was captured just before the rover exited Gediz Vallis channel, which likely was formed by ancient floodwaters and landslides.

    [ NASA ]

    This GRASP on Robotics talk is by Damion Shelton of Agility Robotics, on “What do we want from our machines?”

    The purpose of this talk is twofold. First, humanoid robots – since they look like us, occupy our spaces, and are able to perform tasks in a manner similar to us – are the ultimate instantiation of “general purpose” robots. What are the ethical, legal, and social implications of this sort of technology? Are robots like Digit actually different from a pick and place machine, or a Roomba? And second, does this situation change when you add advanced AI?

    [ UPenn ]

  • Double Your Impact This Giving Tuesday
    by IEEE Foundation on 21. November 2024. at 19:00



    GivingTuesday—3 December this year—is an international day of generosity that unleashes the power of people and organizations to transform lives and communities. During this year’s event, the organization and the IEEE Foundation encourage members to help provide the necessary financial resources for IEEE’s charitable programs to flourish.

    Donors to the IEEE Foundation drive the change we see when technology and philanthropy intersect. Whether it’s igniting a passion for engineering in the next generation, bringing sustainable energy to those in need, or deploying resources to help with emergency situations, the potent combination of the IEEE community’s intelligence and passion cannot be denied. Every member can help drive important change with their generosity on Giving Tuesday.

    Become an active donor

    This year members can double the impact of their giving. The first US $60,500 donated to the Giving Tuesday campaign will be matched dollar for dollar by the Foundation, for a combined total of up to $121,000. Members may direct their donations to an IEEE or IEEE Foundation program they are passionate about. The breadth and impact of IEEE programs are inspiring. They include efforts that:

    • Illuminate the possibilities of technology to address global challenges.
    • Educate the next generation of innovators and engineers.
    • Engage a wider audience to appreciate the impact of engineering.
    • Energize innovation by celebrating excellence.
    • Help the next generation of engineers through educational, inspirational, and foundational programs.

    Going beyond giving

    Donating is not the only way to make an impact on IEEE Giving Tuesday. Here are other ways you can help move the future forward.

    • Start your own community fundraiser by creating a personalized page on the IEEE Foundation website to benefit the program you want to support. After you create your page, you can start raising funds by sharing the link on your social media pages or with an email message to your friends, relatives, or professional network.
    • Share, like, and comment on IEEE Giving Tuesday posts on LinkedIn and Facebook leading up to and on the day of the celebration.
    • Post a #Unselfie photo—a picture of yourself accompanied by why you support IEEE’s philanthropic programs—on social media using a Giving Tuesday template to share why you support IEEE and its programs. Don’t forget to tag the IEEE Foundation and include #IEEEGivingTuesday.

    For updates, check the IEEE Foundation Giving Tuesday website and follow the Foundation on LinkedIn.

  • French Startup Aims to Make Fuel Out of Thin Air
    by Willie D. Jones on 21. November 2024. at 17:00



    As transportation sectors like shipping and aviation remain difficult to decarbonize, a French startup claims to have developed a promising solution to reduce carbon emissions in these industries. Aerleum, founded in 2023, says its technology can pull carbon dioxide from the air and convert it into methanol, which can be used to fuel cargo ships and serve as a base chemical in producing aviation fuel.

    The French utility company Électricité de France recently awarded Aerleum the EDF Pulse Award in the carbon capture category for this innovation, which may help offset some of the emissions from transportation sectors that depend on liquid fuels. Since 2019, EDF’s innovation accelerator, Blue Lab, has presented the awards to highlight the efforts of startups, innovators, and entrepreneurs with new ideas that could move the energy sector toward net-zero carbon emissions.

    Aerleum’s proprietary reactor, which combines CO2 capture and conversion in a single device, uses a sponge-like material that can adsorb CO2 at concentrations as high as 15 percent, making it effective for both direct air capture and point-source carbon capture, in which CO2 is captured directly from industrial exhausts. In direct air capture, CO2 makes up roughly 0.04 percent of the gas being filtered; for point-source capture, it can reach around 10 percent for something like a natural gas power plant smokestack. Aerleum’s founders say their system could be a good complement to such industrial facilities by filtering and converting CO2 out of the exhaust before it reaches the atmosphere.

    Aerleum CEO Sebastien Fiedorow says that it takes about one hour for the proprietary sponge-like material the company developed to pull in as much carbon dioxide as it can hold. “The second stage, the conversion to methanol, is completed in about 20 minutes,” says Fiedorow. After that, the reactor begins another cycle of adsorbing CO2 from the air. How much CO2 is captured and how much methanol is produced, says Fiedorow, will depend on the size of the reactor.

    “Carbon capture is costly,” says David Sholl, the director of the Transformational Decarbonization Initiative at Oak Ridge National Laboratory in Tennessee. “Because you’re trying to either mitigate or offset some emission, the ultimate economic question is how much will people pay to do this.” Oak Ridge employs a liquid solvent for CO2 absorption in a 3D-printed carbon capture device it developed to perform direct air capture. Whether the mechanism is a sponge like Aerleum’s or a solvent like Oak Ridge’s, the question of cost remains.

    Aerleum’s approach addresses the top economic challenges facing carbon capture and conversion. Sholl points out that converting CO2 economically is tough. CO2 is chemically nonreactive, making conversion both complex and expensive. But Aerleum says its reactor relies on a simplified process that eliminates energy-intensive steps that would keep the cost prohibitive.

    Making Money by Making Methanol

    There will surely be a market for all the methanol Aerleum plans to produce. According to the National Renewable Energy Lab in Golden, Colo., the U.S. government has set ambitious targets for sustainable aviation fuel production, with a goal of producing all aviation fuel from renewable sources by 2050. Sholl says achieving this would require the production of tens of billions of liters of sustainable aviation fuel each year. Scaling up to that amount in 25 years “is possible,” he says. Fiedorow says Aerleum’s technology will play a big role in reaching that goal.

    Currently, most methanol is derived from fossil fuels like natural gas, with CO2 being a climate-changing byproduct. Aerleum’s renewable “E-methanol” process could help reduce or offset these emissions in addition to mitigating the CO2 output from other industrial processes.

    The major challenge that Aerleum must solve is that methanol created from thin air is more costly than conventional, fossil-fuel-based methanol. According to a recent report from IDTechEx, a market research firm that specializes in emerging technologies, sustainable aviation fuel is currently 10 times as expensive as conventional jet fuel.

    Aerleum says it is working to improve its fuel’s cost-competitiveness through economies of scale and the use of green hydrogen generated using renewable sources such as solar and wind. Its cost-cutting measures will also be aided by incentives from national governments in the form of tax credits and aviation fuel production credits, like the ones the United States currently offers under the Inflation Reduction Act.

    Others are betting that Aerleum can ramp up production, reach cost parity, and eventually turn a profit. Venture capital firms 360 Capital and High-Tech Gründerfonds have invested huge sums because of their belief that Aerleum has a winning formula.

    Aerleum is making an early entry to an industry that Oak Ridge’s Sholl says will have total annual revenues approaching US $100 billion if carbon capture-and-conversion targets are met. The startup announced in October that it had raised $6 million to finance the construction of a pilot-stage facility as it proceeds toward full industrialization. The output of the pilot-stage plant will be as much as 3 tonnes of methanol per month or roughly 3,800 liters.

    According to Fiedorow, once Aerleum overcomes the engineering challenges inherent in scaling up from the lab to the pilot stage, the startup will be on track for full industrialization of its capture-and-conversion process. He says another of the company’s goals is to build a first-of-its-kind factory whose output will be about 300,000 tonnes of methanol—which equates to just under 380 million liters—per year by 2030. From there, he says, Aerleum will likely rely on licensing its technology to giant industrial firms like oil and gas companies, which can afford to build Aerleum-style reactors at their facilities to meet their greenhouse gas emission curtailment goals.
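
    For readers who want to check those volume figures, the conversion from tonnes to liters follows directly from methanol’s density, taken here as roughly 0.792 kilograms per liter at room temperature:

```python
# Rough check of the volume figures quoted above, assuming a methanol density
# of about 0.792 kilograms per liter at room temperature.
METHANOL_DENSITY_KG_PER_L = 0.792

def tonnes_to_liters(tonnes):
    return tonnes * 1000 / METHANOL_DENSITY_KG_PER_L

print(f"{tonnes_to_liters(3):,.0f} L")        # ~3,788 L/month for the pilot plant
print(f"{tonnes_to_liters(300_000):,.0f} L")  # ~378,787,879 L/year, just under 380 million
```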

    Aerleum joins other innovators in carbon capture who are trying to make it scalable and affordable. Earlier this year, Climeworks, a Zurich-based startup, introduced new direct air capture technology designed to remove millions of tonnes of CO2 annually by the end of the decade. According to projections from the Intergovernmental Panel on Climate Change, the world may need to capture between 6 billion and 16 billion tonnes of CO2 annually by 2050 to mitigate climate change impacts.

    If Aerleum’s technology scales effectively, it could play a vital role in meeting these targets by reducing emissions while also creating renewable fuel for hard-to-decarbonize sectors like aviation and shipping. In other words, Aerleum is actively working to transform part of the problem into part of the solution.

  • AI Alone Isn’t Ready for Chip Design
    by Somdeb Majumdar on 21. November 2024. at 14:00



    Chip design has come a long way since 1971, when Federico Faggin finished sketching the first commercial microprocessor, the Intel 4004, using little more than a straightedge and colored pencils. Today’s designers have a plethora of software tools at their disposal to plan and test new integrated circuits. But as chips have grown staggeringly complex—with some comprising hundreds of billions of transistors—so have the problems designers must solve. And those tools aren’t always up to the task.

    Modern chip engineering is an iterative process of nine stages, from system specification to packaging. Each stage has several substages, and each of those can take weeks to months, depending on the size of the problem and its constraints. Many design problems have only a handful of viable solutions out of 10^100 to 10^1000 possibilities—a needle-in-a-haystack scenario if ever there was one. Automation tools in use today often fail to solve real-world problems at this scale, which means that humans must step in, making the process more laborious and time-consuming than chipmakers would like.

    Not surprisingly, there is a growing interest in using machine learning to speed up chip design. However, as our team at the Intel AI Lab has found, machine-learning algorithms are often insufficient on their own, particularly when dealing with multiple constraints that must be satisfied.

    In fact, our recent attempts at developing an AI-based solution to tackle a tricky design task known as floorplanning (more about that task later) led us to a far more successful tool based on non-AI methods like classical search. This suggests that the field shouldn’t be too quick to dismiss traditional techniques. We now believe that hybrid approaches combining the best of both methods, although currently an underexplored area of research, will prove to be the most fruitful path forward. Here’s why.

    The Perils of AI Algorithms

    One of the biggest bottlenecks in chip design occurs in the physical-design stage, after the architecture has been resolved and the logic and circuits have been worked out. Physical design involves geometrically optimizing a chip’s layout and connectivity. The first step is to partition the chip into high-level functional blocks, such as CPU cores, memory blocks, and so on. These large partitions are then subdivided into smaller ones, called macros and standard cells. An average system-on-chip (SoC) has about 100 high-level blocks made up of hundreds to thousands of macros and thousands to hundreds of thousands of standard cells.

    Next comes floorplanning, in which functional blocks are arranged to meet certain design goals, including high performance, low power consumption, and cost efficiency. These goals are typically achieved by minimizing wirelength (the total length of the nanowires connecting the circuit elements) and white space (the total area of the chip not occupied by circuits). Such floorplanning problems fall under a branch of mathematical programming known as combinatorial optimization. If you’ve ever played Tetris, you’ve tackled a very simple combinatorial optimization puzzle.

    [Illustration] Floorplanning, in which CPU cores and other functional blocks are arranged to meet certain goals, is one of many stages of chip design. It is especially challenging because it requires solving large optimization problems with multiple constraints. Chris Philpot

    Chip floorplanning is like Tetris on steroids. The number of possible solutions, for one thing, can be astronomically large—quite literally. In a typical SoC floorplan, there are approximately 10^250 possible ways to arrange 120 high-level blocks; by comparison, there are an estimated 10^24 stars in the universe. The number of possible arrangements for macros and standard cells is several orders of magnitude larger still.

    Given a single objective—squeezing functional blocks into the smallest possible silicon area, for example—commercial floorplanning tools can solve problems of such scale in mere minutes. They flounder, however, when faced with multiple goals and constraints, such as rules about where certain blocks must go, how they can be shaped, or which blocks must be placed together. As a result, human designers frequently resort to trial and error and their own ingenuity, adding hours or even days to the production schedule. And that’s just for one substage.

    Despite the triumphs in machine learning over the past decade, it has so far had relatively little impact on chip design. Companies like Nvidia have begun training large language models (LLMs)—the form of AI that powers services like Copilot and ChatGPT—to write scripts for hardware design programs and analyze bugs. But such coding tasks are a far cry from solving hairy optimization problems like floorplanning.

    At first glance, it might be tempting to throw transformer models, the basis for LLMs, at physical-design problems, too. We could, in theory, create an AI-based floorplanner by training a transformer to sequentially predict the physical coordinates of each block on a chip, similarly to how an AI chatbot sequentially predicts words in a sentence. However, we would quickly run into trouble if we tried to teach the model to place blocks so that they do not overlap. Though simple for a human to grasp, this concept is nontrivial for a computer to learn and thus would require inordinate amounts of training data and time. The same thing goes for further design constraints, like requirements to place blocks together or near a certain edge.

    [Illustration] A simple floorplan [left] can be represented by a B*-tree data structure [right]. Chris Philpot

    So, we took a different approach. Our first order of business was to choose an effective data structure to convey the locations of blocks in a floorplan. We landed on what is called a B*-tree. In this structure, each block is represented as a node on a binary tree. The block in the bottom left corner of the floorplan becomes the root. The block to the right becomes one branch; the block on top becomes the other branch. This pattern continues for each new node. Thus, as the tree grows, it encapsulates the floorplan as it fans rightward and upward.

    A big advantage of the B*-tree structure is that it guarantees an overlap-free floorplan because block locations are relative rather than absolute—for example, “above that other block” rather than “at this spot.” Consequently, an AI floorplanner does not need to predict the exact coordinates of each block it places. Instead, it can trivially calculate them based on the block’s dimensions and the coordinates and dimensions of its relational neighbor. And voilà—no overlaps.
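
    The sketch below shows one way a B*-tree can be packed into absolute coordinates; it is a simplified, quadratic-time illustration with made-up names, not the data structure implementation used in the authors’ tool.

```python
class Node:
    """One block in a B*-tree floorplan: the left child sits to the right of
    this block, the right child sits above it (placement is relative, so no
    absolute coordinates are stored up front)."""
    def __init__(self, name, w, h):
        self.name, self.w, self.h = name, w, h
        self.left = None    # neighbor placed to the right
        self.right = None   # neighbor placed above
        self.x = self.y = 0

def pack(root):
    """Derive absolute coordinates with a depth-first walk, using a simple
    contour (a list of (x_start, x_end, top) segments) to find the lowest
    non-overlapping y for each block. Quadratic time, but overlap-free."""
    contour = []  # skyline of already-placed blocks

    def y_at(x0, x1):
        return max((top for s, e, top in contour if s < x1 and e > x0), default=0)

    def place(node, x):
        node.x = x
        node.y = y_at(x, x + node.w)
        contour.append((x, x + node.w, node.y + node.h))
        if node.left:                 # block to the right of this one
            place(node.left, x + node.w)
        if node.right:                # block above this one, same x
            place(node.right, x)

    place(root, 0)

# Example: a 4x3 root block, one block to its right, one stacked above it.
a, b, c = Node("A", 4, 3), Node("B", 2, 2), Node("C", 4, 1)
a.left, a.right = b, c
pack(a)   # A ends up at (0, 0), B at (4, 0), C at (0, 3): no overlaps
```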

    With our data structure in place, we then trained several machine-learning models—specifically, graph neural networks, diffusion models, and transformer-based models—on a dataset of millions of optimal floorplans. The models learned to predict the best block to place above or to the right of a previously placed block to generate floorplans that are optimized for area and wirelength. But we quickly realized that this step-by-step method was not going to work. We had scaled the floorplanning problems to around 100 blocks and added hard constraints beyond the no-overlap rule. These included requiring some blocks to be placed at a predetermined location like an edge or grouping blocks that share the same voltage source. However, our AI models wasted time pursuing suboptimal solutions.

    We surmised that the hangup was the models’ inability to backtrack: Because they place blocks sequentially, they cannot retrospectively fix earlier bad placements. We could get around this hurdle using techniques like a reinforcement-learning agent, but the amount of exploration such an agent required to train a good model would be impractical. Having reached a dead end, we decided to ditch block-by-block decision making and try a new tack.

    Returning to Chip Design Tradition

    A common way to solve massive combinatorial optimization problems is with a search technique called simulated annealing (SA). First described in 1983, SA was inspired by metallurgy, where annealing refers to the process of heating metal to a high temperature and then slowly cooling it. The controlled reduction of energy allows the atoms to settle into an orderly arrangement, making the material stronger and more pliable than if it had cooled quickly. In an analogous manner, SA progressively homes in on the best solution to an optimization problem without having to tediously check every possibility.

    Here’s how it works. The algorithm starts with a random solution—for our purposes, a random floorplan represented as a B*-tree. We then allow the algorithm to take one of three actions, again at random: It can swap two blocks, move a block from one position to another, or adjust a block’s width-to-height ratio (without changing its area). We judge the quality of the resulting floorplan by taking a weighted average of the total area and wirelength. This number describes the “cost” of the action.

    If the new floorplan is better—that is, it decreases the cost—we accept it. If it’s worse, we also initially accept it, knowing that some “bad” decisions could lead in good directions. Over time, however, as the algorithm keeps adjusting blocks randomly, we accept cost-increasing actions less and less frequently. As in metalworking, we want to make this transition gradually. Just as cooling a metal too quickly can trap its atoms in disorderly arrangements, restricting the algorithm’s explorations too soon can trap it in suboptimal solutions, called local minima. By giving the algorithm enough leeway to dodge these pitfalls early on, we can then coax it toward the solution we really want: the global minimum (or a good approximation of it).
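
    In code, the loop just described is compact. The sketch below is a generic version, with cost and perturb standing in for the problem-specific pieces (the weighted area-and-wirelength score and the three random block moves); it is not the team’s exact algorithm.

```python
import math
import random

def simulated_annealing(initial, cost, perturb,
                        t_start=1.0, t_end=1e-4, alpha=0.98, moves_per_t=200):
    """Generic SA loop. `cost` scores a floorplan (weighted area + wirelength);
    `perturb` returns a copy with one random swap, move, or aspect-ratio change."""
    best = current = initial
    best_cost = current_cost = cost(current)
    t = t_start
    while t > t_end:
        for _ in range(moves_per_t):
            candidate = perturb(current)
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept worse layouts with a
            # probability that shrinks as the "temperature" drops.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # cool gradually so early exploration can escape local minima
    return best
```

    Cooling too aggressively (a small alpha or few moves per temperature) is the code-level analogue of quenching the metal: the loop stops accepting uphill moves too soon and gets stuck in a local minimum.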

    We had much more success solving floorplanning problems with SA than with any of our machine-learning models. Because the SA algorithm has no notion of placement order, it can make changes to any block at any time, essentially allowing the algorithm to correct for earlier mistakes. Without constraints, we found it could solve highly complex floorplans with hundreds of blocks in minutes. By comparison, a chip designer working with commercial tools would need hours to solve the same puzzles.

    [Illustration] Using a search technique called simulated annealing, a floorplanning algorithm starts with a random layout [top]. It then tries to improve the layout by swapping two blocks, moving a block to another position, or adjusting a block’s aspect ratio. Chris Philpot

    Of course, real-world design problems have constraints. So we gave our SA algorithm some of the same ones we had given our machine-learning model, including restrictions on where some blocks are placed and how they are grouped. We first tried addressing these hard constraints by adding the number of times a floorplan violated them to our cost function. Now, when the algorithm made random block changes that increased constraint violations, we rejected these actions with increasing probability, thereby instructing the model to avoid them.

    Unfortunately, though, that tactic backfired. Including constraints in the cost function meant that the algorithm would try to find a balance between satisfying them and optimizing the area and wirelength. But hard constraints, by definition, can’t be compromised. When we increased the weight of the constraints variable to account for this rigidity, however, the algorithm did a poor job at optimization. Instead of the model’s efforts to fix violations resulting in global minima (optimal floorplans), they repeatedly led to local minima (suboptimal floorplans) that the model could not escape.

    Moving Forward with Machine Learning

    Back at the drawing board, we conceived a new twist on SA, which we call constraints-aware SA (CA-SA). This variation employs two algorithmic modules. The first is an SA module, which focuses on what SA does best: optimizing for area and wirelength. The second module picks a random constraint violation and fixes it. This repair module kicks in very rarely—about once every 10,000 actions—but when it does, its decision is always accepted, regardless of the effect on area and wirelength. We can thus guide our CA-SA algorithm toward solutions that satisfy hard constraints without hamstringing it.
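
    Here is a minimal sketch of how such a repair module can be grafted onto the basic loop, again with hypothetical helpers standing in for the real machinery:

```python
import random

def ca_sa_step(current, cost, perturb, accept, list_violations, repair,
               repair_rate=1e-4):
    """One action of a constraints-aware SA loop. `list_violations` enumerates
    hard-constraint violations in a layout; `repair` returns a copy with one
    chosen violation fixed. Both are problem-specific stand-ins."""
    violations = list_violations(current)
    if violations and random.random() < repair_rate:
        # Repair module: fix one random violation and accept the result
        # unconditionally, even if area or wirelength gets worse.
        return repair(current, random.choice(violations))
    # Otherwise take an ordinary SA move, judged purely on area and wirelength.
    candidate = perturb(current)
    return candidate if accept(cost(candidate) - cost(current)) else current
```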

    Using this approach, we developed an open-source floorplanning tool that runs multiple iterations of CA-SA simultaneously. We call it parallel simulated annealing with constraints awareness, or Parsac for short. Human designers can choose from the best of Parsac’s solutions. When we tested Parsac on popular floorplanning benchmarks with up to 300 blocks, it handily beat every other published formulation, including other SA-based algorithms and machine-learning models.

    Without constraints awareness, a regular simulated-annealing algorithm produces a suboptimal floorplan that cannot be improved. In this case, Block X gets trapped in an invalid position. Any attempt to fix this violation leads to several other violations. Illustration: Chris Philpot

    These established benchmarks, however, are more than two decades old and do not reflect modern SoC designs. A major drawback is their lack of hard constraints. To see how Parsac performed on more realistic designs, we added our own constraints to the benchmark problems, including stipulations about block placements and groupings. To our delight, Parsac successfully solved high-level floorplanning problems of commercial scale (around 100 blocks) in less than 15 minutes, making it the fastest known floorplanner of its kind.

    We are now developing another non-AI technique based on geometric search to handle floorplanning with oddly shaped blocks, thus diving deeper into real-world scenarios. Irregular layouts are too complex to be represented with a B*-tree, so we went back to sequential block placing. Early results suggest this new approach could be even faster than Parsac, but because of the no-backtracking problem, the solutions may not be optimal.

    Meanwhile, we are working to adapt Parsac for macro placements, one level more granular than block floorplanning, which means scaling from hundreds to thousands of elements while still obeying constraints. CA-SA alone is likely too slow to efficiently solve problems of this size and complexity, which is where machine learning could help.

    Parsac solves commercial-scale floorplanning problems within 15 minutes, making it the fastest known algorithm of its kind. The initial layout contains many blocks that violate certain constraints [red]. Parsac alters the floorplan to minimize the area and wirelength while eliminating any constraint violations. Illustration: Chris Philpot

    Given an SA-generated floorplan, for instance, we could train an AI model to predict which action will improve the layout’s quality. We could then use this model to guide the decisions of our CA-SA algorithm. Instead of taking only random—or “dumb”—actions (while accommodating constraints), the algorithm would accept the model’s “smart” actions with some probability. By co-operating with the AI model, we reasoned, Parsac could dramatically reduce the number of actions it takes to find an optimal solution, slashing its run time. However, allowing some random actions is still crucial because it enables the algorithm to fully explore the problem. Otherwise, it’s apt to get stuck in suboptimal traps, like our failed AI-based floorplanner.
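
    In pseudocode, that hybrid proposal step could look something like the sketch below; the mixing probability, the model interface, and random_move are all hypothetical placeholders.

    ```python
    import random

    SMART_ACTION_PROB = 0.8   # hypothetical mixing ratio between learned and random moves

    def propose_action(floorplan, model):
        """Mostly follow the trained model's suggested move, but keep some
        purely random moves so the search can still escape suboptimal traps
        (sketch only; model and random_move are placeholders)."""
        if random.random() < SMART_ACTION_PROB:
            return model.predict_best_action(floorplan)   # learned, "smart" action
        return random_move(floorplan)                     # random, "dumb" action
    ```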

    This or similar approaches could be useful in solving other complex combinatorial optimization problems beyond floorplanning. In chip design, such problems include optimizing the routing of interconnects within a core and Boolean circuit minimization, in which the challenge is to construct a circuit with the fewest gates and inputs to execute a function.

    A Need for New Benchmarks

    Our experience with Parsac also inspired us to create open datasets of sample floorplans, which we hope will become new benchmarks in the field. The need for such modern benchmarks is increasingly urgent as researchers seek to validate new chip-design tools. Recent research, for instance, has made claims about the performance of novel machine-learning algorithms based on old benchmarks or on proprietary layouts, inviting questions about the claims’ legitimacy.

    We released two datasets, called FloorSet-Lite and FloorSet-Prime, which are available now on GitHub. Each dataset contains 1 million layouts for training machine-learning models and 100 test layouts optimized for area and wirelength. We designed the layouts to capture the full breadth and complexity of contemporary SoC floorplans. They range from 20 to 120 blocks and include practical design constraints.

    To develop machine learning for chip design, we need many sample floorplans. A sample from one of our FloorSet datasets has constraints [red] and irregularly shaped blocks, which are common in real-world designs. Illustration: Chris Philpot

    The two datasets differ in their level of complexity. FloorSet-Lite uses rectangular blocks, reflecting early design phases, when blocks are often configured into simple shapes. FloorSet-Prime, on the other hand, uses irregular blocks, which are more common later in the design process. At that point, the placement of macros, standard cells, and other components within blocks has been refined, leading to nonrectangular block shapes.

    Although these datasets are artificial, we took care to incorporate features from commercial chips. To do this, we created detailed statistical distributions of floorplan properties, such as block dimensions and types of constraints. We then sampled from these distributions to create synthetic floorplans that mimic real chip layouts.

    Such robust, open repositories could significantly advance the use of machine learning in chip design. It’s unlikely, however, that we will see fully AI-based solutions for prickly optimization problems like floorplanning. Deep-learning models dominate tasks like object identification and language generation because they are exceptionally good at capturing statistical regularities in their training data and correlating these patterns with desired outputs. But this method does not work well for hard combinatorial optimization problems, which require techniques beyond pattern recognition to solve.

    Instead, we expect that hybrid algorithms will be the ultimate winners. By learning to identify the most promising types of solution to explore, AI models could intelligently guide search agents like Parsac, making them more efficient. Chip designers could solve problems faster, enabling the creation of more complex and power-efficient chips. They could even combine several design stages into a single optimization problem or pursue multiple designs concurrently. AI might not be able to create a chip—or even resolve a single design stage—entirely on its own. But when combined with other innovative approaches, it will be a game changer for the field.

  • New "E-nose" Samples Odors 60 Times Per Second
    by Liam Critchley on 20. November 2024. at 18:00



    Odors are all around us, and often disperse fast—in hazardous situations like wildfires, for example, wind conditions quickly carry any smoke (and the smell of smoke) away from its origin. Sending people to check out disaster zones is always a risk, so what if a robot equipped with an electronic nose, or e-nose, could track down a hazard by “smelling” for it?

    This concept motivated a recent study in Science Advances, in which researchers built an e-nose that can not only detect odors at the same speed as a mouse’s olfactory system, but also distinguish between odors by the specific patterns they produce over time when interacting with the e-nose’s sensor.

    “When odorants are carried away by turbulent airflow, they get chopped into smaller packets,” says Michael Schmuker, a professor at the University of Hertfordshire in the United Kingdom. Schmuker says that these odor packets can rapidly change, which means that an effective odor-sensing system needs to be fast to detect them. And the way in which packets change—and how frequently that happens—can give clues about how far away the odor’s source is.

    How the E-nose Works

    The e-nose uses metal oxide gas sensors with a sensing surface heated and cooled to between 150 °C and 400 °C at up to 20 times per second. Redox reactions take place on the sensing surface when it comes into direct contact with an odorant.

    The new electronic nose is smaller than a credit card and includes several sensors, such as the one shown at right in a microscopy image with its housing removed. Images: Nik Dennler et al.

    The e-nose is smaller than a credit card, with a power consumption of only 1.2 to 1.5 watts (including the microprocessor and USB readout). The researchers built the system with off-the-shelf components, with custom-designed digital interfaces to allow odor dynamics to be probed more precisely when they encounter the heated electrodes making up the sensing surface. “Odorants flow around us in the air and some of them react with that hot surface,” says Schmuker. “How they react with it depends on their own chemical composition—they might oxidize or reduce the surface—but a chemical reaction takes place.”

    As a result, the resistance of the metal oxide electrodes changes, which can be measured. The amount and dynamics of this change are different for different combinations of odorants and sensor materials. The e-nose uses two pairs of four distinct sensors to build a pattern of resistance response curves. Resistance response curves illustrate how a sensor’s resistance changes over time in response to a stimulus, such as an odor. These curves capture the sensor’s conversion of a physical interaction—like an odor molecule binding to its surface—into an electrical signal. Because each odor generates a distinct response pattern, analyzing how the electrical signal evolves over time enables the identification of specific odors.


    “We discovered that rapidly switching the temperature back and forth between 150°C and 400°C about 20 times per second produced distinctive data patterns that made it easier to identify specific odors,” says Nik Dennler, a dual Ph.D. student at the University of Hertfordshire and Western Sydney University. By building up a picture of how the odorant reacts at these different temperatures, the response curves can be plugged into a machine learning algorithm to spot the patterns that relate to a specific odor.
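
    In machine-learning terms, each heating cycle yields a short multichannel time series, and identifying the odor becomes a classification problem. The Python sketch below illustrates that general framing with synthetic data and a generic classifier; it is not the researchers’ pipeline, and the array sizes and model choice are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_cycles, n_sensors, samples_per_cycle = 500, 8, 100            # assumed dimensions
    X = rng.normal(size=(n_cycles, n_sensors, samples_per_cycle))   # stand-in resistance curves
    y = rng.integers(0, 5, size=n_cycles)                           # five odor labels, as in the study

    def features(curves):
        """Flatten each sensor's response curve and its time derivative into
        one feature vector (a simple stand-in for the real preprocessing)."""
        c = np.asarray(curves)
        return np.concatenate([c.ravel(), np.diff(c, axis=1).ravel()])

    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(np.stack([features(c) for c in X]), y)
    predicted_odor = clf.predict([features(X[0])])
    ```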

    While the e-nose does not “sniff” like a regular nose, the periodic heating cycle for detecting odors is reminiscent of the periodic sniffing that mammals perform.

    Using the E-nose in Disaster Management

    A discovery in 2021 by researchers at the Francis Crick Institute in London and University College London showed that mice can discriminate odor fluctuations up to 40 times per second—contrary to a long-held belief that mammals require one or several sniffs to obtain any meaningful odor information.

    In the new work—conducted in part by the same researchers behind the 2021 discovery—the researchers found that the e-nose can detect odors as quickly as a mouse can, with the ability to resolve and decode odor fluctuations up to 60 times per second. The e-nose can currently differentiate among five odors, whether presented individually or in a mixture of two, and it could recognize additional odors with further training.

    “We found it could accurately identify odors in just 50 milliseconds and decode patterns between odors switching up to 40 times per second,” says Dennler. For comparison, recent research in humans suggests the threshold for distinguishing between two odors binding to the same olfactory receptors is about 60 ms.

    The small scale and moderate power requirements could enable the e-nose to be deployed in robots used to pinpoint an odor’s source. “Other fast technologies exist, but are usually very bulky and you would need a large battery to power them,” says Schmuker. “We can put our device on a small robot and evaluate its use in applications that you use a sniffer dog for today.”

    “As soon as you’re driving, walking, or flying around, you need to be really fast at sensing,” says Dennler. “With our e-nose, we can capture odor information at high speeds. Primary applications could involve odor-guided navigation tasks, or, more generally, collecting odor information while on the move.”

    The researchers are looking at using these small e-nose robots in disaster management applications, including locating wildfires and gas leaks, and finding people buried in rubble after an earthquake.

  • Packaging and Robots
    by Dexter Johnson on 19. November 2024. at 20:00



    This is a sponsored article brought to you by Amazon.

    The journey of a package from the moment a customer clicks “buy” to the moment it arrives at their doorstep is one of the most complex and finely tuned processes in the world of e-commerce. At Amazon, this journey is constantly being optimized, not only for speed and efficiency, but also for sustainability. This optimization is driven by the integration of cutting-edge technologies like artificial intelligence (AI), machine learning (ML), and robotics, which allow Amazon to streamline its operations while working towards minimizing unnecessary packaging.

    The use of AI and ML in logistics and packaging is playing an increasingly vital role in transforming the way packages are handled across Amazon’s vast global network. In two interviews — one with Clay Flannigan, who leads manipulation robotics programs at Amazon, and another with Callahan Jacobs, an owner of the Sustainable Packaging team’s technology products — we gain insights into how Amazon is using AI, ML, and automation to push the boundaries of what’s possible in the world of logistics, while also making significant strides in sustainability-focused packaging.

    The Power of AI and Machine Learning in Robotics

    One of the cornerstones of Amazon’s transformation is the integration of AI and ML into its robotics systems. Flannigan’s role within the Fulfillment Technologies Robotics (FTR) team, Amazon Robotics, centers around manipulation robotics — machines that handle the individual items customers order on amazon.com. These robots, in collaboration with human employees, are responsible for picking, sorting, and packing millions of products every day. It’s an enormously complex task, given the vast diversity of items in Amazon’s inventory.

    “Amazon is uniquely positioned to lead in AI and ML because of our vast data,” Flannigan explained. “We use this data to train models that enable our robots to perform highly complex tasks, like picking and packing an incredibly diverse range of products. These systems help Amazon solve logistics challenges that simply wouldn’t be possible at this scale without the deep integration of AI.”

    At the core of Amazon’s robotic systems is machine learning, which allows the machines to “learn” from their environment and improve their performance over time. For example, AI-powered computer vision systems enable robots to “see” the products they are handling, allowing them to distinguish between fragile items and sturdier ones, or between products of different sizes and shapes. These systems are trained using expansive amounts of data, which Amazon can leverage due to its immense scale.

    One particularly important application of machine learning is in the manipulation of unstructured environments. Traditional robotics have been used in industries where the environment is highly structured and predictable. But Amazon’s warehouses are anything but predictable. “In other industries, you’re often building the same product over and over. At Amazon, we have to handle an almost infinite variety of products — everything from books to coffee makers to fragile collectibles,” Flannigan said.

    “There are so many opportunities to push the boundaries of what AI and robotics can do, and Amazon is at the forefront of that change.” —Clay Flannigan, Amazon

    In these unstructured environments, robots need to be adaptable. They rely on AI and ML models to understand their surroundings and make decisions in real-time. For example, if a robot is tasked with picking a coffee mug from a bin full of diverse items, it needs to use computer vision to identify the mug, understand how to grip it without breaking it, and move it to the correct packaging station. These tasks may seem simple, but they require advanced ML algorithms and extensive data to perform them reliably at Amazon’s scale.

    Sustainability and Packaging: A Technology-Driven Approach

    While robotics and automation are central to improving efficiency in Amazon’s fulfillment centers, the company’s commitment to sustainability is equally important. Callahan Jacobs, product manager on FTR’s Mechatronics & Sustainable Packaging (MSP) team, is focused on preventing waste and aims to help reduce the negative impacts of packaging materials. The company has made significant strides in this area, leveraging technology to improve the entire packaging experience.

    A packaging machine. Photo: Amazon

    “When I started, our packaging processes were predominantly manual,” Jacobs explained. “But we’ve moved toward a much more automated system, and now we use machines that custom-fit packaging to items. This has drastically reduced the amount of excess material we use, especially in terms of minimizing the cube size for each package, and frees up our teams to focus on harder problems like how to make packaging out of more conscientious materials without sacrificing quality.”

    Since 2015, Amazon has decreased its average per-shipment packaging weight by 43 percent, which represents more than 3 million metric tons of packaging materials avoided. This “size-to-fit” packaging technology is one of Amazon’s most significant innovations in packaging. By using automated machines that cut and fold boxes to fit the dimensions of the items being shipped, Amazon is able to reduce the amount of air and unused space inside packages. This not only reduces the amount of material used but also optimizes the use of space in trucks, planes, and delivery vehicles.

    “By fitting packages as closely as possible to the items they contain, we’re helping to reduce both waste and shipping inefficiencies,” Jacobs explained.

    Advanced Packaging Technology: The Role of Machine Learning

    AI and ML play a critical role in Amazon’s efforts to optimize packaging. Amazon’s packaging technology doesn’t just aim to prevent waste but also ensures that items are properly protected during their journey through the fulfillment network. To achieve this balance, the company relies on advanced machine learning models that evaluate each item and determine the optimal packaging solution based on various factors, including the item’s fragility, size, and the route it needs to travel.

    “We’ve moved beyond simply asking whether an item can go in a bag or a box,” said Jacobs. “Now, our AI and ML models look at each item and say, ‘What are the attributes of this product? Is it fragile? Is it a liquid? Does it have its own packaging, or does it need extra protection?’ By gathering this information, we can make smarter decisions about packaging, helping to result in less waste or better protection for the items.”

    “By fitting packages as closely as possible to the items they contain, we’re helping to reduce both waste and shipping inefficiencies.” —Callahan Jacobs, Amazon

    This process begins as soon as a product enters Amazon’s inventory. Machine Learning models analyze each product’s data to determine key attributes. These models may use computer vision to assess the item’s packaging or natural language processing to analyze product descriptions and customer feedback. Once the product’s attributes have been determined, the system decides which type of packaging is most suitable, helping to prevent waste while ensuring the item’s safe arrival.

    “Machine learning allows us to make these decisions dynamically,” Jacobs added. “For example, an item like a t-shirt doesn’t need to be packed in a box—it can go in a paper bag. But a fragile glass item might need additional protection. By using AI and ML, we can make these decisions at scale, ensuring that we’re always prioritizing the option that benefits the customer and the planet.”
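
    As a toy illustration of that kind of decision (with hypothetical attributes and rules, not Amazon’s actual logic), the mapping from item attributes to packaging might be sketched like this:

    ```python
    def choose_packaging(item):
        """Toy illustration only (not Amazon's actual logic): map predicted
        item attributes to the lightest packaging that still protects it."""
        if item.get("fragile") or item.get("liquid"):
            return "padded box"
        if item.get("has_own_packaging"):
            return "ships in own container"
        if item.get("soft_good"):                  # e.g., a t-shirt
            return "paper bag"
        return "right-sized box"

    print(choose_packaging({"soft_good": True}))   # -> paper bag
    ```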

    Dynamic Decision-Making With Real-Time Data

    Amazon’s use of real-time data is a game-changer in its packaging operations. By continuously collecting and analyzing data from its fulfillment centers, Amazon can rapidly adjust its packaging strategies, optimizing for efficiency at scale. This dynamic approach allows Amazon to respond to changing conditions, such as new packaging materials, changes in shipping routes, or feedback from customers.

    “A huge part of what we do is continuously improving the process based on what we learn,” Jacobs explained. “For example, if we find that a certain type of packaging isn’t satisfactory, we can quickly adjust our criteria and implement changes across our delivery network. This real-time feedback loop is critical in making our system more resilient and keeping it aligned with our team’s sustainability goals.”

    This continuous learning process is key to Amazon’s success. The company’s AI and ML models are constantly being updated with new data, allowing them to become more accurate and effective over time. For example, if a new type of packaging material is introduced, the models can quickly assess its effectiveness and make adjustments as needed.

    Jacobs also emphasized the role of feedback in this process. “We’re always monitoring the performance of our packaging,” she said. “If we receive feedback from customers that an item arrived damaged or that there was too much packaging, we can use that information to improve model outputs, which ultimately helps us continually reduce waste.”

    Robotics in Action: The Role of Gripping Technology and Automation

    One of the key innovations in Amazon’s robotic systems is the development of advanced gripping technology. As Flannigan explained, the “secret sauce” of Amazon’s robotic systems is not just in the machines themselves but in the gripping tools they use. These tools are designed to handle the immense variety of products Amazon processes every day, from small, delicate items to large, bulky packages.

    A robot. Photo: Amazon

    “Our robots use a combination of sensors, AI, and custom-built grippers to handle different types of products,” Flannigan said. “For example, we’ve developed specialized grippers that can handle fragile items like glassware without damaging them. These grippers are powered by AI and machine learning, which allow them to plan their movements based on the item they’re picking up.”

    The robotic arms in Amazon’s fulfillment centers are equipped with a range of sensors that allow them to “see” and “feel” the items they’re handling. These sensors provide real-time data to the machine learning models, which then make decisions about how to handle the item. For example, if a robot is picking up a fragile item, it will use a gentler strategy, whereas it might optimize for speed when handling a sturdier item.

    Flannigan also noted that the use of robotics has significantly improved the safety and efficiency of Amazon’s operations. By automating many of the repetitive and physically demanding tasks in fulfillment centers, Amazon has been able to reduce the risk of injuries among its employees while also increasing the speed and accuracy of its operations. It also provides the opportunity to focus on upskilling. “There’s always something new to learn,” Flannigan said. “There’s no shortage of training and advancement options.”

    Continuous Learning and Innovation: Amazon’s Culture of Growth

    Both Flannigan and Jacobs emphasized that Amazon’s success in implementing these technologies is not just due to the tools themselves but also the culture of innovation that drives the company. Amazon’s engineers and technologists are encouraged to constantly push the boundaries of what’s possible, experimenting with new solutions and improving existing systems.

    “Amazon is a place where engineers thrive because we’re always encouraged to innovate,” Flannigan said. “The problems we’re solving here are incredibly complex, and Amazon gives us the resources and freedom to tackle them in creative ways. That’s what makes Amazon such an exciting place to work.”

    Jacobs echoed this sentiment, adding that the company’s commitment to sustainability is one of the things that makes it an attractive place for engineers. “Every day, I learn something new, and I get to work on solutions that have a real impact at a global scale. That’s what keeps me excited about my work. That’s hard to find anywhere else.”

    The Future of AI, Robotics, and Innovation at Amazon

    Looking ahead, Amazon’s vision for the future is clear: to continue innovating in the fields of AI, ML, and robotics for maximum customer satisfaction. The company is investing heavily in new technologies that are helping to progress its sustainability initiatives while improving the efficiency of its operations.

    “We’re just getting started,” Flannigan said. “There are so many opportunities to push the boundaries of what AI and robotics can do, and Amazon is at the forefront of that change. The work we do here will have implications not just for e-commerce but for the broader world of automation and AI.”

    Jacobs is equally optimistic about the future of the Sustainable Packaging team. “We’re constantly working on new materials and new ways to reduce waste,” she said. “The next few years are going to be incredibly exciting as we continue to refine our packaging innovations, making them more scalable without sacrificing quality.”

    As Amazon continues to evolve, the integration of AI, ML, and robotics will be key to achieving its ambitious goals. By combining cutting-edge technology with a deep commitment to sustainability, Amazon is setting a new standard for how e-commerce companies can operate in the 21st century. For engineers, technologists, and environmental advocates, Amazon offers an unparalleled opportunity to work on some of the most challenging and impactful problems of our time.

    Learn more about becoming part of Amazon’s Team.

  • New Fastest Supercomputer Will Simulate Nuke Testing
    by Dina Genkina on 19. November 2024. at 19:00



    In 1996, the United States and other nuclear powers signed the Comprehensive Nuclear-Test-Ban Treaty, which prohibits nuclear explosive testing. The National Nuclear Security Administration (NNSA), a successor to the Manhattan Project, now tests nukes only in simulation. To that end, the NNSA yesterday unveiled the world’s fastest supercomputer to aid in its mission to maintain a safe, secure, and reliable nuclear stockpile.

    El Capitan was announced yesterday at the SC Conference for supercomputing in Atlanta, Georgia, and it debuted at #1 in the newest Top500 list, a twice-yearly ranking of the world’s highest performing supercomputers. El Capitan, housed at Lawrence Livermore National Laboratory in Livermore, Calif., can perform over 2700 quadrillion operations per second at its peak. The previous record holder, Frontier, could do just over 2000 quadrillion peak operations per second.

    Alongside El Capitan, the NNSA announced its unclassified cousin, Tuolumne, which debuted at #10 on the Top500 list and can perform a peak of 288 quadrillion operations per second.

    The NNSA—which oversees Lawrence Livermore as well as Los Alamos National Laboratory and Sandia National Laboratories—plans to use El Capitan to “model and predict nuclear weapon performance, aging effects, and safety,” says Corey Hinderstein, acting principal deputy administrator at NNSA. Hinderstein says the 3D modeling of multiple physics processes will be significantly enhanced by the new supercomputer’s speed. The team also plans to use El Capitan to aid in its inertial confinement fusion efforts, as well as to train artificial intelligence in support of both of those efforts.

    Planning for El Capitan began in 2018, and construction has been ongoing for the past four years. The system is built by Hewlett Packard Enterprise, which has built all of the current top 3 supercomputers on the Top500 list. El Capitan uses AMD’s MI300a chip, dubbed an accelerated processing unit, which combines a CPU and GPU in one package. In total, the system boasts 44,544 MI300As, connected together by HPE’s Slingshot interconnects.
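
    A back-of-the-envelope calculation from those figures gives the rough peak contribution of each APU, ignoring interconnect and other overheads:

    ```python
    # Rough arithmetic from the figures in the article (a simplification, not
    # an official AMD or LLNL specification).
    peak_ops_per_second = 2.7e18   # just over 2,700 quadrillion operations per second
    num_apus = 44_544
    print(f"~{peak_ops_per_second / num_apus / 1e12:.0f} teraflops per MI300A at peak")
    # prints "~61 teraflops per MI300A at peak"
    ```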

    El Capitan uses 44,544 of AMD’s MI300A chips, which combine a CPU and GPU in one package. Photo: Garry McLeod/Lawrence Livermore National Laboratory

    Scientists are already at work porting their code over to the new machine, and they are enthusiastic about its promise. “We’re seeing significant speed ups compared to running on old chips versus this new thing,” says Luc Peterson, computational physicist at Lawrence Livermore National Laboratory. “We are at the point where our time to science is shrinking. We can do things in a few days that would have taken a few months. So we’re pretty excited about the applications.”

    Yet the appetite for ever larger supercomputers lives on. “We are already working on the next [high performance computing] acquisition,” says Thuc Hoang, director of the advanced simulation and computing program at NNSA.

  • Smartwatch Speakers Slim Down With Silicon
    by Gwendolyn Rak on 19. November 2024. at 14:00



    A year after introducing the first in-ear, silicon-based earbuds, xMEMS has unveiled a prototype of the latest version of its microspeakers—this time, for use as an open-air speaker, which is a more challenging task.

    The Silicon Valley-based startup’s previous microspeakers brought microelectromechanical systems (MEMS) to wireless earbuds and boasted excellent sound quality. By modulating ultrasound signals, the speakers create high-fidelity sound in a light and compact device. The clarity of sound that the new silicon-and-piezoelectric chip, called Sycamore, produces is more like that of a smartphone speaker—decent, but far from the sound quality of in-ear alternatives. And like a smartphone speaker, Sycamore is intended for open-air audio produced by devices near or on the body. In particular, the speaker could be used in various wearable devices, like smart watches, XR glasses, or open earbuds, which clip around the ear instead of nestling within it.

    For these applications, the advantage of using MEMS drivers instead of a conventional speaker is less about sound quality, and more about size. The microspeaker is about 1 millimeter thick, one-third the thickness of a coil driver, and removing the magnetic coils of conventional speakers brings its weight down by roughly 70 percent to 150 milligrams. Other speakers also require empty space behind the diaphragm, called back volume. The MEMS-based speakers significantly reduce the back volume needed.

    For wearables, every millimeter and milligram matters; a heavy or bulky design could deter users, says Mike Housholder, xMEMS’ vice president of marketing and business development. That’s why the microspeakers are “perfect for smart watches.” Users seeking excellent audio quality would likely opt for in-ear buds or over-ear headphones. The thin, open-air microspeakers instead help deliver a sleek, “fashion-forward” product, Housholder says.

    Sound From Ultrasound

    Sycamore uses the same “sound from ultrasound” technology introduced in Cypress, xMEMS’ in-ear microspeaker. This tech produces ultrasound by vibrating robust silicon flaps coated in piezoelectric material. It then modulates the ultrasound to generate a full range of audible frequencies.
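
    The general principle can be illustrated with a few lines of signal processing: amplitude-modulate an ultrasonic carrier with the desired audio, then recover the audio through envelope detection. The sketch below is a generic textbook demonstration, not xMEMS’s proprietary modulation scheme, and the signal parameters are arbitrary.

    ```python
    import numpy as np

    # Generic "sound from ultrasound" demonstration, not xMEMS's actual scheme.
    fs = 192_000                                   # sample rate high enough to represent ultrasound
    t = np.arange(0, 0.01, 1 / fs)
    audio = np.sin(2 * np.pi * 440 * t)            # the audible tone we want to reproduce
    carrier = np.sin(2 * np.pi * 80_000 * t)       # ultrasonic carrier at 80 kHz

    modulated = (1 + 0.5 * audio) * carrier        # amplitude-modulate the carrier
    rectified = np.abs(modulated)                  # envelope detection, step 1: rectify
    kernel = np.ones(200) / 200                    # step 2: crude ~1 ms moving-average low-pass
    recovered = np.convolve(rectified, kernel, mode="same") - rectified.mean()
    # `recovered` now approximates the original 440 Hz tone (scaled, DC removed).
    ```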

    What’s new with Sycamore is a more efficient chip design. This enhanced efficiency means the speakers can deliver more decibels, making open air listening possible. The speaker also performs well in the bass frequency range, historically a weak spot for MEMS speakers. (In the first commercial headphones to use xMEMS technology, the silicon microspeaker was used only for the high-frequency “tweeter”; it was paired with a conventional dynamic driver “woofer” to produce mid-range and bass audio.)

    In the company’s tests of its prototype speaker, Sycamore emitted similar or louder audio compared to the speaker on an Apple Watch Series 8 across most frequencies. Compared to Bose open earbuds, it lagged in mid-range but had stronger bass and treble frequencies.

    The new speaker will be made with the same fabrication process as the earbud chip, Cypress. xMEMS will continue to partner with TSMC to manufacture the speakers, though it is now also working with Bosch, a leading MEMS foundry.

    Housholder says that by further improving the efficiency of the cell design, MEMS speakers may become loud enough for other applications, like phone or laptop speakers. But there are fundamental size limitations for the microspeakers, which are manufactured on a 300 millimeter wafer. Combining multiple chips can also bring up the volume, but it’s unlikely that your next loudspeakers will be made of MEMS.

    xMEMS plans to begin sampling Sycamore in early 2025, with mass production expected in January 2026. In the meantime, the company’s full-range in-ear microspeakers will begin mass production in June 2025, followed by its all-silicon fan-on-a-chip in October of the same year.

  • Analog AI Startup Aims to Lower Gen AI's Power Needs
    by Samuel K. Moore on 19. November 2024. at 13:00



    Machine learning chips that use analog circuits instead of digital ones have long promised huge energy savings. But in practice they’ve mostly delivered modest savings, and only for modest-sized neural networks. Silicon Valley startup Sageance says it has the technology to bring the promised power savings to tasks suited for massive generative AI models. The startup claims that its systems will be able to run the large language model Llama 2-70B at one-tenth the power of an Nvidia H100 GPU-based system, at one-twentieth the cost and in one-twentieth the space.

    “My vision was to create a technology that was very differentiated from what was being done for AI,” says Sageance CEO and founder Vishal Sarin. Even back when the company was founded in 2018, he “realized power consumption would be a key impediment to the mass adoption of AI…. The problem has become many, many orders of magnitude worse as generative AI has caused the models to balloon in size.”

    The core power-savings prowess for analog AI comes from two fundamental advantages: It doesn’t have to move data around and it uses some basic physics to do machine learning’s most important math.

    That math problem is multiplying vectors and then adding up the result, called multiply and accumulate. Early on, engineers realized that two foundational rules of electrical engineering did the same thing, more or less instantly. Ohm’s Law—voltage multiplied by conductance equals current—does the multiplication if you use the neural network’s “weight” parameters as the conductances. Kirchhoff’s Current Law—the sum of the currents entering and exiting a point is zero—means you can easily add up all those multiplications just by connecting them to the same wire. And finally, in analog AI, the neural network parameters don’t need to be moved from memory to the computing circuits—usually a bigger energy cost than computing itself—because they are already embedded within the computing circuits.
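
    Numerically, what such a circuit computes is just a dot product. The snippet below shows the correspondence with made-up values; it models the idealized physics, not any particular chip.

    ```python
    import numpy as np

    # Ohm's law: current = voltage x conductance, so a column of flash cells whose
    # conductances encode a weight vector multiplies an input voltage vector "for
    # free." Kirchhoff's current law then sums those currents on the shared wire.
    # Numerically, the whole multiply-and-accumulate is just a dot product.
    voltages = np.array([0.2, 0.5, 0.1])          # inputs, in volts (made-up values)
    conductances = np.array([1e-6, 3e-6, 2e-6])   # weights, in siemens (made-up values)
    column_current = np.dot(voltages, conductances)
    print(column_current)                         # 1.9e-06 amps on the shared wire
    ```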

    Sageance uses flash memory cells as the conductance values. The kind of flash cell typically used in data storage is a single transistor that can hold 3 or 4 bits, but Sageance has developed algorithms that let cells embedded in their chips hold 8 bits, which is the key level of precision for LLMs and other so-called transformer models. Storing an 8-bit number in a single transistor instead of the 48 transistors it would take in a typical digital memory cell is an important cost, area, and energy savings, says Sarin, who has been working on storing multiple bits in flash for 30 years.

    Digital data is converted to analog voltages [left]. These are effectively multiplied by flash memory cells [blue], summed, and converted back to digital data [bottom]. Illustration: Analog Inference

    Adding to the power savings is that the flash cells are operated in a state called “deep subthreshold.” That is, they are working in a state where they are barely on at all, producing very little current. That wouldn’t do in a digital circuit, because it would slow computation to a crawl. But because the analog computation is done all at once, it doesn’t hinder the speed.

    Analog AI Issues

    If all this sounds vaguely familiar, it should. Back in 2018 a trio of startups went after a version of flash-based analog AI. Syntiant eventually abandoned the analog approach for a digital scheme that’s put six chips in mass production so far. Mythic struggled but stuck with it, as has Anaflash. Others, particularly IBM Research, have developed chips that rely on nonvolatile memories other than flash, such as phase-change memory or resistive RAM.

    Generally, analog AI has struggled to meet its potential, particularly when scaled up to a size that might be useful in datacenters. Among its main difficulties is the natural variation in the conductance cells: The same number stored in two different cells can result in two different conductances. Worse still, these conductances can drift over time and shift with temperature. This noise drowns out the signal representing the result, and the noise can be compounded stage after stage through the many layers of a deep neural network.
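
    A toy simulation makes the compounding effect concrete. The network, the noise model, and the numbers below are purely illustrative, not measurements of any analog chip:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(64)
    weights = [rng.standard_normal((64, 64)) / 8 for _ in range(12)]   # 12-layer toy network

    def forward(x, sigma):
        """Run the toy network with multiplicative conductance error of relative
        size sigma applied to every weight (an illustrative noise model only)."""
        y = x
        for w in weights:
            noisy_w = w * (1 + sigma * rng.standard_normal(w.shape))
            y = np.tanh(noisy_w @ y)
        return y

    clean = forward(x, 0.0)
    for sigma in (0.01, 0.03, 0.05):
        err = np.linalg.norm(forward(x, sigma) - clean) / np.linalg.norm(clean)
        print(f"{sigma:.0%} cell error -> {err:.1%} output error")
    ```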

    Sageance’s solution, Sarin explains, is a set of reference cells on the chip and a proprietary algorithm that uses them to calibrate the other cells and track temperature-related changes.

    Another source of frustration for those developing analog AI has been the need to digitize the result of the multiply and accumulate process in order to deliver it to the next layer of the neural network where it must then be turned back into an analog voltage signal. Each of those steps requires analog-to-digital and digital-to-analog converters, which take up area on the chip and soak up power.

    According to Sarin, Sageance has developed low-power versions of both circuits. The power demands of the digital-to-analog converter are helped by the fact that the circuit needs to deliver a very narrow range of voltages in order to operate the flash memory in deep subthreshold mode.

    Systems and What’s Next

    Sageance’s first product, to launch in 2025, will be geared toward vision systems, which are a considerably lighter lift than server-based LLMs. “That is a leapfrog product for us, to be followed very quickly [by] generative AI,” says Sarin.

    Future systems from Sageance will be made up of 3D-stacked analog chips linked to a processor and memory through an interposer that follows the Universal Chiplet Interconnect Express (UCIe) standard. Illustration: Analog Inference

    The generative AI product would be scaled up from the vision chip mainly by vertically stacking analog AI chiplets atop a communications die. These stacks would be linked to a CPU die and to high-bandwidth memory DRAM in a single package called Delphi.

    In simulations, a system made up of Delphis would run Llama 2-70B at 666,000 tokens per second while consuming 59 kilowatts, versus 624 kilowatts for an Nvidia H100-based system, Sageance claims.

  • Shaping Africa’s Future With Microelectronics
    by Willie D. Jones on 18. November 2024. at 19:00



    Timothy Ayelagbe dreams of using technology to advance health care and make other improvements across Africa.

    Ayelagbe calls microelectronics his “joy and passion” and says he wants to use the expertise he’s gaining in the field to help others.

    “My ultimate goal,” he says, “is to uplift my fellow Africans.”

    Timothy Ayelagbe


    Volunteer Roles:

    IEEE Youth Endeavors for Social Innovation Using Sustainable Technology ambassador, 2025 vice president of the IEEE Robotics and Automation Society student branch chapter

    University:

    Obafemi Awolowo University in Ile-Ife, Nigeria

    Major:

    Electronics and electrical engineering

    Minor:

    Microelectronics

    He is pursuing an electronics and electrical engineering degree, specializing in microelectronics, at Obafemi Awolowo University (OAU), in Ile-Ife, Nigeria. He says he believes learning how to employ field-programmable gate arrays (FPGAs) is the path to mastering the hardware description languages that will let him develop affordable, sustainable medical electronics.

    He says he hopes to apply his growing technical expertise and leadership abilities to address the continent’s challenges in health care, infrastructure, and natural resources management.

    Ayelagbe is passionate about mentoring aspiring African engineers as well. Early this year, he became an IEEE Youth Endeavors for Social Innovation Using Sustainable Technology (YESIST) ambassador. The YESIST 12 program provides students and young professionals with a platform to showcase ideas for addressing humanitarian and social issues affecting their communities.

    As an ambassador, Ayelagbe has organized online webinar sessions for his student branch while also mentoring pre-university students through activities that encourage service-oriented engineering practice.

    A technologist right out of the gate

    Born in Lagos, Nigeria, Ayelagbe was captivated by technology from a young age. As a child, he would dismantle and reassemble his toys to learn how they worked.

    His mother, a trader, and his father, then a quality control officer in the metal processing industry, nurtured his curiosity. While the conventional path to upward mobility in Nigeria might have led him to becoming a doctor or nurse, his parents supported his pursuit of technology.

    As it turns out, he is poised to advance the state of health care in Nigeria and around the globe.

    For now, he is focused on his undergraduate studies and on gaining practical experience. He recently completed a six-week student work experience program as part of his university’s engineering curriculum. He and fellow OAU students developed an angular speed measurement system based on Hall effect sensors, which register changes in a magnetic field as the field and the sensor’s Hall element move relative to each other. Changes in the voltage and current running through the Hall element can be used to calculate the strength of the magnetic field at different locations or to track changes in its position. One common use of Hall effect sensors is to monitor wheel speed in a vehicle’s antilock braking system.

    “I want to apply the things I’m learning to make Africa great.”

    Like commercialized versions, the students’ device was designed to withstand harsh weather and unfavorable road conditions. But theirs is certain to have a significantly lower price point than the magnetic devices it emulates, while producing more accurate readings than traditional mechanical versions, Ayelagbe says.

    “We did some data processing and manipulation via Arduino programming using an ATmega microcontroller and a liquid crystal display to show the angular speed and frequency of rotation,” he says.
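
    The core calculation is straightforward: time the interval between Hall-effect pulses and convert it to a rotation rate. The Python sketch below shows that arithmetic; the students’ device performs it in firmware on the ATmega, and the pulse count per revolution here is a made-up example.

    ```python
    import math

    PULSES_PER_REV = 4        # hypothetical: four magnets pass the sensor per revolution

    def angular_speed(t_prev_pulse, t_curr_pulse):
        """Convert the time between two Hall-effect pulses (in seconds) into
        rotation frequency (Hz) and angular speed (rad/s)."""
        dt = t_curr_pulse - t_prev_pulse
        freq_hz = 1.0 / (dt * PULSES_PER_REV)
        return freq_hz, 2 * math.pi * freq_hz

    freq, omega = angular_speed(0.000, 0.025)      # pulses arriving 25 ms apart
    print(f"{freq:.1f} Hz, {omega:.1f} rad/s")     # -> 10.0 Hz, 62.8 rad/s
    ```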

    Because the measurement system has potential applications in automotive and other industries, Ayelagbe’s OAU team is seeking partnerships with other researchers to further develop and commercialize it. The team also hopes to publish its findings in an IEEE journal.

    “In the future, I hope to work with semiconductor giant industries like TSMC, Nvidia, Intel, and Qualcomm,” he says.

    Volunteering provides valuable experience

    Despite Ayelagbe’s academic success, he has faced challenges in finding semiconductor internships, citing some companies’ geographical inaccessibility to African students. Instead, he says, he has been gaining valuable experience through volunteering.

    He serves as a social media manager for the Paris-based Human Development Research Initiative (HDRI), an organization that works to inspire young people to help achieve the United Nations’ 17 Sustainable Development Goals, known collectively as Agenda 2030. He has been promoting environmental and climate action through LinkedIn posts.

    Ayelagbe is an active IEEE volunteer and is involved in his student branch. He is the incoming vice president of the branch’s IEEE Robotics and Automation Society chapter and says he would love to take on more roles in the course of his leadership journey. He organizes webinars, meetings, and other initiatives, including connecting fellow student members with engineering professionals for mentorship.

    Through his work with HDRI and IEEE, he has the opportunity to network with students, professionals, and industry experts. The connections, he hopes, can help him achieve his ambitions.

    African nations “need engineers in the leadership sector,” he says, “and I want to apply the things I’m learning to make Africa great.”

  • Predictions From IEEE’s 2024 Technology Megatrends Report
    by Kathy Pretz on 16. November 2024. at 14:00



    It’s time to start preparing your organization and employees for the effects of artificial general intelligence, sustainability, and digital transformation. According to IEEE’s 2024 Technology Megatrends report, the three technologies will change how companies, governments, and universities operate and will affect what new skills employees need.

    A megatrend, which integrates multiple tendencies that evolve over two decades or so, is expected to have a substantial effect on society, technology, ecology, economics, and more.

    More than 50 experts from Asia, Australia, Europe, Latin America, the Middle East, and the United States provided their perspectives for the report. They represent all 47 of IEEE’s fields of interest and come from academia, the public sector, and the private sector. The report includes insights and opportunities about each megatrend and how industries could benefit.

    The experts compared their insights to technology predictions from Google Trends; the IEEE Computer Society and the IEEE Xplore Digital Library; and the U.S. Patent and Trademark Office.

    “We made predictions about technology and megatrends and correlated them with other general megatrends such as economical, ecological, and sociopolitical. They’re all intertwined,” says IEEE Fellow Dejan Milojicic, a member of the IEEE Future Directions Committee and vice president at Hewlett Packard Labs in Milpitas, Calif. He is also a Hewlett Packard Enterprise Fellow.

    The benefits and drawbacks of artificial general intelligence

    Artificial general intelligence (AGI) includes ChatGPT, autonomous robots, wearable and implantable technologies, and digital twins.

    Education, health care, and manufacturing are some of the sectors that can benefit most from AGI, the report says.

    For academia, the technology can help expand remote learning, potentially replacing physical classrooms and leading to more personalized education for students.

    In health care, the technology could lead to personalized medicine, tailored patient treatment plans, and faster drug discovery. AGI also could help reduce costs and increase efficiencies, the report says.

    Manufacturing can use the technology to improve quality control, reduce downtime, and increase production. The time to market could be significantly shortened, the report says.

    Today’s AI systems are specialized and narrow, so to reap the benefits, experts say, the widespread adoption of curated datasets, advances in AI hardware, and new algorithms will be needed. It will require interdisciplinary collaborations across computer science, engineering, ethics, and philosophy, the report says.

    The report points out drawbacks with AGI, including a lack of data privacy, ethical challenges, and misuse of content.

    Another concern is job displacement and the need for employees to be retrained. AGI requires more AI programmers and data scientists but fewer support staff and system administrators, the report notes.

    Adopting digital technologies

    Digital transformation tech includes autonomous technologies, ubiquitous connectivity, and smart environments.

    The areas that would benefit most from expanding their use of computers and other electronic devices, the experts say, are construction, education, health care, and manufacturing.

    The construction industry could use building information modeling (BIM), which generates digital versions of office buildings, bridges, and other structures to improve safety and efficiency.

    Educational institutions already use electronics such as digital whiteboards, laptops, tablets, and smartphones to enhance the learning experience. But the experts point out that schools aren’t using the tools yet for continuing education programs needed to train workers on how to use new tools and technology.

    “Most education processes are the same now as they were in the last century, at a time when we need to change to lifelong learning,” the experts say.

    “We made predictions about technology and megatrends, but we correlated them with other general megatrends such as economical, ecological, and sociopolitical. They’re all intertwined.” —Dejan Milojicic

    The report says the digital transformation will need more employees to supervise automation, as well as those with experience in analytics, but fewer operators and workers responsible for maintaining old systems.

    The health field has started converting to electronic records, but more could be done, the report says, such as using computer-aided design to develop drugs and prosthetics and using BIM tools to design hospitals.

    Manufacturing could benefit by using computer-aided-design data to create digital representations of product prototypes.

    There are some concerns with digital transformation, the experts acknowledge. There aren’t enough chips and batteries to build all the devices and systems needed, for example, and not every organization or government can afford the digital tools. Also, people in underdeveloped areas who lack connectivity would not have access to them, leading to a widening of the digital divide. Other people might resist because of privacy, religious, or lifestyle concerns, the experts note.

    Addressing the climate crisis

    Technology can help engineer social and environmental change. Sustainability applications include clean renewable energy, decarbonization, and energy storage.

    Nearly half of organizations around the world have a company-wide sustainability strategy, but only 18 percent have well-defined goals and a timetable for how to implement them, the report says. About half of companies lack the tools or expertise to deploy sustainable solutions. Meanwhile, information and communication technologies’ energy consumption is growing, using about 10 percent of worldwide electricity.

    The experts predict that transitioning to more sustainable information and communication technologies will lead to entirely new businesses. Blockchain technology could be used to optimize surplus energy produced by microgrids, for example, ultimately leading to more jobs, less-expensive energy, and energy security. Early leaders in sustainability are already applying digital technologies such as AI, big data, blockchain, computer vision, and the Internet of Things to help operationalize sustainability.

    Employees familiar with those technologies will be needed, the report predicts, adding that engineers who can design systems that are more energy efficient and environmentally friendly will be in demand.

    Some of the challenges that could hinder such efforts include a lack of regulations, an absence of incentives to encourage people to become eco-friendly, and the high cost of sustainable technologies.

    How organizations can work together

    All three megatrends should be considered synergistically, the experts say. For example, AGI techniques can be applied to sustainable and digitally transformed technologies. Sustainability is a key aspect of technology, including AGI. And digital transformation needs to be continually updated with AGI and sustainability features, the report says.

    The report included several recommendations for how academia, governments, industries, and professional organizations can work together to advance the three technologies.

    To address the need to retrain employees, for example, industry should work with colleges and universities to educate the workforce and train instructors on the technologies.

    To advance the science that supports the megatrend technologies, academia needs to work more closely with industry on research projects, the experts suggest. In turn, governments should foster research by academia and not-for-profit organizations.

    Companies should advise government officials on how to best regulate the technologies. To gain widespread acceptance of the technologies, the risks and the benefits should be explained to the public to avoid misinformation, the experts say. In addition, processes, practices, and educational materials need to be created to address ethical issues surrounding the technologies.

    “As a whole, these megatrends should focus on helping industry,” Milojicic says. “Government and academia are important in their own ways, but if we can make industry successful, everything else will come from that. Industry will fund academia, and governments will help industry.”

    Professional organizations including IEEE will need to develop technical standards and road maps on the three areas, he says. A road map is a strategic look at the long-term landscape of a technology, what the trends are, and what the possibilities are.

    The megatrends influence which initiatives IEEE is going to explore, Milojicic says, “which could potentially lead to future road maps and standards. In a way, we are doing the prework to prepare what they could eventually standardize.”

    Dejan Milojicic discusses findings from IEEE’s 2024 Technology Megatrends report.

    Dissemination and education are critical

    The group encourages a broad dissemination of the three megatrends to avoid widening the digital divide.

    “The speed of change could be faster than most people can adapt to—which could lead to fear and aggression toward technology,” the experts say. “Broad education is critical for technology adoption.”

  • Video Friday: Extreme Off-Road
    by Evan Ackerman on 15. November 2024. at 18:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    Humanoids 2024: 22–24 November 2024, NANCY, FRANCE
    Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

    Enjoy today’s videos!

    Don’t get me wrong, this is super impressive, but I’m like 95% sure that there’s a human driving it. For robots like these to be useful, they’ll need to be autonomous, and high speed autonomy over unstructured terrain is still very much a work in progress.

    [ Deep Robotics ]

    Dung beetles impressively coordinate their six legs simultaneously to effectively roll large dung balls. They are also capable of rolling dung balls varying in the weight on different terrains. The mechanisms underlying how their motor commands are adapted to walk and simultaneously roll balls (multitasking behavior) under different conditions remain unknown. Therefore, this study unravels the mechanisms of how dung beetles roll dung balls and adapt their leg movements to stably roll balls over different terrains for multitasking robots.

    [ Paper ] via [ Advanced Science News ]

    Subsurface lava tubes have been detected from orbit on both the Moon and Mars. These natural voids are potentially the best place for long-term human habitations, because they offer shelter against radiation and meteorites. This work presents the development and implementation of a novel Tether Management and Docking System (TMDS) designed to support the vertical rappel of a rover through a skylight into a lunar lava tube. The TMDS connects two rovers via a tether, enabling them to cooperate and communicate during such an operation.

    [ DFKI Robotics Innovation Center ]

    Ad Spiers at Imperial College London writes, “We’ve developed a $80 barometric tactile sensor that, unlike past efforts, is easier to fabricate and repair. By training a machine learning model on controlled stimulation of the sensor we have been able to increase the resolution from 6mm to 0.28mm. We also implement it in one of our E-Troll robotic grippers, allowing the estimation of object position and orientation.”

    [ Imperial College London ] via [ Ad Spiers ]

    Thanks Ad!

    A robot, trained for the first time to perform surgical procedures by watching videos of robotic surgeries, executed the same procedures—but with considerably more precision.

    [ Johns Hopkins University ]

    Thanks, Dina!

    This is brilliant but I’m really just in it for the satisfying noise it makes.

    [ RoCogMan Lab ]

    Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet, it remains challenging to incorporate RGB-color perception into the sim-to-real pipeline that matches the real world in its richness and realism. In this work, we train a robot dog in simulation for visual parkour. We propose a way to use generative models to synthesize diverse and physically accurate image sequences of the scene from the robot’s ego-centric perspective. We present demonstrations of zero-shot transfer to the RGB-only observations of the real world on a robot equipped with a low-cost, off-the-shelf color camera.

    [ MIT CSAIL ]

    WalkON Suit F1 is a powered exoskeleton designed to walk and balance independently, offering enhanced mobility and independence. Users with paraplegia can easily transfer into the suit directly from their wheelchair, ensuring exceptional usability for people with disabilities.

    [ Angel Robotics ]

    To promote the development of the global embodied AI industry, Unitree has open-sourced its G1 robot operation dataset, which is adapted to a variety of open-source solutions and continuously updated.

    [ Unitree Robotics ]

    Spot encounters all kinds of obstacles and environmental changes, but it still needs to safely complete its mission without getting stuck, falling, or breaking anything. While there are challenges and obstacles that we can anticipate and plan for—like stairs or forklifts—there are many more that are difficult to predict. To help tackle these edge cases, we used AI foundation models to give Spot a better semantic understanding of the world.

    [ Boston Dynamics ]

    Wing drone deliveries of NHS blood samples are now underway in London between Guy’s and St Thomas’ hospitals.

    [ Wing ]

    As robotics engineers, we love the authentic sounds of robotics—the metal clinking and feet contacting the ground. That’s why we value unedited, raw footage of robots in action. Although unpolished, these candid captures let us witness the evolution of robotics technology without filters, which is truly exciting.

    [ UCR ]

    Eight minutes of chill mode thanks to Kuka’s robot DJs, which make up the supergroup the Kjays.

    A KR3 AGILUS sits at the drums, looping and setting the beat. The KR CYBERTECH nano is the nimble DJ with rhythm in its blood. A KR AGILUS performs as a light artist, enchanting with soft, expansive movements. And an LBR Med, mounted on the ceiling, keeps an eye on the unusual robot party.

    [ Kuka Robotics Corp. ]

    Am I the only one disappointed that this isn’t actually a little mini Ascento?

    [ Ascento Robotics ]

    This demo showcases our robot performing autonomous table wiping powered by Deep Predictive Learning developed by Ogata Lab at Waseda University. Through several dozen human teleoperation demonstrations, the robot has learned natural wiping motions.

    [ Tokyo Robotics ]

    What’s green, bidirectional, and now driving autonomously in San Francisco and the Las Vegas Strip? The Zoox robotaxi! Give us a wave if you see us on the road!

    [ Zoox ]

    Northrop Grumman has been pioneering capabilities in the undersea domain for more than 50 years. Now, we are creating a new class of uncrewed underwater vehicles (UUV) with Manta Ray. Taking its name from the massive “winged” fish, Manta Ray will operate long-duration, long-range missions in ocean environments where humans can’t go.

    [ Northrop Grumman ]

    I was at ICRA 2024 and I didn’t see most of the stuff in this video.

    [ ICRA 2024 ]

    A fleet of marble-sculpting robots is carving out the future of the art world. It’s a move some artists see as cheating, but others are embracing the change.

    [ CBS ]

  • Newest Google and Nvidia Chips Speed AI Training
    by Samuel K. Moore on 13. November 2024. at 16:00



    Nvidia, Oracle, Google, Dell, and 13 other companies reported how long it takes their computers to train the key neural networks in use today. Among those results were the first glimpses of Nvidia’s next-generation GPU, the B200, and of Google’s upcoming accelerator, called Trillium. The B200 posted a doubling of performance on some tests versus today’s workhorse Nvidia chip, the H100. And Trillium delivered nearly a fourfold boost over the chip Google tested in 2023.

    The benchmark tests, called MLPerf v4.1, consist of six tasks: recommendation, the pre-training of the large language models (LLMs) GPT-3 and BERT-large, the fine-tuning of the Llama 2 70B large language model, object detection, graph node classification, and image generation.

    Training GPT-3 is such a mammoth task that it’d be impractical to do the whole thing just to deliver a benchmark. Instead, the test is to train it to a point that experts have determined means it is likely to reach the goal if you kept going. For Llama 2 70B, the goal is not to train the LLM from scratch, but to take an already trained model and fine-tune it so it’s specialized in a particular expertise—in this case, government documents. Graph node classification is a type of machine learning used in fraud detection and drug discovery.

    As what’s important in AI has evolved, mostly toward using generative AI, the set of tests has changed. This latest version of MLPerf marks a complete changeover in what’s being tested since the benchmark effort began. “At this point all of the original benchmarks have been phased out,” says David Kanter, who leads the benchmark effort at MLCommons. In the previous round it was taking mere seconds to perform some of the benchmarks.

    Performance of the best machine learning systems on various benchmarks has outpaced what would be expected if gains were solely from Moore’s Law [blue line]. Solid lines represent current benchmarks; dashed lines represent benchmarks that have been retired because they are no longer industrially relevant. MLCommons

    According to MLPerf’s calculations, AI training on the new suite of benchmarks is improving at about twice the rate one would expect from Moore’s Law. As the years have gone on, results have plateaued more quickly than they did at the start of MLPerf’s reign. Kanter attributes this mostly to the fact that companies have figured out how to do the benchmark tests on very large systems. Over time, Nvidia, Google, and others have developed software and network technology that allows for near linear scaling—doubling the processors cuts training time roughly in half.

    First Nvidia Blackwell training results

    This round marked the first training tests for Nvidia’s next GPU architecture, called Blackwell. For the GPT-3 training and LLM fine-tuning, the Blackwell (B200) roughly doubled the performance of the H100 on a per-GPU basis. The gains were a little less robust but still substantial for recommender systems and image generation—64 percent and 62 percent, respectively.

    The Blackwell architecture, embodied in the Nvidia B200 GPU, continues an ongoing trend toward using less and less precise numbers to speed up AI. For certain parts of transformer neural networks such as ChatGPT, Llama2, and Stable Diffusion, the Nvidia H100 and H200 use 8-bit floating point numbers. The B200 brings that down to just 4 bits.
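    To see why fewer bits are both faster and coarser, here is a toy illustration of 4-bit floating point. It assumes the commonly described E2M1 layout (2 exponent bits, 1 mantissa bit) and leaves out the per-block scale factors that real training pipelines apply, so treat it as a sketch rather than a description of Nvidia’s hardware:

    # All values an E2M1 4-bit float can represent (positives and negatives).
    FP4_E2M1 = sorted({s * m for s in (-1, 1)
                       for m in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0)})

    def quantize_fp4(x):
        """Round x to the nearest representable FP4 value (no scaling applied)."""
        return min(FP4_E2M1, key=lambda v: abs(v - x))

    for x in (0.07, 0.9, 2.4, 5.1, 7.3):
        print(f"{x:>5} -> {quantize_fp4(x)}")   # e.g. 0.07 -> 0.0, 5.1 -> 6.0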

    Google debuts 6th gen hardware

    Google showed the first results for its 6th generation of TPU, called Trillium—which it unveiled only last month—and a second round of results for its 5th generation variant, the Cloud TPU v5p. In the 2023 edition, the search giant entered a different variant of the 5th generation TPU, v5e, designed more for efficiency than performance. Versus the latter, Trillium delivers as much as a 3.8-fold performance boost on the GPT-3 training task.

    But versus everyone’s arch-rival Nvidia, things weren’t as rosy. A system made up of 6,144 TPU v5ps reached the GPT-3 training checkpoint in 11.77 minutes, placing a distant second to an 11,616-Nvidia H100 system, which accomplished the task in about 3.44 minutes. That top TPU system was only about 25 seconds faster than an H100 computer half its size.

    A Dell Technologies computer fine-tuned the Llama 2 70B large language model using about 75 cents worth of electricity.

    In the closest head-to-head comparison between v5p and Trillium, with each system made up of 2048 TPUs, the upcoming Trillium shaved a solid 2 minutes off of the GPT-3 training time, nearly an 8 percent improvement on v5p’s 29.6 minutes. Another difference between the Trillium and v5p entries is that Trillium is paired with AMD Epyc CPUs instead of the v5p’s Intel Xeons.

    Google also trained the image generator, Stable Diffusion, with the Cloud TPU v5p. At 2.6 billion parameters, Stable Diffusion is a light enough lift that MLPerf contestants are asked to train it to convergence instead of just to a checkpoint, as with GPT-3. A 1024 TPU system ranked second, finishing the job in 2 minutes 26 seconds, about a minute behind the same size system made up of Nvidia H100s.

    Training power is still opaque

    The steep energy cost of training neural networks has long been a source of concern. MLPerf is only beginning to measure this. Dell Technologies was the sole entrant in the energy category, with an eight-server system containing 64 Nvidia H100 GPUs and 16 Intel Xeon Platinum CPUs. The only measurement made was in the LLM fine-tuning task (Llama 2 70B). The system consumed 16.4 megajoules during its 5-minute run, for an average power of roughly 55 kilowatts. That means about 75 cents of electricity at the average cost in the United States.
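    Those figures are straightforward to sanity-check. The electricity price used below is an assumption (roughly the U.S. average of 16 to 17 cents per kilowatt-hour); everything else comes from the reported run:

    # Back-of-the-envelope check of the Dell energy submission described above.
    energy_joules = 16.4e6           # 16.4 megajoules
    run_seconds = 5 * 60             # roughly a 5-minute fine-tuning run
    price_per_kwh = 0.165            # assumed U.S. average electricity price, $/kWh

    avg_power_kw = energy_joules / run_seconds / 1000
    energy_kwh = energy_joules / 3.6e6
    print(f"average power: {avg_power_kw:.0f} kW")         # ~55 kW
    print(f"cost: ${energy_kwh * price_per_kwh:.2f}")       # ~$0.75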

    While it doesn’t say much on its own, the result does potentially provide a ballpark for the power consumption of similar systems. Oracle, for example, reported a close performance result—4 minutes 45 seconds—using the same number and types of CPUs and GPUs.

  • The First Virtual Meeting Was in 1916
    by Allison Marsh on 13. November 2024. at 15:00



    At 8:30 p.m. on 16 May 1916, John J. Carty banged his gavel at the Engineering Societies Building in New York City to call to order a meeting of the American Institute of Electrical Engineers. This was no ordinary gathering. The AIEE had decided to conduct a live national meeting connecting more than 5,000 attendees in eight cities across four time zones. More than a century before Zoom made virtual meetings a pedestrian experience, telephone lines linked auditoriums from coast to coast. AIEE members and guests in Atlanta, Boston, Chicago, Denver, New York, Philadelphia, Salt Lake City, and San Francisco had telephone receivers at their seats so they could listen in.

    The AIEE, a predecessor to the IEEE, orchestrated this event to commemorate recent achievements in communications, transportation, light, and power. The meeting was a triumph of engineering, covered in newspapers in many of the host cities. The Atlanta Constitution heralded it as “a feat never before accomplished in the history of the world.” According to the Philadelphia Evening Ledger, the telephone connections involved traversed about 6,500 kilometers (about 4,000 miles) across 20 states, held up by more than 150,000 poles running through 5,000 switches. It’s worth noting that the first transcontinental phone call had been achieved only a year earlier.

    Carty, president of the AIEE, led the meeting from New York, while section chairmen directed the proceedings in the other cities. First up: roll call. Each city read off the number of members and guests in attendance—from 40 in Denver, the newest section of the institute, to 1,100 at AIEE headquarters in New York. In all, more than 5,100 members attended.

    Due to limited seating in New York and Philadelphia, members were allowed only a single admission ticket, and ladies were explicitly not invited. (Boo.) In Atlanta, Boston, and Chicago, members received two tickets each, and in San Francisco members received three; women were allowed to attend in all of these cities. (The AIEE didn’t admit its first woman until 1922, and only as an associate member; Edith Clarke was the first woman to publish a paper in an AIEE journal, in 1926.)

    These six cities were the only ones officially participating in the meeting. But because the telephone lines ran directly through both Denver and Salt Lake City, AIEE sections in those cities opted to listen in, although they were kept muted; during the meeting, they sent telegrams to headquarters with their attendance and greetings. In a modern-day Zoom call, these notes would have been posted in the chat.

    The first virtual meeting had breakout sessions

    Once everyone had checked in and confirmed that they all could hear, Carty read a telegram from U.S. President Woodrow Wilson, congratulating the members on this unique meeting: “a most interesting evidence of the inventive genius and engineering ability represented by the Institute.”

    Alexander Graham Bell then gave a few words in greeting and remarked that he was glad to see how far the telephone had gone beyond his initial idea. Theodore Vail, first president of AT&T and one of the men who was instrumental in establishing telephone service as a public utility, offered his own congratulations. Charles Le Maistre, a British engineer who happened to be in New York to attend the AIEE Standards Committee, spoke on behalf of his country’s engineering societies. Finally, Thomas Watson, who as Bell’s assistant was the first person to hear words spoken over a telephone, welcomed all of the electrical engineers scattered across the country.

    At precisely 9:00 p.m., the telephone portion of the meeting was suspended for 30 minutes so that each city could have its own local address by an invited guest. Let’s call them breakout sessions. These speakers reflected on the work and accomplishments of engineers. Overall, they conveyed an unrelentingly positive attitude toward engineering progress, with a few nuances.

    In Boston, Lawrence Lowell, president of Harvard University, said the discovery and harnessing of electricity was the greatest single advancement in human history. However, he admonished engineers for failing to foresee the subordination of the individual to the factory system.

    In Philadelphia, Edgar Smith, provost of the University of Pennsylvania, noted that World War I was limiting the availability of certain materials and supplies, and he urged more investment in developing the United States’ natural resources.

    Charles Ferris, dean of engineering at the University of Tennessee, praised the development of long-distance power distribution and the positive effects it had on rural life, but worried about the use of fossil fuels. His chief concern was running out of coal, gas, and oil, not their negative impacts on the environment.

    More than a century before Zoom made virtual meetings a pedestrian experience, telephone lines linked auditoriums from coast to coast for the AIEE’s national meeting.

    On the West Coast, Ray Wilbur, president of Stanford, argued for the value of dissatisfaction, struggle, and unrest on campus as spurs to growth and innovation. I suspect many university presidents then and now would disagree, but student protests remain a force for change.

    After the city breakout sessions, everyone reconnected by telephone, and the host cities took turns calling out their greetings, along with some engineering boasts.

    “Atlanta, located in the Piedmont section of the southern Appalachians, among their racing rivers and roaring falls, whose energy has been dragged forth and laid at her doors through high-tension transmission and in whose phenomenal development no factor has been more potent than the electrical engineers, sends greetings.”

    “Boston sends warmest greetings to her sister cities. The telephone was born here and here it first spoke, but its sound has gone out into all lands and its words unto the ends of the world.”

    “San Francisco hails its fellow members of the Institute…. California has by the pioneer spirit of domination created needs which the world has followed—the snow-crowned Sierras opened up the path of gold to the path of energy, which tonight makes it possible for us on the western rim of the continent of peace to be in instant touch with men who have harnessed rivers, bridled precipices, drawn from the ether that silent and unseen energy that has leveled distance and created force to move the world along lines of greater civilization by closer contacts.”

    That last sentence, my editor notes, is 86 words long, but we included it for its sheer exuberance.

    Maybe all tech meetings should have musical interludes

    The meeting then paused for a musical interlude. I find this idea delightfully weird, like the ballet dream sequence in the middle of the Broadway musical Oklahoma! Each city played a song of their choosing on a phonograph, to be transmitted through the telephone. From the south came strains of “Dixie,” countered by “Yankee Doodle” in New England. New York and San Francisco opted for two variations on the patriotic symbolism of Columbia: “Hail Columbia” and “Columbia the Gem of the Ocean,” respectively. Philadelphia offered up the “Star-Spangled Banner,” and although it wasn’t yet the national anthem, audience members in all auditoriums stood up while it played.

    For the record, the AIEE in those days took entertainment very seriously. Almost all of their conferences included a formal dinner dance, less-formal smokers, sporting competitions, and inspection field trips to local sites of engineering interest. There were even women’s committees to organize events specifically for the ladies.


    After the music, Michael Pupin delivered an address on “The Engineering Profession,” a topic that was commonly discussed in the Proceedings of the AIEE in those days. Remember that electrical engineering was still a fairly new academic discipline, only a few decades old, and working engineers were looking to more established professions, such as medical doctors, to see how they might fit into society. Pupin had made a number of advancements in the efficiency of transmission over long-distance telephone, and in 1925 he served as the president of the AIEE.

    The meeting concluded with resolutions, amendments, acceptances, and seconding, following Robert’s Rules of Order. (IEEE meetings still adhere to the rules.) In the last resolution, the participants patted themselves on the back for hosting this first-of-its-kind meeting and acknowledged their own genius in making it possible.

    The Proceedings of the AIEE covered the meeting in great detail. Local press accounts offered less detail. I’ve found no evidence that they ever tried to replicate the meeting. They did try another experiment in which a member read the same paper at meetings in three different cities so that there could be a joint discussion about the contents. But it seems they returned to their normal schedule of annual and section meetings with technical paper sessions and discussion.

    And nowhere have I found answers to some of the basic questions that I, as a historian 100 years later, have about the 1916 event. First, how much did this meeting cost in long-distance fees and who paid for it? Second, what receivers did the audience members use and did they work? And finally, what did the members and guests think of this grand experiment? (My editor would also like to know why no one took a photo of the event.)

    But in the moment, rarely do people think about what later historians may want to know. And I suspect no one in attendance would have predicted that in the 21st century, people groan at the thought of another virtual meeting.

  • Get to Know the IEEE Board of Directors
    by IEEE on 12. November 2024. at 19:00



    The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity.

    This article features IEEE Board of Directors members ChunChe “Lance” Fung, Eric Grigorian, and Christina Schober.

    IEEE Senior Member ChunChe “Lance” Fung

    Director, Region 10: Asia Pacific


    Fung has worked in academia and provided industry consultancy services for more than 40 years. His research interests include applying artificial intelligence, machine learning, computational intelligence, and other techniques to solve practical problems. He has authored more than 400 publications in the disciplines of AI, computational intelligence, and related applications. Fung currently works on the ethical applications and social impacts of AI.

    A member of the IEEE Systems, Man, and Cybernetics Society, Fung has been an active IEEE volunteer for more than 30 years. As a member and chair of the IEEE Technical Program Integrity and Conference Quality committees, he oversaw the quality of technical programs presented at IEEE conferences. Fung also chaired the Region 10 Educational Activities Committee. He was instrumental in translating educational materials to local languages for the IEEE Reaching Locals project.

    As chair of the IEEE New Initiatives Committee, he established and promoted the US $1 Million Challenge Call for New Initiatives, which supports potential IEEE programs, services, or products that will significantly benefit members, the public, the technical community, or customers and could have a lasting impact on IEEE or its business processes.

    Fung has left an indelible mark as a dedicated educator at Singapore Polytechnic, Curtin University, and Murdoch University. He was appointed in 2015 as professor emeritus at Murdoch, and he takes pride in training the next generation of volunteers, leaders, teachers, and researchers in the Western Australian community. Fung received the IEEE Third Millennium Medal and the IEEE Region 10 Outstanding Volunteer Award.

    IEEE Senior Member Eric Grigorian

    Director, Region 3: Southern U.S. & Jamaica


    Grigorian has extensive experience leading international cross-domain teams that support the commercial and defense industries. His current research focuses on implementing model-based systems engineering, creating models that depict system behavior, interfaces, and architecture. His work has led to streamlined processes, reduced costs, and faster design and implementation of capabilities due to efficient modeling and verification. Grigorian holds two U.S. utility patents.

    Grigorian has been an active volunteer with IEEE since his time as a student member at the University of Alabama in Huntsville (UAH). He saw it as an excellent way to network and get to know people. He found his personality was suited for working within the organization and building leadership skills. During the past 43 years as an IEEE member, he has been affiliated with the IEEE Aerospace and Electronic Systems (AESS), IEEE Computer, and IEEE Communications societies.

    As Grigorian’s career has evolved, his involvement with IEEE has also increased. He has been the IEEE Huntsville Section student activities chair, as well as vice chair, and chair. He also was the section’s AESS chair. He served as IEEE SoutheastCon chair in 2008 and 2019, and served on the IEEE Region 3 executive committee as area chair and conference committee chair, enhancing IEEE members’ benefits, engagement, and career advancement. He has significantly contributed to initiatives within IEEE, including promoting preuniversity science, technology, engineering, and mathematics efforts in Alabama.

    Grigorian’s professional achievements have been recognized with numerous awards from employers and local technical chapters, including with the 2020 UAH Alumni of Achievement Award for the College of Engineering and the 2006 IEEE Region 3 Outstanding Engineer of the Year Award. He is a member of the IEEE–Eta Kappa Nu honor society.

    IEEE Life Senior Member Christina Schober

    Director, Division V


    Schober is an innovative engineer with a diverse design and manufacturing engineering background. In a career spanning more than 40 years, she has researched, designed, and manufactured sensors for space, commercial, and military aircraft navigation and tactical guidance systems. She was responsible for the successful transition from design to production of groundbreaking programs including an integrated flight management system, the Stinger missile’s roll-frequency sensor, and three phases of the DARPA atomic clock. She holds 17 U.S. patents and 24 other patents in the aerospace and navigation fields.

    Schober started her career in the 1980s, at a time when female engineers were not widely accepted. The prevailing attitude required her to “stay tough,” she says, and she credits IEEE for giving her technical and professional support. Because of her experiences, she became dedicated to making diversity and inclusion systemic in IEEE.

    Schober has held many leadership roles, including IEEE Division VIII Director, IEEE Sensors Council president, and IEEE Standards Sensors Council secretary. In addition to her membership in the IEEE Photonics Society, she is active with the IEEE Computer Society, IEEE Sensors Council, IEEE Standards Association, and IEEE Women in Engineering.

    She is also active in her local community, serving as an invited speaker on STEM for the public school system and volunteering at youth shelters. Schober has received numerous awards, including the IEEE Sensors Council Lifetime Contribution Award and the IEEE Twin Cities Section’s Young Engineer of the Year Award. She is an IEEE Computer Society Gold Core member, a member of the IEEE–Eta Kappa Nu honor society, and a recipient of the IEEE Third Millennium Medal.

  • Why Are Kindle Colorsofts Turning Yellow?
    by Gwendolyn Rak on 12. November 2024. at 12:00



    In physical books, yellowing pages are usually a sign of age. But brand-new users of Amazon’s Kindle Colorsofts, the tech giant’s first color e-reader, are already noticing yellow hues appearing at the bottoms of their displays.

    Since the complaints began to trickle in, Amazon has reportedly suspended shipments and announced that it is working to fix the issue. (As of publication of this article, the US $280 Kindle had an average 2.6 star rating on Amazon.) It’s not yet clear what is causing the discoloration. But while the issue is new—and unexpected—the technology is not, says Jason Heikenfeld, an IEEE Fellow and engineering professor at the University of Cincinnati. The Kindle Colorsoft, which became available on 30 October, uses “a very old approach,” says Heikenfeld, who previously worked to develop the ultimate e-paper technology. “It was the first approach everybody tried.”

    Amazon’s e-reader uses reflective display technology developed by E Ink, a company that started in the 1990s as an MIT Media Lab spin-off before developing its now-dominant electronic paper displays. E Ink is used in Kindles, as well as top e-readers from Kobo, reMarkable, Onyx, and more. E Ink first introduced Kaleido—the basis of the Colorsoft’s display—five years ago, though the road to full-color e-paper started well before.

    How E-Readers Work

    Monochromatic Kindles work by applying voltages to electrodes in the screen that bring black or white pigment to the top of each pixel. Those pixels then reflect ambient light, creating a paperlike display. To create a full-color display, companies like E Ink added an array of filters just above the ink. This approach didn’t work well at first because the filters lost too much light, making the displays dark and low resolution. But with a few adjustments, Kaleido was ready for consumer products in 2019. (Other approaches—like adding colored pigments to the ink—have been developed, but these come with their own drawbacks, including a higher price tag.)

    Given this design, Heikenfeld initially suspected that the issue stemmed from the software, which determines the voltages applied to each electrode. That hunch aligned with reports from some users that the issue appeared after a software update.

    But industry analyst Ming-Chi Kuo suggested in a post on X that the issue is due to the e-reader’s hardware: Amazon, he said, switched the optically clear adhesive (OCA) used in the Colorsoft to a material that may not be so optically clear. In its announcement of the Colorsoft, the company had boasted of “custom formulated coatings” that enhance the color display as one of the new e-reader’s innovations.

    In terms of resolving the issue, Kuo’s post also stated that “While component suppliers have developed several hardware solutions, Amazon seems to be leaning toward a software-based fix.” Heikenfeld is not sure how a software fix would work, apart from blacking out the bottom of the screen.

    Amazon did not reply to IEEE Spectrum’s request for comment. In an email to IEEE Spectrum, E Ink stated, “While we cannot comment on any individual partner or product, we are committed to supporting our partners in understanding and addressing any issues that arise.”

    The Future of E-Readers

    It took a long time for color Kindles to arrive, and the future of reflective e-reader displays isn’t likely to improve much, according to Heikenfeld. “I used to work a lot in this field, and it just really slowed down at some point, because it’s a tough nut to crack,” Heikenfeld says.

    There are inherent limitations and inefficiencies to working with filter-based color displays that rely on ambient light, and there’s no Moore’s Law for these displays. Instead, their improvement is asymptotic—and we may already be close to the limit. Meanwhile, displays that emit light, like LCD and OLED, continue to improve. “An iPad does a pretty damn good job with battery life now,” says Heikenfeld.

    At the same time, he believes there will always be a place for reflective displays, which remain a more natural experience for our eyes. “We live in a world of reflective color,” Heikenfeld says.

    This story was updated on 12 November 2024 to correct that Jason Heikenfeld is an IEEE Fellow.

  • This Mobile 3D Printer Can Print Directly on Your Floor
    by Kohava Mendelsohn on 11. November 2024. at 14:00



    Waiting for each part of a 3D-printed project to finish, taking it out of the printer, and then installing it on location can be tedious for multi-part projects. What if there was a way for your printer to print its creation exactly where you needed it? That’s the promise of MobiPrint, a new 3D printing robot that can move around a room, printing designs directly onto the floor.

    MobiPrint, designed by Daniel Campos Zamora at the University of Washington, consists of a modified off-the-shelf 3D printer atop a home vacuum robot. First it autonomously maps its space—be it a room, a hallway, or an entire floor of a house. Users can then choose from a prebuilt library or upload their own design to be printed anywhere in the mapped area. The robot then traverses the room and prints the design.

    It’s “a new system that combines robotics and 3D printing that could actually go and print in the real world,” Campos Zamora says. He presented MobiPrint on 15 October at the ACM Symposium on User Interface Software and Technology.

    Campos Zamora and his team started with a Roborock S5 vacuum robot and installed firmware that allowed it to communicate with the open source program Valetudo. Valetudo disconnects personal robots from their manufacturer’s cloud, connecting them to a local server instead. Data collected by the robot, such as environmental mapping, movement tracking, and path planning, can all be observed locally, enabling users to see the robot’s LIDAR-created map.

    Campos Zamora built a layer of software that connects the robot’s perception of its environment to the 3D printer’s print commands. The printer, a modified Prusa Mini+, can print on carpet, hardwood, and vinyl, with maximum printing dimensions of 180 by 180 by 65 millimeters. The robot has printed pet food bowls, signage, and accessibility markers as sample objects.
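    The team’s control software isn’t reproduced here, but the basic flow—drive to the target spot, park, then print in place—can be sketched in a few lines. The client classes and method names below are hypothetical stand-ins for the Valetudo-connected vacuum base and the modified Prusa Mini+:

    from dataclasses import dataclass

    @dataclass
    class PrintJob:
        gcode_path: str      # sliced design, within the 180 x 180 x 65 mm envelope
        map_x: float         # target spot in the robot's LIDAR map, in meters
        map_y: float

    def park_and_print(robot, printer, job):
        """Drive the base to the chosen spot, then run the print in place."""
        robot.go_to(job.map_x, job.map_y)   # navigate using the robot's map
        robot.wait_until_idle()             # the base must be parked before printing
        printer.home_axes()                 # re-home on the new floor surface
        printer.start_print(job.gcode_path)
        printer.wait_until_done()

    # Hypothetical usage:
    # park_and_print(VacuumClient("http://mobiprint.local"),
    #                PrinterClient("/dev/ttyUSB0"),
    #                PrintJob("accessibility_marker.gcode", map_x=3.2, map_y=1.4))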


    Currently, MobiPrint can only “park and print.” The robot base cannot move during printing to make large objects, like a mobility ramp. Printing designs larger than the robot is one of Campos Zamora’s goals in the future. To learn more about the team’s vision for MobiPrint, Campos Zamora answered a few questions from IEEE Spectrum.

    What was the inspiration for creating your mobile 3D printer?

    Daniel Campos Zamora: My lab is focused on building systems with an eye towards accessibility. One of the things that really inspired this project was looking at the tactile surface indicators that help blind and low vision users find their way around a space. And so we were like, what if we made something that could automatically go and deploy these things? Especially in indoor environments, which are generally a little trickier and change more frequently over time.

    We had to step back and build this entirely different thing, using the environment as a design element. We asked: how do you integrate the real world environment into the design process, and then what kind of things can you print out in the world? That’s how this printer was born.

    What were some surprising moments in your design process?

    Campos Zamora: When I was testing the robot on different surfaces, I was not expecting the 3D printed designs to stick extremely well to the carpet. It stuck way too well. Like, you know, just completely bonded down there.

    I think there’s also just a lot of joy in seeing this printer move. When I was doing a demonstration of it at this conference last week, it almost seemed like the robot had a personality. A vacuum robot can seem to have a personality, but this printer can actually make objects in my environment, so I feel a different relationship to the machine.

    Where do you hope to take MobiPrint in the future?

    Campos Zamora: There’s several directions I think we could go. Instead of controlling the robot remotely, we could have it follow someone around and print accessibility markers along a path they walk. Or we could integrate an AI system that recommends objects be printed in different locations. I also want to explore having the robot remove and recycle the objects it prints.

  • Machine Learning Might Save Time on Chip Testing
    by Samuel K. Moore on 10. November 2024. at 14:00



    Finished chips coming in from the foundry are subject to a battery of tests. For those destined for critical systems in cars, those tests are particularly extensive and can add 5 to 10 percent to the cost of a chip. But do you really need to do every single test?

    Engineers at NXP have developed a machine-learning algorithm that learns the patterns in test results and figures out which tests are really needed and which can safely be skipped. The NXP engineers described the process at the IEEE International Test Conference in San Diego last week.

    NXP makes a wide variety of chips with complex circuitry and advanced chip-making technology, including inverters for EV motors, audio chips for consumer electronics, and key-fob transponders to secure your car. These chips are tested with different signals at different voltages and at different temperatures in a test process called continue-on-fail. In that process, chips are tested in groups and are all subjected to the complete battery, even if some parts fail some of the tests along the way.

    Chips were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests.

    “We have to ensure stringent quality requirements in the field, so we have to do a lot of testing,” says Mehul Shroff, an NXP Fellow who led the research. But with much of the actual production and packaging of chips outsourced to other companies, testing is one of the few knobs most chip companies can turn to control costs. “What we were trying to do here is come up with a way to reduce test cost in a way that was statistically rigorous and gave us good results without compromising field quality.”

    A Test Recommender System

    Shroff says the problem has certain similarities to the machine learning-based recommender systems used in e-commerce. “We took the concept from the retail world, where a data analyst can look at receipts and see what items people are buying together,” he says. “Instead of a transaction receipt, we have a unique part identifier and instead of the items that a consumer would purchase, we have a list of failing tests.”

    The NXP algorithm then discovered which tests fail together. Of course, the stakes are different: predicting whether a bread buyer will also want butter is not the same as deciding whether a failed test on an automotive part at a particular temperature makes other tests unnecessary. “We need to have 100 percent or near 100 percent certainty,” Shroff says. “We operate in a different space with respect to statistical rigor compared to the retail world, but it’s borrowing the same concept.”
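    NXP hasn’t published the algorithm itself, but the market-basket idea translates into a simple co-occurrence analysis. The toy example below—with made-up part IDs, test names, and thresholds—flags a test only when another test predicts its failures with near-certainty, the bar Shroff describes:

    import itertools
    from collections import defaultdict

    # Each record: a part identifier and the set of tests that part failed.
    failure_log = [
        ("part_001", {"vmin_25C", "vmin_125C"}),
        ("part_002", {"vmin_25C", "vmin_125C", "leakage"}),
        ("part_003", {"leakage"}),
        ("part_004", {"vmin_25C", "vmin_125C"}),
    ]

    single = defaultdict(int)   # how often each test fails
    pair = defaultdict(int)     # how often two tests fail on the same part
    for _, fails in failure_log:
        for t in fails:
            single[t] += 1
        for a, b in itertools.combinations(sorted(fails), 2):
            pair[(a, b)] += 1

    def confidence(a, b):
        """Fraction of parts failing test a that also failed test b."""
        together = pair[tuple(sorted((a, b)))]
        return together / single[a] if single[a] else 0.0

    for a, b in itertools.combinations(sorted(single), 2):
        if confidence(a, b) >= 0.99 and confidence(b, a) >= 0.99:
            print(f"{a} and {b} always fail together; review one for removal")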

    As rigorous as the results are, Shroff says that they shouldn’t be relied upon on their own. You have to “make sure it makes sense from engineering perspective and that you can understand it in technical terms,” he says. “Only then, remove the test.”

    Shroff and his colleagues analyzed data obtained from testing seven microcontrollers and applications processors built using advanced chipmaking processes. Depending on which chip was involved, they were subject to between 41 and 164 tests, and the algorithm was able to recommend removing 42 to 74 percent of those tests. Extending the analysis to data from other types of chips led to an even wider range of opportunities to trim testing.

    The algorithm is a pilot project for now, and the NXP team is looking to expand it to a broader set of parts, reduce the computational overhead, and make it easier to use.

    “Any novel solution that helps in test-time savings without any quality hit is valuable,” says Sriharsha Vinjamury, a principal engineer at Arm. “Reducing test time is essential, as it reduces costs.” He suggests that the NXP algorithm could be integrated with a system that adjusts the order of tests, so that failures could be spotted earlier.

    This post was updated on 13 November 2024.

  • Video Friday: Robot Dog Handstand
    by Evan Ackerman on 08. November 2024. at 17:30



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

    Enjoy today’s videos!

    Just when I thought quadrupeds couldn’t impress me anymore...

    [ Unitree Robotics ]

    Researchers at Meta FAIR are releasing several new research artifacts that advance robotics and support our goal of reaching advanced machine intelligence (AMI). These include Meta Sparsh, the first general-purpose encoder for vision-based tactile sensing that works across many tactile sensors and many tasks; Meta Digit 360, an artificial fingertip-based tactile sensor that delivers detailed touch data with human-level precision and touch-sensing; and Meta Digit Plexus, a standardized platform for robotic sensor connections and interactions that enables seamless data collection, control and analysis over a single cable.

    [ Meta ]

    The first bimanual Torso created at Clone includes an actuated elbow, cervical spine (neck), and anthropomorphic shoulders with the sternoclavicular, acromioclavicular, scapulothoracic and glenohumeral joints. The valve matrix fits compactly inside the ribcage. Bimanual manipulation training is in progress.

    [ Clone Inc. ]

    Equipped with a new behavior architecture, Nadia navigates and traverses many types of doors autonomously. Nadia also demonstrates robustness to failed grasps and door opening attempts by automatically retrying and continuing. We present the robot with pull and push doors, four types of opening mechanisms, and even spring-loaded door closers. A deep neural network and door plane estimator allow Nadia to identify and track the doors.

    [ Paper preprint by authors from Florida Institute for Human and Machine Cognition ]

    Thanks, Duncan!

    In this study, we integrate the musculoskeletal humanoid Musashi with the wire-driven robot CubiX, capable of connecting to the environment, to form CubiXMusashi. This combination addresses the shortcomings of traditional musculoskeletal humanoids and enables movements beyond the capabilities of other humanoids. CubiXMusashi connects to the environment with wires and drives by winding them, successfully achieving movements such as pull-up, rising from a lying pose, and mid-air kicking, which are difficult for Musashi alone.

    [ CubiXMusashi, JSK Robotics Laboratory, University of Tokyo ]

    Thanks, Shintaro!

    An old boardwalk seems like a nightmare for any robot with flat feet.

    [ Agility Robotics ]

    This paper presents a novel learning-based control framework that uses keyframing to incorporate high-level objectives in natural locomotion for legged robots. These high-level objectives are specified as a variable number of partial or complete pose targets that are spaced arbitrarily in time. Our proposed framework utilizes a multi-critic reinforcement learning algorithm to effectively handle the mixture of dense and sparse rewards. In the experiments, the multi-critic method significantly reduces the effort of hyperparameter tuning compared to the standard single-critic alternative. Moreover, the proposed transformer-based architecture enables robots to anticipate future goals, which results in quantitative improvements in their ability to reach their targets.

    [ Disney Research paper ]

    Human-like walking where that human is the stompiest human to ever human its way through Humanville.

    [ Engineai ]

    We present the first static-obstacle avoidance method for quadrotors using just an onboard, monocular event camera. Quadrotors are capable of fast and agile flight in cluttered environments when piloted manually, but vision-based autonomous flight in unknown environments is difficult in part due to the sensor limitations of traditional onboard cameras. Event cameras, however, promise nearly zero motion blur and high dynamic range, but produce a large volume of events under significant ego-motion and further lack a continuous-time sensor model in simulation, making direct sim-to-real transfer not possible.

    [ Paper University of Pennsylvania and University of Zurich ]

    Cross-embodiment imitation learning enables policies trained on specific embodiments to transfer across different robots, unlocking the potential for large-scale imitation learning that is both cost-effective and highly reusable. This paper presents LEGATO, a cross-embodiment imitation learning framework for visuomotor skill transfer across varied kinematic morphologies. We introduce a handheld gripper that unifies action and observation spaces, allowing tasks to be defined consistently across robots.

    [ LEGATO ]

    The 2024 Xi’an Marathon has kicked off! STAR1, the general-purpose humanoid robot from Robot Era, joins runners in this ancient yet modern city for an exciting start!

    [ Robot Era ]

    In robotics, there are valuable lessons for students and mentors alike. Watch how the CyberKnights, a championship FIRST robotics team sponsored by RTX, faced challenges after a poor performance and, with the encouragement of their RTX mentor, scrapped their robot to build a new one in just nine days.

    [ CyberKnights ]

    In this special video, PAL Robotics takes you behind the scenes of our 20th-anniversary celebration, a memorable gathering with industry leaders and visionaries from across robotics and technology. From inspiring speeches to milestone highlights, the event was a testament to our journey and the incredible partnerships that have shaped our path.

    [ PAL Robotics ]

    Thanks, Rugilė!

  • Millimeter Waves May Not Be 6G’s Most Promising Spectrum
    by Margo Anderson on 06. November 2024. at 17:00



    In 6G telecom research today, a crucial portion of wireless spectrum has been neglected: the Frequency Range 3, or FR3, band. The shortcoming is partly due to a lack of viable software and hardware platforms for studying this region of spectrum, ranging from approximately 6 to 24 gigahertz. But a new, open-source wireless research kit is changing that equation. And research conducted using that kit, presented last week at a leading industry conference, offers proof of viability of this spectrum band for future 6G networks.

    In fact, it’s also arguably signaling a moment of telecom industry re-evaluation. The high-bandwidth 6G future, according to these folks, may not be entirely centered around difficult millimeter wave-based technologies. Instead, 6G may leave plenty of room for higher-bandwidth microwave spectrum tech that is ultimately more familiar and accessible.

    The FR3 band is a region of microwave spectrum just shy of millimeter-wave frequencies (30 to 300 GHz). FR3 is also already very popular today for satellite Internet and military communications. For future 5G and 6G networks to share the FR3 band with incumbent players would require telecom networks nimble enough to perform regular, rapid-response spectrum-hopping.

    Yet spectrum-hopping might still be an easier problem to solve than those posed by the inherent physical shortcomings of some portions of millimeter-wave spectrum—shortcomings that include limited range, poor penetration, line-of-sight operations, higher power requirements, and susceptibility to weather.

    Pi-Radio’s New Face

    Earlier this year, the Brooklyn, N.Y.-based startup Pi-Radio—a spinoff from New York University’s Tandon School of Engineering—released a wireless spectrum hardware and software kit for telecom research and development. Pi-Radio’s FR-3 is a software-defined radio system developed for the FR3 band specifically, says company co-founder Sundeep Rangan.

    “Software-defined radio is basically a programmable platform to experiment and build any type of wireless technology,” says Rangan, who is also the associate director of NYU Wireless. “In the early stages when developing systems, all researchers need these.”

    For instance, at the IEEE Signal Processing Society‘s Asilomar Conference on Signals, Systems and Computers in Pacific Grove, Calif., on 30 October, the Pi-Radio team presented a new research finding that infers the direction to an FR3 antenna from measurements taken by a mobile Pi-Radio receiver.

    According to Pi-Radio co-founder Marco Mezzavilla, who’s also an associate professor at the Polytechnic University of Milan, the early-stage FR3 research that the team presented at Asilomar will enable researchers “to capture [signal] propagation in these frequencies and will allow us to characterize it, understand it, and model it... And this is the first stepping stone towards designing future wireless systems at these frequencies.”

    There’s a good reason researchers have recently rediscovered FR3, says Paolo Testolina, a postdoctoral research fellow at Northeastern University’s Institute for the Wireless Internet of Things who is unaffiliated with the current research effort. “The current scarcity of spectrum for communications is driving operators and researchers to look in this band, where they believe it is possible to coexist with the current incumbents,” he says. “Spectrum sharing will be key in this band.”

    Rangan notes that the work on which Pi-Radio was built has been published earlier this year both on the more foundational aspects of building networks in the FR3 band as well as the specific implementation of Pi-Radio’s unique, frequency-hopping research platform for future wireless networks. (Both papers were published in IEEE journals.)

    “If you have frequency hopping, that means you can get systems that are resilient to blockage,” Rangan says. “But even, potentially, if it was attacked or compromised in any other way, this could actually open up a new type of dimension that we typically haven’t had in the cellular infrastructure.” The frequency-hopping that FR3 requires for wireless communications, in other words, could introduce a layer of hack-proofing that might potentially strengthen the overall network.
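    Frequency hopping itself is an old idea, and a toy version is easy to write down. The sketch below is purely illustrative, not Pi-Radio’s implementation: it divides FR3 into hypothetical 100-megahertz channels, skips any channel a stubbed-out spectrum sensor reports as occupied by an incumbent, and uses a shared seed so both ends of a link hop in lockstep:

    import random

    FR3_LOW_GHZ, FR3_HIGH_GHZ = 6.0, 24.0
    CHANNEL_WIDTH_GHZ = 0.1   # hypothetical 100-MHz channels

    n_channels = int(round((FR3_HIGH_GHZ - FR3_LOW_GHZ) / CHANNEL_WIDTH_GHZ))
    channels = [round(FR3_LOW_GHZ + i * CHANNEL_WIDTH_GHZ, 2) for i in range(n_channels)]

    def sense_occupied(freq_ghz):
        """Stand-in for real spectrum sensing: pretend 10-14 GHz is in use."""
        return 10.0 <= freq_ghz < 14.0

    def next_hop(rng):
        """Pick a random clear channel for the next dwell interval."""
        clear = [f for f in channels if not sense_occupied(f)]
        return rng.choice(clear)

    rng = random.Random(42)   # both ends share the seed, so they hop together
    print("hop schedule (GHz):", [next_hop(rng) for _ in range(5)])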

    Complement, Not Replacement

    The Pi-Radio team stresses, however, that FR3 would not supplant or supersede other new segments of wireless spectrum. There are, for instance, millimeter wave 5G deployments already underway today that will no doubt expand in scope and performance into the 6G future. That said, how FR3 will expand future 5G and 6G spectrum usage is an entirely unwritten chapter: Whether the band fizzles, takes off, or finds a comfortable place somewhere in between depends in part on how it’s researched and developed now, the Pi-Radio team says.

    “We’re at this tipping point where researchers and academics actually are empowered by the combination of this cutting-edge hardware with open-source software,” Mezzavilla says. “And that will enable the testing of new features for communications in these new frequency bands.” (Mezzavilla credits the National Telecommunications and Information Administration for recognizing the potential of FR3, and for funding the group’s research.)

    By contrast, millimeter-wave 5G and 6G research has to date been bolstered, the team says, by the presence of a wide range of millimeter-wave software-defined radio (SDR) systems and other research platforms.

    “Companies like Qualcomm, Samsung, Nokia, they actually had excellent millimeter wave development platforms,” Rangan says. “But they were in-house. And the effort it took to build one—an SDR at a university lab—was sort of insurmountable.”

    So releasing an inexpensive open-source SDR in the FR3 band, Mezzavilla says, could jump start a whole new wave of 6G research.

    “This is just the starting point,” Mezzavilla says. “From now on we’re going to build new features—new reference signals, new radio resource control signals, near-field operations... We’re ready to ship these yellow boxes to other academics around the world to test new features and test them quickly, before 6G is even remotely near us.”

    This story was updated on 7 November 2024 to include detail about funding from the National Telecommunications and Information Administration.

  • Azerbaijan Plans Caspian-Black Sea Energy Corridor
    by Amos Zeeberg on 06. November 2024. at 15:58



    Azerbaijan next week will garner much of the attention of the climate tech world, and not just because it will host COP29, the United Nations’ giant annual climate change conference. The country is promoting a grand, multi-nation plan to generate renewable electricity in the Caucasus region and send it thousands of kilometers west, under the Black Sea, and into energy-hungry Europe.

    The transcontinental connection would start with wind, solar, and hydropower generated in Azerbaijan and Georgia, and off-shore wind power generated in the Caspian Sea. Long-distance lines would carry up to 1.5 gigawatts of clean electricity to Anaklia, Georgia, at the east end of the Black Sea. An undersea cable would move the electricity across the Black Sea and deliver it to Constanta, Romania, where it could be distributed further into Europe.

    The scheme’s proponents say this Caspian-Black Sea energy corridor will help decrease global carbon emissions, provide dependable power to Europe, modernize developing economies at Europe’s periphery, and stabilize a region shaken by war. Organizers hope to build the undersea cable within the next six years at an estimated cost of €3.5 billion (US $3.8 billion).

    To accomplish this, the governments of the involved countries must quickly circumvent a series of technical, financial, and political obstacles. “It’s a huge project,” says Zviad Gachechiladze, a director at Georgian State Electrosystem, the agency that operates the country’s electrical grid, and one of the architects of the Caucasus green-energy corridor. “To put it in operation [by 2030]—that’s quite ambitious, even optimistic,” he says.

    Black Sea Cable to Link Caucasus and Europe

    The technical linchpin of the plan is the successful construction of a high-voltage direct-current (HVDC) submarine cable in the Black Sea. It’s a formidable task: the cable would stretch across nearly 1,200 kilometers of water, most of it more than 2 km deep and, since Russia’s invasion of Ukraine, littered with floating mines. By contrast, the longest existing submarine power cable—the North Sea Link—carries 1.4 GW across 720 km between England and Norway, at depths of up to 700 meters.

    As ambitious as Azerbaijan’s plans sound, longer undersea connections have been proposed. The Australia-Asia PowerLink project aims to produce 6 GW at a vast solar farm in Northern Australia and send about a third of it to Singapore via a 4,300-km undersea cable. The Morocco-U.K. Power Project would send 3.6 GW over 3,800 km from Morocco to England. A similar attempt by Desertec to send electricity from North Africa to Europe ultimately failed.

    Building such cables involves laying and stitching together lengths of heavy submarine power cables from specialized ships—the expertise for which lies with just two companies in the world. In an assessment of the Black Sea project’s feasibility, the Milan-based consulting and engineering firm CESI determined that the undersea cable could indeed be built, and estimated that it could carry up to 1.5 GW—enough to supply over 2 million European households.
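    That household figure is easy to sanity-check. The assumptions below—the cable’s average utilization and typical European household consumption of about 4,000 kilowatt-hours a year—are illustrative, not CESI’s:

    capacity_gw = 1.5
    capacity_factor = 0.7              # assumed average utilization of the link
    household_kwh_per_year = 4000      # assumed typical European household use

    annual_twh = capacity_gw * capacity_factor * 8760 / 1000
    households_millions = annual_twh * 1e9 / household_kwh_per_year / 1e6
    print(f"~{annual_twh:.1f} TWh/year, enough for ~{households_millions:.1f} million households")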

    But to fill that pipe, countries in the Caucasus region would have to generate much more green electricity. For Georgia, that will mostly come from hydropower, which already generates over 80 percent of the nation’s electricity. “We are a hydro country. We have a lot of untapped hydro potential,” says Gachechiladze.

    Azerbaijan and Georgia Plan Green Energy Corridor

    Generating hydropower can also generate opposition, because of the way dams alter rivers and landscapes. “There were some cases when investors were not able to construct power plants because of opposition of locals or green parties” in Georgia, says Salome Janelidze, a board member at the Energy Training Center, a Georgian government agency that promotes and educates around the country’s energy sector.

    “It was definitely a problem and it has not been totally solved,” says Janelidze. But “to me it seems it is doable,” she says. “You can procure and construct if you work closely with the local population and see them as allies rather than adversaries.”

    For Azerbaijan, most of the electricity would be generated by wind and solar farms funded by foreign investment. Masdar, the renewable-energy developer of the United Arab Emirates government, has been investing heavily in wind power in the country. In June, the company broke ground on a trio of wind and solar projects with 1 GW capacity. It intends to develop up to 9 GW more in Azerbaijan by 2030. ACWA Power, a Saudi power-generation company, plans to complete a 240-MW solar plant in the Absheron and Khizi districts of Azerbaijan next year and has struck a deal with the Azerbaijani Ministry of Energy to install up to 2.5 GW of offshore and onshore wind.

    CESI is currently running a second study to gauge the practicality of the full breadth of the proposed energy corridor—from the Caspian Sea to Europe—with a transmission capacity of 4 to 6 GW. But that beefier interconnection will likely remain out of reach in the near term. “By 2030, we can’t claim our region will provide 4 GW or 6 GW,” says Gachechiladze. “1.3 is realistic.”

    COP29: Azerbaijan’s Renewable Energy Push

    Signs of political support have surfaced. In September, Azerbaijan, Georgia, Romania, and Hungary created a joint venture, based in Romania, to shepherd the project. Those four countries in 2022 inked a memorandum of understanding with the European Union to develop the energy corridor.

    The involved countries are in the process of applying for the cable to be selected as an EU “project of mutual interest,” making it an infrastructure priority for connecting the union with its neighbors. If selected, “the project could qualify for 50 percent grant financing,” says Gachechiladze. “It’s a huge budget. It will improve drastically the financial condition of the project.” The commissioner responsible for EU enlargement policy projected that the union would pay an estimated €2.3 billion ($2.5 billion) toward building the cable.

    Whether next week’s COP29, held in Baku, Azerbaijan, will help move the plan forward remains to be seen. In preparation for the conference, advocates of the energy corridor have been taking international journalists on tours of the country’s energy infrastructure.

    Looming over the project are security issues that threaten to thwart it. Shipping routes in the Black Sea have become less dependable and safe since Russia’s invasion of Ukraine. To the south, tensions between Armenia and Azerbaijan remain after the recent war and ethnic violence.

    In order to improve relations, many advocates of the energy corridor would like to include Armenia. “The cable project is in the interests of Georgia, it’s in the interests of Armenia, it’s in the interests of Azerbaijan,” says Agha Bayramov, an energy geopolitics researcher at the University of Groningen, in the Netherlands. “It might increase the chance of them living peacefully together. Maybe they’ll say, ‘We’re responsible for European energy. Let’s put our egos aside.’”

  • Students Tackle Environmental Issues in Colombia and Türkiye
    by Ashley Moran on 05. November 2024. at 19:00



    EPICS in IEEE, a service learning program for university students supported by IEEE Educational Activities, offers students opportunities to engage with engineering professionals and mentors, local organizations, and technological innovation to address community-based issues.

    The following two environmentally focused projects demonstrate the value of teamwork and direct involvement with project stakeholders. One uses smart biodigesters to better manage waste in Colombia’s rural areas. The other is focused on helping Turkish olive farmers protect their trees from climate change effects by providing them with a warning system that can identify growing problems.

    No time to waste in rural Colombia

    Proper waste management is critical to a community’s living conditions. In rural La Vega, Colombia, the lack of an effective system has led to contaminated soil and water, an especially concerning issue because the town’s economy relies heavily on agriculture.

    The Smart Biodigesters for a Better Environment in Rural Areas project brought students together to devise a solution.

    Vivian Estefanía Beltrán, a Ph.D. student at the Universidad del Rosario in Bogotá, addressed the problem by building a low-cost anaerobic digester, equipped with an instrumentation system, in which microorganisms break down biodegradable waste. The digester reduces the amount of solid waste, and it can produce biogas, which can be used to generate electricity.

    “Anaerobic digestion is a natural biological process that converts organic matter into two valuable products: biogas and nutrient-rich soil amendments in the form of digestate,” Beltrán says. “As a by-product of our digester’s operation, digestate is organic matter that can’t be transferred into biogas but can be used as a soil amendment for our farmers’ crops, such as coffee.

    “While it may sound easy, the process is influenced by a lot of variables. The support we’ve received from EPICS in IEEE is important because it enables us to measure these variables, such as pH levels, temperature of the reactor, and biogas composition [methane and hydrogen sulfide]. The system allows us to make informed decisions that enhance the safety, quality, and efficiency of the process for the benefit of the community.”
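
    A minimal sketch of the kind of monitoring loop Beltrán describes could look like the following; the sensor-reading function and the safe ranges are hypothetical placeholders used only to illustrate flagging out-of-range readings, not the team’s actual system.

        # Illustrative digester-monitoring loop (a sketch only; the sensor
        # function and thresholds below are hypothetical placeholders).
        import time

        SAFE_RANGES = {
            "ph": (6.5, 7.8),            # anaerobic digestion favors near-neutral pH
            "temperature_c": (30, 40),   # typical mesophilic operating window
            "h2s_ppm": (0, 200),         # illustrative hydrogen sulfide limit
        }

        def read_sensors():
            """Placeholder: replace with real pH, temperature, and gas readings."""
            return {"ph": 7.1, "temperature_c": 35.2, "h2s_ppm": 120}

        def check(readings):
            alerts = []
            for name, value in readings.items():
                low, high = SAFE_RANGES[name]
                if not low <= value <= high:
                    alerts.append(f"{name}={value} outside [{low}, {high}]")
            return alerts

        while True:
            for alert in check(read_sensors()):
                print("ALERT:", alert)   # in practice, log or notify operators
            time.sleep(600)              # sample every 10 minutes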

    The project was a collaborative effort among Universidad del Rosario students, a team of engineering students from Escuela Tecnológica Instituto Técnico Central, Professor Carlos Felipe Vergara, and members of Junta de Acción Comunal (Vereda La Granja), which aims to help residents improve their community.

    “It’s been a great experience to see how individuals pursuing different fields of study—from engineering to electronics and computer science—can all work and learn together on a project that will have a direct positive impact on a community.” —Vivian Estefanía Beltrán

    Beltrán worked closely with eight undergraduate students and three instructors—Maria Fernanda Gómez, Andrés Pérez Gordillo (the instrumentation group leader), and Carlos Felipe Vergara-Ramirez—as well as IEEE Graduate Student Member Nicolás Castiblanco (the instrumentation group coordinator).

    The team constructed and installed their anaerobic digester system in an experimental station in La Vega, a town located roughly 53 kilometers northwest of Bogotá.

    “This digester is an important innovation for the residents of La Vega, as it will hopefully offer a productive way to utilize the residual biomass they produce to improve quality of life and boost the economy,” Beltrán says. Soon, she adds, the system will be expanded to incorporate high-tech sensors that automatically monitor biogas production and the digestion process.

    “For our students and team members, it’s been a great experience to see how individuals pursuing different fields of study—from engineering to electronics and computer science—can all work and learn together on a project that will have a direct positive impact on a community. It enables all of us to apply our classroom skills to reality,” she says. “The funding we’ve received from EPICS in IEEE has been crucial to designing, proving, and installing the system.”

    The project also aims to support the development of a circular economy, which reuses materials to enhance the community’s sustainability and self-sufficiency.

    Protecting olive groves in Türkiye

    Türkiye is one of the world’s leading producers of olives, but the industry has been challenged in recent years by unprecedented floods, droughts, and other destructive forces of nature resulting from climate change. To help farmers in the western part of the country monitor the health of their olive trees, a team of students from Istanbul Technical University developed an early-warning system to identify irregularities including abnormal growth.

    “Almost no olives were produced last year using traditional methods, due to climate conditions and unusual weather patterns,” says Tayfun Akgül, project leader of the Smart Monitoring of Fruit Trees in Western Türkiye initiative.

    “Our system will give farmers feedback from each tree so that actions can be taken in advance to improve the yield,” says Akgül, an IEEE senior member and a professor in the university’s electronics and communication engineering department.

    “We’re developing deep-learning techniques to detect changes in olive trees and their fruit so that farmers and landowners can take all necessary measures to avoid a low or damaged harvest,” says project coordinator Melike Girgin, a Ph.D. student at the university and an IEEE graduate student member.

    Using drones outfitted with 360-degree optical and thermal cameras, the team collects optical, thermal, and hyperspectral imaging data from the air. The information is fed into a cloud-based, open-source database system.
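
    The article doesn’t detail the team’s deep-learning pipeline, but a much simpler baseline conveys the per-tree early-warning idea: track a vegetation index for each tree over time and flag sharp declines. The index values and the threshold below are invented for illustration; they are not the team’s data or method.

        # Simplified per-tree early-warning baseline (not the team's model).
        # NDVI is a common vegetation index derived from multispectral imagery;
        # the sample values and the 15 percent drop threshold are illustrative.
        ndvi_history = {
            "tree_017": [0.72, 0.71, 0.70, 0.69],
            "tree_042": [0.68, 0.66, 0.55, 0.49],   # declining canopy health
        }

        DROP_THRESHOLD = 0.15   # flag a tree if NDVI falls 15% below its own baseline

        def flag_trees(history):
            flagged = []
            for tree_id, series in history.items():
                baseline = sum(series[:2]) / 2          # early-season average
                if series[-1] < baseline * (1 - DROP_THRESHOLD):
                    flagged.append(tree_id)
            return flagged

        print(flag_trees(ndvi_history))   # ['tree_042']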

    Akgül leads the project and teaches the team skills including signal and image processing and data collection. He says regular communication with community-based stakeholders has been critical to the project’s success.

    “There are several farmers in the village who have helped us direct our drone activities to the right locations,” he says. “Their involvement in the project has been instrumental in helping us refine our process for greater effectiveness.

    “For students, classroom instruction is straightforward, then they take an exam at the end. But through our EPICS project, students are continuously interacting with farmers in a hands-on, practical way and can see the results of their efforts in real time.”

    Looking ahead, the team is excited about expanding the project to encompass other fruits besides olives. The team also intends to apply for a travel grant from IEEE in hopes of presenting its work at a conference.

    “We’re so grateful to EPICS in IEEE for this opportunity,” Girgin says. “Our project and some of the technology we required wouldn’t have been possible without the funding we received.”

    A purpose-driven partnership

    The IEEE Standards Association sponsored both of the proactive environmental projects.

    “Technical projects play a crucial role in advancing innovation and ensuring interoperability across various industries,” says Munir Mohammed, IEEE SA senior manager of product development and market engagement. “These projects not only align with our technical standards but also drive technological progress, enhance global collaboration, and ultimately improve the quality of life for communities worldwide.”

    For more information on the program or to participate in service-learning projects, visit EPICS in IEEE.

    On 7 November, this article was updated from an earlier version.

  • U.S. Chip Revival Plan Chooses Sites
    by Samuel K. Moore on 05. November 2024. at 18:51



    Last week the organization tasked with running the biggest chunk of the U.S. CHIPS Act’s US $13 billion R&D program made some significant strides: The National Semiconductor Technology Center (NSTC) released a strategic plan and selected the sites of two of its three planned facilities. The locations of the two sites—a “design and collaboration” center in Sunnyvale, Calif., and a lab devoted to advancing the leading edge of chipmaking, in Albany, N.Y.—build on an existing ecosystem at each location, experts say. The location of the third planned center—a chip prototyping and packaging site that could be especially critical for speeding semiconductor startups—is still a matter of speculation.

    “The NSTC represents a once-in-a-generation opportunity for the U.S. to accelerate the pace of innovation in semiconductor technology,” Deirdre Hanford, CEO of Natcast, the nonprofit that runs the NSTC centers, said in a statement. According to the strategic plan, which covers 2025 to 2027, the NSTC is meant to accomplish three goals: extend U.S. technology leadership, reduce the time and cost to prototype, and build and sustain a semiconductor workforce development ecosystem. The three centers are meant to do a mix of all three.

    New York gets extreme ultraviolet lithography

    NSTC plans to direct $825 million into the Albany project. The site will be dedicated to extreme ultraviolet lithography, a technology that’s essential to making the most advanced logic chips. The Albany Nanotech Complex, which has already seen more than $25 billion in investments from the state and industry partners over two decades, will form the heart of the future NSTC center. It already has an EUV lithography machine on site and has begun an expansion to install a next-generation version, called high-NA EUV, which promises to produce even finer chip features. Working with a tool recently installed in Europe, IBM, a long-time tenant of the Albany research facility, reported record yields of copper interconnects built every 21 nanometers, a pitch several nanometers tighter than possible with ordinary EUV.

    “It’s fulfilling to see that this ecosystem can be taken to the national and global level through CHIPS Act funding,” said Mukesh Khare, general manager of IBM’s semiconductors division, speaking from the future site of the NSTC EUV center. “It’s the right time, and we have all the ingredients.”

    While only a few companies are capable of manufacturing cutting-edge logic using EUV, the impact of the NSTC center will be much broader, Khare argues. It will extend down as far as early-stage startups with ideas or materials for improving the chipmaking process. “An EUV R&D center doesn’t mean just one machine,” says Khare. “It needs so many machines around it… It’s a very large ecosystem.”

    Silicon Valley lands the design center

    The design center is tasked with conducting advanced research in chip design, electronic design automation (EDA), chip and system architectures, and hardware security. It will also host the NSTC’s design enablement gateway—a program that provides NSTC members with a secure, cloud-based access to design tools, reference processes and designs, and shared data sets, with the goal of reducing the time and cost of design. Additionally, it will house workforce development, member convening, and administration functions.

    Situating the design center in Silicon Valley, with its concentration of research universities, venture capital, and workforce, seems like the obvious choice to many experts. “I can’t think of a better place,” says Patrick Soheili, co-founder of interconnect technology startup Eliyan, which is based in Santa Clara, Calif.

    Abhijeet Chakraborty, vice president of engineering in the technology and product group at Silicon Valley-based Synopsys, a leading maker of EDA software, sees Silicon Valley’s expansive tech ecosystem as one of its main advantages in landing the NSTC’s design center. The region concentrates companies and researchers involved in the whole spectrum of the industry from semiconductor process technology to cloud software.

    Access to such a broad range of industries is increasingly important for chip design startups, he says. “To design a chip or component these days you need to go from concept to design to validation in an environment that takes care of the entire stack,” he says. It’s prohibitively expensive for a startup to do that alone, so one of Chakraborty’s hopes for the design center is that it will help startups access the design kits and other data needed to operate in this new environment.

    Packaging and prototyping still to come

    A third promised center for prototyping and packaging is still to come. “The big question is where does the packaging and prototyping go?” says Mark Granahan, cofounder and CEO of Pennsylvania-based power semiconductor startup Ideal Semiconductor. “To me that’s a great opportunity.” He points out that because there is so little packaging technology infrastructure in the United States, any ambitious state or region should have a shot at hosting such a center. One of the original intentions of the act, after all, was to expand the number of regions of the country that are involved in the semiconductor industry.

    But that hasn’t stopped some already tech-heavy regions from wanting it. “Oregon offers the strongest ecosystem for such a facility,” says a spokesperson for Intel, whose technology development is done in that state. “The state is uniquely positioned to contribute to the success of the NSTC and help drive technological advancements in the U.S. semiconductor industry.”

    As NSTC makes progress, Granahan’s concern is that bureaucracy will expand with it and slow efforts to boost the U.S. chip industry. Already the layers of control are multiplying. The Chips Office at the National Institute of Standards and Technology executes the Act. The NSTC is administered by the nonprofit Natcast, which directs the EUV center, which is in a facility run by another nonprofit, NY CREATES. “We want these things to be agile and make local decisions,” Granahan says.

  • Oceans Lock Away Carbon Slower Than Previously Thought
    by Emily Waltz on 04. November 2024. at 20:00



    Research expeditions conducted at sea using a rotating gravity machine and microscope found that the Earth’s oceans may not be absorbing as much carbon as researchers have long thought.

    Oceans are believed to absorb roughly 26 percent of global carbon dioxide emissions by drawing down CO2 from the atmosphere and locking it away. In this system, CO2 enters the ocean, where phytoplankton and other organisms consume about 70 percent of it. When these organisms eventually die, their soft, small structures sink to the bottom of the ocean in what looks like an underwater snowfall.

    This “marine snow” pulls carbon away from the surface of the ocean and sequesters it in the depths for millennia, which enables the surface waters to draw down more CO2 from the air. It’s one of Earth’s best natural carbon-removal systems. It’s so effective at keeping atmospheric CO2 levels in check that many research groups are trying to enhance the process with geoengineering techniques.

    But the new study, published on 11 October in Science, found that the sinking particles don’t fall to the ocean floor as quickly as researchers thought. Using a custom gravity machine that simulated marine snow’s native environment, the study’s authors observed that the particles produce mucus tails that act like parachutes, putting the brakes on their descent—sometimes even bringing them to a standstill.

    The physical drag leaves carbon lingering in the upper ocean rather than being safely sequestered in deeper waters. Living organisms can then consume the marine snow particles and respire their carbon back into the sea. Ultimately, this slows the rate at which the ocean draws down and sequesters additional CO2 from the air.

    The implications are grim: Scientists’ best estimates of how much CO2 the Earth’s oceans sequester could be way off. “We’re talking roughly hundreds of gigatonnes of discrepancy if you don’t include these marine snow tails,” says Manu Prakash, a bioengineer at Stanford University and one of the paper’s authors. The work was conducted by researchers at Stanford, Rutgers University in New Jersey, and Woods Hole Oceanographic Institution in Massachusetts.

    Oceans Absorb Less CO2 Than Expected

    Researchers for years have been developing numerical models to estimate marine carbon sequestration. Those models will need to be adjusted for the slower sinking speed of marine snow, Prakash says.

    The findings also have implications for startups in the fledgling marine carbon geoengineering field. These companies use techniques such as ocean alkalinity enhancement to augment the ocean’s ability to sequester carbon. Their success depends, in part, on using numerical models to prove to investors and the public that their techniques work. But their estimates are only as good as the models they use, and the scientific community’s confidence in them.

    “We’re talking roughly hundreds of gigatonnes of discrepancy if you don’t include these marine snow tails.” —Manu Prakash, Stanford University

    The Stanford researchers made the discovery on an expedition off the coast of Maine. There, they collected marine samples by hanging traps 80 meters below their boat. After pulling up a sample, the researchers quickly analyzed the contents while still on board the ship, using their wheel-shaped machine and microscope.

    The researchers built a microscope with a spinning wheel that simulates marine snow falling through seawater over longer distances than would otherwise be practical. [Photo: Prakash Lab/Stanford]

    The device simulates the organisms’ vertical travel over long distances. Samples go into a wheel about the size of a vintage film reel. The wheel spins constantly, allowing suspended marine-snow particles to sink while a camera captures their every move.

    The apparatus adjusts for temperature, light, and pressure to emulate marine conditions. Computational tools assess flow around the sinking particles, and custom software removes noise introduced into the data by the ship’s vibrations. To compensate for the tilt and roll of the ship, the researchers mounted the device on a two-axis gimbal.

    Slower Marine Snow Reduces Carbon Sequestration

    With this setup, the team observed that sinking marine snow generates an invisible halo-shaped comet tail made of viscoelastic transparent exopolymer—a mucus-like parachute. They discovered the invisible tail by adding small beads to the seawater sample in the wheel, and analyzing the way they flowed around the marine snow. “We found that the beads were stuck in something invisible trailing behind the sinking particles,” says Rahul Chajwa, a bioengineering postdoctoral fellow at Stanford.

    The tail introduces drag and buoyancy, doubling the amount of time marine snow spends in the upper 100 meters of the ocean, the researchers concluded. “This is the sedimentation law we should be following,” says Prakash, who hopes to get the results into climate models.
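
    The arithmetic behind that conclusion is straightforward: residence time in a layer is the layer’s depth divided by the effective sinking speed, so halving the speed doubles the time. The sketch below uses an illustrative sinking speed of 50 meters per day; marine snow sinking speeds vary widely, and this number is not from the study.

        # Residence time of sinking marine snow in the upper ocean (sketch).
        # The 50 m/day sinking speed is illustrative, not a figure from the study.
        layer_depth_m = 100

        def residence_days(sinking_speed_m_per_day):
            return layer_depth_m / sinking_speed_m_per_day

        v_no_tail = 50.0                 # assumed speed without the mucus tail
        v_with_tail = v_no_tail / 2      # the "parachute" roughly halves the speed

        print(residence_days(v_no_tail))    # 2.0 days in the top 100 meters
        print(residence_days(v_with_tail))  # 4.0 days: twice as long, leaving more
                                            # time for organisms to respire the carbon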

    The study will likely help models project carbon export—the process of transporting CO2 from the atmosphere to the deep ocean, says Lennart Bach, a marine biochemist at the University of Tasmania in Australia, who was not involved with the research. “The methodology they developed is very exciting and it’s great to see new methods coming into this research field,” he says.

    But Bach cautions against extrapolating the results too far. “I don’t think the study will change the numbers on carbon export as we know them right now,” because these numbers are derived from empirical methods that would have unknowingly included the effects of the mucus tail, he says.

    Marine snow may be slowed by “parachutes” of mucus while sinking, potentially lowering the rate at which the global ocean can sequester carbon in the depths. [Photo: Prakash Lab/Stanford]

    Prakash and his team came up with the idea for the microscope while conducting research on a human parasite that can travel dozens of meters. “We would make 5- to 10-meter-tall microscopes, and one day, while packing for a trip to Madagascar, I had this ‘aha’ moment,” says Prakash. “I was like: Why are we packing all these tubes? What if the two ends of these tubes were connected?”

    The group turned their linear tube into a closed circular channel—a hamster wheel approach to observing microscopic particles. Over five expeditions at sea, the team further refined the microscope’s design and fluid mechanics to accommodate marine samples, often tackling the engineering while on the boat and adjusting for flooding and high seas.

    In addition to the sedimentation physics of marine snow, the team also studies other plankton that may affect climate and carbon-cycle models. On a recent expedition off the coast of Northern California, the group discovered a cell with silica ballast that makes marine snow sink like a rock, Prakash says.

    The crafty gravity machine is one of Prakash’s many frugal inventions, which include an origami-inspired paper microscope, or “foldscope,” that can be attached to a smartphone, and a paper-and-string biomedical centrifuge dubbed a “paperfuge.”

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 09. February 2022. at 15:31



    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
    —Andrew Ng, CEO & Founder, Landing AI

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
    —Andrew Ng

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
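
    As an illustration of that kind of tooling (a minimal sketch with invented data, not Landing AI’s LandingLens product), flagging images whose annotators disagree can be as simple as a majority-agreement check:

        # Flag images whose labels are inconsistent across annotators (sketch
        # only; the data and the agreement rule are illustrative).
        from collections import Counter

        labels = {
            "img_0001.png": ["scratch", "scratch", "scratch"],
            "img_0002.png": ["scratch", "dent", "scratch"],
            "img_0003.png": ["pit_mark", "discoloration", "dent"],
        }

        def inconsistent(annotations, min_agreement=1.0):
            """Return image IDs whose annotator agreement falls below the threshold."""
            flagged = []
            for image_id, votes in annotations.items():
                top_count = Counter(votes).most_common(1)[0][1]
                if top_count / len(votes) < min_agreement:
                    flagged.append(image_id)
            return flagged

        # Review queue for relabeling: images where annotators did not all agree.
        print(inconsistent(labels))   # ['img_0002.png', 'img_0003.png']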

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
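
    A bare-bones version of that error analysis groups evaluation examples by a metadata tag and compares error rates per slice. The numbers below are invented purely to show the pattern:

        # Per-slice error analysis (sketch): find the condition where the model
        # underperforms, so data collection can be targeted there.
        from collections import defaultdict

        # (background_condition, was_the_prediction_wrong)
        eval_results = [
            ("quiet", False), ("quiet", False), ("quiet", True), ("quiet", False),
            ("car_noise", True), ("car_noise", True), ("car_noise", False),
            ("cafe", False), ("cafe", True), ("cafe", False),
        ]

        totals, errors = defaultdict(int), defaultdict(int)
        for condition, wrong in eval_results:
            totals[condition] += 1
            errors[condition] += int(wrong)

        for condition in totals:
            print(f"{condition:10s} error rate: {errors[condition] / totals[condition]:.0%}")
        # car_noise stands out (~67 percent), so collect more car-noise audio
        # rather than more data for every condition.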

    What about using synthetic data, is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

    “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
    —Andrew Ng

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
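
    One of those simpler tools, data augmentation targeted at a weak class, can be sketched in a few lines. The placeholder transform and file names here are illustrative, not Landing AI’s pipeline.

        # Targeted augmentation sketch: oversample only the weak class
        # ("pit_mark") identified by error analysis. Illustrative only.
        import random

        def augment(image):
            """Placeholder transform; a real pipeline would flip, rotate,
            adjust brightness, or synthesize defects onto clean casings."""
            return f"{image}_aug{random.randint(0, 999)}"

        dataset = {
            "scratch":  [f"scratch_{i}.png" for i in range(500)],
            "dent":     [f"dent_{i}.png" for i in range(480)],
            "pit_mark": [f"pit_{i}.png" for i in range(30)],    # under-represented
        }

        target_per_class = 500
        originals = list(dataset["pit_mark"])
        while len(dataset["pit_mark"]) < target_per_class:
            dataset["pit_mark"].append(augment(random.choice(originals)))

        print({k: len(v) for k, v in dataset.items()})
        # {'scratch': 500, 'dent': 480, 'pit_mark': 500}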

    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 08. February 2022. at 14:00



    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Heather Gorr [Photo: MathWorks]

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
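
    Gorr works with MATLAB tooling; purely as an illustration of the same idea, the sketch below fits a cheap polynomial surrogate to a handful of runs of an “expensive” model (here just a stand-in function) and then runs a Monte Carlo sweep on the surrogate.

        # Surrogate-model sketch: fit a cheap model to a few expensive runs,
        # then do the Monte Carlo sweep on the surrogate. The "expensive"
        # model below is a stand-in, not a real device simulation.
        import numpy as np

        def expensive_simulation(x):
            """Stand-in for a slow physics-based model."""
            return np.sin(3 * x) + 0.5 * x**2

        # 1) A small number of expensive runs
        x_train = np.linspace(0, 2, 15)
        y_train = expensive_simulation(x_train)

        # 2) Cheap surrogate: a low-order polynomial fit
        surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

        # 3) Monte Carlo on the surrogate: 100,000 samples cost almost nothing
        rng = np.random.default_rng(0)
        samples = rng.uniform(0, 2, size=100_000)
        predictions = surrogate(samples)
        print(f"mean={predictions.mean():.3f}, p99={np.percentile(predictions, 99):.3f}")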

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
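
    As a rough illustration in Python (the signals below are synthetic stand-ins for real sensor streams), synchronizing two channels onto a common time base and checking the frequency domain might look like this:

        # Resample two sensor streams onto a common time base, then inspect
        # the frequency domain. The signals are synthetic stand-ins.
        import numpy as np

        t_fast = np.arange(0, 1, 1 / 1000)              # 1-kHz vibration sensor
        t_slow = np.arange(0, 1, 1 / 100)               # 100-Hz temperature sensor
        vibration = np.sin(2 * np.pi * 60 * t_fast)     # 60-Hz component
        temperature = 25 + 0.5 * np.sin(2 * np.pi * 2 * t_slow)

        # Synchronize: interpolate the slow channel onto the fast time base
        temperature_resampled = np.interp(t_fast, t_slow, temperature)

        # Frequency content of the vibration channel
        spectrum = np.abs(np.fft.rfft(vibration))
        freqs = np.fft.rfftfreq(len(vibration), d=1 / 1000)
        print(f"dominant frequency: {freqs[spectrum.argmax()]:.0f} Hz")   # ~60 Hz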

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 07. February 2022. at 16:12



    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

    IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

    Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

    “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

    Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

    Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Photo: Nathan Fiske/MIT]

    In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects, which makes them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

    As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.

    In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

    On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

    “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

    This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
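
    A rough parallel-plate estimate shows why a nanometers-thin hBN dielectric shrinks the footprint so much. The numbers below are ballpark assumptions rather than the MIT team’s published values: a shunt capacitance of roughly 100 femtofarads, typical for transmon-style qubits; an out-of-plane relative permittivity of about 3.5 for hBN; and a 5-nanometer stack.

        # Back-of-envelope: plate area needed with a few-nanometer hBN dielectric,
        # compared with a ~100 x 100 micrometer coplanar design. Ballpark values.
        EPS0 = 8.854e-12            # vacuum permittivity, F/m
        eps_r_hbn = 3.5             # approximate out-of-plane permittivity of hBN
        thickness_m = 5e-9          # ~5 nm of stacked hBN monolayers
        target_c = 100e-15          # ~100 fF, a typical transmon shunt capacitance

        # Parallel-plate capacitor: C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
        area_m2 = target_c * thickness_m / (EPS0 * eps_r_hbn)
        side_um = (area_m2 ** 0.5) * 1e6

        coplanar_side_um = 100
        print(f"parallel-plate side: ~{side_um:.1f} um")                      # a few micrometers
        print(f"footprint reduction: ~{(coplanar_side_um / side_um) ** 2:.0f}x")

    Even after leaving room for wiring and qubit spacing, an area reduction of that order is consistent with the 100-fold increase in qubit density the researchers report.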

    “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

    Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.