IEEE News

IEEE Spectrum

  • Solar-Powered Tech Transforms Remote Learning
    by Maurizio Arseni on 19. April 2025. at 13:00



    When Marc Alan Sperber, of Arizona State University’s Education for Humanity initiative, arrived at a refugee camp along the Thai-Myanmar border, the scene was typical of many crisis zones: no internet, unreliable power, and few resources. But within minutes, he and local NGO partners were able to set up a full-featured digital classroom using nothing more than a solar panel and a yellow device the size of a soup can.

    Students equipped with only basic smartphones and old tablets were accessing content through Beekee, a Swiss-built, lightweight, standalone microserver that can turn any location into an offline-first, pop-up digital classroom.

    While international initiatives like Giga try to connect every school to the Internet, the timeline and cost remain hard to predict. And even then, according to UNESCO’s Global Education Monitoring Report, keeping schools in low-income countries online could run up to a billion dollars a day.

    Beekee, founded by Vincent Widmer and Sergio Estupiñán during their PhD studies at the educational technology department of the University of Geneva, seeks to bridge the connectivity gap through its easy-to-deploy device.

    At the core of Beekee’s box is a Raspberry Pi-based microserver, enclosed in a ventilated, 3D-printed, thermoresistant plastic shell. Optimized for passive and active cooling, weather resilience, and field repairs, it can withstand heat in arid climates like those of Jordan and northern Kenya.

    With its devices often deployed in remote regions, where repair options are few, Beekee supplies 3D print-friendly STL and G-code files to partners, enabling them to fabricate replacement parts on a 3D printer. “We’ve seen them use recycled plastic filament in Kenya and Lebanon to print replacement parts within days,” says Estupiñán.

    The device consumes less than 10 watts of power, making it easy to run for over 12 hours on an inexpensive 20,000 milliamp-hour (mAh) power bank. Alternatively, Beekee can run on compact solar panels, with a battery backup that can provide up to two hours of operation, even on a cloudy day or at night. “This kind of energy efficiency is essential,” says Marcel Hertel of GIZ, the German development agency that uses Beekee in Indonesia as a digital learning platform for training farmers in remote areas. “We work where even charging a phone is a challenge,” he says.
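
    As a back-of-the-envelope check on that claim (the cell voltage and converter efficiency below are typical assumptions, not Beekee specifications), the runtime follows from the power bank’s stored energy divided by the average draw:

        # Rough runtime estimate for a sub-10 W device on a 20,000 mAh power bank.
        # Assumptions (not from Beekee): 3.7 V nominal cell voltage, 85% efficiency.

        def runtime_hours(capacity_mah: float, load_watts: float,
                          cell_voltage: float = 3.7, efficiency: float = 0.85) -> float:
            """Estimated runtime in hours for a constant load."""
            energy_wh = capacity_mah / 1000 * cell_voltage   # stored energy, watt-hours
            return energy_wh * efficiency / load_watts

        print(runtime_hours(20_000, 5))   # ~12.6 hours at a 5 W average draw
        print(runtime_hours(20_000, 10))  # ~6.3 hours at the 10 W ceiling

    Clearing 12 hours therefore implies an average draw well below the 10-watt ceiling, which is consistent with the light-duty workload of serving cached content to a classroom.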

    The device runs on a custom Linux distribution and open-source software stack. Its Wi-Fi hotspot has a 40-meter range, providing enough coverage for two adjacent classrooms or a small courtyard. Up to 40 learners can connect simultaneously using their smartphones, tablets, or laptops, with no apps or internet access required. Beekee’s interface is browser-based.

    However, the yellow box isn’t meant to replace the internet. It’s designed to complement it, syncing over 3G or 4G connections whenever bandwidth is available.

    In many deployment zones, though, 3G/4G connectivity exists but is fragile. Mobile networks suffer from speed caps, high data costs, and congestion, so streaming educational content or relying on cloud platforms becomes impractical. But satellite-based internet connectivity, including emerging LEO satellite providers like Starlink, still provides windows of opportunity to download and upload content on the yellow box.

    Beekee’s replacement part 3D design files can be used for remotely repairing the organization’s rugged e-learning boxes, using only a screwdriver and a 3D printer. Beekee

    Offline Moodle for E-Learning

    Beekee hosts e-learning tools for teachers and students, offering an offline Moodle instance—an open-source learning management system. Via Moodle, educators can use SCORM packages and H5P modules, technical standards commonly used to package and deliver e-learning material.

    “Beekee is designed to interoperate with existing training platforms,” says Estupiñán. “We sync learner progress, content updates, and analytics without changing how an organization already works.”
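
    For a sense of what that kind of sync can look like in practice, here is a minimal sketch against Moodle’s standard REST web-service interface; the host, token, and choice of functions are illustrative assumptions, not Beekee’s actual sync code.

        # Minimal sketch of pulling learner progress from an offline Moodle instance
        # over Moodle's standard REST web-service API when bandwidth is available.
        # Not Beekee's actual sync code; the host and token are placeholders.
        import requests

        MOODLE_URL = "http://beekee.local/webservice/rest/server.php"  # hypothetical host
        TOKEN = "replace-with-a-web-service-token"

        def moodle_call(function: str, **params):
            """Call a Moodle web-service function and return the parsed JSON reply."""
            payload = {"wstoken": TOKEN, "wsfunction": function,
                       "moodlewsrestformat": "json", **params}
            resp = requests.post(MOODLE_URL, data=payload, timeout=30)
            resp.raise_for_status()
            return resp.json()

        # List the courses hosted on the box, then fetch grade items for one of them.
        courses = moodle_call("core_course_get_courses")
        grades = moodle_call("gradereport_user_get_grade_items", courseid=courses[0]["id"])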

    Beekee also comes with Open Educational Resources (OER), including Offline Wikipedia, Khan Academy videos in multiple languages, and curated instructional content. “We don’t want just to deliver content,” says Estupiñán, “but also create a collaborative, engaging learning environment.”

    Before turning to Beekee, some organizations attempted to create their own offline learning platforms or worked with third-party developers.

    Some of them overlooked realities like extreme heat, power outages, and near-zero internet bandwidth—while others tried solutions that were essentially file libraries masquerading as learning platforms.

    “Most standalone systems don’t support remote updates or syncing of learner data and analytics,” says Sperber. “They delivered PDFs, not actual learning experiences that include interactive practice, assessment, feedback, or anything of the like.”

    Additionally, many of the systems lacked sustainable maintenance strategies and devices broke down under field conditions. “The tech might have looked sleek, but when things failed, there was no repair plan,” says Estupiñán. “We designed Beekee so that even non-specialist users could fix things with a screwdriver and a local 3D printer.”

    Beekee runs its own production line using a 3D printer farm in Geneva, capable of producing up to 30 custom units per day. But it doesn’t make only hardware; it also offers training, instructional design support, and ongoing technical help. “The real challenge isn’t just getting technology into the field, it’s keeping it running,” says Estupiñán.

    The Next Frontier: Offline AI

    Future plans include integrating small language models (SLMs) directly into the box. A lightweight AI engine could automate tasks like grading, flagging conceptual errors, or supporting teachers with localized lesson plans.

    “Offline AI is the next big step,” says Estupiñán. “It lets us bring intelligent support to teachers who may be isolated, undertrained, or overwhelmed.”

    Beekee has partnered with more than 40 organizations across nearly 30 countries. Founded five years ago, it now has a team of seven. The company recently joined UNESCO’s Global Education Coalition alongside Coursera, Microsoft, and Google. Even though Beekee is primarily used in low-resource environments, its offline-first design is now drawing interest in broader contexts.

    In France and Switzerland, secondary schools are beginning to use Beekee devices to give students digital access without exposing them fully to the internet during class. Teachers use them for outdoor projects, such as biology fieldwork, allowing students to share photos and notes over a local network. “The system is also being considered for secure, offline learning in correctional facilities, and companies are exploring its potential for training in isolated, privacy-sensitive settings,” says Widmer.

  • Video Friday: Robot Boxing
    by Evan Ackerman on 18. April 2025. at 16:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
    ICRA 2025: 19–23 May 2025, ATLANTA, GA
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
    RSS 2025: 21–25 June 2025, LOS ANGELES
    ETH Robotics Summer School: 21–27 June 2025, GENEVA
    IAS 2025: 30 June–4 July 2025, GENOA, ITALY
    ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
    IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
    IFAC Symposium on Robotics: 15–18 July 2025, PARIS
    RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
    RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
    CLAWAR 2025: 5–7 September 2025, SHENZHEN
    CoRL 2025: 27–30 September 2025, SEOUL
    IEEE Humanoids: 30 September–2 October 2025, SEOUL
    World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
    IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

    Enjoy today’s videos!

    Let’s step into a new era of Sci-Fi, join the fun together! Unitree will be livestreaming robot combat in about a month, stay tuned!

    [ Unitree ]

    A team of scientists and students from Delft University of Technology in the Netherlands (TU Delft) has taken first place at the A2RL Drone Championship in Abu Dhabi - an international race that pushes the limits of physical artificial intelligence, challenging teams to fly fully autonomous drones using only a single camera. The TU Delft drone competed against 13 autonomous drones and even human drone racing champions, using innovative methods to train deep neural networks for high-performance control.

    [ TU Delft ]

    RAI’s Ultra Mobile Vehicle (UMV) is learning some new tricks!

    [ RAI Institute ]

    With 28 moving joints (20 QDD actuators + 8 servo motors), Cosmo can walk with its two feet with a speed of up to 1 m/s (0.5 m/s nominal) and balance itself even when pushed. Coordinated with the motion of its head, fingers, arms and legs, Cosmo has a loud and expressive voice for effective interaction with humans. Cosmo speaks in canned phrases from the 90’s cartoon he originates from and his speech can be fully localized in any language.

    [ RoMeLa ]

    We wrote about Parallel Systems back in January of 2022, and it’s good to see that their creative take on autonomous rail is still moving along.

    [ Parallel Systems ]

    RoboCake is ready. This edible robotic cake is the result of a collaboration between researchers from EPFL (the Swiss Federal Institute of Technology in Lausanne), the Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) and pastry chefs and food scientists from EHL in Lausanne. It takes the form of a robotic wedding cake, decorated with two gummy robotic bears and edible dark chocolate batteries that power the candles.

    [ EPFL ]

    ROBOTERA’s fully self-developed five-finger dexterous hand has upgraded its skills, transforming into an esports hand in the blink of an eye! The XHAND1 features 12 active degrees of freedom, pioneering an industry-first fully direct-drive joint design. It offers exceptional flexibility and sensitivity, effortlessly handling precision tasks like finger opposition, picking up soft objects, and grabbing cards. Additionally, it delivers powerful grip strength with a maximum payload of nearly 25 kilograms, making it adaptable to a wide range of complex application scenarios.

    [ ROBOTERA ]

    Witness the future of industrial automation as Extend Robotics trials their cutting-edge humanoid robot in Leyland factories. In this groundbreaking video, see how the robot skillfully connects a master service disconnect unit—a critical task in factory operations. Watch onsite workers seamlessly collaborate with the robot using an intuitive XR (extended reality) interface, blending human expertise with robotic precision.

    [ Extend Robotics ]

    I kind of like the idea of having a mobile robot that lives in my garage and manages the charging and cleaning of my car.

    [ Flexiv ]

    How can we ensure robots using foundation models, such as large language models (LLMs), won’t “hallucinate” when executing tasks in complex, previously unseen environments? Our Safe and Assured Foundation Robots for Open Environments (SAFRON) Advanced Research Concept (ARC) seeks ideas to make sure robots behave only as directed & intended.

    [ DARPA ]

    What if doing your chores were as easy as flipping a switch? In this talk and live demo, roboticist and founder of 1X Bernt Børnich introduces NEO, a humanoid robot designed to help you out around the house. Watch as NEO shows off its ability to vacuum, water plants and keep you company, while Børnich tells the story of its development — and shares a vision for robot helpers that could free up your time to focus on what truly matters.

    [ 1X ] via [ TED ]

    Rodney Brooks gave a keynote at the Stanford HAI spring conference on Robotics in a Human-Centered World.

    There are a bunch of excellent talks from this conference on YouTube at the link below, but I think this panel is especially good, as a discussion of going from research to real-world impact.

    [ YouTube ] via [ Stanford HAI ]

    Wing CEO Adam Woodworth discusses consumer drone delivery with Peter Diamandis at Abundance 360.

    [ Wing ]

    This CMU RI Seminar is from Sangbae Kim, who was until very recently at MIT but is now the Robotics Architect at Meta’s Robotics Studio.

    [ CMU RI ]

  • Bell Labs Turns 100, Plans to Leave Its Old Headquarters
    by Dina Genkina on 18. April 2025. at 13:00



    This year, Bell Labs celebrates its hundredth birthday. At a centennial celebration held last week at the Murray Hill, New Jersey, campus, the lab’s impressive technological history was honored with talks, panels, demos, and more than a half dozen gracefully aging Nobel laureates.

    During its impressive 100-year tenure, Bell Labs scientists invented the transistor, laid down the theoretical grounding for the digital age, discovered radio astronomy (which led to the first evidence in favor of the Big Bang theory), contributed to the invention of the laser, developed the Unix operating system, invented the charge-coupled device (CCD) camera, and made many more scientific and technological contributions that have earned Bell Labs ten Nobel Prizes and five Turing Awards.

    “I normally tell people, this is the ‘Bell Labs invented everything’ tour,” said Nokia Bell Labs archivist Ed Eckert as he led a tour through the lab’s history exhibit.

    The lab is smaller than it once was. The main campus in Murray Hill, New Jersey, feels a bit like a ghost town, with empty cubicles and offices lining the halls. The lab is now planning a move to a smaller facility in New Brunswick, New Jersey, sometime in 2027. In its heyday, Bell Labs boasted around 6,000 workers at the Murray Hill location. That number has since dwindled to about 1,000, though more employees work at other locations around the world.

    The Many Accomplishments of Bell Labs

    Despite its somewhat diminished size, Bell Labs, now owned by Nokia, is alive and kicking.

    “As Nokia Bell Labs, we have a dual mission,” says Bell Labs president Peter Vetter. “On the one hand, we need to support the longevity of the core business. That is networks, mobile networks, optical networks, the networking at large, security, device research, ASICs, optical components that support that network system. And then we also have the second part of the mission, which is help the company grow into new areas.”

    Some of the new areas for growth were represented in live demonstrations at the centennial.

    A team at Bell Labs is working on establishing the first cellular network on the moon. In February, Intuitive Machines sent its second lunar mission, Athena, with Bell Labs’ technology on board. The team fit two full cellular networks into a briefcase-sized box, the most compact networking system ever made. This cell network was self-deploying: Nobody on Earth needed to tell it what to do. Although the lunar lander tipped on its side upon landing and quickly went offline due to lack of solar power, Bell Labs’ networking module had enough time to power up and transmit data back to Earth.

    Another Bell Labs group is focused on monitoring the world’s vast network of undersea fiber-optic cables. Undersea cables are subject to interruptions, be it from adversarial sabotage, undersea events like earthquakes or tsunamis, or fishing nets and ship anchors. The team wants to turn these cables into a sensor network, capable of monitoring the environment around a cable for possible damage. The team has developed a real-time technique for monitoring mild changes in cable length, so sensitive that the lab-based demo was able to pick up tiny vibrations from the presenter’s speaking voice. This technique can pin changes down to a 10-kilometer interval of cable, greatly simplifying the search for affected regions.
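
    The localization idea can be illustrated with a basic time-of-flight calculation; the sketch below is a generic, textbook-style estimate with an assumed group index for silica fiber, not Bell Labs’ actual sensing technique.

        # Toy illustration of localizing a disturbance along a fiber from a
        # round-trip time measurement. Generic textbook calculation only; this is
        # not Bell Labs' monitoring method, and the group index is an assumption.
        C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
        GROUP_INDEX = 1.468           # typical group index of silica fiber (assumed)

        def disturbance_position_km(round_trip_s: float) -> float:
            """Distance from the landing station to the reflection point, in km."""
            v = C_VACUUM_KM_S / GROUP_INDEX   # propagation speed inside the fiber
            return v * round_trip_s / 2       # halve it: the light travels out and back

        print(disturbance_position_km(5e-3))  # ~510 km for a 5-millisecond round trip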

    Nokia is taking the path less traveled when it comes to quantum computing, pursuing so-called topological quantum bits. These qubits, if made, would be much more robust to noise than other approaches and more readily amenable to scaling. However, building even a single qubit of this kind has been elusive. Nokia Bell Labs’ Robert Willett has been at it since his graduate work in 1988, and the team expects to demonstrate the first NOT gate with this architecture later this year.

    Beam-steering antennas for point-to-point fixed wireless are normally made on printed circuit boards. But as the world goes to higher frequencies, toward 6G, conventional printed circuit board materials are no longer cutting it—the signal loss makes them economically unviable. That’s why a team at Nokia Bell Labs has developed a way to print circuit boards on glass instead. The result is a small glass chip that has 64 integrated circuits on one side and the antenna array on the other. A 100 gigahertz link using the tech was deployed at the Paris Olympics in 2024, and a commercial product is on the roadmap for 2027.

    Mining, particularly autonomous mining that avoids putting humans in harm’s way, relies heavily on networking. That’s why Nokia has entered the mining business, developing smart digital twin technology that models the mine and the autonomous trucks that work in it. Their robo-truck system features two cellular modems, three Wi-Fi cards, and twelve Ethernet ports. The system collects different types of sensor data and correlates them on a virtual map of the mine (the digital twin). Then, it uses AI to suggest necessary maintenance and to optimize scheduling.

    The lab is also dipping into AI. One team is working on integrating large language models with robots for industrial applications. These robots have access to a digital twin model of the space they are in and have a semantic representation of certain objects in their surroundings. In a demo, a robot was verbally asked to identify missing boxes in a rack; it successfully pointed out which box wasn’t in its intended place and, when prompted, traveled to the storage area and identified the replacement. The key is to build robots that can “reason about the physical world,” says Matthew Andrews, a researcher in the AI lab. A test system will be deployed in a warehouse in the United Arab Emirates in the next six months.

    Despite impressive scientific demonstrations, there was an air of apprehension about the event. In a panel discussion about the future of innovation, Princeton engineering dean Andrea Goldsmith said, “I’ve never been more worried about the innovation ecosystem in the US.” Former Google CEO Eric Schmidt said in a keynote that “The current administration seems to be trying to destroy university R&D.” Nevertheless, Schmidt and others expressed optimism about the future of innovation at Bell Labs and the US more generally. “We will win, because we are right and R&D is the foundation of economic growth,” he said.

  • The Future of AI and Robotics Is Being Led by Amazon’s Next-Gen Warehouses
    by Dexter Johnson on 17. April 2025. at 11:09



    This is a sponsored article brought to you by Amazon.

    The cutting edge of robotics and artificial intelligence (AI) isn’t found only at NASA or in the top university labs; increasingly, it is being developed in the warehouses of the e-commerce company Amazon. As online shopping continues to grow, companies like Amazon are pushing the boundaries of these technologies to meet consumer expectations.

    Warehouses, the backbone of the global supply chain, are undergoing a transformation driven by technological innovation. Amazon, at the forefront of this revolution, is leveraging robotics and AI to shape the warehouses of the future. Far from being just a logistics organization, Amazon is positioning itself as a leader in technological innovation, making it a prime destination for engineers and scientists seeking to shape the future of automation.

    Amazon: A Leader in Technological Innovation

    Amazon’s success in e-commerce is built on a foundation of continuous technological innovation. Its fulfillment centers are increasingly becoming hubs of cutting-edge technology where robotics and AI play a pivotal role. Heath Ruder, Director of Product Management at Amazon, explains how Amazon’s approach to integrating robotics with advanced material handling equipment is shaping the future of its warehouses.

    “We’re integrating several large-scale products into our next-generation fulfillment center in Shreveport, Louisiana,” says Ruder. “It’s our first opportunity to get our robotics systems combined under one roof and understand the end-to-end mechanics of how a building can run with incorporated autonomation.” Ruder is referring to the facility’s deployment of its Automated Storage and Retrieval Systems (ASRS), called Sequoia, as well as robotic arms like “Robin” and “Cardinal” and Amazon’s proprietary autonomous mobile robot, “Proteus”.

    Amazon has already deployed “Robin”, a robotic arm that sorts packages for outbound shipping by transferring packages from conveyors to mobile robots. This system is already in use across various Amazon fulfillment centers and has completed over three billion successful package moves. “Cardinal” is another robotic arm system that efficiently packs packages into carts before the carts are loaded onto delivery trucks.

    “Proteus” is Amazon’s autonomous mobile robot designed to work around people. Unlike traditional robots confined to a restricted area, Proteus is fully autonomous and navigates through fulfillment centers using sensors and a mix of AI and machine-learning systems. It works with human workers and other robots to transport carts full of packages more efficiently.

    The integration of these technologies is estimated to increase operational efficiency by 25 percent. “Our goal is to improve speed, quality, and cost. The efficiency gains we’re seeing from these systems are substantial,” says Ruder. However, the real challenge is scaling this technology across Amazon’s global network of fulfillment centers. “Shreveport was our testing ground and we are excited about what we have learned and will apply at our next building launching in 2025.”

    Amazon’s investment in cutting-edge robotics and AI systems is not just about operational efficiency. It underscores the company’s commitment to being a leader in technological innovation and workplace safety, making it a top destination for engineers and scientists looking to solve complex, real-world problems.

    How AI Models Are Trained: Learning from the Real World

    One of the most complex challenges Amazon’s robotics team faces is how to make robots capable of handling a wide variety of tasks that require discernment. Mike Wolf, a principal scientist at Amazon Robotics, plays a key role in developing AI models that enable robots to better manipulate objects, across a nearly infinite variety of scenarios.

    “The complexity of Amazon’s product catalog—hundreds of millions of unique items—demands advanced AI systems that can make real-time decisions about object handling,” explains Wolf. But how do these AI systems learn to handle such an immense variety of objects? Wolf’s team is developing machine learning algorithms that enable robots to learn from experience.

    “We’re developing the next generation of AI and robotics. For anyone interested in this field, Amazon is the place where you can make a difference on a global scale.” —Mike Wolf, Amazon Robotics

    In fact, robots at Amazon continuously gather data from their interactions with objects, refining their ability to predict how items will be affected when manipulated. Every interaction a robot has—whether it’s picking up a package or placing it into a container—feeds back into the system, refining the AI model and helping the robot to improve. “AI is continually learning from failure cases,” says Wolf. “Every time a robot fails to complete a task successfully, that’s actually an opportunity for the system to learn and improve.” This data-centric approach supports the development of state-of-the-art AI systems that can perform increasingly complex tasks, such as predicting how objects are affected when manipulated. This predictive ability will help robots determine the best way to pack irregularly shaped objects into containers or handle fragile items without damaging them.

    “We want AI that understands the physics of the environment, not just basic object recognition. The goal is to predict how objects will move and interact with one another in real time,” Wolf says.

    What’s Next in Warehouse Automation

    Valerie Samzun, Senior Technical Product Manager at Amazon, leads a cutting-edge robotics program that aims to enhance workplace safety and make jobs more rewarding, fulfilling, and intellectually stimulating by allowing robots to handle repetitive tasks.

    “The goal is to reduce certain repetitive and physically demanding tasks from associates,” explains Samzun. “This allows them to focus on higher-value tasks in skilled roles.” This shift not only makes warehouse operations more efficient but also opens up new opportunities for workers to advance their careers by developing new technical skills.

    “Our research combines several cutting-edge technologies,” Samzun shared. “The project uses robotic arms equipped with compliant manipulation tools to detect the amount of force needed to move items without damaging them or other items.” This is an advancement that incorporates learnings from previous Amazon robotics projects. “This approach allows our robots to understand how to interact with different objects in a way that’s safe and efficient,” says Samzun. In addition to robotic manipulation, the project relies heavily on AI-driven algorithms that determine the best way to handle items and utilize space.

    Samzun believes the technology will eventually expand to other parts of Amazon’s operations, finding multiple applications across its vast network. “The potential applications for compliant manipulation are huge,” she says.

    Attracting Engineers and Scientists: Why Amazon is the Place to Be

    As Amazon continues to push the boundaries of what’s possible with robotics and AI, it’s also becoming a highly attractive destination for engineers, scientists, and technical professionals. Both Wolf and Samzun emphasize the unique opportunities Amazon offers to those interested in solving real-world problems at scale.

    For Wolf, who transitioned to Amazon from NASA’s Jet Propulsion Laboratory, the appeal lies in the sheer impact of the work. “The draw of Amazon is the ability to see your work deployed at scale. There’s no other place in the world where you can see your robotics work making a direct impact on millions of people’s lives every day,” he says. Wolf also highlights the collaborative nature of Amazon’s technical teams. Whether working on AI algorithms or robotic hardware, scientists and engineers at Amazon are constantly collaborating to solve new challenges.

    Amazon’s culture of innovation extends beyond just technology. It’s also about empowering people. Samzun, who comes from a non-engineering background, points out that Amazon is a place where anyone with the right mindset can thrive, regardless of their academic background. “I came from a business management background and found myself leading a robotics project,” she says. “Amazon provides the platform for you to grow, learn new skills, and work on some of the most exciting projects in the world.”

    For young engineers and scientists, Amazon offers a unique opportunity to work on state-of-the-art technology that has real-world impact. “We’re developing the next generation of AI and robotics,” says Wolf. “For anyone interested in this field, Amazon is the place where you can make a difference on a global scale.”

    The Future of Warehousing: A Fusion of Technology and Talent

    From Amazon’s leadership, it’s clear that the future of warehousing is about more than just automation. It’s about harnessing the power of robotics and AI to create smarter, more efficient, and safer working environments. But at its core, it remains centered on people: the people in its operations and the engineers, scientists, and technical professionals who make this technology possible and who are driven to solve some of the world’s most complex problems.

    Amazon’s commitment to innovation, combined with its vast operational scale, makes it a leader in warehouse automation. The company’s focus on integrating robotics, AI, and human collaboration is transforming how goods are processed, stored, and delivered. And with so many innovative projects underway, the future of Amazon’s warehouses is one where technology and human ingenuity work hand in hand.

    “We’re building systems that push the limits of robotics and AI,” says Wolf. “If you want to work on the cutting edge, this is the place to be.”

  • Future Chips Will Be Hotter Than Ever
    by James Myers on 16. April 2025. at 13:30



    For over 50 years now, egged on by the seeming inevitability of Moore’s Law, engineers have managed to double the number of transistors they can pack into the same area every two years. But while the industry was chasing logic density, an unwanted side effect became more prominent: heat.

    In a system-on-chip (SoC) like today’s CPUs and GPUs, temperature affects performance, power consumption, and energy efficiency. Over time, excessive heat can slow the propagation of critical signals in a processor and lead to a permanent degradation of a chip’s performance. It also causes transistors to leak more current and as a result waste power. In turn, the increased power consumption cripples the energy efficiency of the chip, as more and more energy is required to perform the exact same tasks.

    The root of the problem lies with the end of another law: Dennard scaling. This law states that as the linear dimensions of transistors shrink, voltage should decrease such that the total power consumption for a given area remains constant. Dennard scaling effectively ended in the mid-2000s at the point where any further reductions in voltage were not feasible without compromising the overall functionality of transistors. Consequently, while the density of logic circuits continued to grow, power density did as well, generating heat as a by-product.

    As chips become increasingly compact and powerful, efficient heat dissipation will be crucial to maintaining their performance and longevity. To ensure this efficiency, we need a tool that can predict how new semiconductor technology—processes to make transistors, interconnects, and logic cells—changes the way heat is generated and removed. My research colleagues and I at Imec have developed just that. Our simulation framework uses industry-standard and open-source electronic design automation (EDA) tools, augmented with our in-house tool set, to rapidly explore the interaction between semiconductor technology and the systems built with it.

    The results so far are inescapable: The thermal challenge is growing with each new technology node, and we’ll need new solutions, including new ways of designing chips and systems, if there’s any hope that they’ll be able to handle the heat.

    The Limits of Cooling

    Traditionally, an SoC is cooled by blowing air over a heat sink attached to its package. Some data centers have begun using liquid instead because it can absorb more heat than gas. Liquid coolants—typically water or a water-based mixture—may work well enough for the latest generation of high-performance chips such as Nvidia’s new AI GPUs, which reportedly consume an astounding 1,000 watts. But neither fans nor liquid coolers will be a match for the smaller-node technologies coming down the pipeline.

    Heat follows a complex path as it’s removed from a chip, but 95 percent of it exits through the heat sink. Imec

    Take, for instance, nanosheet transistors and complementary field-effect transistors (CFETs). Leading chip manufacturers are already shifting to nanosheet devices, which swap the fin in today’s fin field-effect transistors for a stack of horizontal sheets of semiconductor. CFETs take that architecture to the extreme, vertically stacking more sheets and dividing them into two devices, thus placing two transistors in about the same footprint as one. Experts expect the semiconductor industry to introduce CFETs in the 2030s.

    In our work, we looked at an upcoming version of the nanosheet called A10 (referring to a node of 10 angstroms, or 1 nanometer) and a version of the CFET called A5, which Imec projects will appear two generations after the A10. Simulations of our test designs showed that the power density in the A5 node is 12 to 15 percent higher than in the A10 node. This increased density will, in turn, lead to a projected temperature rise of 9 °C for the same operating voltage.

    Complementary field-effect transistors will stack nanosheet transistors atop each other, increasing density and temperature. To operate at the same temperature as nanosheet transistors (A10 node), CFETs (A5 node) will have to run at a reduced voltage. Imec

    Nine degrees might not seem like much. But in a data center, where hundreds of thousands to millions of chips are packed together, it can mean the difference between stable operation and thermal runaway—that dreaded feedback loop in which rising temperature increases leakage power, which increases temperature, which increases leakage power, and so on until, eventually, safety mechanisms must shut down the hardware to avoid permanent damage.
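
    To make that feedback loop concrete, here is a minimal numerical sketch of the mechanism; the leakage model, thermal resistance, and shutdown threshold are illustrative assumptions, not measurements of any real chip.

        # Toy model of thermal runaway: leakage power grows with temperature,
        # which raises temperature, which grows leakage further. All constants
        # are illustrative assumptions, not data from any real chip.

        def simulate(p_dynamic, t_ambient=40.0, r_th=0.3, steps=50):
            """Iterate T = T_ambient + R_th * (P_dynamic + P_leak(T)) toward a fixed point."""
            temp = t_ambient
            for _ in range(steps):
                p_leak = 10.0 * 2 ** ((temp - 25.0) / 20.0)  # leakage assumed to double every 20 °C
                temp = t_ambient + r_th * (p_dynamic + p_leak)
                if temp > 125.0:  # assumed emergency shutdown threshold
                    return f"thermal runaway: shutdown at {temp:.0f} °C"
            return f"stable at {temp:.1f} °C"

        print(simulate(p_dynamic=60.0))   # settles near 75 °C
        print(simulate(p_dynamic=150.0))  # leakage growth outpaces heat removal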

    Researchers are pursuing advanced alternatives to basic liquid and air cooling that may help mitigate this kind of extreme heat. Microfluidic cooling, for instance, uses tiny channels etched into a chip to circulate a liquid coolant inside the device. Other approaches include jet impingement, which involves spraying a gas or liquid at high velocity onto the chip’s surface, and immersion cooling, in which the entire printed circuit board is dunked in the coolant bath.

    But even if these newer techniques come into play, relying solely on coolers to dispense with extra heat will likely be impractical. That’s especially true for mobile systems, which are limited by size, weight, battery power, and the need to not cook their users. Data centers, meanwhile, face a different constraint: Because cooling is a building-wide infrastructure expense, it would cost too much and be too disruptive to update the cooling setup every time a new chip arrives.

    Performance Versus Heat

    Luckily, cooling technology isn’t the only way to stop chips from frying. A variety of system-level solutions can keep heat in check by dynamically adapting to changing thermal conditions.

    One approach places thermal sensors around a chip. When the sensors detect a worrying rise in temperature, they signal a reduction in operating voltage and frequency—and thus power consumption—to counteract heating. But while such a scheme solves thermal issues, it might noticeably affect the chip’s performance. For example, the chip might always work poorly in hot environments, as anyone who’s ever left their smartphone in the sun can attest.
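
    As a concrete illustration of such a scheme, the sketch below implements a bare-bones throttling policy of the kind described; the operating points, thresholds, and sensor readings are invented for illustration rather than taken from any real SoC.

        # Minimal sketch of sensor-driven throttling: when a thermal sensor reports
        # a temperature above a threshold, step down to a lower voltage/frequency
        # operating point; step back up once the chip cools. Values are illustrative.

        OPERATING_POINTS = [(0.55, 1.0), (0.65, 1.6), (0.75, 2.4)]  # (volts, GHz), low to high
        T_HOT, T_COOL = 95.0, 80.0   # throttle above 95 °C, recover below 80 °C (assumed)

        def next_operating_point(level: int, temp_c: float) -> int:
            """Return the operating-point index to use on the next control tick."""
            if temp_c > T_HOT and level > 0:
                return level - 1            # too hot: drop voltage and frequency
            if temp_c < T_COOL and level < len(OPERATING_POINTS) - 1:
                return level + 1            # cool enough: restore performance
            return level                    # otherwise hold

        level = 2  # start at the fastest point
        for temp in [70, 88, 97, 99, 92, 85, 78, 72]:   # simulated sensor readings, °C
            level = next_operating_point(level, temp)
            print(temp, OPERATING_POINTS[level])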

    Another approach, called thermal sprinting, is especially useful for multicore data-center CPUs. It is done by running a core until it overheats and then shifting operations to a second core while the first one cools down. This process maximizes the performance of a single thread, but it can cause delays when work must migrate between many cores for longer tasks. Thermal sprinting also reduces a chip’s overall throughput, as some portion of it will always be disabled while it cools.

    System-level solutions thus require a careful balancing act between heat and performance. To apply them effectively, SoC designers must have a comprehensive understanding of how power is distributed on a chip and where hot spots occur, where sensors should be placed and when they should trigger a voltage or frequency reduction, and how long it takes parts of the chip to cool off. Even the best chip designers, though, will soon need even more creative ways of managing heat.

    Making Use of a Chip’s Backside

    A promising pursuit involves adding new functions to the underside, or backside, of a wafer. This strategy mainly aims to improve power delivery and computational performance. But it might also help resolve some heat problems.

    New technologies can reduce the voltage that needs to be delivered to a multicore processor so that the chip maintains a minimum voltage while operating at an acceptable frequency. A backside power-delivery network does this by reducing resistance. Backside capacitors lower transient voltage losses. Backside integrated voltage regulators allow different cores to operate at different minimum voltages as needed. Imec

    Imec foresees several backside technologies that may allow chips to operate at lower voltages, decreasing the amount of heat they generate. The first technology on the road map is the so-called backside power-delivery network (BSPDN), which does precisely what it sounds like: It moves power lines from the front of a chip to the back. All the advanced CMOS foundries plan to offer BSPDNs by the end of 2026. Early demonstrations show that they lessen resistance by bringing the power supply much closer to the transistors. Less resistance results in less voltage loss, which means the chip can run at a reduced input voltage. And when voltage is reduced, power density drops—and so, in turn, does temperature.
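
    The payoff of a lower supply voltage follows from the standard first-order CMOS switching-power relation, given below as a textbook approximation rather than an Imec-specific model:

        P_{\text{dyn}} \approx \alpha \, C \, V_{DD}^{2} \, f
        \qquad\Rightarrow\qquad
        \frac{P_{\text{new}}}{P_{\text{old}}} \approx \left(\frac{V_{\text{new}}}{V_{\text{old}}}\right)^{2}

    Because switching power scales with the square of the supply voltage, even a 5 percent reduction in voltage trims it by roughly 10 percent, and leakage power falls along with it.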

    By changing the materials within the path of heat removal, backside power-delivery technology could make hot spots on chips even hotter. Imec

    After BSPDNs, manufacturers will likely begin adding capacitors with high energy-storage capacity to the backside as well. Large voltage swings caused by inductance in the printed circuit board and chip package can be particularly problematic in high-performance SoCs. Backside capacitors should help with this issue because their closer proximity to the transistors allows them to absorb voltage spikes and fluctuations more quickly. This arrangement would therefore enable chips to run at an even lower voltage—and temperature—than with BSPDNs alone.

    Finally, chipmakers will introduce backside integrated voltage-regulator (IVR) circuits. This technology aims to curtail a chip’s voltage requirements further still through finer voltage tuning. An SoC for a smartphone, for example, commonly has 8 or more compute cores, but there’s no space on the chip for each to have its own discrete voltage regulator. Instead, one off-chip regulator typically manages the voltage of four cores together, regardless of whether all four are facing the same computational load. IVRs, on the other hand, would manage each core individually through a dedicated circuit, thereby improving energy efficiency. Placing them on the backside would save valuable space on the frontside.

    It is still unclear how backside technologies will affect heat management; demonstrations and simulations are needed to chart the effects. Adding new technology will often increase power density, and chip designers will need to consider the thermal consequences. In placing backside IVRs, for instance, will thermal issues improve if the IVRs are evenly distributed or if they are concentrated in specific areas, such as the center of each core and memory cache?

    Recently, we showed that backside power delivery may introduce new thermal problems even as it solves old ones. The cause is the vanishingly thin layer of silicon that’s left when BSPDNs are created. In a frontside design, the silicon substrate can be as thick as 750 micrometers. Because silicon conducts heat well, this relatively bulky layer helps control hot spots by spreading heat from the transistors laterally. Adding backside technologies, however, requires thinning the substrate to about 1 micrometer to provide access to the transistors from the back. Sandwiched between two layers of wires and insulators, this slim silicon slice can no longer move heat effectively toward the sides. As a result, heat from hyperactive transistors can get trapped locally and forced upward toward the cooler, exacerbating hot spots.

    Our simulation of an 80-core server SoC found that BSPDNs can raise hot-spot temperatures by as much as 14 °C. Design and technology tweaks—such as increasing the density of the metal on the backside—can improve the situation, but we will need more mitigation strategies to avoid it completely.

    Preparing for “CMOS 2.0”

    BSPDNs are part of a new paradigm of silicon logic technology that Imec is calling CMOS 2.0. This emerging era will also see advanced transistor architectures and specialized logic layers. The main purpose of these technologies is optimizing chip performance and power efficiency, but they might also offer thermal advantages, including improved heat dissipation.

    In today’s CMOS chips, a single transistor drives signals to both nearby and faraway components, leading to inefficiencies. But what if there were two drive layers? One layer would handle long wires and buffer these connections with specialized transistors; the other would deal only with connections under 10 mm. Because the transistors in this second layer would be optimized for short connections, they could operate at a lower voltage, which again would reduce power density. How much, though, is still uncertain.

    In the future, parts of chips will be made on their own silicon wafers using the appropriate process technology for each. They will then be 3D stacked to form SoCs that function better than those built using only one process technology. But engineers will have to carefully consider how heat flows through these new 3D structures. Imec

    What is clear is that solving the industry’s heat problem will be an interdisciplinary effort. It’s unlikely that any one technology alone—whether that’s thermal-interface materials, transistors, system-control schemes, packaging, or coolers—will fix future chips’ thermal issues. We will need all of them. And with good simulation tools and analysis, we can begin to understand how much of each approach to apply and on what timeline. Although the thermal benefits of CMOS 2.0 technologies—specifically, backside functionalization and specialized logic—look promising, we will need to confirm these early projections and study the implications carefully. With backside technologies, for instance, we will need to know precisely how they alter heat generation and dissipation—and whether that creates more new problems than it solves.

    Chip designers might be tempted to adopt new semiconductor technologies assuming that unforeseen heat issues can be handled later in software. That may be true, but only to an extent. Relying too heavily on software solutions would have a detrimental impact on a chip’s performance because these solutions are inherently imprecise. Fixing a single hot spot, for example, might require reducing the performance of a larger area that is otherwise not overheating. It will therefore be imperative that SoCs and the semiconductor technologies used to build them are designed hand in hand.

    The good news is that more EDA products are adding features for advanced thermal analysis, including during early stages of chip design. Experts are also calling for a new method of chip development called system technology co-optimization. STCO aims to dissolve the rigid abstraction boundaries between systems, physical design, and process technology by considering them holistically. Deep specialists will need to reach outside their comfort zone to work with experts in other chip-engineering domains. We may not yet know precisely how to resolve the industry’s mounting thermal challenge, but we are optimistic that, with the right tools and collaborations, it can be done.

  • Navigating the Angstrom Era
    by Wiley on 16. April 2025. at 13:22



    This is a sponsored article brought to you by Applied Materials.

    The semiconductor industry is in the midst of a transformative era as it bumps up against the physical limits of making faster and more efficient microchips. As we progress toward the “angstrom era,” where chip features are measured in mere atoms, the challenges of manufacturing have reached unprecedented levels. Today’s most advanced chips, such as those at the 2nm node and beyond, are demanding innovations not only in design but also in the tools and processes used to create them.

    At the heart of this challenge lies the complexity of defect detection. In the past, optical inspection techniques were sufficient to identify and analyze defects in chip manufacturing. However, as chip features have continued to shrink and device architectures have evolved from 2D planar transistors to 3D FinFET and Gate-All-Around (GAA) transistors, the nature of defects has changed.

    Defects are often at scales so small that traditional methods struggle to detect them. No longer just surface-level imperfections, they are now commonly buried deep within intricate 3D structures. The result is an exponential increase in data generated by inspection tools, with defect maps becoming denser and more complex. In some cases, the number of defect candidates requiring review has increased 100-fold, overwhelming existing systems and creating bottlenecks in high-volume production.


    The burden created by the surge in data is compounded by the need for higher precision. In the angstrom era, even the smallest defect — a void, residue, or particle just a few atoms wide — can compromise chip performance and the yield of the chip manufacturing process. Distinguishing true defects from false alarms, or “nuisance defects,” has become increasingly difficult.

    Traditional defect review systems, while effective in their time, are struggling to keep pace with the demands of modern chip manufacturing. The industry is at an inflection point, where the ability to detect, classify, and analyze defects quickly and accurately is no longer just a competitive advantage — it’s a necessity.

    Defect map comparison showing manageable defects vs. massive questionable defects during inspection. Applied Materials

    Adding to the complexity of this process is the shift toward more advanced chip architectures. Logic chips at the 2nm node and beyond, as well as higher-density DRAM and 3D NAND memories, require defect review systems capable of navigating intricate 3D structures and identifying issues at the nanoscale. These architectures are essential for powering the next generation of technologies, from artificial intelligence to autonomous vehicles. But they also demand a new level of precision and speed in defect detection.

    In response to these challenges, the semiconductor industry is witnessing a growing demand for faster and more accurate defect review systems. In particular, high-volume manufacturing requires solutions that can analyze exponentially more samples without sacrificing sensitivity or resolution. By combining advanced imaging techniques with AI-driven analytics, next-generation defect review systems are enabling chipmakers to separate the signal from the noise and accelerate the path from development to production.

    eBeam Evolution: Driving the Future of Defect Detections

    Electron beam (eBeam) imaging has long been a cornerstone of semiconductor manufacturing, providing the ultra-high resolution necessary to analyze defects that are invisible to optical techniques. Unlike light, which has a limited resolution due to its wavelength, electron beams can achieve resolutions at the sub-nanometer scale, making them indispensable for examining the tiniest imperfections in modern chips.
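
    The resolution gap comes down to wavelength: the de Broglie wavelength of a beam electron is orders of magnitude shorter than that of visible light, which spans roughly 400 to 700 nanometers. The estimate below is a standard non-relativistic textbook approximation, not a specification of any particular inspection tool.

        \lambda = \frac{h}{\sqrt{2\, m_e\, e\, V}} \approx \frac{1.226\ \text{nm}}{\sqrt{V/\text{volt}}}
        \qquad\Rightarrow\qquad
        \lambda \approx 0.012\ \text{nm at } V = 10\ \text{kV}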

    Optical offers faster but lower resolution; eBeam provides higher resolution but slower speed. Applied Materials

    The journey of eBeam technology has been one of continuous innovation. Early systems relied on thermal field emission (TFE), which generates an electron beam by heating a filament to extremely high temperatures. While TFE systems are effective, they have known limitations. The beam is relatively broad, and the high operating temperatures can lead to instability and shorter lifespans. These constraints became increasingly problematic as chip features shrank and defect detection requirements grew more stringent.

    Enter cold field emission (CFE) technology, a breakthrough that has redefined the capabilities of eBeam systems. Unlike TFE, CFE operates at room temperature, using a sharp, cold filament tip to emit electrons. This produces a narrower, more stable beam with a higher density of electrons that results in significantly improved resolution and imaging speed.

    Comparison of thermal (orange) and cold (blue) field emissions over a patterned surface. Applied Materials

    For decades, CFE systems were limited to lab usage because it was not possible to keep the tools up and running for adequate periods of time — primarily because at “cold” temperatures, contaminants inside the chambers adhere to the eBeam emitter and partially block the flow of electrons.

    In December 2022, Applied Materials announced that it had solved the reliability issues with the introduction of its first two eBeam systems based on CFE technology. Applied, an industry leader at the forefront of defect detection innovation, has consistently pushed the boundaries of materials engineering to enable the next wave of innovation in chip manufacturing. After more than 10 years of research by a global team of engineers, Applied mitigated the CFE stability challenge with multiple breakthroughs: new technology that delivers orders of magnitude higher vacuum than TFE, an eBeam column tailored with special materials that reduce contamination, and a novel chamber self-cleaning process that further keeps the tip clean.

    CFE technology achieves sub-nanometer resolution, enabling the detection of defects buried deep within 3D device structures. This is a capability that is critical for advanced architectures like Gate-All-Around (GAA) transistors and 3D NAND memory. Additionally, CFE systems offer faster imaging speeds compared to traditional TFE systems, allowing chipmakers to analyze more defects in less time.

    The Rise of AI in Semiconductor Manufacturing

    While eBeam technology provides the foundation for high-resolution defect detection, the sheer volume of data generated by modern inspection tools has created a new challenge: how to process and analyze this data quickly and accurately. This is where artificial intelligence (AI) comes into play.


    AI is transforming manufacturing processes across industries, and semiconductors are no exception. AI algorithms — particularly those based on deep learning — are being used to automate and enhance the analysis of defect inspection data. These algorithms can sift through massive datasets, identifying patterns and anomalies that would be impossible for human engineers to detect manually.

    By training with real in-line data, AI models can learn to distinguish between true defects — such as voids, residues, and particles — and false alarms, or “nuisance defects.” This capability is especially critical in the angstrom era, where the density of defect candidates has increased exponentially.

    Enabling the Next Wave of Innovation: The SEMVision H20

    The convergence of AI and advanced imaging technologies is unlocking new possibilities for defect detection. AI-driven systems can classify defects with remarkable accuracy, sorting them into categories that give engineers actionable insights. This not only speeds up the defect review process but also improves its reliability while reducing the risk of overlooking critical issues. In high-volume manufacturing, where even small improvements in yield can translate into significant cost savings, AI is becoming indispensable.

    The transition to advanced nodes, the rise of intricate 3D architectures, and the exponential growth in data have created a perfect storm of manufacturing challenges, demanding new approaches to defect review. These challenges are being met with Applied’s new SEMVision H20.

    SEMVision H20 efficiently bins defects from optical inspection in under 1 hour compared to eBeam methods. Applied Materials

    By combining second-generation cold field emission (CFE) technology with advanced AI-driven analytics, the SEMVision H20 is not just a tool for defect detection - it’s a catalyst for change in the semiconductor industry.

    A New Standard for Defect Review

    The SEMVision H20 builds on the legacy of Applied’s industry-leading eBeam systems, which have long been the gold standard for defect review. This second-generation CFE delivers higher, sub-nanometer resolution and faster speed than both TFE and first-generation CFE because of increased electron flow through its filament tip. These capabilities enable chipmakers to identify and analyze the smallest defects, including those buried within 3D structures. Precision at this level is essential for emerging chip architectures, where even the tiniest imperfection can compromise performance and yield.

    But the SEMVision H20’s capabilities go beyond imaging. Its deep learning AI models are trained with real in-line customer data, enabling the system to automatically classify defects with remarkable accuracy. By distinguishing true defects from false alarms, the system reduces the burden on process control engineers and accelerates the defect review process. The result is a system that delivers 3X faster throughput while maintaining the industry’s highest sensitivity and resolution - a combination that is transforming high-volume manufacturing.



    “One of the biggest challenges chipmakers often have with adopting AI-based solutions is trusting the model. The success of the SEMVision H20 validates the quality of the data and insights we are bringing to customers. The pillars of technology that comprise the product are what builds customer trust. It’s not just the buzzword of AI. The SEMVision H20 is a compelling solution that brings value to customers.”

    Broader Implications for the Industry

    The impact of the SEMVision H20 extends far beyond its technical specifications. By enabling faster and more accurate defect review, the system is helping chipmakers reduce factory cycle times, improve yields, and lower costs. In an industry where margins are razor-thin and competition is fierce, these improvements are not just incremental - they are game-changing.

    Additionally, the SEMVision H20 is enabling the development of faster, more efficient, and more powerful chips. As the demand for advanced semiconductors continues to grow - driven by trends like artificial intelligence, 5G, and autonomous vehicles - the ability to manufacture these chips at scale will be critical. The system is helping to make this possible, ensuring that chipmakers can meet the demands of the future.

    A Vision for the Future

    Applied’s work on the SEMVision H20 is more than just a technological achievement; it’s a reflection of the company’s commitment to solving the industry’s toughest challenges. By leveraging cutting-edge technologies like CFE and AI, Applied is not only addressing today’s pain points but also shaping the future of defect review.

    As the semiconductor industry continues to evolve, the need for advanced defect detection solutions will only grow. With the SEMVision H20, Applied is positioning itself as a key enabler of the next generation of semiconductor technologies, from logic chips to memory. By pushing the boundaries of what’s possible, the company is helping to ensure that the industry can continue to innovate, scale, and thrive in the angstrom era and beyond.

  • The Many Ways Tariffs Will Hit Your Electronics
    by Samuel K. Moore on 16. April 2025. at 13:00



    Like the industry he covers, Shawn DuBravac had already had quite a week by the time IEEE Spectrum spoke to him early last Thursday, 10 April 2025. As chief economist at IPC, the 3,000-member industry association for electronics manufacturers, he’s tasked with figuring out the impact of the tsunami of tariffs the U.S. government has planned, paused, or enacted. Earlier that morning he’d recalculated price changes for electronics in the U.S. market following a 90-day pause on steeper tariffs that had been unveiled the previous week, the implementation of universal 10 percent tariffs, and a 125 percent tariff on Chinese imports. A day after this interview, he was recalculating again, following an exemption on electronics of an unspecified duration. According to DuBravac, the effects of all this will likely include higher prices, less choice for consumers, stalled investment, and even stifled innovation.

    How have you had to adjust your forecasts today [Thursday 10 April]?

    Shawn DuBravac: I revised our forecasts this morning to take into account what the world would look like if the 90-day pause holds into the future and the 125 percent tariffs on China also hold. If you look at smartphones, it would be close to a 91 percent impact. But if all the tariffs are put back in place as they were specified on “Liberation Day,” then that would be 101 percent price impact.

    The estimates become highly dependent on how influential China is for final assembly. So, if you look instead at something like TVs, 76 percent of televisions that are imported into the United States are coming from Mexico, where there has long been strong TV manufacturing because there were already tariffs in place on smart flat-panel televisions. The price impact I see for TVs is somewhere between 12 and 18 percent, as opposed to a near doubling for smartphones.

    Video-game consoles are another story. In 2024, 86 percent of video-game consoles were coming into the United States from China. So the tariffs have a very big impact.

    That said, the number of smartphones coming from China has actually declined pretty significantly in recent years. It was still about 72 percent in 2024, but Vietnam was 14 percent and India was 12 percent. Only a couple years ago the United States wasn’t importing any meaningful amount of smartphones from India, and it’s now become a very important hub.
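    For readers who want to see how such estimates hang together, here is a rough, illustrative sketch of a blended price-impact calculation that weights each assembly country’s tariff rate by its share of U.S. imports. It is not IPC’s model: the pass-through assumption and tariff rates are placeholders, and the import shares are simply the smartphone figures quoted above.

    ```python
    # Illustrative first-order estimate only -- not IPC's actual methodology.
    def blended_price_impact(import_shares, tariff_rates, pass_through=1.0):
        """Weighted-average tariff burden, assuming a given cost pass-through."""
        assert abs(sum(import_shares.values()) - 1.0) < 1e-6, "shares must sum to 1"
        return pass_through * sum(
            share * tariff_rates.get(country, 0.0)
            for country, share in import_shares.items()
        )

    # Smartphone import shares quoted above (72% China, 14% Vietnam, 12% India)
    # with hypothetical rates: 125% on China, universal 10% elsewhere.
    shares = {"China": 0.72, "Vietnam": 0.14, "India": 0.12, "Other": 0.02}
    rates = {"China": 1.25, "Vietnam": 0.10, "India": 0.10, "Other": 0.10}

    print(f"Estimated price impact: {blended_price_impact(shares, rates):.0%}")
    # -> roughly 93% under these toy assumptions, in the same ballpark as the
    #    ~91 percent figure cited above; real estimates also account for component
    #    origin, margins, and partial pass-through.
    ```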

    It sounds like the supply chain started shifting well ahead of these tariffs.

    DuBravac: Supply chains are really designed to be dynamic, adaptive, and resilient. So they’re constantly reoptimizing. I almost think of supply chains like living, breathing entities. If there is a disruption in one part, it’s like it lurches forward to figure out how to resolve the constraint, how to heal.

    We make these estimates with the presumption that nothing changes, but everything would change if this 125 percent were to become permanent. You would see an acceleration of the decoupling from China that has been happening since 2017 and accelerated during the pandemic.

    It’s also important to recognize that the United States isn’t the only buyer of smartphones. They’re produced in a global market, and so the supply chains are going to optimize based on that global-market dynamic. Maybe the rest of the chain could remain intact, and for example, China could continue to produce smartphones for Europe, Asia, and Latin America.

    How can supply chains adapt in this constantly changing environment?

    DuBravac: That, to me, is the most detrimental aspect of all of this. Supply chains want to adjust, but if they’re not sure what the environment is going to be in the future, they will be hesitant. If you were investing in a new factory—especially a modern, cutting-edge, semiautonomous factory—these are long-term investments. You’re looking at a 20- to 50-year time horizon, so you’re not going to make those types of investments in a geography if you’re not sure what the broader situation is.

    I think one of the great ironies of all of this is that there was already a decoupling from China taking place, but because the tariff dynamics have been so fluid, it causes a pause in new business investment. As a result of that potential pause, the impact of tariffs could be more pronounced on U.S. consumers, because supply chains don’t adjust as quickly as they might have adjusted in a more certain environment.

    A lot of damage was done because of the uncertainty that’s been created, and it’s not clear to me that any of that uncertainty has been resolved. Our 3,000 member companies express a tremendous amount of uncertainty about the current environment.

    Lower-priced electronics have thin margins. What does that mean for the low-end consumer?

    DuBravac: What I see there is that the households that are financially constrained are often the consumers of low-price products, and they’re the ones that are most likely to see tariff costs pushed through. There’s just no margin along the way to absorb those higher costs, and so they might see the biggest price increases in percentage terms.

    A low-price laptop would probably see a higher price increase in percentage terms. So I think the challenge there is the households least well positioned to handle the impact are the ones that will probably see the most impact.

    For some products, we tend to have higher price elasticities at lower price points, which means that a small price change tends to have a big negative impact on demand. There could be other things happening in the background as well, but the net result is that U.S. consumers have less choice.
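    As a back-of-the-envelope illustration of that elasticity point (the figures below are invented, not IPC’s), a simple linear approximation shows how the same price increase hits demand harder in a more price-sensitive segment:

    ```python
    # Illustrative only: percent change in quantity demanded is roughly the
    # price elasticity times the percent change in price.
    def demand_change(elasticity, price_change):
        return elasticity * price_change

    # A budget laptop (more elastic) vs. a premium one (less elastic),
    # both facing a hypothetical 15% tariff-driven price increase.
    for segment, elasticity in [("budget laptop", -2.0), ("premium laptop", -0.8)]:
        print(f"{segment}: {demand_change(elasticity, 0.15):+.0%} demand")
    # -> budget laptop: -30% demand, premium laptop: -12% demand (illustrative)
    ```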

    Some companies have already announced that they were going to cut out their lower-priced models, because it no longer makes economic sense to sell into the marketplace. That could happen on a company basis within their model selections, but it could also happen broadly, in an entire category where you might see the three or four lowest-priced options for a given category exit the market. So now you’re only left with more expensive options.

    What other effects are tariffs having?

    DuBravac: Another long-term effect we’ve talked about is that as companies try to optimize the cost, they relocate engineering staff to address cost. They’re pulling that engineering staff from other problems that they were trying to solve, like the next cutting-edge innovation. So some of that loss is potentially a loss of innovation. Companies are going to worry about cost, and as a result, they’re not going to make the next iteration of product as innovative. It’s hard to measure, but I think that it is a potential negative by-product.

    The other thing is tariffs generally allow domestic producers to raise their price as well. You’ve already seen that for steel manufacturers. Maybe that makes U.S. companies more solvent or more viable, but at the end of the day, it’s consumers and businesses that will be paying higher prices.

  • Meet the “First Lady of Engineering”
    by Willie D. Jones on 15. April 2025. at 16:00



    For more than a century, women and racial minorities have fought for access to education and employment opportunities once reserved exclusively for white men. The life of Yvonne Young “Y.Y.” Clark is a testament to the power of perseverance in that fight. As a smart Black woman who shattered the barriers imposed by race and gender, she made history multiple times during her career in academia and industry.

    She probably is best known as the first woman to serve as a faculty member in the engineering college at Tennessee State University, in Nashville. Her pioneering spirit extended far beyond the classroom, however, as she continuously staked out new territory for women and Black professionals in engineering. She accomplished a lot before she died on 27 January 2019 at her home in Nashville at the age of 89.

    Clark is the subject of the latest biography in IEEE-USA’s Famous Women Engineers in History series. “Don’t Give Up” was her mantra.

    An early passion for technology

    Born on 13 April 1929 in Houston, Clark moved with her family to Louisville, Ky., as a baby. She was raised in an academically driven household. Her father, Dr. Coleman M. Young Jr., was a surgeon. Her mother, Hortense H. Young, was a library scientist and journalist. Her mother’s “Tense Topics” column, published by the Louisville Defender newspaper, tackled segregation, housing discrimination, and civil rights issues, instilling awareness of social justice in Y.Y.

    Clark’s passion for technology became evident at a young age. As a child, she secretly repaired her family’s malfunctioning toaster, surprising her parents. It was a defining moment, signaling to her family that she was destined for a career in engineering—not in education like her older sister, a high school math teacher.

    “Y.Y.’s family didn’t create her passion or her talents. Those were her own,” said Carol Sutton Lewis, co-host and producer for the third season of the “Lost Women of Science” podcast, on which Clark was profiled. “What her family did do, and what they would continue to do, was make her interests viable in a world that wasn’t fair.”

    Clark’s interest in studying engineering was precipitated by her passion for aeronautics. She said all the pilots she spoke with had studied engineering, so she was determined to do so. She joined the Civil Air Patrol and took simulated flying lessons. She then learned to fly an airplane with the help of a family friend.

    Despite her academic excellence, though, racial barriers stood in her way. She graduated at age 16 from Louisville’s Central High School in 1945. Her parents, concerned that she was too young to attend college, sent her to Boston for two additional years at the Girls’ Latin School and Roxbury Memorial High School.

    She then applied to the University of Louisville, where she was initially accepted and offered a full scholarship. When university administrators realized she was Black, however, they rescinded the scholarship and the admission, Clark said on the “Lost Women of Science” podcast, which included clips from when her daughter interviewed her in 2007. As Clark explained in the interview, the state of Kentucky offered to pay her tuition to attend Howard University, a historically Black college in Washington, D.C., rather than integrate its publicly funded university.

    Breaking barriers in higher education

    Although Howard provided an opportunity, it was not free of discrimination. Clark faced gender-based barriers, according to the IEEE-USA biography. She was the only woman among 300 mechanical engineering students, many of whom were World War II veterans.


    Despite the challenges, she persevered and in 1951 became the first woman to earn a bachelor’s degree in mechanical engineering from the university. The school downplayed her historic achievement, however. In fact, she was not allowed to march with her classmates at graduation. Instead, she received her diploma during a private ceremony in the university president’s office.

    A career defined by firsts

    Determined to forge a career in engineering, Clark repeatedly encountered racial and gender discrimination. In a 2007 Society of Women Engineers (SWE) StoryCorps interview, she recalled that when she applied for an engineering position with the U.S. Navy, the interviewer bluntly told her, “I don’t think I can hire you.” When she asked why not, he replied, “You’re female, and all engineers go out on a shakedown cruise,” the trip during which the performance of a ship is tested before it enters service or after it undergoes major changes such as an overhaul. She said the interviewer told her, “The omen is: ‘No females on the shakedown cruise.’”

    Clark eventually landed a job with the U.S. Army’s Frankford Arsenal gauge laboratories in Philadelphia, becoming the first Black woman hired there. She designed gauges and finalized product drawings for the small-arms ammunition and range-finding instruments manufactured there. Tensions arose, however, when some of her colleagues resented that she earned more money due to overtime pay, according to the IEEE-USA biography. To ease workplace tensions, the Army reduced her hours, prompting her to seek other opportunities.

    Her future husband, Bill Clark, saw the difficulty she was having securing interviews, and suggested she use the gender-neutral name Y.Y. on her résumé.

    The tactic worked. She became the first Black woman hired by RCA in 1955. She worked for the company’s electronic tube division in Camden, N.J.

    Although she excelled at designing factory equipment, she encountered more workplace hostility.

    “Sadly,” the IEEE-USA biography says, she “felt animosity from her colleagues and resentment for her success.”

    When Bill, who had taken a faculty position as a biochemistry instructor at Meharry Medical College in Nashville, proposed marriage, she eagerly accepted. They married in December 1955, and she moved to Nashville.

    In 1956 Clark applied for a full-time position at Ford Motor Co.’s Nashville glass plant, where she had interned during the summers while she was a Howard student. Despite her qualifications, she was denied the job due to her race and gender, she said.

    She decided to pursue a career in academia, becoming in 1956 the first woman to teach mechanical engineering at Tennessee State University. In 1965 she became the first woman to chair TSU’s mechanical engineering department.

    While teaching at TSU, she pursued further education, earning a master’s degree in engineering management from Nashville’s Vanderbilt University in 1972—another step in her lifelong commitment to professional growth.

    After 55 years with the university, where she was also a freshman student advisor for much of that time, Clark retired in 2011 and was named professor emeritus.

    A legacy of leadership and advocacy

    Clark’s influence extended far beyond TSU. She was active in the Society of Women Engineers after becoming its first Black member in 1951.

    Racism, however, followed her even within professional circles.

    At the 1957 SWE conference in Houston, the event’s hotel initially refused her entry due to segregation policies, according to a 2022 profile of Clark. Under pressure from the society’s leadership, the hotel compromised; Clark could attend sessions but had to be escorted by a white woman at all times and was not allowed to stay in the hotel despite having paid for a room. She was reimbursed and instead stayed with relatives.

    As a result of that incident, the SWE vowed never again to hold a conference in a segregated city.

    Over the decades, Clark remained a champion for women in STEM. In one SWE interview, she advised future generations: “Prepare yourself. Do your work. Don’t be afraid to ask questions, and benefit by meeting with other women. Whatever you like, learn about it and pursue it.

    “The environment is what you make it. Sometimes the environment is hostile, but don’t worry about it. Be aware of it so you aren’t blindsided.”

    Her contributions earned her numerous accolades including the 1998 SWE Distinguished Engineering Educator Award and the 2001 Tennessee Society of Professional Engineers Distinguished Service Award.

    A lasting impression

    Clark’s legacy was not confined to engineering; she was deeply involved in Nashville community service. She served on the board of the 18th Avenue Family Enrichment Center and participated in the Nashville Area Chamber of Commerce. She was active in the Hendersonville Area chapter of The Links, a volunteer service organization for Black women, and the Nashville alumnae chapter of the Delta Sigma Theta sorority. She also mentored members of the Boy Scouts, many of whom went on to pursue engineering careers.

    Clark spent her life knocking down barriers that tried to impede her. She didn’t just break the glass ceiling—she engineered a way through it for people who came after her.

  • What Engineers Should Know About AI Jobs in 2025
    by Gwendolyn Rak on 15. April 2025. at 14:00



    It seems AI jobs are here to stay, based on the latest data from the 2025 AI Index Report.

    To better understand the current state of AI, the annual report from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) collects a wide range of information on model performance, investment, public opinion, and more. Every year, IEEE Spectrum summarizes our top takeaways from the entire report by plucking out a series of charts, but here we zero in on the technology’s effect on the workforce. Much of the report’s findings about jobs are based on data from LinkedIn and Lightcast, a research firm that analyzes job postings from more than 51,000 websites.

    Last year’s report showed signs that the AI hiring boom was quieting. But this year, AI job postings were back up in most places after the prior year’s lag. In the United States, for example, the percentage of all job postings demanding AI skills rose to 1.8 percent, up from 1.4 percent in 2023.

    Line graph of job posting data from 2014 to 2024. In 2024, 3.27% of Singapore's job postings were specifically for AI jobs. Followed by 1.99% in Luxembourg, 1.89% in Hong Kong and 1.79% in the United States. The Netherlands is listed last with only 1.25%. The AI Index Report/Stanford HAI

    Will AI Create Job Disruptions?

    Many people, including software engineers, fear that AI will make their jobs expendable—but others believe the technology will provide new opportunities. A McKinsey & Co. survey found that 28 percent of executives in software engineering expect generative AI to decrease their organizations’ workforces in the next three years, while 32 percent expect the workforce to increase. Overall, the portion of executives who anticipate a decrease in the workforce seems to be declining.

    In fact, a separate study from LinkedIn and GitHub suggests that adoption of GitHub Copilot, the generative AI-powered coding assistant, is associated with a small increase in software-engineering hiring. The study also found these new hires were required to have fewer advanced programming skills, as Peter McCrory, an economist and labor researcher at LinkedIn, noted during a panel discussion on the AI Report last Thursday.

    As tools like GitHub Copilot are adopted, the mix of required skills may shift. “Big picture, what we see on LinkedIn in recent years is that members are increasingly emphasizing a broader range of skills and increasingly uniquely human skills, like ethical reasoning or leadership,” McCrory said.

    Python Remains a Top Skill

    Still, programming skills remain central to AI jobs. In both 2023 and 2024, Python was the top specialized skill listed in U.S. AI job postings. The programming language also held onto its lead this year as the language of choice for many AI programmers.

    Bar graph of skills in AI job postings in the United States. Python is at the top, appearing 527% more on 2024 job postings than it did from 2012 to 2014. Computer science is second and data analysis is third. Compared to 2012 through 2014, Amazon web services appears more than 1,778% more, followed by an 833% data science increase. The AI Index Report/Stanford HAI

    Taking a broader look at AI-related skills, most were listed in a greater percentage of job postings in 2024 compared with those in 2023, with two exceptions: autonomous driving and robotics. Generative AI in particular saw a large increase, growing by nearly a factor of four.

    A line graph showing the percentage of job postings in the United States specifically related to certain skill clusters from 2010 to 2024. In 2024, 0.94% of jobs were linked to artificial intelligence, followed by machine learning at 0.92% and Natural language processing at 0.23%. The AI Index Report/Stanford HAI

    AI’s Gender Gap

    A gender gap is appearing in AI talent. According to LinkedIn’s research, women in most countries are less likely to list AI skills on their profiles, and it estimates that in 2024, nearly 70 percent of AI professionals on the platform were male. The ratio has been “remarkably stable over time,” the report states.

    Bar graph of the average AI skill penetration rate across gender from 2015 through 2024. The United States has the largest disparity of men over women. Saudi Arabia is the only location listed where data for women exceeds men. The AI Index Report/Stanford HAI

    Academia and Industry

    Although models are becoming more efficient, training AI is expensive. That expense is one of the primary reasons most of today’s notable AI advances are coming from industry instead of academia.

    “Sometimes in academia, we make do with what we have, so you’re seeing a shift of our research toward topics that we can afford to do with the limited computing [power] that we have,” AI Index steering committee co-director Yolanda Gil said at last week’s panel discussion. “That is a loss in terms of advancing the field of AI,” said Gil.

    Gil and others at the event emphasized the importance of investment in academia, as well as collaboration across sectors—industry, government, and education. Such partnerships can both provide needed resources to researchers and create a better understanding of the job market among educators, enabling them to prepare students to fill important roles.

  • Video Friday: Tiny Robot Bug Hops and Jumps
    by Evan Ackerman on 11. April 2025. at 15:30



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
    ICRA 2025: 19–23 May 2025, ATLANTA, GA
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
    RSS 2025: 21–25 June 2025, LOS ANGELES
    ETH Robotics Summer School: 21–27 June 2025, GENEVA
    IAS 2025: 30 June–4 July 2025, GENOA, ITALY
    ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
    IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
    IFAC Symposium on Robotics: 15–18 July 2025, PARIS
    RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
    RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
    CLAWAR 2025: 5–7 September 2025, SHENZHEN
    CoRL 2025: 27–30 September 2025, SEOUL
    IEEE Humanoids: 30 September–2 October 2025, SEOUL
    World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
    IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

    Enjoy today’s videos!

    MIT engineers developed an insect-sized jumping robot that can traverse challenging terrains while using far less energy than an aerial robot of comparable size. This tiny, hopping robot can leap over tall obstacles and jump across slanted or uneven surfaces carrying about 10 times more payload than a similar-sized aerial robot, opening the door to many new applications.

    [ MIT ]

    CubiX is a wire-driven robot that connects to the environment through wires, with drones used to establish these connections. By integrating with various tools and a robot, it performs tasks beyond the limitations of its physical structure.

    [ JSK Lab ]

    Thanks, Shintaro!

    It’s a game a lot of us played as children—and maybe even later in life: unspooling measuring tape to see how far it would extend before bending. But to engineers at the University of California San Diego, this game was an inspiration, suggesting that measuring tape could become a great material for a robotic gripper.

    [ University of California San Diego ]

    I enjoyed the Murderbot books, and the trailer for the TV show actually looks not terrible.

    [ Murderbot ]

    For service robots, being able to operate an unmodified elevator is much more difficult (and much more important) than you might think.

    [ Pudu Robotics ]

    There’s a lot of buzz around impressive robotics demos — but taking Physical AI from demo to real-world deployment is a journey that demands serious engineering muscle. Hammering out the edge cases and getting to scale is 500x the effort of getting to the first demo. See our process for building this out for the singulation and induction Physical AI solution trusted by some of the world’s leading parcel carriers. Here’s to the teams likewise committed to the grind toward reliability and scale.

    [ Dexterity Robotics ]

    I am utterly charmed by the design of this little robot.

    [ RoMeLa ]

    This video shows a shortened version of Issey Miyake’s Fly With Me runway show from 2025 Paris Men’s Fashion Week. My collaborators and I brought two industrial robots to life to be the central feature of the minimalist scenography for the Japanese brand.

    Each ABB IRB 6640 robot held a two-meter-square piece of fabric, and moved synchronously in flowing motions to match the emotional timing of the runway show. With only three weeks of development time and three days on-site, I built custom live coding tools that opened up the industrial robots to more improvisational workflows. This level of reliable, real-time control unlocked the flexibility needed by the Issey Miyake team to make the necessary last-minute creative decisions for the show.

    [ Atonaton ]

    Meet Clone’s first musculoskeletal android: Protoclone, the most anatomically accurate robot in the world. Based on a natural human skeleton, Protoclone is actuated with over 1,000 Myofibers, Clone’s proprietary artificial muscle technology.

    [ Clone Robotics ]

    There are a lot of heavily produced humanoid robot videos from the companies selling them, but now that these platforms are entering the research space, we should start getting a more realistic sense of their capabilities.

    [ University College London ]

    Here’s a bit more footage from RIVR on their home delivery robot.

    [ RIVR ]

    And now, this.

    [ EngineAI ]

    Robots are at the heart of sci-fi, visions of the future, but what if that future is now? And what if those robots, helping us at work and at home, are simply an extension of the tools we’ve used for millions of years? That’s what artist and engineer Catie Cuan thinks, and it’s part of the reason she teaches robots to dance. In this episode we meet the people at the frontiers of the future of robotics and Astro Teller introduces two groundbreaking projects, Everyday Robots and Intrinsic, that have advanced how robots could work not just for us but with us.

    [ Moonshot Podcast ]

  • Climb the Career Ladder with Focused Expertise
    by Rahul Pandey on 10. April 2025. at 18:00



    This article is crossposted from IEEE Spectrum’s rebooted careers newsletter! In partnership with tech career development company Taro, every issue will be bringing you deeper insight into how to pursue your goals and navigate professional challenges. Sign up now to get insider tips, expert advice, and practical strategies delivered to your inbox for free.

    One of the key strategies for gaining seniority is expertise. Whether you’re trying to get promoted or land a new job at a higher level, you need to demonstrate mastery over a valuable skill or domain.

    Here’s what most job seekers get wrong about this: They think that being an “expert” is reserved for senior or principal engineers who have decades of experience. Nothing could be further from the truth.

    Instead of assuming that expertise is a distant goal, realize that you can become more knowledgeable than anyone as long as you narrow the scope appropriately. For example, in one afternoon, you can become the go-to person in your team of 10 for anything related to configuring logs for your company’s version control software.

    In a company with any amount of sophistication, each person’s knowledge is incomplete. There will always be problems that fall into the category of “If we had more time, we’d look into that.” Your goal is to identify which of these gaps could make a meaningful business impact. It need not be purely technical; it could be about search engine optimization (SEO), launch processes, or improving the developer experience.

    This is actionable advice if you’re on the job market. If you’re looking for a job, especially as a junior engineer, your #1 goal is to demonstrate mastery over a technology or domain.

    This means you should be selective about how much you claim to know on your resume. If you mention every programming language or analysis tool you’ve ever touched, you are making it impossible for someone to identify your level of expertise. This is especially true when you have less than 4 years of experience.

    When you claim to know everything, I’ll assume you actually suck at everything. You should be able to teach me something about each of the projects or technologies you mention, e.g. discuss tradeoffs or interesting technical decisions you made.

    Yes, you do disqualify yourself from certain jobs where you didn’t list the technologies they were looking for. But those jobs weren’t a good fit anyway.

    -Rahul

    ICYMI: Top Programming Languages

    If you’re taking our advice and looking to develop expertise in a programming language your team needs, check out Spectrum’s Top Programming Languages interactive. There you’ll find out what programming languages are the most important in your field, and which are most in demand by employers.

    Read more here.

    ICYMI: Data Centers Seek Engineers Amid a Talent Shortage

    The rapid development of AI is fueling a data center boom, unlocking billions of dollars in investments to build the infrastructure needed to support data- and energy-hungry models. This surge in construction has created a strong demand for certain electrical engineers, whose expertise in power systems and energy efficiency is essential for designing, building, and maintaining energy-intensive AI infrastructure.

    Read more here.

    ICYMI: In Praise of “Normal” Engineers

    You don’t have to be a superhero to develop valuable skills either. In one of the most popular articles on IEEE Spectrum this month, Charity Majors breaks down the dangers of lionizing the “10x engineer,” writing “Individual engineers don’t own software; engineering teams own software. It doesn’t matter how fast an individual engineer can write software. What matters is how fast the team can collectively write, test, review, ship, maintain, refactor, extend, architect, and revise the software that they own.”

    Read more here.

  • IEEE TryEngineering STEM Grants Fund Over 50 Projects
    by Robert Schneider on 10. April 2025. at 17:00



    IEEE TryEngineering, a program within Educational Activities, fosters outreach to school-age children worldwide by equipping teachers and IEEE volunteers with tools for engaging activities. The science, technology, engineering, and math resources include peer-reviewed lesson plans, games, and activities that are designed to captivate and inspire—all provided at no cost.

    The TryEngineering STEM grant program provides financial support to IEEE volunteers to start, sustain, or scale up selected outreach projects in their communities. Since its inception in 2021, 144 projects have been funded, totaling more than US $176,000. At least 1,000 IEEE volunteers have led programs, engaging with more than 19,000 students.

    Last year the grant program awarded more than $70,379 to 58 volunteer-led projects, and 462 applications from nine IEEE regions were received. IEEE members involved in preuniversity outreach programs, including STEM Champions and members of the preuniversity education coordinating committee, reviewed the submissions using a criteria-based rubric.

    The full list of funded projects can be found here. What follows is a sampling.

    Eight donor-supported projects

    STEM Education Workshop 2024: Introducing the Internet of Things to High School Students, funded by the Taenzer Memorial Fund with support from the IEEE Foundation, featured a hands-on activity that provided an introduction to the IoT, programming, and basic microcontroller concepts. Forty high school and vocational school students and nine teachers from the Itenas Bandung electrical engineering study program in Indonesia attended. Twelve IEEE volunteers facilitated the program. Through experimentation with microelectronics, students were able to be creative, spurring increased interest and a desire to further explore technology.

    The Taenzer Fund subsidized seven additional proposals to support engineering in developing countries. They totaled $10,000 and reached more than 300 students. The programs included:

    • Exploring the Future: An IEEE STEM Industry Tour in Indonesia. This event in Jakarta engaged 20 students with hands-on workshops, networking opportunities, and visits to cutting-edge facilities involved with 5G, AI, and ocean engineering.
    • IEEE STEM Empowerment: Student Workshop Series. In these workshops, also held in Indonesia, 20 students tackled hands-on projects in communications technology, AI, and ocean engineering.
    • IEEE STEM Teacher Workshop: Empowering Educators for Future Innovators. Fifty participants attended the event in Sukabumi, Indonesia. It offered hands-on sessions on pedagogy, cutting-edge technologies, and ways to increase gender inclusivity.
    • IEEE Women in Engineering Day. A three-day session in Kenya included STEM competitions, interactive workshops, mentorship sessions, and hands-on activities to empower young women.
    • WIE Impact. A series of workshops and events held in Zaghouan, Tunisia, engaged 160 students with activities in coding, cybersecurity, robotics, space exploration, sustainability, and first aid.
    • Exploring Sustainable Futures: Empowering Students With IoT-driven Aquaponics Systems for STEM Enthusiasts. This hands-on program in Kuala Lumpur, Malaysia, taught 28 students about coding basics, real-time system monitoring, and IoT-driven aquaponics systems. Aquaponics, a food-production method, combines aquaculture with hydroponics.
    • Development of a Game—Multiplayer Kuis (Gamukis)—for Students of Senior High School 1 Cangkringan in Increasing Learning Enthusiasm. Held at Amikom University, in Yogyakarta, Indonesia, this program taught 150 students how to design educational video games.

    Indian students huddled together while listening to an instructor. Students who participated in the program held at the Atal Tinkering Lab in Narasaraopet, Andhra Pradesh, learned about the fundamentals of the Internet of Things and its applications. Atal Tinkering Lab Program Team

    IEEE society-sponsored programs

    The generous support from the Taenzer Fund was supplemented by financial assistance from IEEE groups including the Communications, Oceanic Engineering, and Signal Processing societies, as well as IEEE Women in Engineering.

    The IEEE Signal Processing Society funded three projects including Train the STEM Trainers in Secondary Schools-Multiplier Effect STEM Outreach. The “train the trainers” program involved 350 students and 10 parents, more than 80 teachers, and 30 volunteers. The teachers were trained in robotics and coding using mBlock and Python. The students got experience with calculators, digital counters, LED displays, robotic cars, smart dustbins, and more.

    A focus on Internet of Things

    Another notable project was the ConnectXperience: The Journey into the World of IoT. Held at the Atal Tinkering Lab in Narasaraopet, Andhra Pradesh, India, this program engaged more than 400 students who learned about IoT fundamentals, robotics, programming, electronics, data analytics, networking, cybersecurity, and other innovative applications of IoT.

    2025 grants

    TryEngineering recently announced its 2025 STEM grant recipients. Out of more than 410 applications received, funding was awarded to 58 programs from nine sections, for a total of $70,379. The list of recipients can be found here.

    To contribute to the program, visit the TryEngineering Fund donation page.

  • The Great Chatbot Debate: Do They Really Understand?
    by Eliza Strickland on 10. April 2025. at 15:01



    The large language models (LLMs) that power today’s chatbots have gotten so astoundingly capable, AI researchers are hard pressed to assess those capabilities—it seems that no sooner is there a new test than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and calculations that simulates true understanding?

    To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I was the moderator of the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full.

    Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and she has emerged over the past decade as one of the fiercest critics of today’s leading AI companies and their approach to AI. She’s also known as one of the coauthors of the seminal 2021 paper “On the Dangers of Stochastic Parrots,” a paper that laid out the possible risks of LLMs (and caused Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the “no” position.

    Taking the “yes” position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was VP of AI. During his time at Microsoft he coauthored the influential preprint “Sparks of Artificial General Intelligence,” which described his early experiments with OpenAI’s GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that made him feel that the model had reached a new level of comprehension.

    With no further ado, we bring you the matchup that I call “Parrots vs. Sparks.”


  • First Supercritical CO2 Circuit Breaker Debuts
    by Emily Waltz on 09. April 2025. at 16:36



    Researchers this month will begin testing a high-voltage circuit breaker that can quench an arc and clear a fault with supercritical carbon dioxide fluid. The first-of-its-kind device could replace conventional high-voltage breakers, which use the potent greenhouse gas sulfur hexafluoride, or SF6. Such equipment is scattered widely throughout power grids as a way to stop the flow of electrical current in an emergency.

    “SF6 is a fantastic insulator, but it’s very bad for the environment—probably the worst greenhouse gas you can think of,” says Johan Enslin, a program director at U.S. Advanced Research Projects Agency–Energy (ARPA-E), which funded the research. The greenhouse warming potential of SF6 is nearly 25,000 times as high as that of carbon dioxide, he notes.

    If successful, the invention, developed by researchers at the Georgia Institute of Technology, could have a big impact on greenhouse gas emissions. Hundreds of thousands of circuit breakers dot power grids globally, and nearly all of the high voltage ones are insulated with SF6.

    A high-voltage circuit breaker interrupter, like this one made by GE Vernova, stops current by mechanically creating a gap and an arc, and then blasting high-pressure gas through the gap. This halts the current by absorbing free electrons and quenching the arc as the dielectric strength of the gas is increased. GE Vernova

    On top of that, SF6 byproducts are toxic to humans. After the gas quenches an arc, it can decompose into substances that can irritate the respiratory system. People who work on SF6-insulated equipment have to wear full respirators and protective clothing. The European Union and California are phasing out the use of SF6 and other fluorinated gases (F-gases) in electrical equipment, and several other regulators are following suit.

    In response, researchers globally are racing to develop alternatives. Over the last five years, ARPA-E has funded 15 different early-stage circuit breaker projects. And GE Vernova has developed products for the European market that use a gas mixture that includes an F-gas, but at a fraction of the concentration of conventional SF6 breakers.

    Reinventing Circuit Breakers With Supercritical CO2

    The job of a grid-scale circuit breaker is to interrupt the flow of electrical current when something goes wrong, such as a fault caused by a lightning strike. These devices are placed throughout substations, power generation plants, transmission and distribution networks, and industrial facilities where equipment operates in tens to hundreds of kilovolts.

    Unlike home circuit breakers, which can isolate a fault with a small air gap, grid-scale breakers need something more substantial. Most high-voltage breakers rely on a mechanical interrupter housed in an enclosure containing SF6, which is a non-conductive insulating gas. When a fault occurs, the device breaks the circuit by mechanically creating a gap and an arc, and then blasts the high-pressure gas through the gap, absorbing free electrons and quenching the arc as the dielectric strength of the gas is increased.

    In Georgia Tech’s design, supercritical carbon dioxide quenches the arc. The fluid is created by putting CO2 under very high pressure and temperature, turning it into a substance that’s somewhere between a gas and a liquid. Because supercritical CO2 is quite dense, it can quench an arc and avoid reignition of a new arc by reducing the momentum of electrons—or at least that’s the theory.

    Led by Lukas Graber, head of Georgia Tech’s plasma and dielectrics lab, the research group will run its 72-kV prototype AC breaker through a synthetic test circuit at the University of Wisconsin-Milwaukee beginning in late April. The group is also building a 245-kV version.

    The use of supercritical CO2 isn’t new, but designing a circuit breaker around it is. The challenge was to build the breaker with components that can withstand the high pressure needed to sustain supercritical CO2, says Graber.

    The team turned to the petroleum industry to find the parts, and found all but one: the bushing. This crucial component serves as a feed-through to carry current through equipment enclosures. But a bushing that can withstand 120 atmospheres of pressure didn’t exist. So Georgia Tech made its own using mineral-filled epoxy resins, copper conductors, steel pipes, and blank flanges.

    “They had to go back to the fundamentals of the bushing design to make the whole breaker work,” says Enslin. “That’s where they are making the biggest contribution, in my eyes.” The compact design of Georgia Tech’s breaker will also allow it to fit in tighter spaces without sacrificing power density, he says.

    Replacing a substation’s existing circuit breakers with this design will require some adjustments, including the addition of a heat pump in the vicinity for thermal management of the breaker.

    If the tests on the synthetic circuit go well, Graber plans to run the breaker through a battery of real-world simulations at KEMA Laboratories’ Chalfont, Penn., location—a gold standard certification facility.

    A high voltage circuit breaker against a solid color background. The Georgia Tech team built its circuit breaker with parts that can withstand the very high pressures of supercritical CO2. Alfonso Jose Cruz

    GE Vernova Markets SF6-alternative Circuit Breaker

    If Georgia Tech’s circuit breaker makes it to the market, it will have to compete with GE Vernova, which had a 20-year head start on developing SF6-free circuit breakers. In 2018, the company installed its first SF6-free gas-insulated substation in Europe, which included a 145 kV-class AC circuit breaker that’s insulated with a gas mixture it calls g3. It’s composed of CO2, oxygen and a small amount of C4F7N, or heptafluoroisobutyronitrile.

    This fluorinated greenhouse gas isn’t good for the environment either. But it comprises less than 5 percent of the gas mixture, so it lowers the greenhouse warming potential by up to 99 percent compared with SF6. That makes the warming potential still far greater than CO2 and methane, but it’s a start.

    “One of the reasons we’re using this technology is because we can make an SF6-free circuit breaker that will actually bolt onto the exact foundation of our equivalent SF6 breaker,” says Todd Irwin, a high-voltage circuit breaker senior product specialist at GE Vernova. It’s a drop-in replacement that will “slide right into a substation,” he says. Workers must still wear full protective gear when they maintain or fix the machine like they do for SF6 equipment, Irwin says. The company also makes a particular type of breaker called a live tank circuit breaker without the fluorinated component, he says.

    All of these approaches, including Georgia Tech’s supercritical CO2, depend on mechanical action to open and close the circuit. This takes up precious time in the event of a fault. That’s inspired many researchers to turn to semiconductors, which can do the switching a lot faster, and don’t need a gas to turn off the current.

    “With mechanical, it can take up to four or five cycles to clear the fault and that’s so much energy that you have to absorb,” says Enslin at ARPA-E. A semiconductor can potentially do it in a millisecond or less, he says. But commercial development of these solid state circuit breakers is still in early stages, and is focused on medium voltages. “It will take some time to get them to the required high voltages,” Enslin says.

    The work may be niche, but the impact could be high. About 1 percent of SF6 leaks from electrical equipment. In 2018, that translated to 9,040 tons (8,200 tonnes) of SF6 emitted globally, accounting for about 1 percent of the global warming value that year.
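    For a sense of scale, here is a quick back-of-the-envelope conversion using the figures cited in this article; it is an illustration, not a formal emissions inventory.

    ```python
    # Rough arithmetic only: convert the estimated 2018 SF6 leakage into a
    # CO2-equivalent mass via its ~25,000x global warming potential.
    SF6_LEAKED_TONNES = 8_200   # global leakage figure cited above
    GWP_SF6 = 25_000            # "nearly 25,000 times" CO2, per ARPA-E's Enslin

    co2_equivalent_tonnes = SF6_LEAKED_TONNES * GWP_SF6
    print(f"~{co2_equivalent_tonnes / 1e6:.0f} million tonnes CO2-equivalent")
    # -> ~205 million tonnes CO2e from that single year's leaks alone
    ```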

  • Airbus Is Working on a Superconducting Electric Aircraft
    by Glenn Zorpette on 09. April 2025. at 15:10



    One of the greatest climate-related engineering challenges right now is the design and construction of a large, zero-emission, passenger airliner. And in this massive undertaking, no airplane maker is as invested as Airbus.

    At the Airbus Summit, a symposium for journalists on 24 and 25 March, top executives sketched out a bold, tech-forward vision for the company’s next couple of generations of aircraft. The highlight, from a tech perspective, is a superconducting, fuel-cell powered airliner.

    Airbus’s strategy is based on parallel development efforts. While undertaking the enormous R&D projects needed to create the large, fuel-cell aircraft, the company said it will also work aggressively on an airliner designed to wring the most possible efficiency out of combustion-based propulsion. For this plane, the company is targeting a 20-to-30 percent reduction in fuel consumption, according to Bruno Fichefeux, head of future programmes at Airbus. The plane would be a single-aisle airliner, designed to succeed Airbus’s A320 family of aircraft, the highest-selling passenger jet aircraft on the market, with nearly 12,000 delivered. The company expects the new plane to enter service some time in the latter half of the 2030s.

    Airbus hopes to achieve such a large efficiency gain by exploiting emerging advances in jet engines, wings, lightweight, high-strength composite materials, and sustainable aviation fuel. For example, Airbus disclosed that it is now working on a pair of advanced jet engines, the more radical of which would have an open fan whose blades would spin without a surrounding nacelle. Airbus is evaluating such an engine in a project with partner CFM International, a joint venture between GE Aerospace and Safran Aircraft Engines.

    Without a nacelle to enclose them, an engine’s fan blades can be very large, permitting higher levels of “bypass air,” which is the air drawn in by the fan that flows around the combustion core—separate from the air used to combust fuel—and is expelled out the back of the engine to provide thrust. The ratio of bypass air to combustion air is an important measure of engine performance, with higher ratios indicating higher efficiencies, according to Mohamed Ali, chief technology and operating officer for GE Aerospace. Typical bypass ratios today are around 11 or 12, but the open-fan design could enable ratios as high as 60, according to Ali.
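    As a rough illustration of what those ratios mean (simple arithmetic, not GE or Airbus data), the bypass ratio fixes how the total airflow splits between the bypass stream and the combustion core:

    ```python
    # Illustrative arithmetic only: air split implied by a given bypass ratio (BPR).
    def air_split(bpr):
        core = 1.0 / (bpr + 1.0)    # fraction of total air that goes through the core
        return 1.0 - core, core      # (bypass fraction, core fraction)

    for bpr in (12, 60):             # today's typical ratio vs. the open-fan target
        bypass, core = air_split(bpr)
        print(f"BPR {bpr}: {bypass:.0%} bypass air, {core:.1%} through the core")
    # -> BPR 12: 92% bypass, 7.7% core; BPR 60: 98% bypass, 1.6% core
    ```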

    The partners have already tested open-fan engines in two different series of wind-tunnel tests in Europe, Ali added. “The results have been extremely encouraging, not only because they are really good in terms of performance and noise validation, but also [because] they’re validating the computational analysis that we have done,” Ali said at the Airbus event.

    Head-on view of an open fan engine's blades inside of a large wind tunnel. A scale model of an open-fan aircraft engine was tested last year in a wind tunnel in Modane, France. The tests were conducted by France’s national aerospace research agency and Safran Aircraft Engines, which is working on open-fan engines with GE Aerospace. Safran Aircraft Engines

    Fuel-cell airliner is a cornerstone of zero-emission goals

    In parallel with this advanced combustion-powered airliner, Airbus has been developing a fuel-cell aircraft for five years under a program called ZEROe. At the Summit, Airbus CEO Guillaume Faury backed off of a goal to fly such a plane by 2035, citing the lack of a regulatory framework for certifying such an aircraft as well as the slow pace of the build-out of infrastructure needed to produce “green” hydrogen at commercial scale and at competitive prices. “We would have the risk of a sort of ‘Concorde of hydrogen’ where we would have a solution, but that would not be a commercially viable solution at scale,” Faury explained.

    That said, he took pains to reaffirm the company’s commitment to the project. “We continue to believe in hydrogen,” he declared. “We’re absolutely convinced that this is an energy for the future for aviation, but there’s just more work to be done. More work for Airbus, and more work for the others around us to bring that energy to something that is at scale, that is competitive, and that will lead to a success, making a significant contribution to decarbonization.” Many of the world’s major industries, including aviation, have pledged to achieve zero net greenhouse gas emissions by the year 2050, a fact that Faury and other Airbus officials repeatedly invoked as a key driver of the ZEROe project.

    Later in the event, Glenn Llewellyn, Airbus’s vice president in charge of the ZEROe program, described the project in detail, indicating an effort of breathtaking technological ambition. The envisioned aircraft would seat at least 100 people and have a range of 1,000 nautical miles (1,850 kilometers). It would be powered by four fuel-cell “engines” (two on each wing), each with a power output of 2 megawatts.

    According to Hauke Luedders, head of fuel cell propulsion systems development at Airbus, the company has already done extensive tests in Munich on a 1.2 MW system built with partners including Liebherr Group, ElringKlinger, Magna Steyr, and Diehl. Luedders said the company is focusing on low-temperature proton-exchange-membrane fuel cells, although it has not yet settled on the technology.

    But the real stunner was Llewellyn’s description of a comprehensive program at Airbus to design and test a complete superconducting electrical powertrain for the fuel-cell aircraft. “As the hydrogen stored on the aircraft is stored at a very cold temperature, minus 253 degrees Celsius, we can use this temperature and the cryogenic technology to also efficiently cool down the electrics in the full system,” Llewellyn explained. “It significantly improves the energy efficiency and the performance. And even if this is an early technology, with the right efforts and the right partnerships, this could be a game changer for our fuel-cell aircraft, for our fully electric aircraft, enabling us to design larger, more powerful, and more efficient aircraft.”

    In response to a question from IEEE Spectrum, Llewellyn elaborated that all of the major components of the electric propulsion system would be cryo-cooled: “electric distribution system, electronic controls, power converters, and the motors”—specifically, the coils in the motors. “We’re working with partners on every single component,” he added. The cryo-cooling system would chill a refrigerant that would circulate to keep the components cold, he explained.

    A cutaway diagram shows the key components of a fuel-cell engine, consisting of an electric motor, fuel cells, and other systems. A fuel cell aircraft “engine,” as envisioned by Airbus, would include a 2-megawatt electric motor and associated motor control unit (MCU), a fuel-cell system to power the motor, and associated systems for supplying air, hydrogen fuel, liquid refrigerant, and other necessities. The ram air system would capture cold air flowing over the aircraft for use in the cooling systems. Airbus SAS

    Could aviation be the killer app for superconductors?

    Llewellyn did not specify which superconductors and refrigerants the team was working with. But high temperature superconductors are a good bet, because of the drastically reduced requirements on the cooling system that would be needed to sustain superconductivity.

    Copper-oxide based ceramic superconductors were discovered at IBM’s Zurich research laboratory in 1986, and various forms of them can superconduct at temperatures between –238 °C (35 K) and –140 °C (133 K) at ambient pressure. These temperatures are higher than those of traditional superconductors, which need to be kept below about 25 K. Nevertheless, commercial applications for the high-temperature superconductors have been elusive.

    But a superconductivity expert, applied physicist Yu He at Yale University, was heartened by the news from Airbus. “My first reaction was, ‘really?’ And my second reaction was, wow, this whole line of research, or application, is indeed growing and I’m very delighted” about Airbus’s ambitious plans.

    Copper-oxide superconductors have been used in a few applications, almost all of them experimental. These included wind-turbine generators, magnetic-levitation train demonstrations, short electrical transmission cables, magnetic-resonance imaging machines and, notably, in the electromagnet coils for experimental tokamak fusion reactors.

    The tokamak application, at a fusion startup called Commonwealth Fusion Systems, is particularly relevant because to make coils, engineers had to invent a process for turning the normally brittle copper-oxide superconducting material into a tape that could be used to form donut-shaped coils capable of sustaining very high current flow and therefore very intense magnetic fields.

    “Having a superconductor to provide such a large current is desirable because it doesn’t generate heat,” says He. “That means, first, you have much less energy lost directly from the coils themselves. And, second, you don’t require as much cooling power to remove the heat.”

    Still, the technical hurdles are substantial. “One can argue that inside the motor, intense heat will still need to be removed due to aerodynamic friction,” He says. “Then it becomes, how do you manage the overall heat within the motor?”

    A young man works on a keyboard while staring at a very wide computer monitor. An engineer at Air Liquide Advanced Technologies works on a test of a hydrogen storage and distribution system at the Liquid Hydrogen Breadboard in November 2024. The “Breadboard” was established last year in Grenoble, France, by Air Liquide and Airbus. Céline Sadonnet/Master Films

    For this challenge, engineers will at least have a favorable environment with cold, fast-flowing air. Engineers will be able to tap into the “massive air flow” over the motors and other components to assist the cooling, He suggests. Smart design could “take advantage of this kinetic energy of flowing air.”

    To test the evolving fuel-cell propulsion system, Airbus has built a unique test center in Grenoble called the “Liquid Hydrogen Breadboard,” Llewellyn disclosed at the Summit. “We partnered with Air Liquide Advanced Technologies” to build the facility, he said. “This Breadboard is a versatile test platform designed to simulate key elements of future aircraft architecture: tanks, valves, pipes, and pumps, allowing us to validate different configurations at full scale. And this test facility is helping us gain critical insight into safety, hydrogen operations, tank design, refueling, venting, and gauging.”

    “Throughout 2025, we’re going to continue testing the complete liquid-hydrogen and distribution system,” Llewellyn added. “And by 2027, our objective is to take an even further major step forward, testing the complete end-to-end system, including the fuel-cell engine and the liquid hydrogen storage and distribution system together, which will allow us to assess the full system in action.”

    Glenn Zorpette traveled to Toulouse as a guest of Airbus.

  • Get Ready for the Stellarator Showdown
    by Tom Clynes on 09. April 2025. at 14:33



    For decades, nuclear fusion—the reaction that powers the sun—has been the ultimate energy dream. If harnessed on Earth, it could provide endless, carbon-free power. But the challenge is huge. Fusion requires temperatures hotter than the sun’s core and a mastery of plasma—the superheated gas in which atoms that have been stripped of their electrons collide, their nuclei fusing. Containing that plasma long enough to generate usable energy has remained elusive.

    Now, two companies—Germany’s Proxima Fusion and Tennessee-based Type One Energy—have taken a major step forward, publishing peer-reviewed blueprints for their competing stellarator designs. Two weeks ago, Type One released six technical papers in a special issue of the Journal of Plasma Physics. Proxima detailed its fully integrated stellarator power plant concept, called Stellaris, in the journal Fusion Engineering and Design. Both firms say the papers demonstrate that their machines can deliver commercial fusion energy.

    At the heart of both approaches is the stellarator, a mesmerizingly complex machine that uses twisted magnetic fields to hold the plasma steady. This configuration, first dreamed up in the 1950s, promises a crucial advantage: Unlike its more popular cousin, the tokamak, a stellarator can operate continuously, without the need for a strong current driven through the plasma itself. Instead, stellarators shape and confine the plasma entirely with external magnetic coils. This design reduces the risk of sudden plasma disruptions, which can send high-energy particles crashing into reactor walls.

    The downside? Stellarators, while theoretically simpler to operate, are notoriously difficult to design and build. Recent advances in computational power, high-temperature superconducting (HTS) magnets, and AI-enhanced optimization of magnet geometries are changing the game, helping researchers to uncover patterns that lead to simpler, faster, and cheaper stellarator designs.

    Two Visions of Fusion with One Goal

    While both firms are racing toward the same destination—practical, commercial fusion power—the Proxima paper’s focus leans more toward the engineering integration of its reactor, while Type One’s papers reveal details of its plasma physics design and key components of its reactor.

    Proxima, a spinoff from Germany’s Max Planck Institute for Plasma Physics, aims to build a 1-gigawatt stellarator power plant. The design uses HTS magnets and AI optimization to generate more power per unit volume than earlier stellarators, while also significantly reducing the overall size. Proxima has applied for a patent on an innovative liquid-metal breeding blanket, which will be used to breed tritium fuel for the fusion reaction, via the reaction of neutrons with lithium.
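
    For context, the nuclear reactions behind that fuel cycle are textbook physics (the blanket design itself is what Proxima has applied to patent): deuterium-tritium fusion consumes tritium and releases a fast neutron, and the lithium in the blanket regenerates tritium when it captures that neutron.

        \mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5~\mathrm{MeV}) + \mathrm{n}\,(14.1~\mathrm{MeV})
        \mathrm{n} + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8~\mathrm{MeV}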

    Three dimensional rendering of multiple layers inside a donut-shaped stellarator. Proxima Fusion’s Stellaris design is significantly smaller than other stellarators of the same power. Proxima Fusion

    “This is the first time anyone has put all the elements together in a single, fully integrated concept,” says Proxima cofounder and chief scientist Jorrit Lion. The design builds on the Wendelstein 7-X stellarator, a €1.4 billion (US $1.5 billion) project funded by the German government and the European Union, which set records for electron temperature, plasma density, and energy confinement time.

    Type One’s stellarator design incorporates three key innovations: an optimized magnetic field for plasma stability, advanced manufacturing techniques, and cutting-edge HTS magnets. The plant it has dubbed Infinity Two is designed to generate 350 megawatts of electricity.

    Like Proxima’s plant, Infinity Two will use deuterium-tritium fuel and build on lessons learned from W7-X, as well as Wisconsin’s HSX project, where many of Type One’s founders worked before forming the company. In partnership with the Tennessee Valley Authority, Type One aims to build Infinity Two at TVA’s Bull Run Fossil Plant by the mid-2030s.

    “Why are we the first private fusion company with an agreement to develop a fusion power plant with a utility? Because we have a design based in reality,” says Christofer Mowry, CEO of Type One Energy. “This isn’t about building a science experiment. This is about delivering energy.”

    AI Points to an Ideal 3D Magnetic-Field Structure

    Both firms have relied heavily on AI and supercomputing to help them place the magnetic coils to more precisely shape their magnetic fields. Type One relied on a range of high-performance computing resources, including the U.S. Department of Energy’s cutting-edge exascale Frontier supercomputer at Oak Ridge National Laboratory, to power its highly detailed simulations.

    That research led to one of the more intriguing developments buried in these papers: a possible move toward consensus in the stellarator physics community about the ideal three-dimensional magnetic-field structure.

    Proxima’s team has always embraced the quasi-isodynamic (QI) approach, used in W7-X, which prioritizes deep particle trapping for superior plasma confinement. Type One, on the other hand, built its early designs around quasi-symmetry (QS), inspired by the HSX stellarator, which aimed to streamline particle motion. Now, based on its optimization research, Type One is changing course.

    “We were champions of quasi-symmetry,” says Type One’s lead theorist Chris Hegna. “But the surprise was that we couldn’t make quasi-symmetry work as well as we thought we could. We will continue doing studies of quasi-symmetry, but primarily it looks like QI is the prominent optimization choice we are going to pursue.”

    Three dimensional rendering of a large stellarator inside a warehouse. Type One Energy is working with the Tennessee Valley Authority to build a commercial stellarator by the mid-2030s. Type One Energy

    The Road Ahead for Stellarators

    According to Hegna, Type One’s partnership with TVA could put a stellarator fusion plant on the grid by the mid-2030s. But before it builds Infinity Two, the company plans to validate key technologies with its Infinity One test platform, set for construction in 2026 and operation by 2029.

    Proxima, meanwhile, plans to bring its Stellaris design to life by the 2030s, first with a demo stellarator, dubbed Alpha. The company claims Alpha will be the first stellarator to demonstrate net energy production in a steady state. It’s targeted to debut in 2031, after the 2027 completion of a demonstration set of the complex magnetic coils.

    Both companies face a common challenge: funding. Type One has raised $82 million and, according to Axios, is preparing for more than $200 million in Series A financing, which the company declined to confirm. Proxima has secured about $65 million in public and private capital.

    If the recent papers succeed in building confidence in stellarators, investors may be more willing to fund these ambitious projects. The coming decade will determine whether both companies’ confidence in their own designs is justified, and whether producing fusion energy from stellarators transitions from scientific ambition to commercial reality.

  • IEEE-HKN Marks 120th Anniversary With Hackathon
    by Amy Michael on 08. April 2025. at 18:00



    Among the many events that marked the IEEE–Eta Kappa Nu (IEEE-HKN) honor society’s 120th anniversary last year was its first international hackathon. Organized by a group of 10 HKN students and led by recent graduate Christian Winingar from the Gamma Theta chapter at Missouri University of Science and Technology, in Rolla, the hackathon required more than seven months of planning.

    The idea originated with students who attended the society’s 2023 Student Leadership Conference and wanted to continue fostering international collaboration among the society’s chapters.

    “It seemed a natural fit to organize it as a way to celebrate the society’s anniversary,” says Serena Canavero, one of the event’s organizers and the 2025 HKN student governor.

    To tie the hackathon to the establishment of the society, the organizing committee created mathematical and engineering problems around saving the eight founders of HKN from those who would oppose their commitment to its foundational tenets of scholarship, character, and attitude.

    “It was a valuable experience for IEEE-HKN members both at the professional and student member levels to connect with each other and to foster a community focused on problem-solving and innovation.” —Christian Winingar

    “Our founders, especially Maurice L. Carr, envisioned a society that would eventually become international, as it is today, more than a century into the future,” Canavero says. “To capture this spirit, our team combed through HKN’s historical records, seeking insights into the visionary students who founded it. We imagined them facing various challenges, often misunderstood by school leaders who didn’t yet see the value in an organization dedicated to the professional growth of young, bright, and philanthropic engineers.”

    The hackathon succeeded in capturing the imagination of students around the world, ultimately attracting 62 participants collaborating in 12 teams from 11 to 22 October. The students worked together to face obstacles, using mathematical and engineering principles. In the process, they learned to appreciate each other’s problem-solving approaches, how to contribute to a team, and how to surmount logistical challenges such as working across time zones.

    On 28 October, the hackathon culminated with teams presenting their solutions virtually to 16 IEEE members who judged their work based on its completeness, accuracy, and timeliness. The presentation was a part of HKN’s Founders Day celebration, which included a virtual fireside chat by two eminent members and IEEE Medal of Honor winners, Vint Cerf and Robert Kahn.

    These are the top five IEEE-HKN teams and their chapters:

    Shockingly Efficient. Mu Nu chapter at Politecnico di Torino, in Italy; Sigma chapter at Carnegie Mellon; Mu Kappa chapter at the University of Queensland, in Brisbane, Australia; and Iota Kappa chapter at Montana State University, in Bozeman.

    Light Emitting Resistor. Lambda Omega chapter at the National University of Singapore.

    Thetastic Coders. Theta chapter at the University of Wisconsin in Madison.

    Jumbos. Epsilon Delta chapter at Tufts University, in Massachusetts.

    Leo. Nu Theta chapter at Purdue University Northwest in Hammond, Ind.

    TestEquity, a test and measurement product distributor, provided prizes for the teams. They included a gift card, a soldering station, handheld industrial and digital multimeters, a voltage detector, and multi-tools.

    The organizing committee was pleased with the hackathon.

    “It was a valuable experience for IEEE-HKN members both at the professional and student member levels to connect with each other and foster a community focused on problem-solving and innovation,” Winingar says.

    “The international hackathon brought together motivated young IEEE-HKN engineers from both computer and electrical engineering backgrounds,” Canavero adds. “It blended chapters into mixed teams, sparking creativity and problem-solving, bridging time zones, and fostering our community at an international level. It was a testament to how IEEE-HKN empowers young leaders to dream big, enabling us to collaborate on ambitious engineering endeavors together.”

    Because of the enthusiastic response to the hackathon, plans are underway to hold another one this year.

  • This Alphabet Spin-off Brings “Fishal Recognition” to Aquaculture
    by Rajesh Jadhav on 07. April 2025. at 13:00



    Deep within a rugged fjord in Norway, our team huddled around an enclosed metal racetrack, full of salt water, that stood about a meter off the ground on stilts. We called the hulking metal contraption our “fish run.” Inside, a salmon circled the 3-meter diameter loop, following its instincts and swimming tirelessly against the current. A stopwatch beeped, and someone yelled “Next fish!” We scooped up the swimmer to weigh it and record its health data before returning it to the school of salmon in the nearby pen. The sun was high in the sky as the team loaded the next fish into the racetrack. We kept working well into the evening, measuring hundreds of fish.

    This wasn’t some bizarre fish Olympics. Rather, it was a pivotal moment in the journey of our company, TidalX AI, which brings artificial intelligence and advanced robotics to aquaculture.

    A gif shows fish moving underwater. Colorful boxes are superimposed over the fish, and each box has the label “salmon.” Tidal’s AI systems track the salmon and estimate their biomass. TidalX AI

    Tidal emerged from X, the Moonshot Factory at Alphabet (the parent company of Google), which seeks to create technologies that make a difference to millions if not billions of people. That was the mission that brought a handful of engineers to a fish farm near the Arctic Circle in 2018. Our team was learning how to track visible and behavioral metrics of fish to provide new insights into their health and growth and to measure the environmental impact of fish farms. And aquaculture is just our beginning: We think the modular technologies we’ve developed will prove useful in other ocean-based industries as well.

    To get started, we partnered with Mowi ASA, the largest salmon-aquaculture company in the world, to develop underwater camera and software systems for fish farms. For two weeks in 2018, our small team of Silicon Valley engineers lived and breathed salmon aquaculture, camping out in an Airbnb on a small Norwegian island and commuting to and from the fish farm in a small motorboat. We wanted to learn as much as we could about the problems and the needs of the farmers. The team arrived with laptops, cords, gadgets, and a scrappy camera prototype cobbled together from off-the-shelf parts, which eventually became our window into the underwater world.

    An aerial photograph shows large circular pens in the water, all connected by cables to a boxy floating station. Mowi, the world’s largest producer of Atlantic salmon, operates this fish farm in the waters off Norway. Viken Kantarci/AFP/Getty Images

    Still, that early trip armed us with our first 1,000 fish data points and a growing library of underwater images (since then, our datasets have grown by a factor of several million). That first data collection allowed us to meticulously train our first AI models to discern patterns invisible to the human eye. The moment of truth arrived two months later, when our demo software successfully estimated fish weights from images alone. It was a breakthrough, a validation of our vision, and yet only the first step on a multiyear journey of technology development.

    Weight estimation was the first of a suite of features we would go on to develop, to increase the efficiency of aquaculture farms and help farmers take early action for the benefit of the salmon. Armed with better data about how quickly their fish are growing, farmers can more precisely calculate feeding rates to minimize both wasted food and fish waste, which can have an impact on the surrounding ocean. With our monitoring systems, farmers can catch pest outbreaks before they spread widely and require expensive and intensive treatments.

    The Origins of Tidal

    The ocean has long fascinated engineers at Alphabet’s Moonshot Factory, which has a mandate to create both novel technologies and profitable companies. X has explored various ocean-based projects over the past decade, including an effort to turn seawater into fuel, a project exploring whether underwater robots could farm seaweed for carbon sequestration and food, and a test of floating solar panels for clean energy.

    In some ways, building technologies for the seas is an obvious choice for engineers who want to make a difference. About two-thirds of our planet is covered in water, and more than 3 billion people rely on seafood for their protein. The ocean is also critical for climate regulation, life-giving oxygen, and supporting the livelihoods of billions of people. Despite those facts, the United Nations Sustainable Development Goal No. 14, which focuses on “life below water,” is the least funded of all the 17 goals.

    One of the most pressing challenges facing humanity is ensuring ongoing access to sustainable and healthy protein sources as the world’s population continues to grow. With the global population projected to reach 9.7 billion by 2050, the demand for seafood will keep rising, and it offers a healthier and lower-carbon alternative to other animal-based proteins such as beef and pork. However, today’s wild-fishing practices are unsustainable, with almost 90 percent of the world’s fisheries now considered either fully exploited (used to their full capacity) or overfished.

    Aquaculture offers a promising solution. Fish farming has the potential to alleviate pressure on wild fish stocks, provide a more sustainable way to produce protein, and support the livelihoods of millions. Fish is also a much more efficient protein source than land-based protein. Salmon have a “feed conversion ratio” of roughly one to one; that means they produce about one kilogram of body mass for every kilogram of feed consumed. Cows, on the other hand, require 8 to 12 kilograms of feed to gain a kilogram of mass.

    Three images of swimming fish are accompanied by charts. Tidal’s AI platform tracks both fish and food pellets [top] and can then automatically adjust feed rates to limit waste and reduce costs. The system’s sensors can detect sea lice on the salmon [center], which enables farmers to intervene early and track trends. The real-time estimation of biomass [bottom] gives farmers information about both average weight and population distribution, helping them plan the timing of harvests. TidalX AI

    However, the aquaculture industry faces growing challenges, including rising water temperatures, changing ocean conditions, and the pressing need for improved efficiency and sustainability. Farmers are accountable for pollution from excess feed and waste, and are grappling with fish diseases that can spread quickly among farmed populations.

    At Tidal, our team is developing technology that will both protect the oceans and address global food-security challenges. We’ve visited aquaculture farms in Norway, Japan, and many other countries to test our technology, which we hope will transform aquaculture practices and serve as a beneficial force for fish, people, and the planet.

    The Data Behind AI for Aquaculture

    Salmon aquaculture is the most technologically advanced sector within the ocean farming industry, so that’s where we began. Atlantic salmon are a popular seafood, with a global market of nearly US $20 billion in 2023. That year, fish farms produced 2.87 million tonnes of Atlantic salmon; globally, farmed salmon accounts for nearly three-quarters of all salmon sold.

    Our partnership with Mowi combined their deep aquaculture knowledge with our expertise in AI, underwater robotics, and data science. Our initial goal was to estimate biomass, a critical task in fish farming that involves accurately assessing the weight and distribution of fish within a pen in real time. Mastering this task established a baseline for improvement, because better measurements can unlock better management.

    Two photographs show the same long device with a light on the top and a cable coming out the bottom. One of the photographs shows the device in the water surrounded by fish. Tidal’s imaging platform, which includes lights, multiple cameras, and other sensors, moves through the fish pen to gather data. TidalX AI

    We quickly realized that reliable underwater computer-vision models didn’t exist, even from cutting-edge AI. State-of-the-art computer-vision models weren’t trained on underwater images and often misidentified salmon, sometimes with comic results—one model confidently classified a fish as an umbrella. In addition, we had to estimate the average weight of up to 200,000 salmon within a pen, but the reference data available—based on weekly manual sampling by farmers of just 20 to 30 salmon—didn’t represent the variability across the population. We had internalized the old computing adage “garbage in, garbage out,” and so we realized that our model’s performance would be only as good as the quality and quantity of the data we used to train it. Developing models for Mowi’s desired accuracy required a drastically larger dataset.

    We therefore set out to create a high-quality dataset of images from marine pens. In our earliest experiments on estimating fish weight from images, we had worked with realistic-looking rubber fish in our own lab. But the need for better data sent us to Norway in 2018 to collect footage. First, we tried taking photos of individual fish in small enclosures, but this method proved inefficient because the fish didn’t reliably swim in front of our camera.

    That’s when we designed our fish-run racetrack to capture images of individual fish from all angles. We then paired this footage with corresponding weight and health measurements to train our models. A second breakthrough came when we got access to data from the fish farms’ harvests, when every fish is individually weighed. That addition expanded our dataset a thousandfold and improved our model performance. Soon we had a model capable of making highly precise and accurate estimates of fish weight distributions for the entire population within a given enclosure.
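
    Tidal has not published its models, but the general idea of predicting weight from an image-derived measurement can be sketched with the classic allometric length-weight relation from fisheries science, W ≈ a·L^b. The snippet below is only an illustration: the training pairs are invented, and a real system would use calibrated stereo measurements and far richer features.

        # Toy sketch (not Tidal's model): fit weight = a * length**b on
        # hypothetical harvest records, then predict from a new measurement.
        import numpy as np

        lengths_cm = np.array([45.0, 52.0, 58.0, 63.0, 70.0, 76.0])   # made-up data
        weights_kg = np.array([1.1, 1.7, 2.4, 3.1, 4.2, 5.4])

        # Ordinary least squares on log-log data: log W = log a + b log L.
        b, log_a = np.polyfit(np.log(lengths_cm), np.log(weights_kg), deg=1)

        def estimate_weight_kg(length_cm: float) -> float:
            """Predict weight from a length measured off a calibrated image."""
            return float(np.exp(log_a) * length_cm ** b)

        print(round(estimate_weight_kg(65.0), 2))   # roughly 3.4 kg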

    Crafting Resilient Hardware for an Unforgiving Ocean

    As we were building a precise and accurate AI model, we were simultaneously creating a comprehensive hardware package. The system included underwater cameras, an autonomous winch to move the cameras within the pen, and an integrated software platform.

    A man in a yellow vest stands at the edge of a netted wall, adjusting a device that’s over the water. Tidal’s autonomous winch systems move the cameras on horizontal and vertical axes within the fish pen. TidalX AI

    Our initial field experiments had taught us the stark reality of operating technology in extreme environmental conditions, including freezing temperatures, high waves, and strong currents. To meet this challenge, we spent several years putting the Tidal technology through rigorous testing: We simulated extreme conditions, pushed the equipment to its breaking point, and even used standards typically reserved for military gear. We tested how well it worked under pressures intense enough to implode most electronics. Once satisfied with the lab results, we tested our technology on farms above the Arctic Circle.

    The result is a remarkably resilient system that features highly responsive top, stereo, and bottom cameras, with efficient lighting that minimizes stress on the fish. The smart winch moves the camera autonomously through the pen around the clock on horizontal and vertical axes, collecting tens of thousands of fish observations daily. The chief operating officer of Mowi Farming Norway, Oyvind Oaland, called our commercial product “the most advanced sensing and analysis platform in aquaculture, and undoubtedly the one with the greatest potential.”

    The Tidal system today provides farmers with real-time data on fish growth, health, and feeding, enabling them to make data-driven decisions to optimize their operations. One of our key innovations was the development and integration of the industry’s first AI-powered autonomous feeding system. By feeding fish just the amount that they need to grow, the system minimizes wasted food and fish excrement, therefore improving fish farms’ environmental impact. Merging our autonomous feeding system with our camera platform meant that farmers could save on cost and clutter by deploying a single all-in-one system in their pens.

    Developing the autonomous feeding system presented new challenges—not all of them technical. We initially aimed for an ideal feeding strategy based on the myriad factors influencing fish appetite, which would work seamlessly for every user straight out of the box. But we faced resistance from farmers when the strategy differed from their feeding policies, which were often based on decades of experience.

    A gif shows fish moving in the water and yellow boxes superimposed over small pellets in the water. Tidal’s AI systems identify food pellets. TidalX AI

    This response forced us to rethink our approach and pivot from a one-size-fits-all solution to a modular system that farmers could customize. This allowed them to adjust the system to their specific feeding preferences first, building trust and acceptance. Farmers could initially set their preferred maximum and minimum feed rates and their tolerance for feed fall-through; over time, as they began to trust the technology more, they could let it run more autonomously. Once deployed within a pen, the system gathers data on fish behavior and how many feed pellets fall through the net, which improves the system’s estimate of fish appetite. These ongoing revisions not only improve feeding efficiency—thus optimizing growth, reducing waste, and minimizing environmental impact—but also build confidence among farmers.
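
    Tidal hasn’t published the feeding logic itself; purely as a sketch of the kind of bounded feedback loop described above, with every threshold and rate invented for illustration:

        # Sketch of a farmer-bounded feed controller (not Tidal's algorithm).
        def next_feed_rate(current_kg_per_min: float,
                           waste_fraction: float,          # pellets seen falling through
                           min_rate: float = 2.0,          # farmer-set floor
                           max_rate: float = 10.0,         # farmer-set ceiling
                           waste_tolerance: float = 0.02,  # farmer-set tolerance
                           step: float = 0.25) -> float:
            if waste_fraction > waste_tolerance:
                proposed = current_kg_per_min - step   # fish are sated: back off
            else:
                proposed = current_kg_per_min + step   # appetite looks strong: feed more
            return max(min_rate, min(max_rate, proposed))

        rate = 5.0
        for waste in [0.005, 0.004, 0.010, 0.030, 0.050]:   # per-interval estimates
            rate = next_feed_rate(rate, waste)
            print(f"waste={waste:.3f} -> feed rate {rate:.2f} kg/min")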

    Tidal’s Impact on Sustainable Aquaculture

    Tidal’s technology has demonstrated multiple benefits. With the automated feed system, farmers are improving production efficiency, reducing costs, and reducing environmental impact. Our software can also detect health issues early on, such as sea-lice infestations and wounds, allowing farmers to promptly intervene with more-targeted treatments. When farmers have accurate biomass and fish welfare estimates, they can optimize the timing of harvests and minimize the risk that the harvested fish will be in poor health or too small to fetch a good market price. By integrating AI into every aspect of its system, we have created a powerful tool that enables farmers to make better-informed and sustainable decisions.

    The platform approach also fosters collaboration between technology experts and aquaculture professionals. We’re currently working with farmers and fish-health experts on new applications of machine learning, such as fish-behavior detection and ocean-simulation modeling. That modeling can help farmers predict and respond to serious challenges, such as harmful algal blooms caused by nutrient pollution and warming water temperatures.

    To date, we have installed systems in more than 700 pens around the globe, collected over 30 billion data points, processed 1.5 petabytes of video footage, and monitored over 50 million fish throughout their growth cycle. Thanks to years of research and development, commercial validation, and scaling, our company has now embarked on its next phase. In July 2024, Tidal graduated from Alphabet’s X and launched as an independent company, with investors including U.S. and Norwegian venture-capital firms and Alphabet.

    Tidal’s journey from a moonshot idea to a commercially viable company is just the start of what we hope to accomplish. With never-ending challenges facing our planet, leveraging cutting-edge technology to survive and thrive in a rapidly changing world will be more critical than ever before. Aquaculture is Tidal’s first step, but there is so much potential within the ocean that can be unlocked to support a sustainable future with economic and food security.

    We’re proud that our technology is already making salmon production more sustainable and efficient, thus contributing to the health of our oceans and the growing global population that depends upon seafood for protein.

    Tidal’s underwater perception technology has applications far beyond aquaculture, offering transformative potential across ocean-based industries, collectively referred to as the “blue economy.” While our roots are in “blue food,” our tools can be adapted for “blue energy” by monitoring undersea infrastructure like offshore wind farms, “blue transportation” by improving ocean simulations for more-efficient shipping routes, and “blue carbon” by mapping and quantifying the carbon storage capacity of marine ecosystems such as sea grasses.

    For example, we have already demonstrated that we can adapt our salmon biomass-estimation models to create detailed three-dimensional maps of sea-grass beds in eastern Indonesia, enabling us to estimate the amount of carbon stored below the water’s surface. We’re aiming to address a critical knowledge gap: Scientists have limited data on how much carbon sea-grass ecosystems can sequester, which undermines the credibility of marine-based carbon credit markets. Adapting our technology could advance scientific understanding and drive investment in protecting and conserving these vital ocean habitats.

    What started with fish swimming through a racetrack on one small Norwegian fish farm may become a suite of technologies that help humanity protect and make the most of our ocean resources. With its robust, AI-powered systems designed to withstand the harshest oceanic conditions, Tidal is well equipped to revolutionize the blue economy, no matter how rough the seas get.

  • 12 Graphs That Explain the State of AI in 2025
    by Eliza Strickland on 07. April 2025. at 10:00



    If you read the news about AI, you may feel bombarded with conflicting messages: AI is booming. AI is a bubble. AI’s current techniques and architectures will keep producing breakthroughs. AI is on an unsustainable path and needs radical new ideas. AI is going to take your job. AI is mostly good for turning your family photos into Studio Ghibli-style animated images.

    Cutting through the confusion is the 2025 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence. The 400+ page report is stuffed with graphs and data on the topics of R&D, technical performance, responsible AI, economic impacts, science and medicine, policy, education, and public opinion. As IEEE Spectrum does every year (see our coverage from 2021, 2022, 2023, and 2024), we’ve read the whole thing and plucked out the graphs that we think tell the real story of AI right now.

    1. U.S. Companies Are Out Ahead

    Graph showing notable AI models trend from 2003-2024: US 40, China 15, Europe 3 in 2024.

    While there are many different ways to measure which country is “ahead” in the AI race (journal articles published or cited, patents awarded, etc.), one straightforward metric is who’s putting out models that matter. The research institute Epoch AI has a database of influential and important AI models that extends from 1950 to the present, from which the AI Index drew the information shown in this chart.

    Last year, 40 notable models came from the United States, while China had 15 and Europe had 3 (incidentally, all from France). Another chart, not shown here, indicates that almost all of those 2024 models came from industry rather than academia or government. As for the decline in notable models released from 2023 to 2024, the index suggests it may be due to the increasing complexity of the technology and the ever-rising costs of training.

    2. Speaking of Training Costs...

    Bar graph showing AI training costs from 2017 to 2024, peaking at $191.9M for Gemini 1.0 Ultra.

    Yowee, but it’s expensive! The AI Index doesn’t have precise data, because many leading AI companies have stopped releasing information about their training runs. But the researchers partnered with Epoch AI to estimate the costs of at least some models based on details gleaned about training duration, type and quantity of hardware, and the like. The most expensive model for which they were able to estimate the costs was Google’s Gemini 1.0 Ultra, with a breathtaking cost of about US $192 million. The general scale-up in training costs coincided with other findings of the report: Models are also continuing to grow in parameter count, training time, and amount of training data.

    Not included in this chart is the Chinese upstart DeepSeek, which rocked financial markets in January with its claim of training a competitive large language model for just $6 million—a claim that some industry experts have disputed. AI Index steering committee co-director Yolanda Gil tells IEEE Spectrum that she finds DeepSeek “very impressive,” and notes that the history of computer science is rife with examples of early inefficient technologies giving way to more elegant solutions. “I’m not the only one who thought there would be a more efficient version of LLMs at some point,” she says. “We just didn’t know who would build it and how.”

    3. Yet the Cost of Using AI Is Going Down

    Line chart showing decreasing inference prices for GPT-3.5 and GPT-4 across benchmarks from 2022-2024.

    The ever-increasing costs of training (most) AI models risk obscuring a few positive trends that the report highlights: Hardware costs are down, hardware performance is up, and energy efficiency is up. That means inference costs, or the expense of querying a trained model, are falling dramatically. This chart, which is on a logarithmic scale, shows the trend in terms of AI performance per dollar. The report notes that the blue line represents a drop from $20 per million tokens to $0.07 per million tokens; the pink line shows a drop from $15 to $0.12 in less than a year’s time.
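
    As a back-of-the-envelope illustration (our arithmetic, not the report’s), here is what those quoted prices imply for a hypothetical workload of one billion tokens:

        # Rough arithmetic only: the quoted per-million-token prices applied
        # to a hypothetical one-billion-token workload.
        def cost_usd(price_per_million_tokens: float, tokens: int) -> float:
            return price_per_million_tokens * tokens / 1_000_000

        TOKENS = 1_000_000_000
        for label, old_price, new_price in [("blue line", 20.00, 0.07),
                                            ("pink line", 15.00, 0.12)]:
            print(f"{label}: ${cost_usd(old_price, TOKENS):,.0f} -> "
                  f"${cost_usd(new_price, TOKENS):,.0f} "
                  f"({old_price / new_price:,.0f}x cheaper)")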

    4. AI’s Massive Carbon Footprint

    Bar chart showing increasing carbon emissions from training AI models, 2012\u20132024.

    While energy efficiency is a positive trend, let’s whipsaw back to a negative: Despite gains in efficiency, overall power consumption is up, which means that the data centers at the heart of the AI boom have an enormous carbon footprint. The AI Index estimated the carbon emissions of select AI models based on factors such as training hardware, cloud provider, and location, and found that the carbon emissions from training frontier AI models have steadily increased over time—with DeepSeek being the outlier.

    The worst offender included in this chart, Meta’s Llama 3.1, resulted in an estimated 8,930 tonnes of CO2 emitted during training, roughly the annual carbon footprint of 496 Americans. That massive environmental impact explains why AI companies have been embracing nuclear as a reliable source of carbon-free power.
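
    A quick sanity check using only the numbers above shows the per-capita figure that comparison implies:

        # Implied annual footprint per American, from the article's own figures.
        llama_training_tonnes = 8_930
        equivalent_americans = 496
        print(round(llama_training_tonnes / equivalent_americans, 1),
              "tonnes of CO2 per person per year")   # roughly 18 tonnes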

    5. The Performance Gap Narrows

    US vs China chatbot scores: US trend up from 1250 to 1385, China from 1150 to 1362, Jan 2024-Feb 2025.

    The United States may still have a commanding lead on the quantity of notable models released, but Chinese models are catching up on quality. This chart shows the narrowing performance gap on a chatbot benchmark. In January 2024, the top U.S. model outperformed the best Chinese model by 9.26 percent; by February 2025, this gap had narrowed to just 1.70 percent. The report found similar results on other benchmarks relating to reasoning, math, and coding.

    6. Humanity’s Last Exam

    Bar graph showing accuracy rates of various AI models, with "o1" having the highest at 8.80%.

    This year’s report highlights the undeniable fact that many of the benchmarks we use to gauge AI systems’ capabilities are “saturated” — the AI systems get such high scores on the benchmarks that they’re no longer useful. It has happened in many domains: general knowledge, reasoning about images, math, coding, and so on. Gil says she has watched with surprise as benchmark after benchmark has been rendered irrelevant. “I keep thinking [performance] is going to plateau, that it’s going to reach a point where we need new technologies or radically different architectures” to continue making progress, she says. “But that has not been the case.”

    In light of this situation, determined researchers have been crafting new benchmarks that they hope will challenge AI systems. One of those is Humanity’s Last Exam, which consists of extremely challenging questions contributed by subject-matter experts hailing from 500 institutions worldwide. So far, it’s still hard for even the best AI systems: OpenAI’s reasoning model, o1, has the top score so far with 8.8 percent correct answers. We’ll see how long that lasts.

    7. A Threat to the Data Commons

    Bar chart showing various robots.txt restriction categories in top web domains from 2016 to 2024.

    Today’s generative AI systems get their smarts by training on vast amounts of data scraped from the Internet, leading to the oft-stated idea that “data is the new oil” of the AI economy. As AI companies keep pushing the limits of how much data they can feed into their models, people have started worrying about “peak data,” and when we’ll run out of the stuff. One issue is that websites are increasingly restricting bots from crawling their sites and scraping their data (perhaps due to concerns that AI companies are profiting from the websites’ data while simultaneously killing their business models). Websites state these restrictions in machine-readable robots.txt files.
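
    For readers who haven’t looked inside one, here is a small, self-contained sketch of how such restrictions are expressed and honored, using Python’s standard-library parser and a made-up robots.txt; GPTBot and CCBot are examples of AI-associated crawlers that sites commonly single out.

        # Parse a hypothetical robots.txt that blocks AI crawlers but allows others.
        from urllib.robotparser import RobotFileParser

        rules = [
            "User-agent: GPTBot",      # OpenAI's crawler
            "Disallow: /",
            "",
            "User-agent: CCBot",       # Common Crawl's crawler
            "Disallow: /",
            "",
            "User-agent: *",           # everyone else
            "Allow: /",
        ]

        parser = RobotFileParser()
        parser.parse(rules)

        for bot in ("GPTBot", "CCBot", "Googlebot"):
            print(bot, "may crawl /articles/:", parser.can_fetch(bot, "/articles/"))
        # A well-behaved crawler would skip this site entirely for the first two bots.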

    This chart shows that 48 percent of data from top web domains is now fully restricted. But Gil says it’s possible that new approaches within AI may end the dependence on huge data sets. “I would expect that at some point the amount of data is not going to be as critical,” she says.

    8. Here Comes the Corporate Money

    Bar chart: AI investment trends by activity (2013-2024). Highest: 2021 ($360.73B), lowest: 2013 ($14.57B).

    The corporate world has turned on the spigot for AI funding over the past five years. And while overall global investment in 2024 didn’t match the giddy heights of 2021, it’s notable that private investment has never been higher. Of the $150 billion in private investment in 2024, another chart in the index (not shown here) indicates that about $33 billion went to investments in generative AI.

    9. Waiting for That Big ROI

    AI use impact on cost and revenue by function (2024): highest cost decrease in service operations, highest revenue increase in marketing.

    Presumably, corporations are investing in AI because they expect a big return on investment. This is the part where people talk in breathless tones about the transformative nature of AI and about unprecedented gains in productivity. But it’s fair to say that corporations haven’t yet seen a transformation that results in significant savings or substantial new profits. This chart, with data drawn from a McKinsey survey, shows that of those companies that reported cost reductions, most had savings of less than 10 percent. Of companies that had a revenue increase due to AI, most reported gains of less than 5 percent. That big payoff may still be coming, and the investment figures suggest that a lot of corporations are betting on it. It’s just not here yet.

    10. Dr. AI Will See You Soon, Maybe

    Box plot showing that GPT-4 alone scores highest in clinical diagnosis compared to physicians + GPT-4 and physicians alone.

    AI for science and medicine is a mini-boom within the AI boom. The report lists a variety of new foundation models that have been released to help researchers in fields such as materials science, weather forecasting, and quantum computing. Many companies are trying to turn AI’s predictive and generative powers into profitable drug discovery. And OpenAI’s o1 reasoning model recently scored 96 percent on a benchmark called MedQA, which has questions from medical board exams.

    But overall, this seems like another area of vast potential that hasn’t yet translated into significant real-world impact—in part, perhaps, because humans still haven’t figured out quite how to use the technology. This chart shows the results of a 2024 study that tested whether doctors would make more accurate diagnoses if they used GPT-4 in addition to their typical resources. They did not, and it also didn’t make them faster. Meanwhile, GPT-4 on its own outperformed both the human-AI teams and the humans alone.

    11. U.S. Policy Action Shifts to the States

    Graph of AI-related proposed bills in the U.S. rising from 0 to 221, 2016-2024. Very few bills have passed, including only 4 in 2024.

    This chart shows that in the United States there has been plenty of talk about AI in the halls of Congress and very little action. The report notes that the action has shifted to the state level, where 131 AI-related bills were passed into law in 2024. Of those state bills, 56 related to deepfakes, prohibiting their use either in elections or for spreading nonconsensual intimate imagery.

    Beyond the United States, Europe did pass its AI Act, which places new obligations on companies making AI systems that are deemed high risk. But the big global trend has been countries coming together to make sweeping and non-binding pronouncements about the role that AI should play in the world. So there’s plenty of talk all around.

    12. Humans Are Optimists

    Bar chart showing opinions on AI's impact on jobs, likely changing work habits more than replacing jobs.

    Whether you’re a stock photographer, a marketing manager, or a truck driver, there’s been plenty of public discourse about whether or when AI will come for your job. But in a recent global survey on attitudes about AI, the majority of people did not feel threatened by AI. While 60 percent of respondents from 32 countries believe that AI will change how they do their jobs, only 36 percent expected to be replaced. “I was really surprised” by these survey results, says Gil. “It’s very empowering to think, ‘AI is going to change my job, but I will still bring value.’” Stay tuned to find out if we all bring value by managing eager teams of AI employees.

  • How Ukraine’s Drones Are Beating Russian Jamming
    by Tereza Pultarova on 06. April 2025. at 14:00



    After the Estonian startup KrattWorks dispatched the first batch of its Ghost Dragon ISR quadcopters to Ukraine in mid-2022, the company’s officers thought they might have six months or so before they’d need to reconceive the drones in response to new battlefield realities. The 46-centimeter-wide flier was far more robust than the hobbyist-grade UAVs that came to define the early days of the drone war against Russia. But within a scant three months, the Estonian team realized their painstakingly fine-tuned device had already become obsolete.

    Rapid advances in jamming and spoofing—the only efficient defense against drone attacks—set the team on an unceasing marathon of innovation. Its latest technology is a neural-network-driven optical navigation system, which allows the drone to continue its mission even when all radio and satellite-navigation links are jammed. It began tests in Ukraine in December, part of a trend toward jam-resistant, autonomous UAVs (uncrewed aerial vehicles). The new fliers herald yet another phase in the unending struggle that pits drones against the jamming and spoofing of electronic warfare, which aims to sever links between drones and their operators. There are now tens of thousands of jammers straddling the front lines of the war, defending against drones that are not just killing soldiers but also destroying armored vehicles, other drones, industrial infrastructure, and even tanks.

    Two soldiers in full military dress stand on a hill while one of them releases a drone. Ukrainian troops tested KrattWorks’ Ghost Dragon drone in Estonia last year. KrattWorks

    “The situation with electronic warfare is moving extremely fast,” says Martin Karmin, KrattWorks’ cofounder and chief operations officer. “We have to constantly iterate. It’s like a cat-and-mouse game.”

    I met Karmin at the company’s headquarters in the outskirts of Estonia’s capital, Tallinn. Just a couple of hundred kilometers to the east is the tiny nation’s border with Russia, its former oppressor. At 38, Karmin is barely old enough to remember what life was like under Russian rule, but he’s heard plenty. He and his colleagues, most of them volunteer members of the Estonian Defense League, have “no illusions” about Russia, he says with a shrug.

    His company is as much about arming Estonia as it is about helping Ukraine, he acknowledges. Estonia is not officially at war with Russia, of course, but regions around the border between the two countries have for years been subjected to persistent jamming of satellite-based navigation systems, such as the European Union’s Galileo satellites, forcing occasional flight cancellations at Tartu airport. In November, satellite imagery revealed that Russia is expanding its military bases along the Baltic states’ borders.

    “We are a small country,” Karmin says. “Innovation is our only chance.”

    Navigating by Neural Network

    In KrattWorks’ spacious, white-walled workshop, a handful of engineers are testing software. On the large ocher desk that dominates the room, a selection of KrattWorks’ devices is on display, including a couple of fixed-wing, smoke-colored UAVs designed to serve as aerial decoys, and the Ghost Dragon ISR quadcopter, the company’s flagship product.

    Now in its third generation, the Ghost Dragon has come a long way since 2022. Its original command-and-control-band radio was quickly replaced with a smart frequency-hopping system that constantly scans the available spectrum, looking for bands that aren’t jammed. It allows operators to switch among six radio-frequency bands to maintain control and also send back video even in the face of hostile jamming.
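
    KrattWorks hasn’t published the hopping algorithm; as a rough sketch of the general idea, with made-up bands, thresholds, and spectrum readings:

        # Sketch of band selection (not KrattWorks' firmware): stay on a clean
        # band, hop to the quietest candidate when interference rises.
        CANDIDATE_BANDS_MHZ = [433, 868, 915, 1280, 2400, 5800]   # illustrative

        def pick_band(noise_floor_dbm: dict[int, float],
                      current_band: int,
                      jam_threshold_dbm: float = -85.0) -> int:
            if noise_floor_dbm[current_band] < jam_threshold_dbm:
                return current_band                      # current link still clean
            return min(CANDIDATE_BANDS_MHZ, key=lambda b: noise_floor_dbm[b])

        scan = {433: -70.0, 868: -60.0, 915: -95.0, 1280: -88.0, 2400: -55.0, 5800: -90.0}
        print(pick_band(scan, current_band=2400))   # 2,400 MHz is jammed -> hops to 915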

    A black quadcopter drone hovers in front of a coniferous tree. The Ghost Dragon reconnaissance drone from KrattWorks can navigate autonomously by detecting landmarks as it flies over them. KrattWorks

    The drone’s dual-band satellite-navigation receiver can switch among the four main satellite positioning services: GPS, Galileo, China’s BeiDou, and Russia’s GLONASS. It’s been augmented with a spoof-proof algorithm that compares the satellite-navigation input with data from onboard sensors. The system provides protection against sophisticated spoofing attacks that attempt to trick drones into self-destruction by persuading them they’re flying at a much higher altitude than they actually are.
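
    The company doesn’t detail that algorithm. The simplest version of such a cross-check, sketched here with invented numbers, compares the satellite fix against the drone’s own barometric altitude and rejects fixes that diverge implausibly:

        # Plausibility gate on a GNSS fix (illustrative, not KrattWorks' code).
        def gnss_fix_is_plausible(gnss_alt_m: float,
                                  baro_alt_m: float,
                                  max_divergence_m: float = 75.0) -> bool:
            return abs(gnss_alt_m - baro_alt_m) <= max_divergence_m

        # A spoofer claims 1,500 m while the barometer reads 210 m: drop the fix
        # and fall back to inertial dead reckoning or optical navigation.
        print(gnss_fix_is_plausible(gnss_alt_m=1500.0, baro_alt_m=210.0))   # False
        print(gnss_fix_is_plausible(gnss_alt_m=215.0, baro_alt_m=210.0))    # True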

    At the heart of the quadcopter’s matte grey body is a machine-vision-enabled computer running a 1-gigahertz Arm processor that provides the Ghost Dragon with its latest superpower: the ability to navigate autonomously, without access to any global navigation satellite system (GNSS). To do that, the computer runs a neural network that, like an old-fashioned traveler, matches the landmarks it sees against a map. More precisely, the drone compares real-time views from a downward-facing optical camera against stored satellite images to determine its position.
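
    KrattWorks’ network and map format aren’t public, but the general approach of matching a camera view against a stored map can be sketched with classical template matching; a synthetic image stands in for satellite imagery, and a real system would also have to cope with scale, rotation, lighting, and seasonal change.

        # Toy map-matching localizer (not the Ghost Dragon's neural network).
        import numpy as np
        import cv2

        rng = np.random.default_rng(0)
        mosaic = rng.integers(0, 256, size=(1024, 1024), dtype=np.uint8)   # stand-in map
        true_x, true_y = 600, 300
        frame = mosaic[true_y:true_y + 128, true_x:true_x + 128].copy()    # "camera" view

        scores = cv2.matchTemplate(mosaic, frame, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, (x, y) = cv2.minMaxLoc(scores)

        if best_score > 0.6:   # confidence gate before trusting the fix
            print(f"estimated view origin: ({x}, {y}), true: ({true_x}, {true_y})")
        else:
            print("no confident match; keep dead reckoning until a landmark is recognized")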

    A promotional video from KrattWorks depicts scenarios in which the company’s drones augment soldiers on offensive maneuvers.

    “Even if it gets lost, it can recognize some patterns, like crossroads, and update its position,” Karmin says. “It can make its own decisions, somewhat, either to return home or to fly through the jamming bubble until it can reestablish the GNSS link again.”

    Designing Drones for High Lethality per Cost

    Just as machine guns and tanks defined the First World War, drones have become emblematic of Ukraine’s struggle against Russia. It was the besieged Ukraine that first turned the concept of a military drone on its head. Instead of Predators and Reapers worth tens of millions of dollars each, Ukraine began purchasing huge numbers of off-the-shelf fliers worth a few hundred dollars apiece—the kind used by filmmakers and enthusiasts—and turned them into highly lethal weapons. A recent New York Times investigation found that drones account for 70 percent of deaths and injuries in the ongoing conflict.

    “We have much less artillery than Russia, so we had to compensate with drones,” says Serhii Skoryk, commercial director at Kvertus, a Kyiv-based electronic-warfare company. “A missile is worth perhaps a million dollars and can kill maybe 12 or 20 people. But for one million dollars, you can buy 10,000 drones, put four grenades on each, and they will kill 1,000 or even 2,000 people or destroy 200 tanks.”

    A man in camouflage uniform is surrounded by military gear, including drones. Near the Russian border in Kharkiv Oblast, a Ukrainian soldier prepared first-person-view drones for an attack on 16 January 2025. Jose Colon/Anadolu/Getty Images

    Electronic warfare techniques such as jamming and spoofing aim to neutralize the drone threat. A drone that gets jammed and loses contact with its pilot and also loses its spatial bearings will either crash or fly off randomly until its battery dies. According to the Royal United Services Institute, a U.K. defense think tank, Ukraine may be losing about 10,000 drones per month, mostly due to jamming. That number includes explosives-laden kamikaze drones that don’t reach their targets, as well as surveillance and reconnaissance drones like KrattWorks’ Ghost Dragon, meant for longer service.

    “Drones have become a consumable item,” says Karmin. “You will get maybe 10 or 15 missions out of a reconnaissance drone, and then it has to be already paid off because you will lose it sooner or later.”

    Tech minds on both sides of the conflict have therefore been working hard to circumvent electronic defenses. Russia took an unexpected step starting in early 2024, deploying hard-wired drones fitted with spools of optical fiber. Like a twisted variation on a child’s kite, the lethal UAVs can venture 20 or more kilometers away from the controller, the hair-thin fiber floating behind them, providing an unjammable connection.

    “Right now, there is no protection against fiber-optic drones,” Vadym Burukin, cofounder of the Ukrainian drone startup Huless, tells IEEE Spectrum. “The Russians scaled this solution pretty fast, and now they are saturating the battle front with these drones. It’s a huge problem for Ukraine.”

    A drone carrying a large cylindrical object flies over a blurry forest background. One way that drone operators can defeat electronic jamming is by communicating with their drone via a fiber-optic line that pays out of a spool as the drone flies. This is a tactic favored by Russian units, although this particular first-person-view drone is Ukrainian. It was demonstrated near Kyiv on 29 January 2025. Efrem Lukatsky/AP

    Ukraine, too, has experimented with optical fiber, but the technology didn’t take off, as it were. “The optical fiber costs upwards from $500, which is, in many cases, more than the drone itself,” Burukin says. “If you use it in a drone that carries explosives, you lose some of that capacity because you have the weight of the cable.” The extra weight also means less capacity for better-quality cameras, sensors, and computers in reconnaissance drones.

    Small Drones May Soon Be Making Kill-or-No-Kill Decisions

    Instead, Ukraine sees the future in autonomous navigation. This past July, kamikaze drones equipped with an autonomous navigation system from U.S. supplier Auterion destroyed a column of Russian tanks fitted with jamming devices.

    “It was really hard to strike these tanks because they were jamming everything,” says Burukin. “The drones with the autopilot were the only equipment that could stop them.”

    A diagram shows a quadcopter drone flying above a communications tower as it attempts to navigate to an enemy tank. Auterion’s “terminal guidance” system uses known landmarks to orient a drone as it seeks out a target. Auterion

    The technology used to hit those tanks is called terminal guidance and is the first step toward smart, fully autonomous drones, according to Auterion’s CEO, Lorenz Meier. The system allows the drone to directly overcome the jamming whether the protected target is a tank, a trench, or a military airfield.

    “If you lock on the target from, let’s say, a kilometer away and you get jammed as you approach the target, it doesn’t matter,” Meier says in an interview. “You’re not losing the target as a manual operator would.”
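
    Auterion hasn’t published its guidance law. As a bare-bones sketch of why a locked visual target needs no outside links, here is a proportional-guidance toy with invented gains: the steering command depends only on where the target sits in the camera frame.

        # Toy image-based terminal guidance (not Auterion's product).
        FRAME_W, FRAME_H = 1280, 720          # pixels, illustrative
        K_YAW, K_PITCH = 0.08, 0.08           # deg/s of command per pixel of offset

        def steering_command(target_px: tuple[int, int]) -> tuple[float, float]:
            """Drive the locked target toward the center of the frame."""
            dx = target_px[0] - FRAME_W // 2
            dy = target_px[1] - FRAME_H // 2
            return K_YAW * dx, K_PITCH * dy

        # Target detected right of and below center -> yaw right, pitch down.
        print(steering_command((900, 420)))   # roughly (20.8, 4.8) deg/s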

    The visual navigation technology trialed by KrattWorks is the next step and an innovation that has only reached the battlefield this year. Meier expects that by the end of 2025, firms including his own will introduce fully autonomous solutions encompassing visual navigation to overcome GPS jamming, as well as terminal guidance and smart target recognition.

    “The operator would only decide the area where to strike, but the decision about the target is made by the drone,” Meier explains. “It’s already done with guided shells, but with drones you can do that at mass scale and over much greater distances.”

    Auterion, founded in 2017 to produce drone software for civilian applications such as grocery delivery, threw itself into the war effort in early 2024, motivated by a desire to equip democratic countries with technologies to help them defend themselves against authoritarian regimes. Since then, the company has made rapid strides, working closely with Ukrainian drone makers and troops.

    “A missile worth perhaps a million dollars can kill maybe 12 or 20 people. But for one million dollars, you can buy 10,000 drones, put four grenades on each, and they will kill 1,000 or even 2,000 people or destroy 200 tanks.” —Serhii Skoryk, Kvertus

    But purchasing Western equipment is, in the long term, not affordable for Ukraine, a country with a per capita GDP of US $5,760—much lower than the European average of $38,270. Fortunately, Ukraine can tap its engineering workforce, which is among the largest in Europe. Before the war, Ukraine was a go-to place for Western companies looking to set up IT- and software-development centers. Many of these workers have since joined Ukraine’s DIY military-technology (“miltech”) development movement.

    An engineer and founder at a Ukrainian startup that produces long-range kamikaze drones, who didn’t want to be named because of security concerns, told Spectrum that the company began developing its own computers and autonomous navigation software for target tracking “just to keep the price down.” The engineer said Ukrainian startups offer advanced military-drone technology at a price that is a small fraction of what established competitors in the West are charging.

    Within three years of the February 2022 Russian invasion, Ukraine produced a world-class defense-tech ecosystem that is not only attracting Western innovators into its fold, but also regularly surpassing them. The keys to Ukraine’s success are rapid iterations and close cooperation with frontline troops. It’s a formula that’s working for Auterion as well. “If you want to build a leading product, you need to be where the product is needed the most,” says Meier. “That’s why we’re in Ukraine.”

    Burukin, from Ukrainian startup Huless, believes that autonomy will play a bigger role in the future of drone warfare than Russia’s optical fibers will. Autonomous drones not only evade jamming; their range is limited only by their battery capacity. They can also carry more explosives or better cameras and sensors than the wired drones can. On top of that, they don’t place high demands on their operators.

    “In the perfect world, the drone should take off, fly, find the target, strike it, and report back on the task,” Burukin says. “That’s where the development is heading.”

    The cat-and-mouse game is nowhere near over. Companies including KrattWorks are already thinking about the next innovation that would make drone warfare cheaper and more lethal. By creating a drone mesh network, for example, they could send a sophisticated intelligence, surveillance, and reconnaissance drone followed by a swarm of simpler kamikaze drones to find and attack a target using visual navigation.

    “You can send, like, 10 drones, but because they can fly themselves, you don’t need a superskilled operator controlling every single one of these,” notes KrattWorks’ Karmin, who keeps tabs on tech developments in Ukraine with a mixture of professional interest, personal empathy, and foreboding. Rarely does a day go by that he does not think about the expanding Russian military presence near Estonia’s eastern borders.

    “We don’t have a lot of people in Estonia,” he says. “We will never have enough skilled drone pilots. We must find another way.”

  • Video Friday: RIVR Delivers Your Package
    by Evan Ackerman on 04. April 2025. at 16:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
    ICRA 2025: 19–23 May 2025, ATLANTA, GA.
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TEXAS
    RSS 2025: 21–25 June 2025, LOS ANGELES
    ETH Robotics Summer School: 21–27 June 2025, GENEVA
    IAS 2025: 30 June–4 July 2025, GENOA, ITALY
    ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
    IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
    IFAC Symposium on Robotics: 15–18 July 2025, PARIS
    RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
    RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
    CLAWAR 2025: 5–7 September 2025, SHENZHEN
    CoRL 2025: 27–30 September 2025, SEOUL
    IEEE Humanoids: 30 September–2 October 2025, SEOUL
    World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
    IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

    Enjoy today’s videos!

    I love the platform and I love the use case, but this particular delivery method is...odd?

    [ RIVR ]

    This is just the beginning of what people and physical AI can accomplish together. To recognize business value from collaborative robotics, you have to understand what people do well, what robots do well, and how they best come together to create productivity. DHL and Robust.AI are partnering to define the future of human–robot collaboration.

    [ Robust AI ]

    Teleoperated robotic characters can perform expressive interactions with humans, relying on the operators’ experience and social intuition. In this work, we propose to create autonomous interactive robots by training a model to imitate operator data. Our model is trained on a dataset of human–robot interactions, where an expert operator is asked to vary the interactions and mood of the robot, while the operator commands and the poses of the human and robot are recorded.

    [ Disney Research Studios ]

    Introducing THEMIS V2, our all-new full-size humanoid robot. Standing at 1.6 meters with 40 degrees of freedom, THEMIS V2 now features enhanced 6 DoF arms and advanced 7 DoF end-effectors, along with an additional body-mounted stereo camera and up to 200 Tera Operations per Second (TOPS) of onboard AI computing power. These upgrades deliver exceptional capabilities in manipulation, perception, and navigation, pushing humanoid robotics to new heights.

    [ Westwood ]

    BMW x Figure Update: This isn’t a test environment—it’s real production operations. Real-world robots are advancing our Helix AI and strengthening our end-to-end autonomy to deploy millions of robots.

    [ Figure ]

    On 13 March, at WorldMinds 2025, in the Kaufleuten Theater of Zurich, our team demonstrated for the first time two autonomous vision-based racing drones. It was an epic journey to prepare for this event, given the poor lighting conditions and the safety constraints of a theater filled with more than 500 people! The background screen visualizes in real time the observations of the AI algorithm of each drone. No map, no IMU, no SLAM!

    [ University of Zurich (UZH) ]

    Unitree releases the Dex5 dexterous hand. A single hand has 20 degrees of freedom (16 active plus 4 passive), enables smooth backdrivability (direct force control), and is equipped with 94 highly sensitive touch points (optional).

    [ Unitree ]

    You can say “real-world manipulation” all you want, but until it’s actually in the real world, I’m not buying it.

    [ 1X ]

    Developed by Pudu X-Lab, FlashBot Arm elevates the capabilities of our flagship FlashBot by blending advanced humanoid manipulation and intelligent delivery capabilities, powered by cutting-edge embodied AI. This powerful combination allows the robot to autonomously perform a wide range of tasks across diverse settings, including hotels, office buildings, restaurants, retail spaces, and health care facilities.

    [ Pudu Robotics ]

    If you ever wanted to manipulate a trilby with 25 robots, a solution now exists.

    [ Paper ] via [ EPFL Reconfigurable Robotics Lab ] published by [ IEEE Robotics and Automation Letters ]

    We’ve been sharing videos from the Suzumori Endo Robotics Lab at the Institute of Science Tokyo for many years, and Professor Suzumori is now retiring.

    Best wishes to Professor Suzumori!

    [ Suzumori Endo Lab ]

    No matter the vehicle, traditional control systems struggle when unexpected challenges—like damage, unforeseen environments, or new missions—push them beyond their design limits. Our Learning Introspective Control (LINC) program aims to fundamentally improve the safety of mechanical systems, such as ground vehicles, ships, and robotics, using various machine learning methods that require minimal computing power.

    [ DARPA ]

    NASA’s Perseverance rover captured new images of multiple dust devils while exploring the rim of the Jezero crater on Mars. The largest dust devil was approximately 210 feet wide (65 meters). In this Mars Report, atmospheric scientist Priya Patel explains what dust devils can teach us about weather conditions on the Red Planet.

    [ NASA ]

  • A Guide to IEEE Education Week’s Events
    by Angelique Parashis on 03. April 2025. at 18:00



    As technology evolves, staying current with the latest advancements and skills remains crucial. Continuous learning is essential for maintaining a competitive edge in the tech industry.

    IEEE Education Week, taking place from 6 to 12 April, emphasizes the importance of lifelong learning. During the week, technical professionals, students, educators, and STEM enthusiasts can access a variety of events, resources, and special offers from IEEE organizational units, societies, and councils. Whether you’re a seasoned professional or just starting your career, participating in IEEE Education Week can help you reassess and realign your skills to meet market demands.

    Here are some of the offerings:

    The IEEE Education Week website lists special offers and discounts. The IEEE Learning Network, for example, is offering a 25 percent discount on some of its popular course programs in technical areas including artificial intelligence, communications, and IEEE standards; use the code ILNIEW25, available until 30 April.

    Be sure to complete the IEEE Education Week quiz by noon ET on 11 April for a chance to earn a digital badge, which can be displayed on social media.

    Don’t miss this opportunity to invest in your future and explore IEEE’s vast educational offerings. To learn more about IEEE Education Week, watch this video and follow the event on Facebook or X.

  • Discover the Role of Filter Technologies in Advanced Communication Systems
    by Cadence on 02. April 2025. at 18:37



    Learn about carrier aggregation, microcell overlapping, and massive MIMO implementation. Delve into the world of surface acoustic wave (SAW) and bulk acoustic wave (BAW) filters and understand their strengths, limitations, and applications in the evolving 5G/6G landscape.

    Key highlights:

    • Uncover the design challenges of new technologies in the mobile ecosystem
    • Explore the field of SAW and BAW filters and discover their roles and performance nuances
    • Take a look at the impact of temperature on filter technologies and how it shapes their applications
    • Learn how simulation technology can bridge the gap between design concepts and real-world implementation

    Stay ahead in 5G/6G innovation and learn how filters shape seamless communication.

    Register now for this free webinar!

  • Nvidia Blackwell Ahead in AI Inference, AMD Second
    by Samuel K. Moore on 02. April 2025. at 15:00



    In the latest round of machine learning benchmark results from MLCommons, computers built around Nvidia’s new Blackwell GPU architecture outperformed all others. But AMD’s latest spin on its Instinct GPUs, the MI325X, proved a match for the Nvidia H200, the product it was meant to counter. The comparable results were mostly on tests of one of the smaller-scale large language models, Llama2 70B (for 70 billion parameters). However, in an effort to keep up with a rapidly changing AI landscape, MLPerf added three new benchmarks to better reflect where machine learning is headed.

    MLPerf runs benchmarking for machine learning systems in an effort to provide an apples-to-apples comparison between computer systems. Submitters use their own software and hardware, but the underlying neural networks must be the same. There are a total of 11 benchmarks for servers now, with three added this year.

    It has been “hard to keep up with the rapid development of the field,” says Miro Hodak, the cochair of MLPerf Inference. ChatGPT appeared only in late 2022, OpenAI unveiled its first large language model (LLM) that can reason through tasks last September, and LLMs have grown exponentially—GPT3 had 175 billion parameters, while GPT4 is thought to have nearly 2 trillion. As a result of the breakneck innovation, “we’ve increased the pace of getting new benchmarks into the field,” says Hodak.

    The new benchmarks include two LLMs. The popular and relatively compact Llama2 70B is already an established MLPerf benchmark, but the consortium wanted something that mimicked the responsiveness people are expecting of chatbots today. So the new benchmark “Llama2-70B Interactive” tightens the requirements. Computers must produce at least 25 tokens per second under any circumstance and cannot take more than 450 milliseconds to begin an answer.
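    To make those two targets concrete, here is a minimal sketch—in Python, and not MLCommons’ actual LoadGen harness—of what checking a run against the interactive requirements might look like. The data structure and function names are illustrative only.

```python
# Illustrative check of the two service-level targets described above
# (not MLCommons' actual LoadGen harness): every query must start within
# 450 milliseconds and sustain at least 25 generated tokens per second.

from dataclasses import dataclass

@dataclass
class QueryResult:
    ttft_ms: float        # time to first output token, in milliseconds
    total_tokens: int     # tokens generated for this query
    total_time_s: float   # wall-clock time to finish the query, in seconds

def meets_interactive_targets(results, max_ttft_ms=450.0, min_tps=25.0):
    """Return True only if every query meets both latency targets."""
    for r in results:
        tokens_per_second = r.total_tokens / r.total_time_s
        if r.ttft_ms > max_ttft_ms or tokens_per_second < min_tps:
            return False
    return True

# One query that passes and one that takes too long to start answering.
run = [
    QueryResult(ttft_ms=300, total_tokens=500, total_time_s=12.0),
    QueryResult(ttft_ms=600, total_tokens=500, total_time_s=10.0),
]
print(meets_interactive_targets(run))  # False: the second query misses the 450 ms target
```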

    Seeing the rise of “agentic AI”—networks that can reason through complex tasks—MLPerf sought to test an LLM that would have some of the characteristics needed for that. They chose Llama3.1 405B for the job. That LLM has what’s called a wide context window. That’s a measure of how much information—documents, samples of code, et cetera—it can take in at once. For Llama3.1 405B, that’s 128,000 tokens, more than 30 times as much as Llama2 70B.

    The final new benchmark, RGAT, is what’s called a graph attention network. It classifies information in a network. For example, the dataset used to test RGAT consists of scientific papers, all linked by relationships among authors, institutions, and fields of study, making up 2 terabytes of data. RGAT must classify the papers into just under 3,000 topics.

    Blackwell, Instinct Results

    Nvidia continued its domination of MLPerf benchmarks through its own submissions and those of some 15 partners, such as Dell, Google, and Supermicro. Both its first- and second-generation Hopper architecture GPUs—the H100 and the memory-enhanced H200—made strong showings. “We were able to get another 60 percent performance over the last year” from Hopper, which went into production in 2022, says Dave Salvator, director of accelerated computing products at Nvidia. “It still has some headroom in terms of performance.”

    But it was Nvidia’s Blackwell architecture GPU, the B200, that really dominated. “The only thing faster than Hopper is Blackwell,” says Salvator. The B200 packs in 36 percent more high-bandwidth memory than the H200, but, even more important, it can perform key machine learning math using numbers with a precision as low as 4 bits instead of the 8 bits Hopper pioneered. Lower-precision compute units are smaller, so more fit on the GPU, which leads to faster AI computing.
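    A toy example helps show why lower precision pays off. The sketch below applies simple symmetric integer quantization to a weight array—an illustration of the general idea only, not Nvidia’s FP4 format or its kernels—and prints the nominal storage and rounding error at 8 and 4 bits.

```python
import numpy as np

def quantize_symmetric(weights, bits):
    """Map float weights to signed integers of the given bit width,
    returning the integer codes and the scale used to reconstruct them."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax
    codes = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return codes, scale

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)        # stand-in for a weight tensor

for bits in (8, 4):
    codes, scale = quantize_symmetric(w, bits)
    error = np.abs(w - codes * scale).mean()
    nominal_bytes = bits * w.size // 8              # storage if packed at this width
    print(f"{bits}-bit: {nominal_bytes} bytes (vs {w.nbytes} in float32), "
          f"mean rounding error {error:.4f}")
```

    Halving the bit width halves the memory a weight occupies and lets more arithmetic units fit in the same silicon, at the cost of coarser values.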

    In the Llama3.1 405B benchmark, an eight-B200 system from Supermicro delivered nearly four times the tokens per second of an eight-H200 system by Cisco. And the same Supermicro system was three times as fast as the quickest H200 computer at the interactive version of Llama2 70B.

    Nvidia used its combination of Blackwell GPUs and Grace CPU, called GB200, to demonstrate how well its NVL72 data links can integrate multiple servers in a rack, so they perform as if they were one giant GPU. In an unverified result the company shared with reporters, a full rack of GB200-based computers delivers 869,200 tokens per second on Llama2 70B. The fastest system reported in this round of MLPerf was an Nvidia B200 server that delivered 98,443 tokens per second.

    AMD is positioning its latest Instinct GPU, the MI325X, as providing performance competitive with Nvidia’s H200. MI325X has the same architecture as its predecessor, MI300, but it adds even more high-bandwidth memory and memory bandwidth—256 gigabytes and 6 terabytes per second (a 33 percent and 13 percent boost, respectively).

    Adding more memory is a play to handle larger and larger LLMs. “Larger models are able to take advantage of these GPUs because the model can fit in a single GPU or a single server,” says Mahesh Balasubramanian, director of data-center GPU marketing at AMD. “So you don’t have to have that communication overhead of going from one GPU to another GPU or one server to another server. When you take out those communications, your latency improves quite a bit.” AMD was able to take advantage of the extra memory through software optimization to boost the inference speed of DeepSeek-R1 eightfold.

    On the Llama2 70B test, an eight-GPU MI325X computer came within 3 to 7 percent of the speed of a similarly tricked-out H200-based system. And on image generation the MI325X system was within 10 percent of the Nvidia H200 computer.

    AMD’s other noteworthy mark this round came from its partner Mangoboost, which showed a nearly fourfold performance boost on the Llama2 70B test by spreading the computation across four computers.

    Intel has historically put forth CPU-only systems in the inference competition to show that for some workloads you don’t really need a GPU. This time around saw the first data from Intel’s Xeon 6 chips, which were formerly known as Granite Rapids and are made using Intel’s 3-nanometer process. At 40,285 samples per second, the best image-recognition result for a dual-Xeon 6 computer was about one-third the performance of a Cisco computer with two Nvidia H100s.

    Compared with Xeon 5 results from October 2024, the new CPU provides about an 80 percent boost on that benchmark and an even bigger boost on object detection and medical imaging. Since it first started submitting Xeon results in 2021 (the Xeon 3), the company has achieved an elevenfold boost in performance on Resnet.

    For now, it seems Intel has quit the field in the AI accelerator-chip battle. Its alternative to the Nvidia H100, Gaudi 3, did not make an appearance in the new MLPerf results, nor in version 4.1, released last October. Gaudi 3 got a later-than-planned release because its software was not ready. In the opening remarks at Intel Vision 2025, the company’s invite-only customer conference, newly minted CEO Lip-Bu Tan seemed to apologize for Intel’s AI efforts. “I’m not happy with our current position,” he told attendees. “You’re not happy either. I hear you loud and clear. We are working toward a competitive system. It won’t happen overnight, but we will get there for you.”

    Google’s TPU v6e chip also made a showing, though the results were restricted to the image-generation task. At 5.48 queries per second, the 4-TPU system saw a 2.5-times boost over a similar computer using its predecessor TPU v5e in the October 2024 results. Even so, 5.48 queries per second was roughly in line with a similarly sized Lenovo computer using Nvidia H100s.


    This post was corrected on 2 April 2025 to give the right value for high-bandwidth memory in the MI325X. It was corrected again on 7 April, to make the chart easier to read.

  • Four Ways Engineers Are Trying to Break Physics
    by Dan Garisto on 02. April 2025. at 14:00



    In particle physics, the smallest problems often require the biggest solutions.

    Along the border of France and Switzerland, around a hundred meters underneath the countryside, protons speed through a 27-kilometer ring—about seven times the length of the Indy 500 circuit—until they crash into protons going in the opposite direction. These particle pileups produce a petabyte of data every second, the most interesting of which is poured into data centers, accessible to thousands of physicists worldwide.

    The Large Hadron Collider (LHC), arguably the largest experiment ever engineered, is needed to probe the universe’s smallest constituents. In 2012, two teams at the LHC discovered the elusive Higgs boson, the particle whose existence confirmed 50-year-old theories about the origins of mass. It was a scientific triumph that led to a Nobel Prize and worldwide plaudits.

    Since then, experiments at the LHC have focused on better understanding how the newfound Higgs fits into the Standard Model, particle physicists’ best theoretical description of matter and forces—minus gravity. “The Standard Model is beautiful,” says Victoria Martin, an experimental physicist at the University of Edinburgh. “Because it’s so precise, all the little niggles stand out.”

    Tunnel with cylindrical tube stretching into distance The Large Hadron Collider lives in a 27-kilometer tunnel ring, about 100 meters underneath France and Switzerland. It was used to discover the Higgs boson, but further research may require something larger still. Maximilien Brice/CERN

    The minor quibbles physicists have about the Standard Model could be explained by new particles: Dark matter, the invisible material whose gravity shapes the universe, is thought to be made of heretofore undiscovered particles. But such new particles may be out of reach for the LHC, even after it undergoes upgrades that are set to be completed later this decade. To address these lingering questions, particle physicists have been planning its successors. These next-generation colliders will improve on the LHC by smashing protons at higher energies or by making more precise collisions with muons, antimuons, electrons, and positrons. In doing so, they’ll allow researchers to peek into a whole new realm of physics.

    Martin herself is particularly interested in the Higgs, and in learning exactly how the particle responsible for mass behaves. One possible find: Properties of the Higgs suggest that the universe might not be stable in the long, long term. [Editor’s note: About 10⁷⁹⁰ years. Other problems may be more pressing.] “We don’t really know exactly what we’re going to find,” Martin says. “But that’s okay, because it’s science, it’s research.”

    There are four main proposals for new colliders, and each one comes with its own slew of engineering challenges. To build them, engineers would need to navigate tricky regional geology, design accelerating cavities, handle the excess heat within the cavities, and develop powerful new magnets to whip the particles through these cavities. But perhaps more daunting are the geopolitical obstacles: coordinating multinational funding commitments and slogging through bureaucratic muck.

    Collider projects take years to plan and billions of dollars to finance. The fastest that any of the four machines would come on line is the late 2030s. But now is when physicists and engineers are making key scientific and engineering decisions about what’s coming next.

    Supercolliders at a glance


    Large Hadron Collider

    Size (circumference): 27 kilometers

    Collision energy: 13,600 giga-electron volts

    Colliding particles: protons and ions

    Luminosity: 2 × 10³⁴ collisions per square centimeter per second (5 × 10³⁴ for high-luminosity upgrade)

    Location: Switzerland–France border

    Start date: 2008–

    International Linear Collider

    Size (length): 31 km

    Collision energy: 500 GeV

    Colliding particles: electrons and positrons

    Luminosity (at peak energy): 3 × 10³⁴ collisions per cm² per second

    Location: Iwate, Japan

    Earliest start date: 2038


    Muon collider

    Size (circumference): 4.5 km (or 10 km)

    Collision energy: 3,000 GeV (or 10,000 GeV)

    Colliding particles: muons and antimuons

    Luminosity: 2 × 10³⁵ collisions per cm² per second

    Location: possibly Fermilab

    Earliest start date: 2045 (or in the mid-2050s)


    Future Circular Collider-ee | FCC-hh

    Size (circumference): 91 km

    Collision energy: 240 GeV | 85,000 GeV

    Colliding particles: electrons and positrons | protons

    Luminosity: 8.5 × 10³⁴ | 30 × 10³⁴ collisions per cm² per second

    Location: Switzerland–France border

    Earliest start date: 2046 | 2070


    Circular Electron Positron Collider | Super proton–proton Collider (SPPC)

    Size (circumference): 100 km

    Collision energy: 240 GeV | 100,000 GeV

    Colliding particles: electrons and positrons | protons

    Luminosity: 8.3 × 10³⁴ | 13 × 10³⁴ collisions per cm² per second

    Location: China

    Earliest start date: 2035 | 2060s

    Possible supercolliders of the future

    The LHC collides protons and other hadrons. Hadrons are like beanbags full of quarks and gluons, which spray everywhere upon collision.

    Next-generation colliders have two ways to improve on the LHC: They can go to higher energies or higher precision. Higher energies provide more data by producing more particles—potentially new, heavy ones. Higher-precision collisions give physicists cleaner data with a better signal-to-noise ratio because the particle crash produces less debris. Either approach could reveal new physics beyond the Standard Model.

    Three of the new colliders would improve on the LHC’s precision by colliding electrons and their antimatter counterparts, positrons, instead of hadrons. These particles are more like individual marbles—much lighter, and not made up of any smaller constituents. Compared with the collisions between messy, beanbag-like hadrons, a collision between electrons and positrons is much cleaner. After taking data for years, some of those colliders could be converted to smash protons as well, though at energies about eight times as high as those of the LHC.

    These new colliders range from technically mature to speculative. One such speculative option is to smash muons, electrons’ heavier cousins, which have never been collided before. In 2023, an influential panel of particle physicists recommended that the United States pursue development of such a machine, in a so-called “muon shot.” If it is built, a muon collider would likely be based at Fermilab, the center of particle physics in the United States.

    A muon collider “can bring us outside of the world that we know,” says Daniele Calzolari, a physicist working on muon collider design at CERN, the European Organization for Nuclear Research. “We don’t know exactly what everything will look like, but we believe we can make it work.”

    While muon colliders have remained conceptual for more than 50 years, their potential has long excited and intrigued physicists. Muons are heavy compared with electrons, almost as heavy as protons, but they lack the mess of quarks and gluons, so collisions between muons could be both high energy and high precision.

    A shiny metallic machine component set up in a lab setting. Superconducting radio-frequency cavities are used in particle colliders to apply electric fields to charged particles, speeding them up toward each other until they smash together. Newer methods of making these cavities are seamless, providing more-precise steering and, presumably, better collisions. Reidar Hahn/Fermi

    The trouble is that muons decay rapidly—in a mere 2.2 microseconds while at rest—so they have to be cooled, accelerated, and collided before they expire. Preliminary studies suggest a muon collider is possible, but key technologies, like powerful high-field solenoid magnets used for cooling, still need to be developed. In March 2025, Calzolari and his colleagues submitted an internal proposal for a preliminary demonstration of the cooling technology, which they hope will happen before the end of the decade.

    The accelerator that could theoretically come on line the soonest would be the International Linear Collider (ILC) in Iwate, Japan. The ILC would send electrons and positrons down straight tunnels where the particles would collide to produce Higgs bosons that are easier to detect than at the LHC. The collider’s design is technically mature, so if the Japanese government officially approved the project, construction could begin almost immediately. But after multiple delays by the government, the ILC remains in a sort of planning purgatory, looking more and more unlikely.

    Chart of Standard Model particles showing quarks, leptons, gauge bosons, and the Higgs boson. The Standard Model of particle physics is the current best theory of all the understood matter and forces in our universe (except gravity). The model works extremely well, but scientists also know that it is incomplete. The next generation of supercolliders might give a glimpse at what’s beyond the Standard Model.

    So, the two colliders, both technically mature, that have perhaps the clearest path to construction are China’s Circular Electron Positron Collider (CEPC) and CERN’s Future Circular Collider (FCC-ee).

    CERN’s FCC-ee would be a 91-km ring, designed to initially collide electrons and positrons to study the parameters of particles like the Higgs in fine detail (the “ee” indicates collisions between electrons and positrons). Compared with the LHC’s collisions of protons or heavy ions, those between electrons and positrons “are much cleaner, so you can have a more precise measurement,” says Michael Benedikt, the head of the FCC-ee effort. After about a decade of operation—enough time to gather data and develop the needed magnets—it would be upgraded to collide protons and search for new physics at much higher energies (and then become known as the FCC-hh, for hadrons). The FCC-ee’s feasibility report just concluded, and CERN’s member states are now left deciding whether to pursue the project.

    China’s CEPC would similarly be a 100-km ring designed to collide electrons and positrons for the first 18 years or so. And much like the FCC, a proton or other hadron upgrade is in the works after that. Later this year, Chinese researchers plan to submit the CEPC for official approval by the Chinese government as part of the next five-year plan. As the two colliders (and their proton upgrades) are considered for construction in the next few years, policymakers will be thinking about more than just their potential for discovery.

    CEPC and FCC-ee are, in this sense, less abstract physics experiments and more engineering projects with concrete design challenges.

    Laying the groundwork

    When particles zip around the curve of a collider, they lose energy—much like a car braking on a racetrack. The effect is particularly pronounced for lightweight particles like electrons and positrons. To reduce this energy loss from sharp turns, CEPC and FCC-ee are both planned to have enormous tunnels, which, if built, would be among the longest in the world. The construction cost of such an enormous tunnel would be several billion U.S. dollars, roughly one-third of the total collider price.
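    The scaling behind that energy loss is standard accelerator physics: the energy a particle radiates on each lap grows with the fourth power of its energy, falls with the fourth power of its mass, and eases off as the bending radius grows—which is why light electrons and positrons need such a gentle curve. The textbook relation (not a figure from either collider’s design report) is:

```latex
% Energy radiated per turn by a relativistic particle of charge e, energy E,
% mass m, and bending radius \rho (textbook synchrotron-radiation scaling):
\Delta E_{\text{turn}} = \frac{e^{2}}{3\varepsilon_{0}\,\rho}
    \left(\frac{E}{mc^{2}}\right)^{4}
% For electrons this works out to roughly
% \Delta E_{\text{turn}}[\mathrm{GeV}] \approx
%     8.85\times10^{-5}\; E^{4}[\mathrm{GeV}^{4}] / \rho[\mathrm{m}],
% so doubling the radius halves the loss per turn, while a proton
% (about 1,836 times as heavy) radiates roughly 10^{13} times less
% at the same energy and bending radius.
```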

    Finding a place to bury a 90-km ring is not easy, especially in Switzerland. The proposed path of the FCC-ee has an average depth of 200 meters, with a dip to 500 meters under Lake Geneva, fit snugly between the Jura Mountains to the northwest and the Prealps to the east. The land there was once covered by a sea, which left behind sedimentary rock—a mixture of sandstone and shale known as molasse. “We’ve done so much tunneling at CERN before. We were quite confident about the molasse rock,” says Liam Bromiley, a civil engineer at CERN.

    But the FCC-ee’s path also takes it through deposits of limestone, which is permeable and can hold karsts, or cavities, full of water. “If you hit one of those, you could end up flooding the tunnel,” Bromiley says. During the next two years, if the project is green-lit, engineers will drill boreholes into the limestone to determine whether there are karsts that can be avoided.

    Map showing collider sizes in Geneva, Switzerland, and Qinhuangdao, China. FCC-ee would be a 91-km ring spanning underneath Switzerland and France, near the current Large Hadron Collider. One of the proposed locations for the CEPC is near the northern port city of Qinhuangdao, where the 100-km-circumference collider would be buried underground. Chris Philpot

    CEPC, in contrast, has a much looser spatial constraint and can choose from nearly anywhere in China. Three main sites are being considered: Qinhuangdao (a northern port city), Changsha (a metropolis in central China), and Huzhou (a city near Shanghai). According to Jie Gao, a particle physicist at the Institute of High Energy Physics, in Beijing, the ideal location will have hard rock, like granite, and low seismic activity. Additionally, Gao says, they want a site with good infrastructure to create a “science city” ideal for an international community of physicists.

    The colliders’ carbon footprints are also on the minds of physicists. One potential energy-saving measure: redirecting excess heat from operations. “In the past we used to throw it into the atmosphere,” Benedikt says. In recent years, heated water from one of the LHC’s cooling stations has kept part of the commune of Ferney-Voltaire warm during the winters, and Benedikt says the FCC-ee would expand these environmental efforts.

    Getting up to speed

    If the civil-engineering challenges are met, physicists will rely on a spate of technologies to accelerate, focus, and collide electrons and positrons at CEPC and FCC-ee more precisely and efficiently than they could at the LHC.

    When both types of particles are first produced from their sources, they start off at a comparatively low energy, around 4 giga-electron volts. To get them up to speed, electrons and positrons are sent through superconducting radio-frequency (SRF) cavities—gleaming metal bubbles strung together like beads of a necklace, which apply an electric field that pushes the charged particles forward.

    Cutaway diagrams of Future Circular Collider and Circular Electron Positron Collider designs. Both China’s Circular Electron Positron Collider (CEPC) [bottom] and CERN’s Future Circular Collider (FCC-ee) [top] have preliminary designs of the insides of their tunnels, including the collider itself, associated vacuum and control equipment, and detectors. Chris Philpot

    In the past, SRF cavities were welded together, which inherently left imperfections that led to beam instabilities. “You can never obtain a perfect surface along this weld,” Benedikt says. FCC-ee researchers have explored several techniques to create cavities without seams, including hydroforming, which is widely used for the components of high-end sports cars. A metal tube is placed in a pressurized cell and compressed against a die by liquid. The resulting cavity has no seams and is smooth as blown glass.

    To improve efficiency, engineers focus on the machines that power the SRF cavities, machines called klystrons. Klystrons have historically had efficiencies that peak around 65 percent, but design advances, such as the machines’ ability to bunch electrons together, are on track to reach efficiencies of 80 percent. “The efficiency of the klystron is becoming very important,” Gao says. Over 10 years of operation, these savings could amount to 1 terawatt hour—about enough electricity to power all of China for an hour.
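    A rough plausibility check of that figure (using assumed numbers for RF power and running hours, not design-report values) shows how a 15-point efficiency gain adds up:

```python
# Back-of-the-envelope check of the klystron-efficiency savings quoted above.
# The RF power delivered to the beams and the annual running hours are
# assumptions chosen for illustration, not FCC-ee or CEPC design values.

rf_power_mw = 60          # assumed RF power that must reach the beams, in megawatts
hours_per_year = 5500     # assumed operating hours per year
years = 10

grid_power_old = rf_power_mw / 0.65   # wall-plug power at 65 percent efficiency
grid_power_new = rf_power_mw / 0.80   # wall-plug power at 80 percent efficiency

saved_twh = (grid_power_old - grid_power_new) * hours_per_year * years / 1e6
print(f"Saved over {years} years: {saved_twh:.2f} TWh")   # roughly 1 TWh
```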

    Another efficiency boost comes from focusing on the tunnel design. As electrons and positrons follow the curve of the ring, they will lose a considerable amount of energy, so SRF cavities will be placed around the ring to boost particle energies. The lost energy will be emitted as potent synchrotron radiation—about 10,000 times as much radiation as is emitted by protons circling the LHC today. “You do not want to send the synchrotron radiation into the detectors,” Benedikt says. To avoid this fate, neither FCC-ee nor CEPC will be perfectly circular. Shaped a bit like a racetrack, both colliders will have about 1.5-km-long straight sections before an interaction point. Other options are also on the table—in the past, researchers have even used repurposed steel from scrapped World War II battleships to shield particle detectors from radiation.

    Both CEPC and FCC-ee will be massive data-generating machines. Unlike the LHC, which is regularly stopped to insert new particles, the next-generation colliders will be fed with a continuous stream of particles, allowing them to stay in “collision mode” and take more data.

    At a collider, data is a function of “luminosity”—a measure of how many potential collisions are packed into each square centimeter every second. The more particle collisions, the “brighter” the collider. Firing particles at each other is a little like trying to get two bullets to collide—they often miss each other, which limits the luminosity. But physicists have a variety of strategies to squeeze more electrons and positrons into smaller areas to achieve more of these unlikely collisions. Compared with the Large Electron-Positron (LEP) collider of the 1990s, the new machines will produce 100,000 times as many Z bosons—carriers of the weak force, which governs radioactive decay. More Z bosons means more data. “The FCC-ee can produce all the data that were accumulated in operation over 10 years of LEP within minutes,” Benedikt says.
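    Behind that word is a simple expression—the generic collider-luminosity formula, not a machine-specific design equation—relating beam parameters to the collision rate:

```latex
% Generic instantaneous luminosity: N_1, N_2 = particles per bunch,
% n_b = bunches per beam, f = revolution frequency,
% \sigma_x, \sigma_y = transverse beam sizes at the interaction point.
\mathcal{L} = \frac{N_{1} N_{2}\, n_{b}\, f}{4\pi\, \sigma_{x}\sigma_{y}}
% The expected event rate for a process with cross-section \sigma is
% R = \mathcal{L}\,\sigma, so more particles per bunch, more bunches,
% or tighter focusing at the crossing point all raise the data rate.
```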

    Back to protons

    While both the FCC-ee and CEPC would start with electrons and positrons, they are designed to eventually collide protons. These upgrades are called FCC-hh and Super proton-proton Collider (SPPC). Using protons, FCC-hh and SPPC would reach a collision energy of 100,000 GeV, roughly an order of magnitude higher than the LHC’s 13,600 GeV. Though the collisions would be messy, their high energy would allow physicists to “explore fully new territory,” Benedikt says. While there’s no guarantee, physicists hope that territory teems with discoveries-in-waiting, such as dark-matter particles, or strange new collisions where the Higgs recursively interacts with itself many times.

    One pro of protons is that they are over 1,800 times as heavy as electrons, so they emit far less radiation as they follow the curve of the collider ring. But this extra heft comes with a substantial cost: Bending protons’ paths requires even stronger superconducting magnets.

    Magnet development has been the downfall of colliders before. In the early 1980s, a planned collider named Isabelle was scrapped because magnet technology was not far enough along. The LHC’s magnets are made from a strong alloy of niobium-titanium, wound together into a coil that produces magnetic fields when subjected to a current. These coils can produce field strengths over 8 teslas. The strength of the magnet pushes its two halves apart with a force of nearly 600 tons per meter. “If you have an abrupt movement of the turns in the coil by as little as 10 micrometers,” the entire magnet can fail, says Bernhard Auchmann, an expert on magnets at CERN.

    It is unlikely that any collider—whether based in China, at CERN, the United States, or Japan—will be able to go it alone.

    Future magnets for FCC-hh and SPPC will need to have at least twice the magnetic field strength, about 16 to 20 T, pushing the limits of materials and physics. Auchmann points to three possible paths forward. The most straightforward option might be “niobium three tin” (Nb3Sn). Substituting tin for titanium allows the metal to host magnetic fields up to 16 T but makes it quite brittle, so you can’t “clamp the hell out of it,” Auchmann says. One possible solution involves placing Nb3Sn into a protective steel endoskeleton that prevents it from crushing itself.

    Then there are high-temperature superconductors. Some copper oxide–based magnets can exceed 20 T, but they are either too fragile or don’t produce magnetic fields that are constant enough. Currently, these materials are expensive, but demand from fusion startups, which also require these types of magnets, may push the price down, Auchmann says.

    Finally, there is a class of iron-based high-temperature superconductors that is being championed by physicists in China, thanks to the low price of iron and manufacturing-process improvements. “It’s cheap,” Gao says. “This technology is very promising.” Over the next decade or so, physicists will work on each of these materials, and hope to settle on one direction for next-generation magnets.

    Time and money

    While FCC-ee and CEPC (as well as their proton upgrades) share many of the same technical specifications, they differ dramatically in two critical factors: timelines and politics.

    Construction for CEPC could begin in two years; the FCC-ee would need to wait about another decade. The difference comes down largely to the fact that CERN has a planned upgrade to the LHC—enabling it to collect 10 times as much data—which will consume resources until nearly 2040. China, by contrast, is investing heavily in basic research and has the funds immediately at hand.

    The abstruse physics that happens at colliders is never as far from political realities on Earth as it seems. Japan’s ILC is in limbo because of budget issues. The muon collider is subject to the whims of the highly divided 119th U.S. Congress. Last year, a representative for Germany criticized the FCC-ee for being unaffordable, and CERN continues to struggle with the politics of including Russian scientists. Tensions between China and the United States are similarly on the rise following the Trump administration’s tariffs.

    How physicists plan to tackle these practical problems remains to be seen. But it is unlikely that any collider—whether based in China, at CERN, the United States, or Japan—will be able to go it alone. In addition to the tens of billions of dollars for construction and operation of the new facility, the physics expertise needed to run it and perform complex experiments at scale must be global. “By definition, it’s an international project,” Gao says. “The door is wide open.”

    This article was updated on 11 April 2025.

  • Complex Haptics Deliver a Pinch, a Stretch, or a Tap
    by Gwendolyn Rak on 02. April 2025. at 13:00



    Most haptic interfaces today are limited to simple vibrations. While visual displays and audio systems have continued to progress, those using our sense of touch have largely stagnated. Now, researchers have developed a haptics system that creates more complex tactile feedback. Beyond just buzzing, the device simulates sensations like pinching, stretching, and tapping for a more realistic experience.

    “The sensation of touch is the most personal connection that you can have with another individual,” says John Rogers, a professor at Northwestern University in Evanston, Ill., who led the project. “It’s really important, but it’s much more difficult than audio or video.”

    Co-led by Rogers and Yonggang Huang, also a professor at Northwestern, the work is largely geared toward medical applications. But the technology has a wide range of potential uses, including virtual or augmented reality and the ability to feel the texture of clothing fabric or other items while shopping online. The research was published in the journal Science on 27 March.

    A Nuanced Sense of Touch

    Today’s haptic interfaces mostly rely on vibrating actuators, which are fairly simple to construct. “It’s a great place to start,” says Rogers. But going beyond vibration could help add the vibrancy of real-world interactions to the technology, he adds.

    These types of interactions require more-sophisticated mechanical forces, combining normal forces directed perpendicular to the skin’s surface with shear forces directed parallel to it. Whether through vibration or applied pressure, forces directed vertically into the skin have been the main focus of haptic designs, according to Rogers. But these don’t fully engage the many receptors embedded in our skin.

    The researchers aimed to build an actuator that offers full freedom of motion, which they achieved with “very old physics,” Rogers says—namely, electromagnetism. The basic design of the device consists of three nested copper coils and a small magnet. Running current through the coils generates a magnetic field that then moves the magnet, which delivers force to the skin.

    “What we’ve put together is an engineering embodiment [of the physics] that provides a very compact force delivery system and offers full programmability in direction, amplitude, and temporal characteristics,” says Rogers. For a more elaborate setup, the researchers also developed a version that uses a collection of four magnets with different orientations of north and south poles. This creates even more complex sensations of pinching, stretching, and twisting.
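    As a sketch of what that programmability can mean in practice—purely illustrative, with a made-up gain constant and an idealized coil model rather than the Northwestern team’s actual electronics—the toy code below drives three orthogonal coils so the net force on the magnet follows an arbitrary programmed pattern, here a brief “tap” into the skin followed by a lateral “stretch.”

```python
import numpy as np

# Toy model of a three-coil actuator: assume, simplistically, that each coil
# pushes the magnet along its own axis with a force proportional to its current.
# The gain constant and the waveforms below are illustrative, not measured
# parameters of the device described in the article.

COIL_AXES = np.eye(3)            # x, y, z coil axes as unit vectors
NEWTONS_PER_AMP = 0.05           # assumed force per ampere per coil

def force_from_currents(currents):
    """Net force vector (newtons) produced by the three coil currents (amperes)."""
    return NEWTONS_PER_AMP * COIL_AXES.T @ currents

def currents_for_force(target_force):
    """Coil currents needed to produce a desired force vector."""
    return np.linalg.solve(NEWTONS_PER_AMP * COIL_AXES.T, target_force)

# Program a brief "tap" normal to the skin followed by a slow lateral "stretch".
t = np.linspace(0.0, 1.0, 200)
tap     = np.outer(np.exp(-((t - 0.2) / 0.02) ** 2), [0.0, 0.0, 0.3])    # short z pulse
stretch = np.outer(np.clip((t - 0.5) * 2.0, 0.0, 1.0), [0.2, 0.0, 0.0])  # slow x ramp
desired = tap + stretch                                   # desired force (N) over time

drive = np.array([currents_for_force(f) for f in desired])  # per-coil current commands
assert np.allclose(force_from_currents(drive[50]), desired[50])
print(drive.shape)   # (200, 3): 200 time steps, one current per coil
```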

    Haptics at Your Fingertips—or Anywhere

    Hand wearing finger splints and wrist support against a plain background. Because fingertips are highly sensitive, only small forces are needed for this application. John A. Rogers/Northwestern University

    Although much of the previous work in haptics has focused on fingertips and the hands, these devices could be placed elsewhere on the body, including the back, chest, or arms. However, these applications may have different requirements. Compared with places like the back, the fingertips are highly sensitive—both in terms of the force needed and the spatial density of receptors.

    “The fingertips are probably the most challenging in terms of density, but they’re easiest in terms of the forces that you need to deliver,” says Rogers. In other use cases, delivering enough power may be a challenge, he acknowledges.

    The force that can be delivered may also be limited by the size of the coils, says Gregory Gerling, a systems engineering professor at the University of Virginia and former chair of the IEEE Technical Committee on Haptics. The coil size dictates how much force you can generate, and at a certain point, the device won’t be wearable. However, he believes it is sufficient for VR applications.

    Gerling, an IEEE senior member, finds the use of magnetism in multiple directions interesting. Compared with other approaches that are based on hydraulics or air pressure, this system doesn’t require pumping fluids or gases. “You can be kind of untethered,” Gerling says. “Overall, it’s a very interesting, novel device, and maybe it takes the field in a slightly new direction.”

    Applications in VR, Neuropathy, and More

    The clearest application of the device is probably in virtual or augmented reality, says Rogers. These environments now have highly sophisticated audio and video inputs, “but the tactile component of that experience is still a work in progress,” he says.

    Their lab, however, is primarily focused on medical applications, including sensory substitution for patients who have lost sensation in a part of the body. A complex haptics interface could reproduce the sensation in another part of the body.

    For example, nerve damage in people with diabetic neuropathy makes it difficult for them to walk without looking at their feet. The lab is experimenting with placing an array of pressure sensors into the base of these patients’ shoes, then reproducing the pattern of pressure using a haptic array mounted on their upper thighs, where they still have sensation. The researchers are working with a rehabilitation facility in Chicago to test the approach, mainly with this population.

    Continuing to develop these medical applications will be a focus moving forward, says Rogers. In terms of engineering, he would like to further miniaturize the actuators to make dense arrays possible in regions of the body like the fingertips.

    Feeling the Music

    Additionally, the researchers explored the possibility of using the device to increase engagement in musical performances. Apart from perhaps feeling vibrations of the bass line, performances usually rely on sight and sound. Adding a tactile element could make for a more immersive experience, or help people with hearing impairment engage with the music.

    With the current tech, basic vibrating actuators can change the frequency of vibration to match the notes being played. While this can convey a simple melody, it lacks the richness of different instruments and musical components.

    The researchers’ full-freedom-of-motion actuator can convey a more vibrant sound. Voice, guitar, and drums, for instance, can each be converted into a delivery mechanism for a particular force. Like with vibration alone, the frequency of each force can be modulated to match the music. The experiment was exploratory, Rogers says, but it exploits the advanced capabilities of the system.
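    One way to picture that mapping—again a sketch under assumed channel assignments, not the mapping used in the published experiment—is to give each musical channel its own force direction and let that channel’s loudness envelope modulate the force amplitude over time:

```python
import numpy as np

# Illustrative mapping of three musical channels onto force directions for an
# actuator like the one sketched earlier. The channel assignments and envelopes
# are made up for this sketch; the published experiment's mapping may differ.

FORCE_AXES = {
    "drums":  np.array([0.0, 0.0, 1.0]),   # percussive hits as taps into the skin
    "voice":  np.array([1.0, 0.0, 0.0]),   # melody as shear along one axis
    "guitar": np.array([0.0, 1.0, 0.0]),   # harmony as shear along the other axis
}

def force_timeline(envelopes, max_force=0.3):
    """Combine per-channel loudness envelopes (values 0..1) into a time series
    of 3-D force vectors, capped at max_force newtons."""
    n = len(next(iter(envelopes.values())))
    forces = np.zeros((n, 3))
    for name, envelope in envelopes.items():
        forces += np.outer(envelope, FORCE_AXES[name])
    norms = np.maximum(np.linalg.norm(forces, axis=1, keepdims=True), 1e-9)
    return forces * np.minimum(1.0, max_force / norms)

t = np.linspace(0.0, 2.0, 400)
timeline = force_timeline({
    "drums":  (np.sin(4 * np.pi * t) > 0.95).astype(float),   # sparse beats
    "voice":  0.5 + 0.5 * np.sin(2 * np.pi * t),               # slow melodic swell
    "guitar": 0.4 * np.abs(np.sin(6 * np.pi * t)),             # faster strumming
})
print(timeline.shape)   # (400, 3) force vectors covering two seconds of music
```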

  • How Dairy Robots Are Changing Work for Cows (and Farmers)
    by Evan Ackerman on 01. April 2025. at 20:00



    “Mooooo.”

    This dairy barn is full of cows, as you might expect. Cows are being milked, cows are being fed, cows are being cleaned up after, and a few very happy cows are even getting vigorously scratched behind the ears. “I wonder where the farmer is,” remarks my guide, Jan Jacobs. Jacobs doesn’t seem especially worried, though—the several hundred cows in this barn are being well cared for by a small fleet of fully autonomous robots, and the farmer might not be back for hours. The robots will let him know if anything goes wrong.

    At one of the milking robots, several cows are lined up, nose to tail, politely waiting their turn. The cows can get milked by robot whenever they like, which typically means more frequently than the twice a day at a traditional dairy farm. Not only is getting milked more often more comfortable for the cows, but they also produce about 10 percent more milk when the milking schedule is completely up to them.

    “There’s a direct correlation between stress and milk production,” Jacobs says. “Which is nice, because robots make cows happier and therefore, they give more milk, which helps us sell more robots.”

    Jan Jacobs is the human-robot interaction design lead for Lely, a maker of agricultural machinery. Founded in 1948 in Maassluis, Netherlands, Lely deployed its first Astronaut milking robot in the early 1990s. The company has since developed other robotic systems that assist with cleaning, feeding, and cow comfort, and the Astronaut milking robot is on its fifth generation. Lely is now focused entirely on robots for dairy farms, with around 135,000 of them deployed around the world.

    Essential Jobs on Dairy Farms

    The weather outside the barn is miserable. It’s late fall in the Netherlands, and a cold rain is gusting in from the sea, which is probably why the cows have quite sensibly decided to stay indoors and why the farmer is still nowhere to be found. Lely requires that dairy farmers who adopt its robots commit to letting their cows move freely between milking, feeding, and resting, as well as inside and outside the barn, at their own pace. “We believe that free cow traffic is a core part of the future of farming,” Jacobs says as we watch one cow stroll away from the milking robot while another takes its place. This is possible only when the farm operates on the cows’ schedule rather than a human’s.

    A conventional dairy farm relies heavily on human labor. Lely estimates that repetitive daily tasks represent about a third of the average workday of a dairy farmer. In the morning, the cows are milked for the first time. Most dairy cows must be milked at least twice a day or they’ll become uncomfortable, and so the herd will line up on their own. Traditional milking parlors are designed to maximize human milking efficiency. A milking carousel, for instance, slowly rotates cows as they’re milked so that the dairy worker doesn’t have to move between stalls.

    Cows entering and exiting a Lely Astronaut milking robot in a modern dairy farm setting.

    Automated cow milking machine in a dairy farm, cow in position being milked. “We were spending 6 hours a day milking,” explains dairy farmer Josie Rozum, whose 120-cow herd at Takes Dairy Farm uses a pair of Astronaut A5 milking robots. “Now that the robots are handling all of that, we can focus more on animal care and comfort.” Lely

    An experienced human using well-optimized equipment can attach a milking machine to a cow in just 20 to 30 seconds. The actual milking takes only a few minutes, but with the average small dairy farm in North America providing a home for several hundred cows, milking typically represents a time commitment of 4 to 6 hours per day.

    There are other jobs that must be done every day at a dairy. Cows are happier with continuous access to food, which means feeding them several times a day. The feed is a mix of roughage (hay), silage (grass), and grain. The cows will eat all of this, but they prefer the grain, and so it’s common to see cows sorting their food by grabbing a mouthful and throwing it up into the air. The lighter roughage and silage flies farther than the grain does, leaving the cow with a pile of the tastier stuff as the rest gets tossed out of reach. This makes “feed pushing” necessary to shove the rest of the feed back within reach of the cow.

    And of course there’s manure. A dairy cow produces an average of 68 kilograms of manure a day. All that manure has to be collected and the barn floors regularly cleaned.

    Dairy Industry 4.0

    The amount of labor needed to operate a dairy meant that until the early 1900s, most family farms could support only about eight cows. The introduction of the first milking machines, called bucket milkers, helped farmers milk 10 cows per hour instead of 4 by the mid-1920s. Rural electrification furthered dairy automation starting in the 1950s, and since then, both farm size and milk production have increased steadily. In the 1930s, a good dairy cow produced 3,600 kilograms of milk per year. Today, it’s almost 11,000 kilograms, and Lely believes that robots are what will enable small dairy farms to continue to scale sustainably.

    Lely

    But dairy robots are expensive. A milking robot can cost several hundred thousand dollars, plus an additional US $5,000 to $10,000 per year in operating costs. The Astronaut A5, Lely’s latest milking robot, uses a laser-guided robot arm to clean the cow’s udder before attaching teat cups one at a time. While the cow munches on treats, the Astronaut monitors her milk output, collecting data on 32 parameters, including indicators of the quality of the milk and the health of the cow. When milking is complete, the robot cleans the udder again, and the cow is free to leave as the robot steam cleans itself in preparation for the next cow.

    Lely argues that although the initial cost is higher than that of a traditional milking parlor, the robots pay for themselves over time through higher milk production (due primarily to increased milking frequency) and lower labor costs. Lely’s other robots can also save on labor. The Vector mobile robot handles continuous feeding and feed pushing, and the Discovery Collector is a robotic manure vacuum that keeps the floors clean.

    Automated feeding robot is loaded with food by a small overhead crane before it leaves to deliver feed to cows inside a modern barn. At Takes Dairy Farm, Rozum and her family used to spend several hours per day managing food for the cows. “The feeding robot is another amazing piece of the puzzle for our farm that allows us to focus on other things.” Takes Family Farm

    For most dairy farmers, though, making more money is not the main reason to get a robot, explains Marcia Endres, a professor in the department of animal science at the University of Minnesota. Endres specializes in dairy-cattle management, behavior, and welfare, and studies dairy robot adoption. “When we first started doing research on this about 12 years ago, most of the farms that were installing robots were smaller farms that did not want to hire employees,” Endres says. “They wanted to do the work just with family labor, but they also wanted to have more flexibility with their time. They wanted a better lifestyle.”

    Flexibility was key for the Takes family, who added Lely robots to their dairy farm in Ely, Iowa, four years ago. “When we had our old milking parlor, everything that we did as a family was always scheduled around milking,” says Josie Rozum, who manages the farm and a creamery along with her parents—Dan and Debbie Takes—and three brothers. “With the robots, we can prioritize our personal life a little bit more—we can spend time together on Christmas morning and know that the cows are still getting milked.”

    Takes Family Dairy Farm’s 120-cow herd is milked by a pair of Astronaut A5 robots, with a Vector and three Discovery Collectors for feeding and cleaning. “They’ve become a crucial part of the team,” explains Rozum. “It would be challenging for us to find outside help, and the robots keep things running smoothly.” The robots also add sustainability to small dairy farms, and not just in the short term. “Growing up on the farm, we experienced the hard work, and we saw what that commitment did to our parents,” Rozum explains. “It’s a very tough lifestyle. Having the robots take over a little bit of that has made dairy farming more appealing to our generation.”

    Takes Dairy Farm

    Of the 25,000 dairy farms in the United States, Endres estimates about 10 percent have robots. This is about a third of the adoption rate in Europe, where farms tend to be smaller, so the cost of implementing the robots is lower. Endres says that over the last five years, she’s seen a shift toward robot adoption at larger farms with over 500 cows, due primarily to labor shortages. “These larger dairies are having difficulty finding employees who want to milk cows—it’s a very tedious job. And the robot is always consistent. The farmers tell me, ‘My robot never calls in sick, and never shows up drunk.’ ”

    Endres is skeptical of Lely’s claim that its robots are responsible for increased milk production. “There is no research that proves that cows will be more productive just because of robots,” she says. It may be true that farms that add robots do see increased milk production, she adds, but it’s difficult to measure the direct effect that the robots have. “I have many dairies that I work with where they have both a robotic milking system and a conventional milking system, and if they are managing their cows well, there isn’t a lot of difference in milk production.”

    Cow using an automated brush for grooming inside a modern barn. The Lely Luna cow brush helps to keep cows’ skin healthy. It’s also relaxing and enjoyable, so cows will brush themselves several times a day. Lely

    The robots do seem to improve the cows’ lives, however. “Welfare is not just productivity and health—it’s also the affective state, the ability to have a more natural life,” Endres says. “Again, it’s hard to measure, but I think that on most of these robot farms, their affective state is improved.” The cows’ relationship with humans changes too, comments Endres. When the cows no longer associate humans with being told where to go and what to do all the time, they’re much more relaxed and friendly toward people they meet. Rozum agrees. “We’ve noticed a tremendous change in our cows’ demeanor. They’re more calm and relaxed, just doing their thing in the barn. They’re much more comfortable when they can choose what to do.”

    Cows Versus Robots

    Cows are curious and clever animals, and have the same instinct that humans have when confronted with a new robot: They want to play with it. Because of this, Lely has had to cow-proof its robots, modifying their design and programming so that the machines can function autonomously around cows. Like many mobile robots, Lely’s dairy robots include contact-sensing bumpers that will pause the robot’s motion if it runs into something. On the Vector feeding robot, Lely product engineer René Beltman tells me, they had to add a software option to disable the bumper. “The cows learned that, ‘oh, if I just push the bumper, then the robot will stop and put down more feed in my area for me to eat.’ It was a free buffet. So you don’t want the cows to end up controlling the robot.” Emergency stop buttons had to be relocated so that they couldn’t be pressed by questing cow tongues.
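
    Lely hasn’t published its control software, but the behavior Beltman describes maps onto a simple, configurable bumper handler. The Python sketch below is a hypothetical illustration only; the class, flag, and method names are assumptions, not Lely’s API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class BumperConfig:
        # Hypothetical option mirroring what Beltman describes: when True,
        # bumper presses are ignored while the robot is dispensing feed,
        # so cows can't stop the robot to get an extra helping.
        ignore_while_feeding: bool = True

    class FeedRobotController:
        def __init__(self, config: BumperConfig):
            self.config = config
            self.dispensing = False

        def on_bumper_pressed(self) -> str:
            """Decide how to react to a bumper contact."""
            if self.dispensing and self.config.ignore_while_feeding:
                return "continue"  # likely a curious cow; keep driving and feeding
            return "pause"         # unexpected obstacle: stop and wait

    # Example: a cow leans on the bumper before and during dispensing.
    controller = FeedRobotController(BumperConfig(ignore_while_feeding=True))
    print(controller.on_bumper_pressed())  # "pause" -- not dispensing yet
    controller.dispensing = True
    print(controller.on_bumper_pressed())  # "continue" -- ignore the nudge
    ```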

    There’s also a social component to cow-robot interaction. Within their herd, cows have a well-established hierarchy, and the robots need to work within this hierarchy to do their jobs. For example, a cow won’t move out of the way if it thinks that another cow is lower in the hierarchy than it is, and it will treat a robot the same way. The engineers had to figure out how the Discovery Collector could drive back and forth to vacuum up manure without getting blocked by cows. “In our early tests, we’d use sensors to have the robot stop to avoid running into any of the cows,” explains Jacobs. “But that meant that the robot became the weakest one in the hierarchy, and it would just end up crying in the corner because the cows wouldn’t move for it. So now, it doesn’t stop.”

    Cows resting in pens with a robot cleaning the floor in a modern barn setting. One of the dirtiest jobs on a dairy farm is handled by the Discovery Collector, an autonomous manure vacuum. The robot relies on wheel odometry and ultrasonic sensors for navigation because it’s usually covered in manure. Evan Ackerman

    “We make the robot drive slower for the first week, when it’s being introduced to a new herd,” adds Beltman. “That gives the cows time to figure out that the robot is at the top of the hierarchy.”

    Besides maintaining its dominance at the top of the herd, the current generation of Lely robots doesn’t interact much with the cows, but that’s changing, Jacobs tells me. Right now, when a robot is driving through the barn, it makes a beeping sound to let the cows know it’s coming. Lely is looking into how to make these sounds more enjoyable for the cows. “This was a recent revelation for me,” Jacobs says. “We’re not just designing interactions for humans. The cows are our users, too.”

    Human-Robot Interaction

    Last year, Jacobs and researchers from Delft University of Technology, in the Netherlands, presented a paper at the IEEE Human-Robot Interaction (HRI) Conference exploring this concept of robot behavior development on working dairy farms. The researchers visited robotic dairies, interviewed dairy farmers, and held workshops within Lely to establish a robot code of conduct—a guide that Lely’s designers and engineers use when considering how their robots should look, sound, and act, for the benefit of both humans and cows. On the engineering side, this includes practical things like colors and patterns for lights and different types of sounds so that information is communicated consistently across platforms.

    But there’s much more nuance to making a robot seem “reliable” or “friendly” to the end user, since such things are not only difficult to define but also difficult to implement in a way that’s appropriate for dairy farmers, who prioritize functionality.

    Jacobs doesn’t want his robots to try to be anyone’s friend—not the cow’s, and not the farmer’s. “The robot is an employee, and it should have a professional relationship,” he says. “So the robot might say ‘Hi,’ but it wouldn’t say, ‘How are you feeling today?’ ” What’s more important is that the robots are trustworthy. For Jacobs, instilling trust is simple: “You cannot gain trust by doing tricks. If your robot is reliable and predictable, people will trust it.”

    Automated milking machine attached to cow's udders, with cow standing on a slotted floor. The electrically driven, pneumatically balanced robotic arm that the Lely Astronaut uses to milk cows is designed to withstand accidental (or intentional) kicks. Lely

    The real challenge, Jacobs explains, is that Lely is largely on its own when it comes to finding the best way of integrating its robots into the daily lives of people who may have never thought they’d have robot employees. “There’s not that much knowledge in the robot world about how to approach these problems,” Jacobs says. “We’re working with almost 20,000 farmers who have a bigger robot workforce than a human workforce. They’re robot managers. And I don’t know that there necessarily are other companies that have a customer base of normal people who have strategic dependence on robots for their livelihood. That is where we are now.”

    From Dairy Farmers to Robot Managers

    With the additional time and flexibility that the robots enable, some dairy farmers have been able to diversify. On our way back to Lely’s headquarters, we stop at Farm Het Lansingerland, owned by a Lely customer who has added a small restaurant and farm shop to his dairy. Large windows look into the barn so that restaurant patrons can watch the robots at work, caring for the cows that produce the cheese that’s on the menu. A self-guided tour takes you right up next to an Astronaut A5 milking robot, while signs on the floor warn of Vector feeding robots on the move. “This farmer couldn’t expand—this was as many cows as he’s allowed to have here,” Jacobs explains to me over cheese sandwiches. “So, he needs to have additional income streams. That’s why he started these other things. And the robots were essential for that.”

    The farmer is an early adopter—someone who’s excited about the technology and actively interested in the robots themselves. But most of Lely’s tens of thousands of customers just want a reliable robotic employee, not a science project. “We help the farmer to prepare not just the environment for the robots, but also the mind,” explains Jacobs. “It’s a complete shift in their way of working.”

    Besides managing the robots, the farmer must also learn to manage the massive amount of data that the robots generate about the cows. “The amount of data we get from the robots is a game changer,” says Rozum. “We can track milk production, health, and cow habits in real time. But it’s overwhelming. You could spend all day just sitting at the computer, looking at data and not get anything else done. It took us probably a year to really learn how to use it.”

    The most significant advantages to farmers come from using the data for long-term optimization, says the University of Minnesota’s Endres. “In a conventional barn, the cows are treated as a group,” she says. “But the robots are collecting data about individual animals, which lets us manage them as individuals.” By combining data from a milking robot and a feeding robot, for example, farmers can close the loop, correlating when and how the cows are fed with their milk production. Lely is doing its best to simplify this type of decision making, says Jacobs. “You need to understand what the data means, and then you need to present it to the farmer in an actionable way.”
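
    As a rough illustration of what “closing the loop” could look like in practice, the pandas sketch below joins hypothetical per-cow records exported from a milking robot and a feeding robot and checks how feed intake tracks milk yield. The column names and numbers are invented for the example; they are not Lely data.

    ```python
    import pandas as pd

    # Hypothetical daily records from a milking robot and a feeding robot.
    milk = pd.DataFrame({
        "cow_id": [101, 102, 103, 101, 102, 103],
        "date": pd.to_datetime(["2024-05-01"] * 3 + ["2024-05-02"] * 3),
        "milk_kg": [31.2, 28.4, 35.0, 30.8, 29.1, 34.2],
    })
    feed = pd.DataFrame({
        "cow_id": [101, 102, 103, 101, 102, 103],
        "date": pd.to_datetime(["2024-05-01"] * 3 + ["2024-05-02"] * 3),
        "feed_kg": [22.5, 20.1, 24.8, 22.0, 20.9, 24.1],
    })

    # Close the loop: line up each cow's feed intake with her milk yield per day.
    merged = milk.merge(feed, on=["cow_id", "date"])
    print(merged.groupby("cow_id")[["feed_kg", "milk_kg"]].mean())
    print("feed/milk correlation:", round(merged["feed_kg"].corr(merged["milk_kg"]), 2))
    ```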

    A Robotic Dairy


    Illustration of an automated dairy farm with milking machines, feed dispensers, and cows in various areas.

    All dairy farms are different, and farms that decide to give robots a try will often start with just one or two. A highly roboticized dairy barn might look something like this illustration, with a team of many different robots working together to keep the cows comfortable and happy.

    A: One Astronaut A5 robot can milk up to 60 cows. After the Astronaut cleans the teats, a laser sensor guides a robotic arm to attach the teat cups. Milking takes just a few minutes.

    B: In the feed kitchen, the Vector robot recharges itself while different ingredients are loaded into its hopper and mixed together. Mixtures can be customized for different groups of cows.

    C: The Vector robot dispenses freshly mixed food in small batches throughout the day. A laser measures the height of leftover food to make sure that the cows are getting the right amounts.

    D: The Discovery Collector is a mop and vacuum for cow manure. It navigates the barn autonomously and returns to its docking station to remove waste, refill water, and wirelessly recharge.

    E: As it milks, the Astronaut is collecting a huge amount of data—32 different parameters per teat. If it detects an issue, the farmer is notified, helping to catch health problems early.

    F: Automated gates control meadow access and will keep a cow inside if she’s due to be milked soon. Cows are identified using RFID collars, which also track their behavior and health. (A minimal version of this gating rule is sketched below.)
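
    Lely hasn’t detailed its gate logic, but the behavior described in item F reduces to a simple rule on the time since a cow’s last milking. The sketch below is a hypothetical illustration; the eight-hour milking interval and two-hour holding window are assumptions for the example, not Lely parameters.

    ```python
    from datetime import datetime, timedelta

    # Assumed policy: let a cow out to the meadow only if she is not
    # expected at the milking robot within the next two hours.
    MILKING_INTERVAL = timedelta(hours=8)
    KEEP_INSIDE_WINDOW = timedelta(hours=2)

    def allow_meadow_access(last_milked: datetime, now: datetime) -> bool:
        next_milking_due = last_milked + MILKING_INTERVAL
        return (next_milking_due - now) > KEEP_INSIDE_WINDOW

    now = datetime(2024, 5, 1, 12, 0)
    print(allow_meadow_access(datetime(2024, 5, 1, 7, 0), now))  # True: next milking is hours away
    print(allow_meadow_access(datetime(2024, 5, 1, 3, 0), now))  # False: due soon, keep her inside
    ```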

    A Sensible Future for Dairy Robots

    After lunch, we stop by Lely headquarters, where bright red life-size cow statues guard the entrance and all of the conference rooms are dairy themed. We get comfortable in Butter, and I ask Jacobs and Beltman what the future holds for their dairy robots.

    In the near term, Lely is focused on making its existing robots more capable. Its latest feed-pushing robot is equipped with lidar and stereo cameras, which allow it to autonomously navigate around large farms without needing to follow a metal strip bolted to the ground. A new overhead camera system will leverage AI to recognize individual cows and track their behavior, while also providing farmers with an enormous new dataset that could allow Lely’s systems to help farmers make more nuanced decisions about cow welfare. The potential of AI is what Jacobs seems most excited about, although he’s cautious as well. “With AI, we’re suddenly going to take away an entirely different level of work. So, we’re thinking about doing research into the meaningfulness of work, to make sure that the things that we do with AI are the things that farmers want us to do with AI.”

    “The idea of AI is very intriguing,” comments Rozum. “I think AI could help to simplify things for farmers. It would be a tool, a resource. But we know our cows best, and a farmer’s judgment has to be there too. There’s just some component of dairy farming that you cannot take the human out of. Robots are not going to be successful on a farm unless you have good farmers.”

    Lely is aware of this and knows that its robots have to find the right balance between being helpful and taking over. “We want to make sure not to take away the kinds of interactions that give dairy farmers joy in their work,” says Beltman. “Like feeding calves—every farmer likes to feed the calves.” Lely does sell an automated calf feeder that many dairy farmers buy, which illustrates the point: What’s the best way of designing robots to give humans the flexibility to do the work that they enjoy?

    “This is where robotics is going,” Jacobs tells me as he gives me a lift to the train station. “As a human, you could have two other humans and six robots, and that’s your company.” Many industries, he says, look to robots with the objective of minimizing human involvement as much as possible so that the robots can generate the maximum amount of value for whoever happens to be in charge.

    Dairy farms are different. Perhaps that’s because the person buying the robot is the person who most directly benefits from it. But I wonder if the concern over automation of jobs would be mitigated if more companies chose to emphasize the sustainability and joy of work equally with profit. Automation doesn’t have to be zero-sum—if implemented thoughtfully, perhaps robots can make work easier, more efficient, and more fun, too.

    Jacobs certainly thinks so. “That’s my utopia,” he says. “And we’re working in the right direction.”

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 09. February 2022. at 15:31



    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
    —Andrew Ng, CEO & Founder, Landing AI

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
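
    Ng’s description amounts to a simple workflow: freeze the model code, iterate on the data, and re-measure. The scikit-learn sketch below is a generic outline of that loop, not anything specific to Landing AI; the “data work” step is a stand-in for whatever relabeling or augmentation actually applies.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # A fixed, unremarkable model: in the data-centric view the architecture
    # is treated as solved and stays the same on every iteration.
    def train_and_score(X, y):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return accuracy_score(y_te, model.predict(X_te))

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    baseline = train_and_score(X, y)

    # Data-centric iteration: improve the data (this copy is a placeholder for
    # relabeling noisy examples, fixing inconsistencies, adding targeted data),
    # keep the model fixed, and measure again.
    y_cleaned = y.copy()
    improved = train_and_score(X, y_cleaned)

    print(f"baseline={baseline:.3f}, after data work={improved:.3f}")
    ```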

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
    —Andrew Ng

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
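
    A minimal sketch of the kind of tool Ng describes: when the same image has been labeled more than once, disagreement between annotators flags it for review. The image IDs, annotators, and labels below are invented for the example.

    ```python
    from collections import defaultdict

    # Hypothetical annotations: (image_id, annotator, label)
    annotations = [
        ("img_001", "ann_a", "scratch"),
        ("img_001", "ann_b", "scratch"),
        ("img_002", "ann_a", "pit_mark"),
        ("img_002", "ann_b", "scratch"),   # the annotators disagree here
        ("img_003", "ann_a", "pit_mark"),
        ("img_003", "ann_b", "pit_mark"),
    ]

    labels_per_image = defaultdict(set)
    for image_id, _, label in annotations:
        labels_per_image[image_id].add(label)

    # Surface only the inconsistently labeled images for targeted relabeling.
    inconsistent = [img for img, labels in labels_per_image.items() if len(labels) > 1]
    print("review these:", inconsistent)   # ['img_002']
    ```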

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
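
    The car-noise anecdote is ordinary per-slice error analysis: group evaluation results by a metadata tag and look for the slice that underperforms. A minimal sketch, with invented data:

    ```python
    import pandas as pd

    # Hypothetical evaluation results tagged with the recording condition.
    results = pd.DataFrame({
        "background": ["quiet", "quiet", "car_noise", "car_noise", "cafe", "car_noise"],
        "correct":    [True,    True,    False,       False,       True,   True],
    })

    # The error rate per slice points at where extra data collection would pay off.
    error_by_slice = 1 - results.groupby("background")["correct"].mean()
    print(error_by_slice.sort_values(ascending=False))
    # car_noise shows the highest error rate, so collect more car-noise audio.
    ```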


    What about using synthetic data? Is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

    “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
    —Andrew Ng

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
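
    The targeted part of Ng’s point can be shown without any particular generator: after error analysis, only the weak class gets new examples. The sketch below uses simple Pillow-based augmentation as a stand-in for true synthetic generation; the directory layout and class name are invented.

    ```python
    from pathlib import Path
    from PIL import Image, ImageEnhance

    # Suppose error analysis has flagged one defect class as underperforming.
    weak_class = "pit_mark"
    source_dir = Path("defects") / weak_class        # hypothetical layout
    output_dir = Path("augmented") / weak_class
    output_dir.mkdir(parents=True, exist_ok=True)

    # Stand-in for synthetic generation: create brightness/rotation variants
    # only for the weak class, not for the whole data set.
    for img_path in source_dir.glob("*.png"):
        img = Image.open(img_path)
        for i, factor in enumerate((0.8, 1.2)):
            variant = ImageEnhance.Brightness(img).enhance(factor).rotate(15 * i)
            variant.save(output_dir / f"{img_path.stem}_aug{i}.png")
    ```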


    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
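
    One standard way to flag the drift Ng mentions is to compare the distribution of a model input or score between a reference window and recent production data, for example with a two-sample Kolmogorov–Smirnov test. The sketch below is a generic illustration, not Landing AI’s mechanism, and the numbers are synthetic.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Hypothetical brightness scores of inspection images: a reference window
    # from training time versus the most recent shift on the factory floor.
    reference = rng.normal(loc=0.50, scale=0.05, size=2000)
    recent = rng.normal(loc=0.56, scale=0.05, size=500)   # lighting has changed

    stat, p_value = ks_2samp(reference, recent)
    if p_value < 0.01:
        print(f"Data drift flagged (KS={stat:.3f}): review, relabel, and retrain.")
    else:
        print("No significant drift detected.")
    ```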

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 08. February 2022. at 14:00



    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Portrait of a woman with blonde-red hair smiling at the camera. Heather Gorr MathWorks

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
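
    Gorr works in MATLAB, but the preprocessing she describes, synchronizing sensors sampled at different rates and exploring the frequency domain, can be sketched in a few lines of Python with NumPy and SciPy. The signals below are synthetic stand-ins for real sensor channels.

    ```python
    import numpy as np
    from scipy.signal import resample

    # Two synthetic sensor channels recorded at different rates.
    fs_fast, fs_slow, duration = 1000, 250, 2.0        # Hz, Hz, seconds
    t_fast = np.arange(0, duration, 1 / fs_fast)
    t_slow = np.arange(0, duration, 1 / fs_slow)
    fast_sensor = np.sin(2 * np.pi * 50 * t_fast)      # 50 Hz component
    slow_sensor = np.sin(2 * np.pi * 10 * t_slow)      # 10 Hz component

    # Synchronization: resample the slow channel onto the fast channel's grid.
    slow_upsampled = resample(slow_sensor, len(fast_sensor))

    # Frequency domain: find the dominant frequency in the fast channel.
    spectrum = np.abs(np.fft.rfft(fast_sensor))
    freqs = np.fft.rfftfreq(len(fast_sensor), d=1 / fs_fast)
    print("dominant frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~50 Hz
    ```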

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 07. February 2022. at 16:12



    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

    IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

    Now researchers at MIT have been able to reduce the size of the qubits, and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

    “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

    Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

    Golden dilution refrigerator hanging vertically. Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT

    In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

    As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
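
    The footprint argument follows from the parallel-plate formula C = ε0·εr·A/d: a dielectric only a few nanometers thick needs far less plate area to reach the same capacitance. The back-of-the-envelope calculation below is illustrative only; the target capacitance of roughly 100 femtofarads, the relative permittivity of about 3 for hBN, and the few-nanometer thickness are assumptions for the estimate, not figures from the MIT paper.

    ```python
    import math

    EPS0 = 8.854e-12          # vacuum permittivity, F/m

    def plate_area(c_target, eps_r, thickness_m):
        """Parallel-plate area A = C * d / (eps0 * eps_r)."""
        return c_target * thickness_m / (EPS0 * eps_r)

    # Illustrative assumptions, not values from the MIT work:
    c_target = 100e-15        # ~100 fF, a typical transmon shunt capacitance
    eps_r_hbn = 3.0           # rough relative permittivity of hBN
    thickness = 5e-9          # a few-nanometer hBN stack

    area_m2 = plate_area(c_target, eps_r_hbn, thickness)
    side_um = math.sqrt(area_m2) * 1e6
    print(f"square plate side: ~{side_um:.1f} micrometers")   # a few µm, versus ~100 µm for a coplanar design
    ```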

    In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

    On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

    “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

    This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

    “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

    Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.