IEEE News

IEEE Spectrum

  • IEEE Offers AI Training Courses and a Mini MBA Program
    by Angelique Parashis on 21. February 2025. at 19:00



    Artificial intelligence is changing the way business is conducted. Organizations that understand where to deploy AI strategically—whether through process improvements, more effective data use, or elsewhere—are expected to outperform their competition and have greater growth and efficiency.

    In its annual report, “The Impact of Technology in 2025 and Beyond: An IEEE Global Study,” IEEE surveyed 350 global technology leaders including CIOs, CTOs, and IT directors. The survey revealed that more than half of the respondents ranked AI, which encompasses predictive and generative AI, machine learning, and natural language processing, as the most important technology coming into 2025.

    The tech leaders said they were ready to adopt AI, with many already leveraging its benefits and planning further exploration. Specifically, 20 percent of respondents reported regular use of generative AI, noting that it added value to their operations. Additionally, 24 percent acknowledged the benefits of the technology and said they intended to explore its practical applications. More than 30 percent had high expectations for AI and planned to experiment with it on smaller projects.

    The strategic importance of AI for companies

    AI’s impact varies across industries, with technology leading the way in integration. Despite AI’s increasing presence in company operations around the globe, it remains a source of confusion for most employees. As AI usage continues to rise, businesses should invest in bringing their staff up to speed on how to integrate the technology to improve their operations.

    In a survey last year by technology research and advisory company Valoir, 84 percent of employees reported being unclear about what generative AI is or how it works. Similarly, in Slingshot’s Digital Work Trends Report, 77 percent of employees surveyed said they didn’t have adequate training in AI tools or fully understand how AI related to their job.

    The effective use of AI can help companies and their employees make informed, data-driven decisions, improve resource allocation, provide more targeted and personalized customer experiences, and streamline project management. Business leaders who have a firm grasp on what AI can deliver will be better positioned for success.

    IEEE offers educational resources for AI training

    For businesses that want to train their staff on the technology, IEEE offers a comprehensive education program designed to enhance knowledge and skills in the rapidly evolving field.

    The resources, produced by IEEE Educational Activities, can ensure that employees are well-versed in the latest advancements and equipped with practical skills to drive innovation and efficiency within the organization.

    The courses are also available to individuals through the IEEE Learning Network.

    Upon successfully completing each course, participants earn professional development credits including professional development hours (PDHs) and continuing education units (CEUs). Additionally, they receive a shareable digital badge highlighting their proficiency—which can be showcased on social media platforms.

    IEEE and Rutgers offer a mini MBA

    The new IEEE | Rutgers Online Mini-MBA: Artificial Intelligence program is designed to help organizations and their employees master AI for innovation. The program provides learners with an enhanced understanding of applications tailored to specific industries and job functions. Participants learn how to strategically leverage the technology to address business challenges, optimize processes, make more effective use of data, better serve customer needs, and improve overall organizational success.

    For employers, the program is invaluable in training staff to stay ahead of the competition in a fast-evolving landscape. It offers individual access and company-specific cohorts, providing flexible learning options to meet your organization’s needs.

    IEEE members receive a 10 percent discount.

    Whether you’re an experienced professional or just starting out, IEEE’s education offerings can be invaluable for staying ahead. Learn more about IEEE’s corporate solutions, professional development programs, and individual eLearning courses.

  • Video Friday: Helix
    by Evan Ackerman on 21. February 2025. at 17:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
    German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
    European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
    ICRA 2025: 19–23 May 2025, ATLANTA, GA.
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
    RSS 2025: 21–25 June 2025, LOS ANGELES
    ETH Robotics Summer School: 21–27 June 2025, GENEVA
    IAS 2025: 30 June–4 July 2025, GENOA, ITALY
    ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
    IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
    IFAC Symposium on Robotics: 15–18 July 2025, PARIS
    RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

    Enjoy today’s videos!

    We’re introducing Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.

    This is moderately impressive; my favorite part is probably the handoffs and that extra little bit of HRI with what we’d call eye contact if these robots had faces. But keep in mind that you’re looking at close to best case for robotic manipulation, and that if the robots had been given the bag instead of well-spaced objects on a single color background, or if the fridge had a normal human amount of stuff in it, they might be having a much different time of it. Also, is it just me, or is the sound on this video very weird? Like, some things make noise, some things don’t, and the robots themselves occasionally sound more like someone just added in some “soft actuator sound” or something. Also also, I’m of a suspicious nature, and when there is an abrupt cut between “robot grasps door” and “robot opens door,” I assume the worst.

    [ Figure ]

    Researchers at EPFL have developed a highly agile flat swimming robot. Smaller than a credit card, the robot propels itself across the water surface using a pair of undulating soft fins. The fins are driven at resonance by artificial muscles, allowing the robot to perform complex maneuvers. In the future, the robot could be used to monitor water quality or help measure fertilizer concentrations in rice fields.

    [ Paper ] via [ Science Robotics ]

    I don’t know about you, but I always dance better when getting beaten with a stick.

    [ Unitree Robotics ]

    This is big news, people: Sweet Bite Ham Ham, one of the greatest and most useless robots of all time, has a new treat.

    All yours for about US $100, overseas shipping included.

    [ Ham Ham ] via [ Robotstart ]

    MagicLab has announced the launch of its first-generation, self-developed dexterous hand, the MagicHand S01. The hand has 11 degrees of freedom and a hand load capacity of up to 5 kilograms, and in work environments it can carry loads of over 20 kilograms.

    [ MagicLab ]

    Thanks, Ni Tao!

    No, I’m not creeped out at all, why?

    [ Clone Robotics ]

    Happy 40th Birthday to the MIT Media Lab!

    Since 1985, the MIT Media Lab has provided a home for interdisciplinary research, transformative technologies, and innovative approaches to solving some of humanity’s greatest challenges. As we celebrate our 40th anniversary year, we’re looking ahead to decades more of imagining, designing, and inventing a future in which everyone has the opportunity to flourish.

    [ MIT Media Lab ]

    While most soft pneumatic grippers that operate with a single control parameter (such as pressure or airflow) are limited to a single grasping modality, this article introduces a new method for incorporating multiple grasping modalities into vacuum-driven soft grippers. This is achieved by combining stiffness manipulation with a bistable mechanism. Adjusting the airflow tunes the energy barrier of the bistable mechanism, enabling changes in triggering sensitivity and allowing swift transitions between grasping modes. This results in an exceptionally versatile gripper, capable of handling a diverse range of objects with varying sizes, shapes, stiffness, and roughness, controlled by a single parameter, airflow, and its interaction with objects.

    [ Paper ] via [ BruBotics ]

    Thanks, Bram!

    In this article, we present a design concept called Leafbot, in which a monolithic soft body is combined with a vibration-driven mechanism. This investigation aims to build a foundation for further terradynamics studies of vibration-driven soft robots in more complicated and confined environments, with potential applications in inspection tasks.

    [ Paper ] via [ IEEE Transactions on Robotics ]

    We present a hybrid aerial-ground robot that combines the versatility of a quadcopter with enhanced terrestrial mobility. The vehicle features a passive, reconfigurable single wheeled leg, enabling seamless transitions between flight and two ground modes: a stable stance and a dynamic cruising configuration.

    [ Robotics and Intelligent Systems Laboratory ]

    I’m not sure I’ve ever seen this trick performed by a robot with soft fingers before.

    [ Paper ]

    There are a lot of robots involved in car manufacturing. Like, a lot.

    [ Kawasaki Robotics ]

    Steve Willits shows us some recent autonomous drone work being done at the AirLab at CMU’s Robotics Institute.

    [ Carnegie Mellon University Robotics Institute ]

    Somebody’s got to test all those luxury handbags and purses. And by somebody, I mean somerobot.

    [ Qb Robotics ]

    Do not trust people named Evan.

    [ Tufts University Human-Robot Interaction Lab ]

    Meet the Mind: MIT Professor Andreea Bobu.

    [ MIT ]

  • Reinforcement Learning Triples Spot’s Running Speed
    by Evan Ackerman on 21. February 2025. at 14:00



    About a year ago, Boston Dynamics released a research version of its Spot quadruped robot, which comes with a low-level application programming interface (API) that allows direct control of Spot’s joints. Even back then, the rumor was that this API unlocked some significant performance improvements on Spot, including a much faster running speed. That rumor came from the Robotics and AI (RAI) Institute, formerly The AI Institute, formerly the Boston Dynamics AI Institute, and if you were at Marc Raibert’s talk at the ICRA@40 conference in Rotterdam last fall, you already know that it turned out not to be a rumor at all.

    Today, we’re able to share some of the work that the RAI Institute has been doing to apply reality-grounded reinforcement learning techniques to enable much higher performance from Spot. The same techniques can also help highly dynamic robots operate robustly, and there’s a brand new hardware platform that shows this off: an autonomous bicycle that can jump.


    See Spot Run

    This video shows Spot running at a sustained speed of 5.2 meters per second (11.6 miles per hour). Out of the box, Spot’s top speed is 1.6 m/s, meaning that RAI’s Spot has more than tripled (!) the quadruped’s factory speed.

    If Spot running this quickly looks a little strange, that’s probably because it is strange, in the sense that the way this robot dog’s legs and body move as it runs is not very much like how a real dog runs at all. “The gait is not biological, but the robot isn’t biological,” explains Farbod Farshidian, roboticist at the RAI Institute. “Spot’s actuators are different from muscles, and its kinematics are different, so a gait that’s suitable for a dog to run fast isn’t necessarily best for this robot.”

    The closest Farshidian can come to categorizing how Spot is moving is that it’s somewhat similar to a trotting gait, except with an added flight phase (with all four feet off the ground at once) that technically turns it into a run. This flight phase is necessary, Farshidian says, because the robot needs that time to successively pull its feet forward fast enough to maintain its speed. This is a “discovered behavior,” in that the robot was not explicitly programmed to “run,” but rather was just required to find the best way of moving as fast as possible.

    Reinforcement Learning Versus Model Predictive Control

    The Spot controller that ships with the robot when you buy it from Boston Dynamics is based on model predictive control (MPC), which involves creating a software model that approximates the dynamics of the robot as best you can, and then solving an optimization problem for the tasks that you want the robot to do in real time. It’s a very predictable and reliable method for controlling a robot, but it’s also somewhat rigid, because that original software model won’t be close enough to reality to let you really push the limits of the robot. And if you try to say, “Okay, I’m just going to make a superdetailed software model of my robot and push the limits that way,” you get stuck because the optimization problem has to be solved for whatever you want the robot to do, in real time, and the more complex the model is, the harder it is to do that quickly enough to be useful. Reinforcement learning (RL), on the other hand, learns offline. You can use as complex a model as you want, and then take all the time you need in simulation to train a control policy that can then be run very efficiently on the robot.
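
    To make that tradeoff concrete, here is a minimal, illustrative Python sketch (not Boston Dynamics or RAI Institute code) of the two styles on a toy one-dimensional system: the MPC-style controller re-solves a small optimization at every control step, while the RL-style controller does its expensive search offline and deploys only a cheap policy. The dynamics, gains, and costs are all invented for illustration.

        # Toy contrast between online MPC and offline policy search (a crude
        # stand-in for RL). The "robot" is a 1-D point mass, not a model of Spot.
        import numpy as np

        DT = 0.01  # control timestep, seconds

        def simulate_step(state, action):
            """Point-mass dynamics: state = [position, velocity]."""
            pos, vel = state
            vel = vel + action * DT
            pos = pos + vel * DT
            return np.array([pos, vel])

        def mpc_control(state, horizon=20):
            """MPC style: search over candidate actions online, every step."""
            best_action, best_cost = 0.0, np.inf
            for a in np.linspace(-1.0, 1.0, 21):       # crude online optimization
                s, cost = state.copy(), 0.0
                for _ in range(horizon):               # roll the model forward
                    s = simulate_step(s, a)
                    cost += s[0] ** 2 + 0.01 * a ** 2  # drive position to zero
                if cost < best_cost:
                    best_action, best_cost = a, cost
            return best_action

        def train_policy_offline(episodes=200):
            """RL style: spend the compute offline, in simulation."""
            best_gain, best_return = None, -np.inf
            for _ in range(episodes):                  # stand-in for policy training
                gain = np.random.uniform(-5.0, 0.0, size=2)
                s, ret = np.array([1.0, 0.0]), 0.0
                for _ in range(500):
                    a = float(np.clip(gain @ s, -1.0, 1.0))
                    s = simulate_step(s, a)
                    ret -= s[0] ** 2
                if ret > best_return:
                    best_gain, best_return = gain, ret
            return best_gain                           # the deployed "policy"

        policy = train_policy_offline()                # slow, done once, offline
        state = np.array([1.0, 0.0])
        print("MPC action:", mpc_control(state))       # an optimization every step
        print("policy action:", float(np.clip(policy @ state, -1.0, 1.0)))  # cheap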

    In simulation, a couple of Spots (or hundreds of Spots) can be trained in parallel for robust real-world performance. Robotics and AI Institute

    In the example of Spot’s top speed, it’s simply not possible to model every last detail for all of the robot’s actuators within a model-based control system that would run in real time on the robot. So instead, simplified (and typically very conservative) assumptions are made about what the actuators are actually doing so that you can expect safe and reliable performance.

    Farshidian explains that these assumptions make it difficult to develop a useful understanding of what performance limitations actually are. “Many people in robotics know that one of the limitations of running fast is that you’re going to hit the torque and velocity maximum of your actuation system. So, people try to model that using the data sheets of the actuators. For us, the question that we wanted to answer was whether there might exist some other phenomena that was actually limiting performance.”

    Searching for these other phenomena involved bringing new data into the reinforcement learning pipeline, like detailed actuator models learned from the real-world performance of the robot. In Spot’s case, that provided the answer to high-speed running. It turned out that what was limiting Spot’s speed was not the actuators themselves, nor any of the robot’s kinematics: It was simply the batteries not being able to supply enough power. “This was a surprise for me,” Farshidian says, “because I thought we were going to hit the actuator limits first.”
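
    A rough sketch of what bringing that kind of data into the pipeline can look like: fit a simple actuator model from logged hardware data, then route the policy’s commands through the fitted model inside the simulator. The column choices (commanded torque, joint velocity, battery voltage, delivered torque) are hypothetical, the log below is synthetic, and the institute’s real actuator models are far more detailed.

        # Illustrative only: fit a linear actuator model from (synthetic)
        # logged data, then use it in place of an idealized actuator in sim.
        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for a hardware log of commanded torque, joint velocity,
        # battery voltage, and the torque the actuator actually delivered.
        tau_cmd = rng.uniform(-40, 40, 5000)
        joint_vel = rng.uniform(-10, 10, 5000)
        battery_v = rng.uniform(48, 58, 5000)
        tau_meas = (0.9 * tau_cmd - 0.8 * joint_vel
                    + 0.3 * (battery_v - 53) + rng.normal(0, 0.5, 5000))

        # Least-squares fit: tau_meas ~ w . [tau_cmd, joint_vel, battery_v, 1]
        A = np.stack([tau_cmd, joint_vel, battery_v, np.ones_like(tau_cmd)], axis=1)
        w, *_ = np.linalg.lstsq(A, tau_meas, rcond=None)

        def delivered_torque(cmd, vel, volts):
            """Learned actuator model: what the hardware would actually produce."""
            return w[0] * cmd + w[1] * vel + w[2] * volts + w[3]

        # Inside the simulator's step function, the policy's commanded torque
        # would pass through delivered_torque() before being applied, so the
        # policy trains against realistic limits, not data-sheet assumptions.
        print(delivered_torque(30.0, 5.0, 50.0))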

    Spot’s power system is complex enough that there’s likely some additional wiggle room, and Farshidian says the only thing that prevented them from pushing Spot’s top speed past 5.2 m/s is that they didn’t have access to the battery voltages so they weren’t able to incorporate that real-world data into their RL model. “If we had beefier batteries on there, we could have run faster. And if you model that phenomena as well in our simulator, I’m sure that we can push this farther.”

    Farshidian emphasizes that RAI’s technique is about much more than just getting Spot to run fast—it could also be applied to making Spot move more efficiently to maximize battery life, or more quietly to work better in an office or home environment. Essentially, this is a generalizable tool that can find new ways of expanding the capabilities of any robotic system. And when real-world data is used to make a simulated robot better, you can ask the simulation to do more, with confidence that those simulated skills will successfully transfer back onto the real robot.

    Ultra Mobility Vehicle: Teaching Robot Bikes to Jump

    Reinforcement learning isn’t just good for maximizing the performance of a robot—it can also make that performance more reliable. The RAI Institute has been experimenting with a completely new kind of robot that it invented in-house: a little jumping bicycle called the Ultra Mobility Vehicle, or UMV, which was trained to do parkour using essentially the same RL pipeline for balancing and driving as was used for Spot’s high-speed running.

    There’s no independent physical stabilization system (like a gyroscope) keeping the UMV from falling over; it’s just a normal bike that can move forward and backward and turn its front wheel. As much mass as possible is then packed into the top bit, which actuators can rapidly accelerate up and down. “We’re demonstrating two things in this video,” says Marco Hutter, director of the RAI Institute’s Zurich office. “One is how reinforcement learning helps make the UMV very robust in its driving capabilities in diverse situations. And second, how understanding the robots’ dynamic capabilities allows us to do new things, like jumping on a table which is higher than the robot itself.”

    “The key of RL in all of this is to discover new behavior and make this robust and reliable under conditions that are very hard to model. That’s where RL really, really shines.” —Marco Hutter, The RAI Institute

    As impressive as the jumping is, for Hutter, it’s just as difficult (if not more difficult) to do maneuvers that may seem fairly simple, like riding backwards. “Going backwards is highly unstable,” Hutter explains. “At least for us, it was not really possible to do that with a classical [MPC] controller, particularly over rough terrain or with disturbances.”

    Getting this robot out of the lab and onto terrain to do proper bike parkour is a work in progress that the RAI Institute says it will be able to demonstrate in the near future, but it’s really not about what this particular hardware platform can do—it’s about what any robot can do through RL and other learning-based methods, says Hutter. “The bigger picture here is that the hardware of such robotic systems can in theory do a lot more than we were able to achieve with our classic control algorithms. Understanding these hidden limits in hardware systems lets us improve performance and keep pushing the boundaries on control.”

    Teaching the UMV to drive itself down stairs in sim results in a real robot that can handle stairs at any angle. Robotics and AI Institute

    Reinforcement Learning for Robots Everywhere

    Just a few weeks ago, the RAI Institute announced a new partnership with Boston Dynamics “to advance humanoid robots through reinforcement learning.” Humanoids are just another kind of robotic platform, albeit a significantly more complicated one with many more degrees of freedom and things to model and simulate. But when considering the limitations of model predictive control for this level of complexity, a reinforcement learning approach seems almost inevitable, especially when such an approach is already streamlined due to its ability to generalize.

    “One of the ambitions that we have as an institute is to have solutions which span across all kinds of different platforms,” says Hutter. “It’s about building tools, about building infrastructure, building the basis for this to be done in a broader context. So not only humanoids, but driving vehicles, quadrupeds, you name it. But doing RL research and showcasing some nice first proof of concept is one thing—pushing it to work in the real world under all conditions, while pushing the boundaries in performance, is something else.”

    Transferring skills into the real world has always been a challenge for robots trained in simulation, precisely because simulation is so friendly to robots. “If you spend enough time,” Farshidian explains, “you can come up with a reward function where eventually the robot will do what you want. What often fails is when you want to transfer that sim behavior to the hardware, because reinforcement learning is very good at finding glitches in your simulator and leveraging them to do the task.”

    Simulation has been getting much, much better, with new tools, more accurate dynamics, and lots of computing power to throw at the problem. “It’s a hugely powerful ability that we can simulate so many things, and generate so much data almost for free,” Hutter says. But the usefulness of that data is in its connection to reality, making sure that what you’re simulating is accurate enough that a reinforcement learning approach will in fact solve for reality. Bringing physical data collected on real hardware back into the simulation, Hutter believes, is a very promising approach, whether it’s applied to running quadrupeds or jumping bicycles or humanoids. “The combination of the two—of simulation and reality—that’s what I would hypothesize is the right direction.”

  • Saving Public Data Takes More Than Simple Snapshots
    by Gwendolyn Rak on 19. February 2025. at 18:30



    Shortly after the Trump administration took office in the United States in late January, more than 8,000 pages across several government websites and databases were taken down, the New York Times found. Though many of these have now been restored, thousands of pages were purged of references to gender and diversity initiatives, for example, and others, including the U.S. Agency for International Development (USAID) website, remain down.

    By 11 February, a federal judge ruled that the government agencies must restore public access to pages and datasets maintained by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). While many scientists fled to online archives in a panic, ironically, the Justice Department had argued that the physicians who brought the case were not harmed because the removed information was available on the Internet Archive’s Wayback Machine. In response, a federal judge wrote, “The Court is not persuaded,” noting that a user must know the original URL of an archived page in order to view it.

    The administration’s legal argument “was a bit of an interesting accolade,” says Mark Graham, director of the Wayback Machine, who believes the judge’s ruling was “apropos.” Over the past few weeks, the Internet Archive and other archival sites have received attention for preserving government databases and websites. But these projects have been ongoing for years. The Internet Archive, for example, was founded as a nonprofit dedicated to providing universal access to knowledge nearly 30 years ago, and it now records more than a billion URLs every day, says Graham.

    Since 2008, the Internet Archive has also hosted an accessible copy of the End of Term Web Archive, a collaboration that documents changes to federal government sites before and after administration changes. In the most recent collection, it has already archived more than 500 terabytes of material.

    Complementary Crawls

    The Internet Archive’s strength is scale, Graham says. “We can often [preserve] things quickly, at scale. But we don’t have deep experience in analysis.” Meanwhile, groups like the Environmental Data and Governance Initiative and the Association of Health Care Journalists provide help for activists and academics identifying and documenting changes.

    The Library Innovation Lab at Harvard Law School has also joined the efforts with its archive of data.gov, a 16 TB collection that includes more than 311,000 public datasets and is being updated daily with new data. The project began in late 2024, when the library realized that data sets are often missed in other web crawls, says Jack Cushman, a software engineer and director of the Library Innovation Lab.

    “You can miss anything where you have to interact with JavaScript or with a button or with a form.” —Jack Cushman, Library Innovation Lab

    A typical crawl has no trouble capturing basic HTML, PDF, or CSV files. But archiving interactive web services that are driven by databases poses a challenge. It would be impossible to archive a site like Amazon, for example, says Graham.

    The datasets the Library Innovation Lab (LIL) is working to archive are similarly tricky to capture. “If you’re doing a web crawl and just clicking from link to link, as the End of Term archive does, you can miss anything where you have to interact with JavaScript or with a button or with a form, where you have to ask for permission and then register or download something,” explains Cushman.

    “We wanted to do something that was complementary to existing web crawls, and the way we did that was to go into APIs,” he says. By going into the APIs, which bypass web pages to access data directly, the LIL’s program could fetch a complete catalog of the data sets—whether CSV, Excel, XML, or other file types—and pull the associated URLs to create an archive. In the case of data.gov, Cushman and his colleagues wrote a script to send the right 300 queries that would fetch 1,000 items per query, then go through the 300,000 total items to gather the data. “What we’re looking for is areas where some automation will unlock a lot of new data that wouldn’t otherwise be unlocked,” says Cushman.
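
    The description suggests roughly the following paging pattern. This is a hypothetical sketch, not the Library Innovation Lab’s actual script; it assumes data.gov’s public CKAN-style search endpoint (catalog.data.gov/api/3/action/package_search) and its usual rows/start paging parameters.

        # Hypothetical sketch of paging through the data.gov catalog API and
        # collecting each dataset's resource URLs for archiving.
        import json
        import urllib.request

        API = "https://catalog.data.gov/api/3/action/package_search"
        PAGE = 1000  # records per request (assumed per-query limit)

        def fetch_page(start):
            """Fetch one page of dataset metadata from the catalog."""
            url = f"{API}?rows={PAGE}&start={start}"
            with urllib.request.urlopen(url, timeout=60) as resp:
                return json.load(resp)["result"]

        def iter_resource_urls(max_records=300_000):
            """Yield (dataset name, resource URL) pairs across the catalog."""
            for start in range(0, max_records, PAGE):   # ~300 queries of 1,000 items
                result = fetch_page(start)
                if not result["results"]:
                    break
                for dataset in result["results"]:
                    for resource in dataset.get("resources", []):
                        if resource.get("url"):
                            yield dataset["name"], resource["url"]

        for i, (name, url) in enumerate(iter_resource_urls()):
            print(name, url)
            if i >= 20:  # demo: stop after a handful
                break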

    The other important factor for the LIL archive was to make sure the data was in a usable format. “You might get something in a web crawl where [the data] is there across 100,000 web pages, but it’s very hard to get it back out into a spreadsheet or something that you can analyze,” Cushman says. Making it usable, both in the data format and user interface, helps create a sustainable archive.

    Lots Of Copies Keep Stuff Safe

    The key to preserving the internet’s data is a principle that goes by the acronym LOCKSS: Lots Of Copies Keep Stuff Safe.

    When the Internet Archive suffered a cyberattack last October, the Archive took down the site for a three-and-a-half week period to audit the entire site and implement security upgrades. “Libraries have traditionally always been under attack, so this is no different,” Graham says. As part of its defense, the Archive now has several copies of the materials in disparate physical locations, both inside and outside the U.S.

    “The US government is the world’s largest publisher,” Graham notes. It publishes material on a wide range of topics, and “much of it is beneficial to people, not only in this country, but throughout the world, whether that is about energy or health or agriculture or security.” And the fact that many individuals and organizations are contributing to preservation of the digital world is actually a good thing.

    “The goal is for those copies to be diverse across every metric that you can think of. They should be on different kinds of media. They should be controlled by different people, with different funding sources, in different formats,” says Cushman. “Every form of similarity between your backups creates a risk of loss.” The data.gov archive has its primary copy stored through a cloud service with others as backup. The archive also includes open source software to make it easy to replicate.

    In addition to maintaining copies, Cushman says it’s important to include cryptographic signatures and timestamps. Each time an archive is created, it’s signed with cryptographic proof of the creator’s email address and time, which can help verify the validity of an archive.
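
    One possible shape for such a record (a minimal sketch, not the project’s actual tooling): hash the archived bytes, wrap the hash with the creator’s email address and a timestamp, and sign the record with an asymmetric key. The example below assumes Ed25519 signatures from the third-party Python cryptography package.

        # Illustrative provenance record: sha256 digest + creator + timestamp,
        # signed with Ed25519 so anyone with the public key can verify it.
        import hashlib
        import json
        import time

        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        def sign_archive(data, creator_email, private_key):
            """Return a signed provenance record for one archived blob."""
            record = {
                "sha256": hashlib.sha256(data).hexdigest(),
                "creator": creator_email,
                "timestamp": int(time.time()),
            }
            payload = json.dumps(record, sort_keys=True).encode()
            return record, private_key.sign(payload)

        def verify_archive(record, signature, public_key):
            """Raises InvalidSignature if the record was altered or forged."""
            payload = json.dumps(record, sort_keys=True).encode()
            public_key.verify(signature, payload)

        key = Ed25519PrivateKey.generate()
        blob = b"contents of one archived dataset"      # stands in for a real file
        record, sig = sign_archive(blob, "archivist@example.org", key)
        verify_archive(record, sig, key.public_key())
        print("verified:", record["sha256"][:16], "...")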

    An Ongoing Challenge

    Since President Trump took office, a lot of material has been removed from U.S. federal websites, quantifiably more than under previous new administrations, says Graham. On a global scale, however, this isn’t unprecedented, he adds.

    In the U.S., official government websites have been changed with each new administration since Bill Clinton’s, notes Jason Scott, a “free range archivist” at the Internet Archive and co-founder of digital preservation site Archive Team. “This one’s more chaotic,” Scott says. But “the web is a very high entropy entity ... Google is an archive like a supermarket is a food museum.”

    The job of digital archivists is a difficult one, especially with a backlog of sites that have existed across the evolution of internet standards. But these efforts are not new. “The ramping up will only be in terms of disk space and bandwidth resources, not the process that has been ongoing,” says Scott.

    For Cushman, working on this project has underscored the value of public data. “The government data that we have is like a GPS signal,” he says. “It doesn’t tell us where to go, but it tells us what’s around us, so that we can make decisions. Engaging with it for the first time this way has really helped me appreciate what a treasure we have.”

  • A Rover Race on Mojave Desert Sands
    by Joanna Goodrich on 18. February 2025. at 19:00



    With NASA working on sending humans to Mars starting in the 2030s, colonizing the Red Planet seems more achievable than ever. The space agency is already leading yearlong simulated missions to better understand how living on Mars could affect humans.

    Because of the planet’s thin atmosphere, high radiation levels, and abrasive dust, people would need to live in specialized dwellings and use robots to perform outdoor tasks.

    With hopes of inspiring the next generation of engineers and scientists to develop space robots, IEEE held its first Robopalooza, a telepresence competition with robotic demonstrations, in November in Lucerne Valley, Calif. The competition is expected to become an annual event.

    The contest and demonstrations were held in conjunction with the IEEE Conference on Telepresence at Caltech. The events were organized by IEEE Telepresence, an IEEE Future Directions initiative that aims to advance telepresence technology to help redefine how people live and work.

    Seven teams from universities and robotics companies worldwide remotely operated a Helelani rover through an obstacle course inspired by the game Capture the Flag. The 318-kilogram vehicle was provided by the Pacific International Space Center for Exploration Systems (PISCES), an aerospace research center at the University of Hawaii in Hilo. The team that took the least time to retrieve the flag—located on a small hill in the middle of the 400-meter-long course—received US $5,000.

    Companies and university labs developing space robots demonstrated some of their creations to the more than 300 conference attendees including local preuniversity students.

    This year’s conference and competition are scheduled to be held in Leiden, Netherlands, from 8 to 10 September.

    Why humans need robots on Mars

    Science fiction writers explored the idea of people living on another planet long before astronauts even landed on the moon. It’s still a staple of popular series including the Dune, Red Rising, and Star Wars franchises, whose main characters don’t just reside in a galaxy far, far away. Paul Atreides, Darrow O’Lykos, and Luke Skywalker grew up or live on a desert planet much like Mars.

    Settling the Red Planet is not likely to be easy. Before people could be sent there, robots would need to build housing. The planet’s atmosphere is 95 percent carbon dioxide. The radiation there would kill human inhabitants in a few months if they weren’t adequately shielded from it. Also, according to NASA, Mars is covered in fine dust particles; breathing in the sharp-edged fragments could damage lungs.

    Once people inhabit the robot-built dwellings, they would need to use robots to complete outdoor tasks such as geological research, building maintenance, and water mining.

    Spacecraft aren’t immune to Mars’s dangers, either. The thin atmosphere makes it difficult for rovers to land, as there is minimal air resistance to slow down their descent. The planet’s radiation levels, up to 50 times higher than on Earth, gradually degrade a rover’s erosion-resistant coating, electronic systems, and other components. The abrasive dust also can damage spacecraft.

    Today’s rovers are slow-moving, averaging a ground speed of about 150 meters per hour on a flat surface, in part because of the 20- to 40-minute delay in communications between Earth and Mars, says Robert Mueller, who organized the telepresence competition. And rovers are expensive: NASA’s latest, Perseverance, cost around $1.7 billion to design and build.

    Racing robots in the desert

    When choosing a location for the Robopalooza, Mueller found that California’s Mojave Desert, with its hills and soft sand, closely resembled Mars’s topography. Mueller, an IEEE member, is a senior technologist and principal investigator at NASA’s Kennedy Space Center, near Cape Canaveral in Florida.

    The competing teams were located in Australia, Chile, and the United States.

    A camera mounted on the Helelani rover live-streamed its view to the participants’ computers so they could remotely maneuver the vehicle. The route ended at the top of Peterman Hill. The teams tried to navigate the rover around 14 traffic cones placed randomly along the course. If the rover touched a cone, 10 seconds were added to the team’s final time. If a team wasn’t able to maneuver the rover around a cone, 20 seconds were added.

    Six teams—from North Dakota University; SK Godelius; the University of Adelaide, in Australia; the University of Alabama in Tuscaloosa; Virginia State University; and Western Australia Remote Operations (WARO32)—competed remotely. The California State Polytechnic University, Pomona, team competed on-site from a trailer.

    With a finishing time of 20 minutes and 10 seconds—and no penalties—WARO32 won the competition.

    “The winning team operated the rover from Perth, Australia, which was 14,800 kilometers from the competition site. They were the team that was farthest away from the vehicle,” Mueller says. “This showcases that telepresence is achievable across Earth and that there is enormous potential for a variety of tasks to be performed using telepresence, such as telemedicine, remote machinery operation, and business and corporate communication.”

    Hector, a lunar lander, wears toddler-size Crocs to give it traction and balance.

    Preuniversity students try out space robots

    At the IEEE robotic demonstrations, representatives from robotics companies including Honeybee, Cislune, and Neurospace showed off some of their creations. They included a robot that extracts water from rocky soil, a lunar soil excavator, and a cargo vehicle that can adapt to different terrains.

    Mueller invited nearby teachers to bring their students to the IEEE event. More than 300 elementary, middle, and high school students attended.

    They had the opportunity to see top robotics companies demonstrate their machines and to play with Hector, a bipedal lunar lander created by two doctoral students from the University of Southern California, in Los Angeles.

    “Many students and other attendees were inspired by the potential of robotics and telepresence as they watched the robot racing in the Mojave Desert,” Mueller says. “The IEEE Telepresence Initiative is planning to make this competition an annual event, which will take place at remote locations across the world that have extreme conditions, mimicking extraterrestrial planetary surfaces.”

  • China Rescues Stranded Lunar Satellites
    by Andrew Jones on 18. February 2025. at 12:00



    China has managed to deliver a pair of satellites into lunar orbit despite the spacecraft initially being stranded in low Earth orbit following a rocket failure, using a mix of complex calculations, precise engine burns, and astrodynamic ingenuity.

    China launched the DRO-A and B satellites on 13 March last year on a Long March 2C rocket, aiming to send the pair into a distant retrograde orbit (DRO) around the moon. However, the rocket’s Yuanzheng-1S upper stage—intended to fire the spacecraft into a transfer orbit to the moon—failed, leaving the pair marooned in low Earth orbit.

    Little is known for certain about the satellites. They must be small, given the limited payload capabilities of the rocket used for the launch, and are thought to be for testing technology and the uses of the unusual retrograde orbit. (DROs could be handy for lunar communications and observation.) Critically, the spacecraft’s small size means they have little propellant, making reaching lunar orbit from low Earth orbit unassisted a very tall order. However, Microsat, the Chinese Academy of Sciences (CAS) institute behind the mission, got to work on a rescue, despite the daunting distance.

    “Having to replan that in a hurry must be a nightmare, so it’s a very impressive achievement.” —Jonathan McDowell, Harvard-Smithsonian

    What followed was a 167-day-long effort that first got the spacecraft out to well beyond lunar distance and then successfully inserted the satellites into their intended orbit. The operation included five orbital maneuvers, five further trajectory corrections to fine-tune the satellites’ course, and three gravity assists from the Earth and moon.

    The first steps were small engine burns at perigee—the spacecraft’s closest orbital approach to Earth—which gradually raised the apogee—the farthest point of the orbit from Earth. Once the apogee was high enough, a larger burn put the spacecraft on an atypical course for the moon.
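
    For a sense of the physics behind that apogee-raising strategy, here is a back-of-the-envelope sketch using the vis-viva equation, with assumed orbit sizes rather than the mission’s actual figures: each burn at perigee adds energy to the orbit and pushes the apogee farther out.

        # Back-of-the-envelope: how much delta-v at perigee raises the apogee.
        # Orbit sizes are assumptions for illustration, not mission data.
        import math

        MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
        R_EARTH = 6_371e3     # mean Earth radius, m

        def speed_at_perigee(r_perigee, r_apogee):
            """Vis-viva: v^2 = mu * (2/r - 1/a), with a = (r_p + r_a) / 2."""
            a = (r_perigee + r_apogee) / 2.0
            return math.sqrt(MU * (2.0 / r_perigee - 1.0 / a))

        r_p = R_EARTH + 400e3                  # assumed 400-km perigee altitude
        v = speed_at_perigee(r_p, r_p)         # circular low-Earth-orbit speed
        for r_a in (50_000e3, 150_000e3, 400_000e3):   # progressively higher apogees
            v_new = speed_at_perigee(r_p, r_a)
            print(f"raise apogee to {r_a / 1e3:>9,.0f} km: "
                  f"burn {v_new - v:6.1f} m/s at perigee")
            v = v_new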

    From the Earth to the Moon

    Normally, spacecraft going to the moon follow the simplest trajectory, a so-called Hohmann transfer that burns a lot of propellant to get moving and then uses another big burn to drop into orbit once the spacecraft arrives at its destination after three to four days. Instead, the Chinese took advantage of a chaotic dynamical region around the Earth-moon system to save propellant. The Japanese Hiten probe had been rescued using a similar approach in 1990, but it was sent into a conventional lunar orbit. The calculations to reach DRO—a high-altitude, long-term stable orbit moving in a retrograde direction relative to the moon—would have been arduous.

    “A small error will make you miss your target by a long way.” —Jonathan McDowell, Harvard-Smithsonian

    “The astrodynamics of getting to the Moon is already much more complicated than just Earth orbit missions,” says Jonathan McDowell, a Harvard-Smithsonian astronomer and space activity tracker and analyst. “Involving so-called ‘weak capture’ and distant retrograde orbits is far more complicated still, and having to replan that in a hurry must be a nightmare, so it’s a very impressive achievement.”

    Weak capture refers to a celestial body gravitationally capturing a spacecraft without the need for a significant propulsive maneuver. This technique, crucial for a fuel-efficient lunar orbit insertion, demands precise timing and fine trajectory adjustments, as McDowell explains.

    “The way to think of these ‘modern’ and fancy orbit strategies is that you trade time for fuel. It takes much longer but you use less fuel. Once you get out to the apogee of the transfer trajectory—they don’t say how far out that was but I am guessing over a million kilometers—you can change your final destination a lot with just a small puff of the rockets. But by the same token, a small error will make you miss your target by a long way.”

    Slides from an apparent Microsat presentation emerged on social media, illustrating the circuitous path taken to deliver the spacecraft to lunar orbit. The institute, however, did not respond to a request for comment on the mission.

    DRO-A and B separated from each other after successfully entering their intended distant retrograde orbit. The pair have, according to U.S. Space Force space domain awareness, orbits with an apogee of around 580,000 kilometers relative to the Earth and a perigee of around 290,000 km, while the moon orbits Earth at an average distance of 385,000 km, indicating a very high orbit above the moon.

    There, the spacecraft are exploring the attributes of the unique orbit and testing technologies, including communications with another satellite, DRO-L, which was launched into low Earth orbit a month before DRO-A and B. Though the satellites are not a major part of China’s lunar plans, the country is planning to establish lunar navigation and communications infrastructure to support lunar exploration, and the pair could inform these plans.

    DRO-A, at least, also carries a science payload in the form of an all-sky monitor to detect gamma-ray bursts, particularly those associated with gravitational wave events, such as colliding black holes, neutron star collisions, and supernovae. The instrumentation is based on China’s earlier GECAM low Earth orbit gamma-ray-detecting mission, but with an unobstructed field of view in deep space and less interference.

    The rescue, then, is a boost for China’s lunar plans and space science objectives, and demonstrates Chinese prowess in astrodynamics. McDowell notes that the closest approximation to this rescue is the Asiasat 3 mission, renamed HGS-1, in which a satellite bound for geostationary (GEO) orbit was left stuck in an elliptical transfer orbit in 1997. The satellite’s apogee was raised to make a pair of lunar flybys that eventually delivered it to GEO, with enough fuel remaining to operate for four years.

    “[This] definitely shows that China is now on a par with the U.S. in its ability to manage complex astrodynamics,” McDowell says.

    China also pulled off a complex lunar far side sample return mission last year, requiring five separate spacecraft, and next year it plans a landing at the lunar south pole to seek out volatiles, including water. The successful salvaging of the DRO-A and B mission reinforces China’s growing expertise in deep space navigation and complex orbital rescues. With plans to establish a permanent moon base in the 2030s, such capabilities will be crucial for maintaining and supporting long-term lunar operations.

  • Willie Hobbs Moore: STEM Trailblazer
    by Willie D. Jones on 16. February 2025. at 14:00



    At a time in American history when even the most intelligent Black women were expected to become, at most, teachers or nurses, Willie Hobbs Moore broke with societal expectations to become a noted physicist and engineer.

    Moore probably is best known for being the first Black woman to earn a Ph.D. in science (physics) in the United States, in 1972. She also is renowned for being an unwavering advocate for getting more Black people into science, technology, engineering, and mathematics. Her achievements have inspired generations of Black students, and women especially, to believe that they could pursue a STEM career.

    Moore, who died in her Ann Arbor, Mich., home on 14 March 1994, two months shy of her 60th birthday, is the subject of the new book Willie Hobbs Moore—You’ve Got to Be Excellent! The biography, published by IEEE-USA, is the seventh in the organization’s Famous Women Engineers in History series.

    Moore attended the University of Michigan in Ann Arbor, where she earned bachelor’s and master’s degrees in electrical engineering and, in 1972, her barrier-breaking doctorate in physics. In 2013, the University of Michigan Women in Science and Engineering unit created the Willie Hobbs Moore Awards to honor students, staff, and faculty members who “demonstrate excellence promoting equity” in STEM fields. The university held a symposium in 2022 to honor Moore’s work and celebrate the 50th anniversary of her achievement.

    Physicist Donnell Walton, former director of the Corning West Technology Center in Silicon Valley and a National Society of Black Physicists board member, praised Moore, saying she indicated that what’s possible is not limited to what’s expected. Walton befriended Moore while he was pursuing his doctorate in applied physics at the university, he says, adding that he admired the strength and perseverance it took for her to thrive in academic and professional arenas where she was the only Black woman.

    Despite ingrained social norms that tended to push women and minorities into lower-status occupations, Moore refused to be dissuaded from her career. She conducted physics research at the University of Michigan and held several positions in industry before joining Ford Motor Co. in Dearborn, Mich., in 1977. She became a U.S. expert in Japanese quality systems and engineering design, improving Ford’s production processes. She rose through the ranks at the automaker and served as an executive who oversaw the warranty department within the company’s automobile assembly operation.

    An early trailblazer

    Moore was born in 1934 in Atlantic City, N.J. According to a 2022 Physics Today article that delved into her background, her father was a plumber and her mother worked part time as a hotel chambermaid.

    An A student throughout high school, Moore displayed a talent for science and mathematics. She became the first person in her family to earn a college degree.

    She began her studies at the Michigan engineering college in 1954—the same year that the U.S. Supreme Court ruled against legally mandated segregation in public schools.

    Moore was the only Black female undergraduate in the electrical engineering program. Her academic success makes it clear that being one of one was not an impediment. But race was occasionally an issue. In that same 2022 Physics Today article, Ronald E. Mickens, a physics professor at Clark Atlanta University, told a story about an incident from Moore’s undergraduate days that illustrates the point. One day she encountered the chairman of another engineering college department, and completely unprompted, he told her, “You don’t belong here. Even if you manage to finish, there is no place for you in the professional world you seek.”

    “There will always be prejudiced people; you’ve got to be prepared to survive in spite of their attitudes.” —Willie Hobbs Moore

    But she persevered, maintaining her standard of excellence in her academic pursuits. She earned a bachelor’s degree in EE in 1958, followed by an EE master’s degree in 1961. She was the first Black woman to earn those degrees at Michigan.

    She worked as an engineer at several companies before returning to the university in 1966 to begin working toward a doctorate. She conducted her graduate research under the direction of Samuel Krimm, a noted infrared spectroscopist. Krimm’s work focused on analyzing materials with infrared spectroscopy to study their molecular structures. Moore’s dissertation was a theoretical analysis of secondary chlorides for polyvinyl chloride polymers. PVC, a type of plastic, is widely used in construction, health care, and packaging. Moore’s work led to the development of additives that gave PVC pipes greater thermal and mechanical stability and improved their durability.

    Moore paid for her doctoral studies by working part time at the university, KMS Industries, and Datamax Corp., all in Ann Arbor. Joining KMS as a systems analyst, she supported the optics design staff and established computer requirements for the optics division. She left KMS in 1968 to become a senior analyst at Datamax. In that role, she headed the analytics group, which evaluated the company’s products.

    After earning her Ph.D. in 1972, for the next five years she was a postdoctoral Fellow and lecturer with the university’s Macromolecular Research Center.

    She authored more than a dozen papers on protein spectroscopy—the science of analyzing proteins’ structure, composition, and activity by measuring how they interact with electromagnetic radiation. Her work appeared in several prestigious publications including the Journal of Applied Physics, The Journal of Chemical Physics, and the Journal of Molecular Spectroscopy.

    Despite a promising career in academia, Moore left to work in industry.

    Ford’s quality control queen

    Moore joined Ford in 1977 as an assembly engineer. In an interview with The Ann Arbor News, she recalled contending with racial hostility and overt accusations that she was underqualified and had been hired only to fill a quota that was part of the company’s affirmative action program.

    She demonstrated her value to the organization and became an expert in Japanese methods of quality engineering and manufacturing, particularly those invented by Genichi Taguchi, a renowned engineer and statistician.

    The Taguchi method emphasized continuous improvement, waste reduction, and employee involvement in projects. Moore pushed Ford to use the approach, which led to higher-quality products and better efficiency. The changes proved critical to boosting the company’s competitiveness against Japanese automakers, which had begun to dominate the automobile market in the late 1970s and early 1980s.

    Eventually, Moore rose to the company’s executive ranks, overseeing the warranty department of Ford’s assembly operation.

    In 1985 Moore co-wrote the book Quality Engineering Products and Process Design Optimization with Yuin Wu, vice president of Taguchi Methods Training at ASI Consulting Group in Bingham Farms, Mich. ASI helps businesses develop strategies for improving productivity, engineering, and product quality. In their book, Moore and Wu wrote, “Philosophically, the Taguchi approach is technology rather than theory. It is inductive rather than deductive. It is an engineering tool. The Taguchi approach is concerned with productivity enhancement and cost-effectiveness.”

    Encouraging more Black students to study STEM

    Moore was active in STEM education for minorities, as explored in an article about her published by the American Physical Society. She brought her skills and experience to volunteer activities, intending to produce more STEM professionals who looked like her. She was involved in community science and math programs in Ann Arbor, sponsored by The Links, a service organization for Black women. She also was active with Delta Sigma Theta, a historically Black, service-oriented sorority. She volunteered with the Saturday Academy, a community mentoring program that focuses on developing college-bound students’ life skills. Volunteers also provide subject matter instruction.

    She advised minority engineering students: “There will always be prejudiced people; you’ve got to be prepared to survive in spite of their attitudes.” Black students she encountered recall her oft-repeated mantra: “You’ve got to be excellent!”

    In a posthumous tribute essay about Moore, Walton recalled befriending her at the Saturday Academy while tutoring middle and high school students in science and mathematics.

    “Don Coleman, the former associate provost at Howard University and a good friend of mine,” Walton wrote, “noted that Dr. Hobbs Moore had tutored him when he was an engineering student at the University of Michigan. [Coleman] recalled that she taught the fundamentals and always made him feel as though she was merely reminding him of what he already knew rather than teaching him unfamiliar things.”

    Walton recalled how dedicated Moore was to ensuring Black students were prepared to follow in her footsteps. He said she was a mainstay at the Saturday Academy until her 24-year battle with cancer made it impossible for her to continue.

    She was posthumously honored with the Bouchet Award at the National Conference of Black Physics Students in 1995. Edward A. Bouchet was the first Black person to earn a Ph.D. in a science (physics) in the United States.

    Walton, who said he admired Moore for her determination to light the way for succeeding generations, says the programs that helped him as a young student are no longer being pursued with the fervor they once were.

    “Particularly right now,” he told the American Institute of Physics in 2024, “we’re seeing a retrenchment, a backlash against programs and initiatives that deal with the historical underrepresentation of women and other people who we know have a history in the United States of being excluded. And if we don’t have interventions in place, there’s nothing to say that it won’t continue.” In the interview, Walton said he is concerned that instead of there being more STEM professionals like Moore, there might be fewer.

    A lasting legacy

    Moore’s life is a testament to perseverance, excellence, and the power of mentorship. Her achievements prove that it’s possible to overcome the inertia of low societal expectations and improve the world.

    Willie Hobbs Moore—You’ve Got to Be Excellent! is available for free to IEEE members. The nonmember price is US $2.99.

  • Video Friday: PARTNR
    by Evan Ackerman on 14. February 2025. at 17:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
    German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
    European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
    ICRA 2025: 19–23 May 2025, ATLANTA, GA
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
    RSS 2025: 21–25 June 2025, LOS ANGELES
    ETH Robotics Summer School: 21–27 June 2025, GENEVA
    IAS 2025: 30 June–4 July 2025, GENOA, ITALY
    ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
    IEEE World Haptics: 8–11 July 2025, SUWON, KOREA

    Enjoy today’s videos!

    There is an immense amount of potential for innovation and development in the field of human-robot collaboration — and we’re excited to release Meta PARTNR, a research framework that includes a large-scale benchmark, dataset and large planning model to jump start additional research in this exciting field.

    [ Meta PARTNR ]

    Humanoid is the first AI and robotics company in the UK, creating the world’s leading, commercially scalable, and safe humanoid robots.

    [ Humanoid ]

    To complement our review paper, “Grand Challenges for Burrowing Soft Robots,” we present a compilation of soft burrowers, both organic and robotic. Soft organisms use specialized mechanisms for burrowing in granular media, which have inspired the design of many soft robots. To improve the burrowing efficacy of soft robots, there are many grand challenges that must be addressed by roboticists.

    [ Faboratory Research ] at [ Yale University ]

    Three small lunar rovers were packed up at NASA’s Jet Propulsion Laboratory for the first leg of their multistage journey to the Moon. These suitcase-size rovers, along with a base station and camera system that will record their travels on the lunar surface, make up NASA’s CADRE (Cooperative Autonomous Distributed Robotic Exploration) technology demonstration.

    [ NASA ]

    MenteeBot V3.0 is a fully vertically integrated humanoid robot, with full-stack AI and proprietary hardware.

    [ Mentee Robotics ]

    What do assistance robots look like? From robotic arms attached to a wheelchair to autonomous robots that can pick up and carry objects on their own, assistive robots are making a real difference to the lives of people with limited motor control.

    [ Cybathlon ]

    Robots cannot perform reactive manipulation, and they mostly operate open-loop while interacting with their environment. Consequently, current manipulation algorithms are either very inefficient or work only in highly structured environments. In this paper, we present closed-loop control of a complex manipulation task in which a robot uses a tool to interact with objects.

    [ Paper ] via [ Mitsubishi Electric Research Laboratories ]

    Thanks, Yuki!

    When the future becomes the present, anything is possible. In our latest campaign, “The New Normal,” we highlight the journey our riders experience from first seeing Waymo to relishing in the magic of their first ride. How did your first-ride feeling change the way you think about the possibilities of AVs?

    [ Waymo ]

    One of a humanoid robot’s unique advantages lies in its bipedal mobility, allowing it to navigate diverse terrains with efficiency and agility. This capability enables Moby to move freely through various environments and assist with high-risk tasks in critical industries like construction, mining, and energy.

    [ UCR ]

    Although robots are just tools to us, it’s still important to make them somewhat expressive so they can better integrate into our world. So, we created a small animation of the robot waking up—one that it executes all by itself!

    [ Pollen Robotics ]

    In this live demo, an OTTO AMR expert will walk through the key differences between AGVs and AMRs, highlighting how OTTO AMRs address challenges that AGVs cannot.

    [ OTTO ] by [ Rockwell Automation ]

    This Carnegie Mellon University Robotics Institute Seminar is from CMU’s Aaron Johnson, on “Uncertainty and Contact with the World.”

    As robots move out of the lab and factory and into more challenging environments, uncertainty in the robot’s state, dynamics, and contact conditions becomes a fact of life. In this talk, I’ll present some recent work in handling uncertainty in dynamics and contact conditions, in order to both reduce that uncertainty where we can but also generate strategies that do not require perfect knowledge of the world state.

    [ CMU RI ]

  • Are You Ready to Let an AI Agent Use Your Computer?
    by Eliza Strickland on 13. February 2025. at 14:00



    Two years after the generative AI boom really began with the launch of ChatGPT, it no longer seems that exciting to have a phenomenally helpful AI assistant hanging around in your web browser or phone, just waiting for you to ask it questions. The next big push in AI is for AI agents that can take action on your behalf. But while agentic AI has already arrived for power users like coders, everyday consumers don’t yet have these kinds of AI assistants.

    That will soon change. Anthropic, Google DeepMind, and OpenAI have all recently unveiled experimental models that can use computers the way people do—searching the web for information, filling out forms, and clicking buttons. With a little guidance from the human user, they can do things like order groceries, call an Uber, hunt for the best price for a product, or find a flight for your next vacation. And while these early models have limited abilities and aren’t yet widely available, they show the direction that AI is going.

    “This is just the AI clicking around,” said OpenAI CEO Sam Altman in a demo video as he watched the OpenAI agent, called Operator, navigate to OpenTable, look up a San Francisco restaurant, and check for a table for two at 7pm.

    Zachary Lipton, an associate professor of machine learning at Carnegie Mellon University, notes that AI agents are already being embedded in specialized software for different types of enterprise customers such as salespeople, doctors, and lawyers. But until now, we haven’t seen AI agents that can “do routine stuff on your laptop,” he says. “What’s intriguing here is the possibility of people starting to hand over the keys.”

    AI Agents from Anthropic, Google DeepMind, and OpenAI

    Anthropic was the first to unveil this new functionality, with an announcement in October that its Claude chatbot can now “use computers the way humans do.” The company stressed that it was giving the models this capability as a public beta test, and that it’s only available to developers who are building tools and products on top of Anthropic’s large language models. Claude navigates by viewing screenshots of what the user sees and counting the pixels required to move the cursor to a certain spot for a click. A spokesperson for Anthropic says that Claude can do this work on any computer and within any desktop application.

    Next out of the gate was Google DeepMind with its Project Mariner, built on top of Google’s Gemini 2 language model. The company showed Mariner off in December but called it an “early research prototype” and said it’s only making the tool available to “trusted testers” for now. As another precaution, Mariner currently only operates within the Chrome browser, and only within an active tab, meaning that it won’t run in the background while you work on other tasks. While this requirement seems to somewhat defeat the purpose of having a time-saving AI helper, it’s likely just a temporary condition for this early stage of development.

    Finally, in January OpenAI launched its computer-use agent (CUA), called Operator. OpenAI called it a “research preview” and made it available only to users who pay US $200 per month for OpenAI’s premium service, though the company said it’s working toward broader release. Yash Kumar, an engineer on the Operator team, says the tool can work with essentially any website. “We’re starting with the browser because this is where the majority of work happens,” Kumar says. But he notes that “the CUA model is also trained to use a computer, so it’s possible we could expand it” to work with other desktop apps.

    Like the others, Operator relies on chain-of-thought reasoning to take instructions and break them down into a series of tasks that it can complete. If it needs more information to complete a task—for example, whether you prefer red or yellow onions—it will pause and ask for input. It also asks for confirmation before taking a final step, such as booking the restaurant table or placing the grocery order.
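
    To make that flow concrete, here is a minimal, hypothetical sketch of the plan, pause-for-input, and confirm-before-acting loop described above. None of the function names, steps, or the grocery example below correspond to a real vendor API; they are placeholders used only to illustrate the control flow.

        # Hypothetical sketch of a computer-use agent's control loop.
        # The planner and its steps are stand-ins, not OpenAI's Operator API.
        from dataclasses import dataclass

        @dataclass
        class Step:
            action: str   # "navigate", "ask_user", "add_to_cart", "confirm", ...
            detail: str

        def plan_steps(goal: str) -> list[Step]:
            """Stand-in for the model's breakdown of a user goal into tasks."""
            return [
                Step("navigate", "open the grocery site"),
                Step("ask_user", "red or yellow onions?"),
                Step("add_to_cart", "onions"),
                Step("confirm", "place the order"),
            ]

        def run_agent(goal: str) -> None:
            for step in plan_steps(goal):
                if step.action == "ask_user":
                    answer = input(f"Agent needs input ({step.detail}): ")
                    print(f"Noted: {answer}")
                elif step.action == "confirm":
                    if input(f"Confirm final step, {step.detail}? (y/n) ") != "y":
                        print("Stopped before the final action.")
                        return
                    print("Final step executed.")
                else:
                    print(f"[agent] {step.action}: {step.detail}")

        if __name__ == "__main__":
            run_agent("order groceries")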

    Safety Concerns for Computer-Use Agents

    Here are some things that computer-use agents can’t yet do: log in to sites, agree to terms of service, solve captchas, and enter credit card or other payment details. If an agent comes up against one of these roadblocks, it hands the steering wheel back to the human user. OpenAI notes that Operator doesn’t take screenshots of the browser while the user is entering login or payment information.

    The three companies have all noted that putting an AI in charge of your computer could pose safety risks. Anthropic has specifically raised the concern of prompt injection attacks, or ways in which malicious actors can add something to the user’s prompt to make the model take an unexpected action. “Since Claude can interpret screenshots from computers connected to the internet, it’s possible that it may be exposed to content that includes prompt injection attacks,” Anthropic wrote in a blog post.

    CMU’s Lipton says that the companies haven’t revealed much information about the computer-use agents and how they work, so it’s hard to assess the risks. “If someone is getting your computer operator to do something nefarious, does that mean they already have access to your computer?” he wonders, and if so, why wouldn’t the miscreant just take action directly?

    Still, Lipton says, with all the actions we take and purchases we make online, “It doesn’t require a wild leap of imagination to imagine actions that would leave the user in a pickle.” For example, he says, “Who will be the first person who wakes up and says, ‘My [agent] bought me a fleet of cars?’”

    The Future of Computer-Use Agents

    While none of the companies have revealed a timeline for making their computer-use agents broadly available, it seems likely that consumers will begin to get access to them this year—either through the big AI companies or through startups creating cheaper knockoffs.

    OpenAI’s Kumar says it’s an exciting time, and that Operator marks a step toward a more collaborative future for humans and AI. “It’s a stepping stone on our path to AGI,” he says, referring to the long-promised dream/nightmare of artificial general intelligence. “The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks.”

    If you remember the prescient 2013 movie Her, it seems like we’re edging toward the world that existed at the beginning of the film, before the sultry-voiced Samantha began speaking into the protagonist’s ear. It’s a world in which everyone has a boring and neutral AI to help them read and respond to messages and take care of other mundane tasks. Once the AI companies solidly achieve that goal, they’ll no doubt start working on Samantha.

  • IEEE Unveils the 2025–2030 Strategic Plan
    by IEEE on 12. February 2025. at 19:00



    IEEE’s 2020–2025 strategic plan set direction for the organization and informed its efforts over the last four years. The IEEE Board of Directors, supported by the IEEE Strategy and Alignment Committee, has updated the goals of the plan, which now covers 2025 through 2030. Even though the goals have been updated, IEEE’s mission and vision remain constant.

    The 2025–2030 IEEE Strategic Plan’s six new goals focus on furthering the organization’s role as a leading trusted source, driving technological innovation ethically and with integrity, enabling interdisciplinary opportunities, inspiring future generations of technologists, further engaging the public, and empowering technology professionals throughout their careers.

    Together with IEEE’s steadfast mission, vision, and core values, the plan will guide the organization’s priorities.

    “The IEEE Strategic Plan provides the ‘North Star’ for IEEE activities,” says Kathleen Kramer, 2025 IEEE president and CEO. “It offers aspirational, guiding priorities to steer us for the near future. IEEE organizational units are aligning their initiatives to these goals so we may move forward as One IEEE.”

    Input from a variety of groups

    To gain input for the new strategic plan from the IEEE community, in-depth stakeholder interviews were conducted with the Board of Directors, senior professional staff leadership, young professionals, students, and others. IEEE also surveyed more than 35,000 individuals including volunteers; members and nonmembers; IEEE young professionals and student members; and representatives from industry. In-person focus groups were conducted in eight locations around the globe.

    The goals were ideated through working sessions with the IEEE directors, directors-elect, senior professional staff leadership, and the IEEE Strategy and Alignment Committee, culminating with the Board approving them at its November 2024 meeting.

    These six new goals will guide IEEE through the near future:

    • Advance science and technology as a leading trusted source of information for research, development, standards, and public policy
    • Drive technological innovation while promoting scientific integrity and the ethical development and use of technology
    • Provide opportunities for technology-related interdisciplinary collaboration, research, and knowledge sharing across industry, academia, and government
    • Inspire intellectual curiosity and support discovery and invention to engage the next generation of technology innovators
    • Expand public awareness of the significant role that engineering, science, and technology play across the globe
    • Empower technology professionals in their careers through ongoing education, mentoring, networking, and lifelong engagement

    Work on the next phase is ongoing and is designed to guide the organization in cascading the goals into tactical objectives to ensure that organizational unit efforts align with the holistic IEEE strategy. Aligning organizational unit strategic planning with the broader IEEE Strategic Plan is an important next step.

    In delivering on its strategic plan, IEEE will continue to foster a collaborative environment that is open, inclusive, and free of bias. The organization also will continue to sustain its strength, reach, and vitality for future generations and to ensure its role as a 501(c)(3) public charity.

  • Dual-Arm HyQReal Puts Powerful Telepresence Anywhere
    by Evan Ackerman on 11. February 2025. at 16:00



    In theory, one of the main applications for robots should be operating in environments that (for whatever reason) are too dangerous for humans. I say “in theory” because in practice it’s difficult to get robots to do useful stuff in semi-structured or unstructured environments without direct human supervision. This is why there’s been some emphasis recently on teleoperation: Human software teaming up with robot hardware can be a very effective combination.

    For this combination to work, you need two things. First, an intuitive control system that lets the user embody themselves in the robot to pilot it effectively. And second, a robot that can deliver the kind of embodiment that the human pilot needs. The second bit is the more challenging of the two, because humans have very high standards for mobility, strength, and dexterity. But researchers at the Italian Institute of Technology (IIT) have a system that manages to check both boxes, thanks to its enormously powerful quadruped, which now sports a pair of massive arms on its head.


    “The primary goal of this project, conducted in collaboration with INAIL, is to extend human capabilities to the robot, allowing operators to perform complex tasks remotely in hazardous and unstructured environments to mitigate risks to their safety by exploiting the robot’s capabilities,” explains Claudio Semini, who leads the Robot Teleoperativo project at IIT. The project is based around the HyQReal hydraulic quadruped, the most recent addition to IIT’s quadruped family.

    Hydraulics have been very visibly falling out of favor in robotics, because they’re complicated and messy, and in general robots don’t need the absurd power density that comes with hydraulics. But there are still a few robots in active development that use hydraulics specifically because of all that power. If your robot needs to be highly dynamic or lift really heavy things, hydraulics are, at least for now, where it’s at.

    IIT’s HyQReal quadruped is one of those robots. If you need something that can carry a big payload, like a pair of massive arms, this is your robot. Back in 2019, we saw HyQReal pulling a three-tonne airplane. HyQReal itself weighs 140 kilograms, and its knee joints can output up to 300 newton-meters of torque. The hydraulic system is powered by onboard batteries and can provide up to 4 kilowatts of power. It also uses some of Moog’s lovely integrated smart actuators, which sadly don’t seem to be in development anymore. Beyond just lifting heavy things, HyQReal’s mass and power make it a very stable platform, and its aluminum roll cage and Kevlar skin ensure robustness.

    The HyQReal hydraulic quadruped is tethered for power during experiments at IIT, but it can also run on battery power. Photo: IIT

    The arms that HyQReal is carrying are IIT-INAIL arms, which weigh 10 kg each and have a payload of 5 kg per arm. To put that in perspective, the maximum payload of a Boston Dynamics Spot robot is only 14 kg. The head-mounted configuration of the arms means they can reach the ground, and they also have an overlapping workspace to enable bimanual manipulation, which is enhanced by HyQReal’s ability to move its body to assist the arms with their reach. “The development of core actuation technologies with high power, low weight, and advanced control has been a key enabler in our efforts,” says Nikos Tsagarakis, head of the HHCM Lab at IIT. “These technologies have allowed us to realize a low-weight bimanual manipulation system with high payload capacity and large workspace, suitable for integration with HyQReal.”

    Maximizing reachable space is important, because the robot will be under the remote control of a human, who probably has no particular interest in or care for mechanical or power constraints—they just want to get the job done.

    To get the job done, IIT has developed a teleoperation system, which is weird-looking because it features a very large workspace that allows the user to leverage more of the robot’s full range of motion. Having tried a bunch of different robotic telepresence systems, I can vouch for how important this is: It’s super annoying to be doing some task through telepresence, and then hit a joint limit on the robot and have to pause to reset your arm position. “That is why it is important to study operators’ quality of experience. It allows us to design the haptic and teleoperation systems appropriately because it provides insights into the levels of delight or frustration associated with immersive visualization, haptic feedback, robot control, and task performance,” confirms Ioannis Sarakoglou, who is responsible for the development of the haptic teleoperation technologies in the HHCM Lab. The whole thing takes place in a fully immersive VR environment, of course, with some clever bandwidth optimization inspired by the way humans see that transmits higher-resolution images only where the user is looking.
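
    The gaze-dependent streaming mentioned above can be sketched in a few lines. To be clear, this is not IIT’s implementation; it is a generic, hypothetical foveated-quality rule in which image tiles near the measured gaze point are encoded at full quality while the periphery is compressed more heavily to save bandwidth.

        # Generic foveated-streaming rule (illustrative sketch, not IIT's code):
        # tiles near the gaze point keep full quality; peripheral tiles are
        # encoded at a lower quality factor to reduce bandwidth.
        def tile_quality(tile_center, gaze, fovea_radius_px=200,
                         full_quality=1.0, peripheral_quality=0.2):
            dx = tile_center[0] - gaze[0]
            dy = tile_center[1] - gaze[1]
            inside_fovea = (dx * dx + dy * dy) ** 0.5 < fovea_radius_px
            return full_quality if inside_fovea else peripheral_quality

        print(tile_quality((960, 540), gaze=(1000, 500)))  # near the gaze point -> 1.0
        print(tile_quality((100, 100), gaze=(1000, 500)))  # periphery -> 0.2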

    HyQReal’s telepresence control system offers haptic feedback and a large workspace. Photo: IIT

    Telepresence Robots for the Real World

    The system is designed to be used in hazardous environments where you wouldn’t want to send a human. That’s why IIT’s partner on this project is INAIL, Italy’s National Institute for Insurance Against Accidents at Work, which is understandably quite interested in finding ways in which robots can be used to keep humans out of harm’s way.

    In tests with Italian firefighters in 2022 (using an earlier version of the robot with a single arm), operators were able to use the system to extinguish a simulated tunnel fire. It’s a good first step, but Semini has plans to push the system to handle “more complex, heavy, and demanding tasks, which will better show its potential for real-world applications.”

    As always with robots and real-world applications, there’s still a lot of work to be done, Semini says. “The reliability and durability of the systems in extreme environments have to be improved,” he says. “For instance, the robot must endure intense heat and prolonged flame exposure in firefighting without compromising its operational performance or structural integrity.” There’s also managing the robot’s energy consumption (which is not small) to give it a useful operating time, and managing the amount of information presented to the operator to boost situational awareness while still keeping things streamlined and efficient. “Overloading operators with too much information increases cognitive burden, while too little can lead to errors and reduce situational awareness,” says Yonas Tefera, who led the development of the immersive interface. “Advances in immersive mixed-reality interfaces and multimodal controls, such as voice commands and eye tracking, are expected to improve efficiency and reduce fatigue in the future.”

    There’s a lot of crossover here with the goals of the DARPA Robotics Challenge for humanoid robots, except IIT’s system is arguably much more realistically deployable than any of those humanoids are, at least in the near term. While we’re just starting to see the potential of humanoids in carefully controlled environments, quadrupeds are already out there in the world, proving how reliable their four-legged mobility is. Manipulation is the obvious next step, but it has to be more than just opening doors. We need it to use tools, lift debris, and all that other DARPA Robotics Challenge stuff that will keep humans safe. That’s what Robot Teleoperativo is trying to make real.

    You can find more detail about the Robot Teleoperativo project in this paper, presented in November at the 2024 IEEE Conference on Telepresence, in Pasadena, Calif.

  • Celebrating Steve Jobs’s Impact on Consumer Tech and Design
    by San Murugesan on 10. February 2025. at 19:00



    Although Apple cofounder Steve Jobs died on 5 October 2011 at age 56, his legacy endures. His name remains synonymous with innovation, creativity, and the relentless pursuit of excellence. As a pioneer in technology and design, Jobs dared to imagine the impossible, transforming industries and reshaping human interaction with technology. His work continues to inspire engineers, scientists, and technologists worldwide. His contributions to technology, design, and human-centric innovation shape the modern world.

    On the eve of what would have been his 70th birthday, 24 February, I examine his legacy, its contemporary relevance, and the enduring lessons that can guide us toward advancing technology for the benefit of humanity.

    Jobs’s lasting impact: A revolution in technology

    Jobs was more than a successful tech entrepreneur; he was a visionary who changed the world through his unyielding drive for innovation. He revolutionized many areas including computing, telecommunications, entertainment, and design. The products and services he pioneered have become integral to modern life and form the foundation for further technological advancements.

    Celebrated for his vision, he also was criticized for his short temper, impatience, and lack of empathy. His autocratic, demanding leadership alienated colleagues and caused conflicts. But those traits also fueled innovation and offered lessons in both leadership pitfalls and aspirations.

    Here are some of his most iconic innovations and contributions, as I see them.

    The Macintosh, the iPhone, the iPad, and more

    The Macintosh, introduced in 1984, was the first commercially successful personal computer to feature a graphical user interface, built-in screen, and mouse. It made computers that followed it more accessible and user-friendly, and it sparked a revolution in personal and business computing.

    Pixar Animation Studios, launched in 1986, became a creative powerhouse, revolutionizing animated storytelling with films such as Toy Story and Finding Nemo.

    The iPod—which came out in 2001—and its accompanying iTunes store transformed the music industry by offering a seamless, legal way to purchase songs and albums and then store them digitally. It redefined music consumption. By combining hardware innovation with a revolutionary ecosystem, Jobs proved that technology could disrupt established industries and create value for creators and users.

    The iPhone, which was launched in 2007, integrated a telephone, a music player, and connectivity to the Internet. It revolutionized mobile phone technology and reshaped global communication. The device set the minimum standard for smartphones that other manufacturers have now adopted.

    The iPad, introduced in 2010, pioneered a new era in mobile computing, enhancing content consumption, creativity, and productivity.

    Apple Park redefined the high-tech corporate campus. One of the final projects he proposed was the construction of a circular corporate campus in Cupertino, Calif. Nicknamed The Spaceship when it opened in 2017, the facility housed 12,000 employees in a four-story building with underground parking. A whopping 80 percent of the grounds were landscaped. “It’s curved all the way around,” Jobs said. “There’s not a straight piece of glass in this building.”

    As Simon Sadler, a design professor at the University of California, Davis, outlined in a Places Journal article, Jobs also was an architect.

    Jobs demonstrated the value of amalgamating technology, art, and user-centric design. His legacy and philosophy exemplify simplicity, elegance, and functionality.

    Five core lessons from Jobs’s life and work

    As outlined in my 2021 article in The Institute about lessons I’ve learned from him, Jobs’s life and career offer valuable insights for technologists, developers, marketers, and business leaders. In today’s rapidly evolving technological landscape, five key lessons from his legacy remain particularly relevant.

    1. Innovation requires bold vision and risk-taking. Jobs created products people didn’t even realize they needed until they experienced them. He famously said, “It’s not the customer’s job to know what they want.” His work demonstrates that innovation can come from taking calculated risks and pushing boundaries. Further, Jobs fostered continuous innovation rather than resting on successful products, and he pushed Apple to keep improving and reinventing. I learned to envision possibilities beyond current limitations and create solutions that shape the future.

    2. Simplicity is the ultimate sophistication. Jobs championed minimalism and clarity in design and user interface, recognizing that simplicity enhances usability. His approach underscores the importance of user experience. Regardless of technological sophistication, a product’s success depends on its accessibility, intuitive design, and value to users. His lesson for technologists and developers is to strip away complexity and focus on what truly matters.

    3. Passion and persistence drive success. Jobs’s career was marked by major setbacks and unequivocal triumphs. A thought-provoking question remains: Why did Apple’s board fire him in 1985 despite his immense potential? As Michael Schlossberg explains, the reasons are complex but, in essence, it boils down to a significant disagreement between Jobs, CEO John Sculley, whom Jobs hired, and the board. As Schlossberg underscores, the episode serves as an excellent case study in corporate governance.

    After being ousted from Apple, Jobs founded NeXT and led Pixar before returning to Apple in 1997 to orchestrate one of history’s most remarkable corporate turnarounds.

    The story highlights the value of resilience and passion. For engineers and scientists, perseverance is crucial, as failure often precedes breakthroughs in research, development, and innovation.

    4. Technology must serve the users. Jobs was committed to creating technology that seamlessly integrates into human life. His principle was that technology must serve a purpose that meets human needs. It’s a goal that motivates engineers and technologists to focus their innovations in AI, robotics, biotechnology, and other areas, addressing human needs while considering ethical and societal implications.

    5. Challenge conventional thinking. Apple’s Think Different campaign encapsulated Jobs’s philosophy: Challenge norms, question limitations, and pursue unconventional ideas that can change the world. His vision encourages researchers and engineers to push boundaries and explore new frontiers.

    Jobs’s early insights on AI

    Decades before artificial intelligence became mainstream, Jobs anticipated its transformative potential. In a 1983 speech at the International Design Conference in Aspen, Colo., he predicted AI-driven systems would fundamentally reshape daily life. His vision aligns closely with today’s advancements in generative AI.

    Jobs saw books as a powerful but static medium that lacked interactivity. He envisioned interactive tools that would allow deeper engagement with the text—asking questions and exploring the author’s thoughts beyond the written words.

    In 1985, Jobs envisioned the creation of a new kind of interactive tool, what we consider today to be artificial intelligence.

    In a video of that talk, he said: “Someday, a student will be able to not only read Aristotle’s words but also ask Aristotle a question and receive an answer.”

    Beyond interactivity, Jobs anticipated advances in brain-inspired AI systems. He believed computing, which was facing roadblocks, would evolve by understanding the brain’s architecture, and he predicted imminent breakthroughs. His early advocacy for AI-driven technologies—such as speech recognition, computer vision, and natural language processing—culminated in Apple’s 2010 acquisition of Siri, bringing AI-powered personal assistance into daily life.

    With AI-driven chatbots and Apple Intelligence, Jobs’s vision of seamless, user-centric AI has become a reality. He saw computers as “bicycles for the mind”—tools to amplify human capabilities. If he were still alive, he likely would be at the forefront of AI innovation, ensuring it enhances creativity, decision-making, and human intelligence.

    “As a pioneer in technology and design, Steve Jobs dared to imagine the impossible, transforming industries and reshaping human interaction with technology.”

    Jobs’s approach to AI would extend far beyond functionality. I think he would prioritize humanizing AI—infusing it with emotional intelligence to understand and respond to human emotions authentically. Whether through AI companions for the elderly or empathetic customer service agents, I believe he would have pushed AI to foster more meaningful connections with users.

    Furthermore, he likely would have envisioned AI seamlessly integrated into daily life, not as a detached digital assistant but as an adaptive and intuitive extension of user needs. Imagine an AI-driven device that learns about its user while safeguarding privacy—automatically adjusting its interface based on mood, location, and context to create effortless and natural interactions.

    Jobs’s enduring legacy in shaping personal computing suggests that had he lived to witness the ongoing AI revolution, he would have played a pivotal role in shaping it as a tool for human advancement and creativity. He would have championed AI as a tool for empowerment rather than alienation.

  • Gyroscope-on-a-Chip Targets GPS’s Dominance
    by Willie D. Jones on 10. February 2025. at 18:00



    This year, two companies—Santa Clara, California-based Anello Photonics and Montreal-based One Silicon Chip Photonics (OSCP)—have introduced new gyroscope-on-a-chip navigation systems, allowing for precise heading and distance tracking without satellite signals.

    Such inertial navigation is increasingly important today, because GPS is susceptible to jamming and spoofing, which can disrupt navigation or provide misleading location data. These problems have been well-documented in conflict zones, including Ukraine and the Middle East, where military operations have faced significant GPS interference. For drones, which rely on GPS for positioning, the loss of signal can be catastrophic, leaving them unable to navigate and sometimes resulting in crashes.

    Optical gyroscopes have long been seen as an alternative navigation technology to satellite-based global navigation systems. Larger-sized units like ring-laser gyroscopes have been around since the 1970s. However, shrinking these devices down to chip-size was far easier said than done.

    The optical gyroscopes produced starting in the mid-1970s had difficulties maintaining the necessary optical signal strength for precise rotation sensing. Shrinking them only made the signal-to-noise ratio worse. So, as most microelectronic devices followed the miniaturization pathway described by Moore’s Law, light-based gyroscopes remained big, bulky, and power hungry.

    “If you drive 100 kilometers, the system’s distance measurement will be accurate to within 100 meters, or 0.1 percent of the distance traveled.” Mario Paniccia, Anello Photonics

    That was the state of things until Caltech electrical engineering and medical engineering professor Ali Hajimiri and his team made a breakthrough that overcame previous size and accuracy limitations. In a 2018 paper, they described how they created a solid-state gyroscope small enough to fit on a grain of rice. Like the optical gyroscopes that appeared before it, this gyroscope leveraged the Sagnac effect, a principle first demonstrated in 1913 by French physicist Georges Sagnac.

    The Sagnac effect occurs when a beam of light is split into two and sent in opposite directions along a circular path. If the device rotates, one beam reaches the detector ahead of the other, allowing precise measurement of the rotation angle. Because this method does not rely on external signals, it is immune to electromagnetic interference, vibration, and cyberattacks via an open communication channel—making it an ideal solution for applications where GPS is unreliable or completely denied.
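
    For reference, the underlying relation (standard physics, not spelled out in the article): for light of wavelength λ traveling around a loop that encloses area A and rotates at angular rate Ω, the time difference between the counterpropagating beams and the corresponding phase shift are

        \Delta t = \frac{4 A \Omega}{c^{2}}, \qquad \Delta\phi = \frac{2\pi c}{\lambda}\,\Delta t = \frac{8\pi A \Omega}{\lambda c}.

    Because the phase shift scales with the enclosed area A, shrinking the optical path directly weakens the signal, which is the size-versus-sensitivity trade-off noted earlier.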

    By introducing a technique for eliminating noise, Hajimiri and his colleagues were able to create an optical gyroscope that was one five-hundredth the size of commercially available fiber-optic gyroscopes and comparable in sensitivity.

    This pocket-size, chip-based optical gyroscope from Anello Photonics is just as accurate as bulkier versions. Photo: Anello Photonics


    Anello and OSCP Enter the Market

    Less than a decade after Hajimiri’s breakthrough, Anello Photonics and OSCP are now looking to reshape the navigation market with their gyroscope-based systems. They have introduced further refinements that allow more miniaturization without diminishing the gyroscopes’ effectiveness. Anello’s low-loss silicon nitride waveguides allow light to circulate longer within the gyroscope, improving signal strength and reducing error accumulation. Anello’s techniques further suppress other noise sources, so a smaller waveguide holding less light—and therefore a fainter signal—is still sufficient for accurate rotation readings.

    The result, says Mario Paniccia, Anello CEO, was showcased at CES in 2024 and earlier this year. Paniccia explains that his company’s inertial measurement units (IMUs), which consist of three chip-based gyroscopes and additional components, fit in the palm of a person’s hand. They deliver high precision for multiple applications, including agriculture, where autonomous tractors must maintain perfectly straight furrows for up to 800 meters. Longer distances are also no problem for the navigation system, he says. “If you drive 100 kilometers,” says Paniccia, “the system’s distance measurement will be accurate to within 100 meters, or 0.1 percent of the distance traveled.”

    OSCP is also making strides in miniaturized navigation technology. At CES 2025, OSCP founder and CEO Kazem Zandi unveiled an upgraded multi-gyroscope IMU that is half the size of its predecessor. “It’s not only smaller, but also more power efficient and less expensive,” said Zandi at the Las Vegas–based tech expo. “These gyroscopes can provide dead reckoning with location accuracy to within centimeters.”

    Anello’s and OSCP’s IMUs are designed to work alongside GPS, constantly monitoring location inputs. If GPS interference is detected, artificial intelligence (AI) within either company’s system automatically shifts navigation control to the gyroscopes. “If, for example, you’re in New York,” explains Paniccia, “and the gyroscopes indicate that you’ve traveled 100 meters forward, but the GPS says you’re now in Texas, the algorithms know to port control over to the [gyroscope].”

    According to Paniccia, Anello’s latest system was designed specifically for unmanned underwater and surface vehicles operating in the vastness of the open ocean, where no landmarks exist to assist navigation. “In the ocean, everything looks the same,” he says. In the maritime space, in which currents make navigation more complicated than tracking location on land or in the air, Paniccia says the Anello device’s location error is more like 3 or 4 percent of the distance traveled.

    Paniccia says he envisions a future where miniaturized gyroscopes could be embedded into handheld devices for firefighters, allowing them to navigate through smoke-filled buildings where stairways and exits are no longer visible. “It will essentially be an electronic compass,” he says.

  • It’s Time To Rethink 6G
    by William Webb on 10. February 2025. at 14:00



    Is the worldwide race to keep expanding mobile bandwidth a fool’s errand? Could maximum data speeds—on mobile devices, at home, at work—be approaching “fast enough” for most people for most purposes?

    These heretical questions are worth asking, because industry bandwidth tracking data has lately been revealing something surprising: Terrestrial and mobile-data growth is slowing down. In fact, absent a dramatic change in consumer tech and broadband usage patterns, data-rate demand appears set to top out below 1 billion bits per second (1 gigabit per second) in just a few years.

    This is a big deal. A presumption of endless growth in wireless and terrestrial broadband data rates has for decades been a key driver behind telecom research funding. To keep telecom’s R&D engine rooms revving, research teams around the world have delivered a seemingly endless succession of bandwidth-expanding technologies, such as 2G’s move to digital cell networks, 3G’s enhanced data-transfer capabilities, and 5G’s low-latency wireless connectivity.

    Yet present-day consumer usage appears set to throw a spanner in the works. Typical real-world 5G data rates today achieve up to 500 megabits per second for download speeds (and less for uploads). And some initial studies suggest 6G networks could one day supply data at 100 Gb/s. But the demand side of the equation suggests a very different situation.

    Mainstream consumer applications requiring more than 1 Gb/s border on the nonexistent.

    This is in part because mobile applications that need more than 15 to 20 Mb/s are rare, while mainstream consumer applications requiring more than 1 Gb/s border on the nonexistent.

    At most, meeting the demand for multiple simultaneous active applications and users requires bandwidth in the hundreds of Mb/s. To date, no new consumer technologies have emerged that would push demand much beyond the 1 Gb/s plateau.

    Yet wireless companies and researchers today still set their sights on a marketplace where consumer demand will gobble up as much bandwidth as can be provided by their mobile networks. The thinking here seems to be that if more bandwidth is available, new use cases and applications will spontaneously emerge to consume it.

    Is that such a foregone conclusion, though? Many technologies have had phases where customers eagerly embrace every improvement in some parameter—until a saturation point is reached and improvements are ultimately met with a collective shrug.

    Consider a very brief history of airspeed in commercial air travel. Passenger aircraft today fly at around 900 kilometers per hour—and have continued to traverse the skies at the same airspeed range for the past five decades. Although supersonic passenger aircraft found a niche from the 1970s through the early 2000s with the Concorde, commercial supersonic transport is no longer available for the mainstream consumer marketplace today.

    To be clear, there may still be niche use cases for many gigabits per second of wireless bandwidth—just as there may still be executives or world leaders who continue to look forward to spanning the globe at supersonic speeds.

    But what if the vast majority of 6G’s consumer bandwidth demand ultimately winds up resembling today’s 5G profile? It’s a possibility worth imagining.

    Consider a Bandwidth-Saturated World

    Transmitting high-end 4K video today requires 15 Mb/s, according to Netflix. Home broadband upgrades from, say, hundreds of Mb/s to 1,000 Mb/s (or 1 Gb/s) typically make little to no noticeable difference for the average end user. Likewise, for those with good 4G connectivity, 5G makes much less of an improvement on the mobile experience than advertisers like to claim—despite 5G networks being, according to Cisco, 1.4 to 14 times as fast as 4G.

    So, broadly, for a typical mobile device today, going much above 15 Mb/s borders on pointless. For a home, assuming two or three inhabitants all separately browsing or watching, somewhere between 100 Mb/s and 1 Gb/s marks the approximate saturation point beyond which further improvements become less and less noticeable, for most use cases.

    Probing a more extreme use case, one of the largest bandwidth requirements in recent consumer tech is Microsoft Flight Simulator 2024, whose “jaw-dropping bandwidth demand,” in the words of Windows Central, amounts to a maximum of 180 Mb/s.

    Stop to think about that for one moment. Here is a leading-edge tech product requiring less than one-fifth of 1 Gb/s, and such a voracious bandwidth appetite today is considered “jaw-dropping.”
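
    A rough back-of-the-envelope sum makes the same point. The 15 Mb/s and 180 Mb/s figures below come from the Netflix and Flight Simulator examples above; the video-call figure is an illustrative assumption, not a cited number.

        # Back-of-the-envelope check of the gigabit plateau for a busy household.
        # 15 Mb/s (4K stream) and 180 Mb/s (Flight Simulator 2024 peak) are cited
        # in the article; ~4 Mb/s per video call is an assumed, illustrative figure.
        household_mbps = {
            "three 4K streams": 3 * 15,
            "Flight Simulator 2024 peak": 180,
            "two video calls (assumed ~4 Mb/s each)": 2 * 4,
        }
        total = sum(household_mbps.values())
        for item, mbps in household_mbps.items():
            print(f"{item:40s} {mbps:5d} Mb/s")
        print(f"{'total':40s} {total:5d} Mb/s   (1 Gb/s = 1,000 Mb/s)")

    Even this deliberately heavy mix comes to roughly 233 Mb/s, well under a quarter of a gigabit per second.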

    But what about the need to “future proof” the world’s networks? Perhaps most mobile and terrestrial networks don’t need many-Gb/s connectivity now, say the bigger-is-always-better proponents. But the world will soon!

    For starters, then, what bandwidth-hogging technologies are today on the horizon?

    In September, Apple unveiled its iPhone 16, which CEO Tim Cook said would feature generative AI broadly “across [Apple] products.” Could Apple’s new AI capabilities perhaps be a looming, bandwidth-consuming dark horse?

    One high-bandwidth use case would involve the latest iPhone using the camera to recognize a scene and comment on what’s in it. However, that’s not dissimilar to Google Lens’s visual search feature, which hasn’t markedly changed network traffic. Indeed, this sort of feature, perhaps used a few times per day, could require bandwidth equivalent to a second or two of high-definition video. None of this would come close to saturating the general bandwidth capacities noted above.

    To play devil’s advocate a little more, consider a representative batch of five soon-to-be-scaled-up, potentially high-bandwidth consumer technologies that do already exist. Do any of them appear poised to generate the many-Gb/s demand that present-day net usage does not?

    What about autonomous cars, for instance? Surely they’ll need as much bandwidth as they can possibly be given.

    Yet, the precious few autonomous cars out in the world today are generally designed to work without much in the way of instantaneous Internet communication. And no autonomous tech around the bend appears set to change the equation substantially, concerning instantaneous bandwidth needs. The future of autonomy may be revolutionary and ultimately inevitable, but it doesn’t appear to require network connectivity much beyond a decent 5G connection.

    No new technology has emerged that demands network requirements much beyond what 4G and 5G already deliver.

    Much the same argument holds for the Internet of things (IoT), which is not expected to increase network traffic above what a decent 4G connection could yield.

    Holographic communications likewise offer no greater bandwidth sink than any of the above case studies do. For a typical user, holograms are in fact just stereographic video projections. So if a single 4K stream demands 15 Mb/s, then stereo 4K streams would require 30 Mb/s. Of course, sophisticated representations of entire 3D scenes for large groups of users interacting with one another in-world could conceivably push bandwidth requirements up. But at this point, we’re getting into Matrix-like imagined technologies without any solid evidence to suggest a good 4G or 5G connection wouldn’t meet the tech’s bandwidth demands.

    AI in general is the wild card in the deck. The mysterious future directions for this technology suggest that AI broadband and wireless bandwidth needs could conceivably exceed 1 Gb/s. But consider at least the known knowns in the equation: At the moment, present-day AI applications involve small amounts of prompt text or a few images or video clips sent to and from an edge device like a smartphone or a consumer tablet. Even if one allows for the prompt text and photo and video bandwidth requirements to dramatically expand from there, it seems unlikely to match or exceed the already strenuous requirements of a simple 4K video stream. Which, as noted above, would appear to suggest modest bandwidth demands in the range of 15 Mb/s.

    The metaverse, meanwhile, has flopped. But even if it picks up steam again tomorrow, current estimates of its bandwidth needs run from 100 Mb/s to 1 Gb/s—all within 5G’s range. Admittedly, the most aggressive longer-term forecasts for the metaverse suggest that cutting-edge applications could demand as much as 5 Gb/s bandwidth. And while it’s true that in January, Verizon delivered more than 5 Gb/s bandwidth in an experimental 5G network, that result is unlikely to be replicable for most consumers in most settings anytime soon.

    Yet, even allowing for the practical unreachability of 5 Gb/s speeds on a real-world 5G network, any imagined applications that might ultimately consume 5 Gb/s of bandwidth represent an extreme, and only the upper end of that subset might one day exceed the data speeds that present-day 5G tech delivers.

    I would argue, in other words, that no new technology has emerged that demands network requirements much beyond what 4G and 5G already deliver. So at this point future-proofing telecom in the anticipation of tens or more Gb/s of consumer bandwidth demand seems like expensive insurance being taken out against an improbable event.

    Consumers Have Already Discovered the Gigabit Plateau

    As can be seen in the charts below—excerpted from my book, The End of Telecoms History, and compiled from a mix of sources, including Cisco and Barclays Research—a downward trend in data growth has been evident for at least the past decade.

    The statistics being tracked in the charts “Growth of Mobile-Data Usage” and “Growth of Landline-Data Usage” may seem a little counterintuitive at first. But it’s important to clarify that these charts do not suggest that overall bandwidth usage is declining. Rather, the conclusion these charts lead to is that the rate of bandwidth growth is slowing.

    Chart: Approaching Zero Growth

    As mobile-data growth slows, the telecom industry faces a new reality. Current 5G mobile networks and terrestrial broadband networks meet most consumer needs, and providers of both have seen their data-usage growth decline in recent years.


    Let’s start with mobile data. Between 2015 and 2023, there’s a consistent decline in bandwidth growth of some 6 percent per year. The overall trend is a little harder to interpret in landline bandwidth data, because there’s a large COVID-related peak in 2020 and 2021. But even after accounting for this entirely understandable anomaly, the trend is that home and office broadband growth fell on average by around 3 percent per year between 2015 and 2023.

    Extrapolating the trends from both of these curves leads to the conclusion that data growth should fall to zero, or at least to a negligibly small rate, by around 2027.
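
    As a purely illustrative check of that extrapolation (the 2015 starting growth rate below is an assumed placeholder, not a figure taken from the charts): if annual growth starts at roughly 70 percent and falls by about 6 percentage points per year, it crosses zero around 2027.

        # Illustrative linear extrapolation only; the 2015 starting rate (70%)
        # is an assumed placeholder, not a value read off the book's charts.
        year, growth_pct, decline_per_year = 2015, 70.0, 6.0
        while growth_pct > 0:
            year += 1
            growth_pct -= decline_per_year
        print(f"Growth rate reaches zero around {year}")  # -> 2027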

    This is an unpopular conclusion. It runs contrary to the persistent drumbeat of a many-Gb/s future that telecom “experts” have been claiming for years. For example, in November 2023 the Biden White House published its spectrum strategy, which states, “According to one estimate, data traffic on macro cellular networks is expected to increase by over 250 percent in the next 5 years, and over 500 percent in the next 10 years.”

    Additionally, the Stockholm-based telecom company Ericsson recently predicted near-term “surge[s] in mobile data traffic.” And the United Kingdom’s telecommunications regulator, Ofcom, forecast a bandwidth growth rate of 40 percent for the foreseeable future.

    But, as shown in the charts here, many mobile and Internet users in the developed world seem to be accessing all the bandwidth they need. Data rates are no longer the constraining and determinative factor that they used to be.

    The need to continue developing faster and bigger networks may therefore be overplayed today. That chapter of the Internet’s history is arguably now over, or it soon will be.

    The Telecom Industry Will Be Shifting Gears, Too

    The implications of having enough coverage and bandwidth are most obvious in the equipment-supply industry.

    Major network suppliers may need to become accustomed to the new reality of data rates leveling out. Are Ericsson’s and Nokia’s recent layoffs and the bankruptcies of smaller suppliers (such as Airspan Networks) a harbinger of what’s coming for telecom markets?

    Operators are already investing less in 5G equipment and are likely already close to “maintenance only” spending. Most mobile and fixed operators have not seen revenue growth above inflation for many years but hold out hope that somehow this will turn around. Perhaps, though, if the numbers referenced here are to be believed, that turnaround isn’t coming.

    Illustration: Davide Comai

    Telecommunications has historically been a high-growth industry, but current trends suggest it’s heading toward something more static—more like a public utility, where in this case the public good is delivering data connectivity reliably. Extrapolating these trends, equipment suppliers won’t need to invest as much on bandwidth expansion but instead will focus on improving the margins on existing lines of products.

    Some degree of bandwidth expansion for 6G networks will still be necessary. The metaverse example above suggests a range of “ceiling heights” in the maximum Gb/s that users will demand in the years ahead. For most, 1 Gb/s still appears to be more than enough. For those who use high-end applications like future immersive virtual worlds, perhaps that ceiling is closer to 5 Gb/s. But concentrating research efforts on 6G deployments that can deliver 10 Gb/s and higher for everyone appears not to be grounded in any currently imaginable consumer technologies.

    To adjust to a potential new reality of operating their wireless networks on closer to utility-like or commodity-like terms, many telecom companies may face a future of restructuring and cost cutting. A useful analogy here is budget airlines, which thrive because most consumers select their airfare on the basis of cost. Similarly, future telecom companies may win a larger share of the customer base not through technological innovation but through price and customer service.

    To be clear, the need for new telecom research will continue. But with bandwidth expansion deprioritized, innovation will shift toward cheaper, more efficient, and more reliable ways to deliver existing services.

    If consumer demand for ever more mobile data continues to dry up, regulators would no longer need to find new spectrum bands for cellular every few years and then conduct auctions. Indeed, the demand for spectrum may abate across most areas. Regulators may also have to consider whether fewer operators may be better for a country, with perhaps only a single underlying fixed and mobile network in many places—just as utilities for electricity, water, gas, and the like are often structured around single (or a limited set of) operators.

    Finally, politicians will need to rethink their desire to be at the forefront of metrics such as homes connected by fiber, 5G deployment, or national leadership in 6G. That’s a bit like wanting to top the league table for the number of Ferraris per capita. Instead, the number of homes with sufficient connectivity and the percentage of the country covered by 10 Mb/s mobile may be better metrics to pursue as policy goals.

    Another area of research will surely involve widening coverage in underserved areas and regions of the world—while still keeping costs low with more environmentally friendly solutions. Outside of urban areas, broadband is sometimes slow, with mobile connectivity nonexistent. Even urban areas contain so-called not-spots, while indoor coverage can be particularly problematic, especially when the building is clad with materials that are near-impenetrable to radio waves.

    Broadly, there are two main ways for telecoms to close the current digital divide. The first is regulatory. Government funding, whether through new regulation or existing grants already on the books, can go to telecom providers in the many regions identified for broadband expansion. Indirect sources of funding should not be overlooked either—for instance, allowing operators to retain radio-spectrum license fees or to forgo auction fees.

    The second component is technological. Lower-cost rural telecom deployments could include satellite Internet deployments. Better indoor coverage can happen via private 5G networks or through improved access to existing and enhanced Wi-Fi.

    The above scenarios represent a major change of direction—from an industry built around innovating a new mobile generation every decade toward an industry focused on delivering lower prices and increased reliability. The coming 6G age might not be what telecom forecasters imagine. Its dawn may not herald a bold summit push toward 10 Gb/s and beyond. Instead, the 6G age could usher in something closer to an adjustment period, with the greatest opportunities for those who best understand how to benefit from the end of the era of rapid bandwidth growth in telecom history.

  • Advanced Magnet Manufacturing Begins in the United States
    by Glenn Zorpette on 09. February 2025. at 14:00



    In mid-January, a top United States materials company announced that it had started to manufacture rare earth magnets. It was important news—there are no large U.S. makers of the neodymium magnets that underpin huge and vitally important commercial and defense industries, including electric vehicles. But it created barely a ripple during a particularly loud and stormy time in U.S. trade relations.

    The press release, from MP Materials, was light on details. The company disclosed that it had started producing the magnets, called neodymium-iron-boron (NdFeB), on a “trial” basis and that the factory would begin gradually ramping up production before the end of this year. According to MP’s spokesman, Matt Sloustcher, the facility will have an initial capacity of 1,000 tonnes per annum, and has the infrastructure in place to scale up to 2,000 to 3,000 tonnes per year. The release also said that the facility, in Fort Worth, Texas, would supply magnets to General Motors and other U.S. manufacturers.

    NdFeB magnets are the most powerful and valuable type. They are used in motors for electric vehicles and for heating, ventilating, and cooling (HVAC) systems, in wind-turbine generators, in tools and appliances, and in audio speakers, among other gear. They are also critical components of countless military systems and platforms, including fighter and bomber aircraft, submarines, precision guided weapons, night-vision systems, and radars.

    A magnet manufacturing surge fueled by Defense dollars

    MP Materials has named its new, state-of-the-art magnet manufacturing facility Independence. Photo: Business Wire

    The Texas facility, which MP Materials has named Independence, is not the only major rare-earth-magnet project in the U.S. Most notably, Vacuumschmelze GmbH, a magnet maker based in Hanau, Germany, has begun constructing a plant in South Carolina through a North American subsidiary, e-VAC Magnetics. To build the US $500 million factory, the company secured $335 million in outside funds, including at least $100 million from the U.S. government. (E-VAC, too, has touted a supply agreement with General Motors for its future magnets.)

    In another intriguing U.S. rare-earth magnet project, Noveon Magnetics, in San Marcos, Texas, is producing what it claims are 2,000 tonnes of NdFeB magnets per year. The company is making some of the magnets in the standard way, starting with metal alloys, and others in a unique process based on recycling the materials from discarded magnets. USA Rare Earth announced on 8 January that it had manufactured a small amount of NdFeB magnets at a plant in Stillwater, Oklahoma.

    Yet another company, Quadrant Magnetics, announced in January 2022 that it would begin construction on a $100 million NdFeB magnet factory in Louisville, Kentucky. However, 11 months later, U.S. federal agents arrested three of the company’s top executives, charging them with passing off Chinese-made magnets as locally produced and giving confidential U.S. military data to Chinese agencies.

    The multiple U.S. neodymium-magnet projects are noteworthy, but even collectively they won’t make a noticeable dent in China’s dominance. “Let me give you a reality check,” says Steve Constantinides, an IEEE member and magnet-industry consultant based in Honeoye, N.Y. “The total production of neo magnets was somewhere between 220 and 240 thousand tonnes in 2024,” he says, adding that 85 percent of the total, at least, was produced in China. And “the 15 percent that was not made in China was made in Japan, primarily, or in Vietnam.” (Other estimates put China’s share of the neodymium magnet market as high as 90 percent.)

    But look at the figures from a different angle, suggests MP Materials’s Sloustcher. “The U.S. imports just 7,000 tonnes of NdFeB magnets per year,” he points out. “So in total, these [U.S.] facilities can supplant a significant percentage of U.S. imports, help re-start an industry, and scale as the production of motors and other magnet-dependent industries” returns to the United States, he argues.

    And yet, it’s hard not to be a little awed by China’s supremacy. The country has some 300 manufacturers of rare-earth permanent magnets, according to Constantinides. The largest of these, JL MAG Rare-Earth Co. Ltd., in Ganzhou, produced at least 25,000 tonnes of neodymium magnets last year, Constantinides figures. (The company recently announced that it was building another facility, to begin operating in 2026, that it says will bring its installed capacity to 60,000 tonnes a year.)

    That 25,000-tonne figure is comparable to the combined output of all of the rare-earth magnet makers that aren’t in China. The $500-million e-VAC plant being built in South Carolina, for example, is reportedly designed to produce around 1,500 tonnes a year.
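
    To put those announced capacities in perspective, here is a rough back-of-the-envelope tally in Python. It uses only the figures cited in this article, and the ramp-up assumptions are illustrative rather than company guidance.

    ```python
    # Back-of-the-envelope comparison of announced U.S. NdFeB magnet capacity
    # with the ~7,000 tonnes the U.S. imports each year (figures cited above).
    # Ramp-up assumptions are illustrative, not company guidance.

    us_imports_tpa = 7_000  # tonnes per annum

    announced_capacity_tpa = {
        "MP Materials (Fort Worth, initial)": 1_000,       # could scale to 2,000-3,000
        "Noveon Magnetics (San Marcos)": 2_000,
        "e-VAC Magnetics (South Carolina, planned)": 1_500,
    }

    total = sum(announced_capacity_tpa.values())
    print(f"Announced capacity: {total:,} tonnes/year")
    print(f"Share of current U.S. imports: {total / us_imports_tpa:.0%}")
    # Announced capacity: 4,500 tonnes/year
    # Share of current U.S. imports: 64%
    ```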

    But even those numbers do not fully convey China’s dominance of permanent magnet manufacturing. Wherever a factory is, making neodymium magnets requires supplies of rare-earth metal, and that nearly always leads straight back to China. “Even though they only produce, say, 85 percent of the magnets, they are producing 97 percent of the metal” in the world, says Constantinides. “So the magnet manufacturers in Japan and Europe are highly dependent on the rare-earth metal coming from China.”

    MP’s Mine-to-Manufacturing strategy

    And there, at least, MP Materials may have an interesting edge. Hardly any firms, even in China, do what MP is attempting: produce finished magnets starting with ore that the company mines itself. Even large companies typically perform just one or at most two of the four major steps along the path to making a rare-earth magnet: mining the ore, refining the ore into rare-earth oxides, reducing the oxides to metals, and then, finally, using the metals to make magnets. Each step is an enormous undertaking requiring entirely different equipment, processes, knowledge, and skill sets.

    The rare-earth metal produced at MP Materials’ magnet manufacturing facility in Fort Worth, Texas, consists mostly of neodymium and praseodymium. Business Wire

    “The one advantage they get from [doing it all] is that they get better insights into how different markets are actually growing,” says Stan Trout, a magnet industry consultant in Denver, Colorado. “Getting the timing right on any expansion is important,” Trout adds. “And so MP should be getting that information as well as anybody, with the different plants that they have, because they interact with the market in several different ways and can really see what demand is like in real time, rather than as some projection in a forecast.”

    Still, it’s going to be an uphill climb. “There are a lot of both hard and soft subsidies in the supply chain in China,” says John Ormerod, an industry consultant based in Knoxville, Tenn. “It’s going to be difficult for a US manufacturer to compete with the current price levels of Chinese-made magnets,” he concludes.

    And it’s not going to get better any time soon. China’s rare-earth magnet makers are only using about 60 percent of their production capacity, according to both Constantinides and Ormerod—and yet they are continuing to build new plants. “There’s going to be roughly 500,000 tonnes of capacity by the end of this year,” says Ormerod, citing figures gathered by Singapore-based analyst Thomas Kruemmer. “The demand is only about 50 percent of that.”

    The upshot, all of the analysts agree, will be downward price pressure on rare earth magnets in the near future, at least. At the same time, the U.S. Department of Defense has made it a requirement that rare-earth magnets for its systems must be produced entirely, starting with ore, in “friendly” countries—which does not include China. “The DoD will need to pay a premium over cheaper imported magnets to establish a price floor enabling domestic U.S. producers to successfully and continuously supply the DoD,” says Constantinides.

    But is what’s good for America good for General Motors, in this case? We’re all going to find out in a year or two. At the moment, few analysts are bullish on the prospect.

    “The automotive industry has been extremely cost-conscious, demanding supplier price reductions of even fractions of a cent per piece,” notes Constantinides. And even the Trump administration’s tariffs are unlikely to alter the basic math of market economics, he adds. “The application of tariffs to magnets in an attempt to ‘level the playing field’ incentivizes companies to find work-arounds, such as exporting magnets from China to Malaysia or Mexico, then re-exporting from there to the USA. This is not theoretical, these work-arounds have been used for decades to avoid even the past or existing low tariff rates of about 3.5 percent.”

    Correction, 12 February 2025: An earlier version of this article stated that Noveon Magnetics was producing rare-earth magnets only from materials reclaimed from discarded magnets. In fact, Noveon is producing magnets from recycled materials and also from “virgin” alloys.

  • New IEEE Standard for Securing Biomedical Devices and Data
    by Kathy Pretz on 07. February 2025. at 19:00



    If you have an implanted medical device, have been hooked up to a machine in a hospital, or have accessed your electronic medical records, you might assume the infrastructure and data are secure and protected against hackers. That isn’t necessarily the case, though. Connected medical devices and systems are vulnerable to cyberattacks, which could reveal sensitive data, delay critical care, and physically harm patients.

    The U.S. Food and Drug Administration, which oversees the safety and effectiveness of medical equipment sold in the country, has recalled medical devices in the past few years due to cybersecurity concerns. They include pacemakers, DNA sequencing instruments, and insulin pumps.

    In addition, hundreds of medical facilities have experienced ransomware attacks, in which malicious people encrypt a hospital’s computer systems and data and then demand a hefty ransom to restore access. Tedros Adhanom Ghebreyesus, the World Health Organization’s director-general, warned the U.N. Security Council in November about the “devastating effects of ransomware and cyberattacks on health infrastructure.”

    To help better secure medical devices, equipment, and systems against cyberattacks, IEEE has partnered with Underwriters Laboratories, which tests and certifies products, to develop IEEE/UL 2933, Standard for Clinical Internet of Things (IoT) Data and Device Interoperability with TIPPSS (Trust, Identity, Privacy, Protection, Safety, and Security).

    “Because most connected systems use common off-the-shelf components, everything is now hackable, including medical devices and their networks,” says Florence Hudson, chair of the IEEE 2933 Working Group. “That’s the problem this standard is solving.”

    Hudson, an IEEE senior member, is executive director of the Northeast Big Data Innovation Hub at Columbia. She is also founder and CEO of cybersecurity consulting firm FDHint, also in New York.

    A framework for strengthening security

    Released in September, IEEE 2933 covers ways to secure electronic health records, electronic medical records, and in-hospital and wearable devices that communicate with each other and with other health care systems. TIPPSS is a framework that addresses the different security aspects of the devices and systems.

    “If you hack an implanted medical device, you can immediately kill a human. Some implanted devices, for example, can be hacked within 15 meters of the user,” Hudson says. “From discussions with various health care providers over the years, this standard is long overdue.”

    More than 300 people from 32 countries helped develop the IEEE 2933 standard. The working group included representatives from health care–related organizations including Draeger Medical Systems, Indiana University Health, Medtronic, and Thermo Fisher Scientific. The FDA and other regulatory agencies participated as well. In addition, there were representatives from research institutes including Columbia, European University Cyprus, the Jožef Stefan Institute, and Kingston University London.


    The working group received an IEEE Standards Association Emerging Technology Award last year for its efforts.

    IEEE 2933 was sponsored by the IEEE Engineering in Medicine and Biology Society because, Hudson says, “it’s the engineers who have to worry about ways to protect the equipment.”

    She says the standard is intended for the entire health care industry, including medical device manufacturers; hardware, software, and firmware developers; patients; care providers; and regulatory agencies.

    Six security measures to reduce cyberthreats

    Hudson says that security in the design of hardware, firmware, and software needs to be the first step in the development process. That’s where TIPPSS comes in.

    “It provides a framework that includes technical recommendations and best practices for connected health care data, devices, and humans,” she says.

    TIPPSS focuses on the following six areas to secure the devices and systems covered in the standard.

    • Trust. Establish reliable and trustworthy connections among devices. Allow only designated devices, people, and services to have access.
    • Identity. Ensure that devices and users are correctly identified and authenticated. Validate the identity of people, services, and things.
    • Privacy. Protect sensitive patient data from unauthorized access.
    • Protection. Implement measures to safeguard devices from cyberthreats and protect them and their users from physical, digital, financial, and reputational harm.
    • Safety. Ensure that devices operate safely and do not pose risks to patients.
    • Security. Maintain the overall security of the device, data, and patients.

    TIPPSS includes technical recommendations such as multifactor authentication; encryption at the hardware, software, and firmware levels; and encryption of data when at rest or in motion, Hudson says.

    In an insulin pump, for example, data is at rest while the pump is gathering information about a patient’s glucose level. Data in motion travels to the actuator, which controls how much insulin to give and when; it then continues to the physician’s system and, ultimately, is entered into the patient’s electronic records.
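
    As a purely illustrative sketch of the at-rest versus in-motion distinction (this is not the IEEE 2933 specification, just generic symmetric encryption using the widely available Python cryptography package; key management, authentication, and device identity are left out):

    ```python
    # Minimal illustration of encrypting clinical data "at rest" and "in motion,"
    # in the spirit of the TIPPSS recommendations described above. This is NOT
    # the IEEE 2933 specification -- just a sketch using symmetric encryption
    # (the cryptography package's Fernet recipe). Real systems also need key
    # management, mutual authentication, and device identity checks.
    import json
    from cryptography.fernet import Fernet

    storage_key = Fernet.generate_key()    # protects data at rest on the pump
    transport_key = Fernet.generate_key()  # protects data in motion to the clinic

    reading = {"device_id": "pump-001", "glucose_mg_dl": 112, "ts": "2025-02-07T19:00Z"}
    plaintext = json.dumps(reading).encode()

    # Data at rest: encrypt before writing to the device's local log.
    stored_blob = Fernet(storage_key).encrypt(plaintext)

    # Data in motion: encrypt again (with a different key) before transmitting.
    wire_blob = Fernet(transport_key).encrypt(plaintext)

    # The receiving system (e.g., the physician's portal) decrypts the transmission.
    received = json.loads(Fernet(transport_key).decrypt(wire_blob))
    assert received == reading
    ```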

    “The framework includes all these different pieces and processes to keep the data, devices, and humans safer,” Hudson says.

    Four use cases

    Included in the standard are four scenarios that outline the steps users of the standard would take to ensure that the medical equipment they interact with is trustworthy in multiple environments. The use cases cover a continuous glucose monitor (CGM), an automated insulin delivery (AID) system, and hospital-at-home and home-to-hospital scenarios. They include devices that travel with the patient, such as CGM and AID systems; devices a patient uses at home; and pacemakers, oxygen sensors, cardiac monitors, and other tools that must connect to an in-hospital environment.

    The standard is available for purchase from IEEE and UL (UL2933:2024).

    On-demand videos on TIPPSS cybersecurity

    IEEE has held a series of TIPPSS framework workshops, now available on demand. They include IEEE Cybersecurity TIPPSS for Industry and Securing IoTs for Remote Subject Monitoring in Clinical Trials. There are also on-demand videos about protecting health care systems, including the Global Connected Healthcare Cybersecurity Workshop Series, Data and Device Identity, Validation, and Interoperability in Connected Healthcare, and Privacy, Ethics, and Trust in Connected Healthcare.

    IEEE SA offers a conformity assessment tool, the IEEE Medical Device Cybersecurity Certification Program. The straightforward evaluation process has a clear definition of scope and test requirements specific to medical devices for assessment against the IEEE 2621 test plan, which helps manage cybersecurity vulnerabilities in medical devices.

  • Video Friday: Agile Humanoids
    by Evan Ackerman on 07. February 2025. at 16:30



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
    German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
    European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
    ICRA 2025: 19–23 May 2025, ATLANTA, GA
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
    RSS 2025: 21–25 June 2025, LOS ANGELES
    IAS 2025: 30 June–4 July 2025, GENOA, ITALY
    ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL

    Enjoy today’s videos!

    Humanoid robots hold the potential for unparalleled versatility for performing human-like, whole-body skills. ASAP enables highly agile motions that were previously difficult to achieve, demonstrating the potential of delta action learning in bridging simulation and real-world dynamics. These results suggest a promising sim-to-real direction for developing more expressive and agile humanoids.

    [ ASAP ] from [ Carnegie Mellon University ] and [ Nvidia ]

    Big News: Swiss-Mile is now RIVR! We’re thrilled to unveil our new identity as RIVR, reflecting our evolution from a university spin-off to a global leader in Physical AI and robotics. In 2025, we’ll be deploying our groundbreaking wheeled-legged robots with major logistics carriers for last-mile delivery to set new standards for efficiency and sustainability.

    [ RIVR ]

    Robotics is one of the best ways to reduce worker exposure to safety risks. However, one of the biggest barriers to adopting robots in these industries is the challenge of navigating the rugged terrain found in these environments. UCR’s robots navigate difficult terrain, debris-strewn floors, and confined spaces without requiring facility modifications, disrupting existing workflows, or compromising schedules, significantly improving efficiency while keeping workers safe.

    [ UCR ]

    This paper introduces a safety filter to ensure collision avoidance for multirotor aerial robots. The proposed method allows computational scalability against thousands of constraints and, thus, complex scenes with numerous obstacles. We experimentally demonstrate its ability to guarantee the safety of a quadrotor with an onboard LiDAR, operating in both indoor and outdoor cluttered environments against both naive and adversarial nominal policies.

    [ Autonomous Robots Lab ]

    Thanks, Kostas!

    Brightpick Giraffe is an autonomous mobile robot (AMR) capable of reaching heights of 20 feet (6 m), resulting in three times the warehouse storage density compared to manual operations.

    [ Giraffe ] via [ TechCrunch ]

    IROS 2025, coming this fall in Hangzhou, China.

    [ IROS 2025 ]

    This cute lil guy is from a “Weak Robots Exhibition” in Japan.

    [ RobotStart ]

    I see no problem with cheating via infrastructure to make autonomous vehicles more reliable.

    [ Oak Ridge National Laboratory ]

    I am not okay with how this coffee cup is handled. Neither is my editor.

    [ Qb Robotics ]

    Non-prehensile pushing to move and re-orient objects to a goal is a versatile loco-manipulation skill. In this paper, we develop a learning-based controller for a mobile manipulator to move an unknown object to a desired position and yaw orientation through a sequence of pushing actions. Through our extensive hardware experiments, we show that the approach demonstrates high robustness against unknown objects of different masses, materials, sizes, and shapes.

    [ Paper ] from [ ETH Zurich and Istituto Italiano di Tecnologia ]

    Verity, On, and Maersk have collaborated to bridge the gap between the physical and digital supply chain—piloting RFID-powered autonomous inventory tracking at a Maersk facility in California. Through RFID integration, Verity pushes inventory visibility to unprecedented levels.

    [ Verity ]

    For some reason, KUKA is reaffirming its commitment to environmental responsibility and diversity.

    [ KUKA ]

    Here’s a panel from the recent Humanoids Summit on generative AI for robotics, which includes panelists from OpenAI and Agility Robotics. Just don’t mind the moderator, he’s a bit of a dork.

    [ Humanoids Summit ]

  • New Quantum Sensors Promise Precision and Secrecy
    by Margo Anderson on 05. February 2025. at 13:00



    Remote sensing—a category broad enough to include both personal medical monitors and space weather forecasting—is poised for a quantum upgrade, much like computing and cryptography before it. A new type of quantum sensor that promises both higher sensitivity and greater security has been proposed and tested in proof-of-concept form. What remains to be seen is how broadly it will be adopted, and whether such quantum enhancements might ultimately make for better medical and space weather tech.

    “Our scheme is hybridizing two different quantum technologies,” says Jacob Dunningham, professor of physics at the University of Sussex in the United Kingdom. “It’s combining quantum communications with quantum sensing. So it’s a way of being able to measure something and get the data back in a way that no eavesdropper can hack into or spoof.”

    Dunningham and PhD student Sean Moore—now a postdoc at the LIP6 computer science lab in Paris—proposed what they are calling their secure quantum remote sensing (SQRS) system on 14 January in the journal Physical Review A.

    The researchers’ simplest SQRS model uses individual photons as the workhorse qubit of the system, although unlike qubits used in, say, quantum computing, none of the qubits here need to be entangled. Their SQRS model also assumes some classical communications on an open channel, between sender and receiver of the qubits. And with these ingredients, the researchers suggest, one could perform high-precision remote measurements whose results are available neither to the person doing the actual measurement nor to any potential eavesdropper who might hack into the communications channels.

    Alice and Bob and SQRS

    Say that Alice wants a measurement performed remotely. To make this measurement via SQRS, she would need to send individual photons to Bob, who’s located where Alice wants the measurement performed. Bob then performs the measurement, encoding his results onto the phase of the single photons that Alice has sent as part of the process. Bob then messages his encoded measurement results back to Alice via the classical communication channel. Because the method ensures Bob doesn’t know the original states of the photons Alice sent, he can’t extract any meaningful information out of the phase data he sends back to Alice. He may have performed the measurement, but he doesn’t have access to the measurement’s result. Only Alice has that.

    Plus, any eavesdropper, Eve, could intercept Alice’s individual photons and classical messages from Bob back to Alice, and she wouldn’t be able to wring meaning from it either. This is because, in part, Bob’s measurement also introduces quantum randomness into the process in ways that Eve cannot plausibly recreate—and Bob could not observe without disturbing the system.
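
    The division of knowledge can be made concrete with a toy numerical simulation. The sketch below is not the published SQRS protocol; it simply illustrates the idea of a secret preparation phase known only to Alice, an unknown phase imprinted by Bob’s sensor, and raw outcomes that are useful to Alice but not to Bob.

    ```python
    # Toy sketch of the secure-remote-sensing idea described above (not the
    # published SQRS protocol). Alice sends qubits with secret phase offsets;
    # Bob adds the unknown phase theta being sensed and measures; Alice, who
    # alone knows the offsets, reconstructs theta from Bob's raw outcomes.
    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = 0.7          # the quantity Bob's sensor imprints (radians)
    n_shots = 200_000

    # Alice's secret per-shot preparation phases, chosen at random.
    alpha = rng.choice([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi], size=n_shots)

    # Bob measures in the X basis; for state (|0> + e^{i(alpha+theta)}|1>)/sqrt(2)
    # the probability of the "+" outcome is (1 + cos(alpha + theta)) / 2.
    p_plus = 0.5 * (1.0 + np.cos(alpha + theta_true))
    outcomes = rng.random(n_shots) < p_plus   # True = "+", False = "-"

    def mean_x(a):
        # Alice bins Bob's outcomes by her secret phase a to estimate <X> = cos(a + theta).
        sel = outcomes[alpha == a]
        return 2 * sel.mean() - 1

    cos_est = 0.5 * (mean_x(0.0) - mean_x(np.pi))
    sin_est = 0.5 * (mean_x(1.5 * np.pi) - mean_x(0.5 * np.pi))
    print("Alice's estimate of theta:", np.arctan2(sin_est, cos_est))

    # Bob (or an eavesdropper) sees only the pooled outcomes, averaged over the
    # unknown alpha, which stay near 1/2 no matter what theta is.
    print("Pooled '+' frequency seen by Bob:", outcomes.mean())
    ```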

    According to Moore, the proposed SQRS protocol addresses the sort of remote measurement situation where Bob is what the researchers call an “honest and curious” observer. “Honest and curious is a certain perspective used in quantum cryptography where we assume that some party does what they’re told, [such as not actively trying to leak data]” Moore says. “But we don’t necessarily want them to gain any information.”

    Last month, a team of researchers at Guangxi University in Guangxi, China, reported that they had confirmed the SQRS protocol works, at least at a proof-of-principle level. (The group’s findings, however, have to date been published only on the arXiv online preprint server and have not yet been peer reviewed.)

    According to Wei Kejin, associate professor at Guangxi’s school of physical science and engineering, the group was able to use a weak light source—not even a single-photon generator, but rather a simpler light source that, over time, deals out individual photons only statistically on average.

    Such relatively accessible, entanglement-free light sources, Kejin says, “are generally easier to implement, making them more suitable for real-world applications.”

    The Guangxi group reports 6 percent of their SQRS system’s remote measurements were erroneous. However, Kejin says that a 6 percent error rate in the setup is less significant than it may at first appear. This is because the statistics improve in the SQRS system’s favor with more photons generated. “Error correction and privacy amplification techniques can be employed to distill a secure key,” Kejin says. “Thus, the technology remains viable for real-world applications, particularly in secure communications where high precision and reliability are paramount.”

    Next Steps for SQRS—and Its Applications

    According to Jaewoo Joo, senior lecturer in the school of mathematics and physics at the University of Portsmouth in the U.K., who’s unaffiliated with the research, one practical SQRS application could involve high-precision, quantum radar. The enhanced quantum-level accuracy of the radar measurements would be one attraction, Joo says, but also no adversary or interloper could hack into the radar’s observations, he adds. Or, Joo says, medical monitors at a patient’s home or at a remote clinic could be used by doctors centrally located in a hospital, for instance, and the data sent back to the hospital would be secure and free from tampering or hacking.

    Realizing the kinds of scenarios Joo describes would very likely involve whole networks of SQRS systems, not just the most basic setup, with one Alice and one Bob. Dunningham and Moore describe that simple, foundational model of SQRS in a paper published two years ago; it is that basic setup, in fact, that the Guangxi group has been working to test experimentally.

    The more complex, networked SQRS system that’s likely to be needed is what’s described in January’s Physical Review A paper. The networked SQRS system involves Alice along with multiple “Bobs”—each of which operates their own individual sensor, on which each Bob performs similar kinds of measurements as in the basic SQRS protocol. The key difference between basic SQRS and networked SQRS is in the latter system, some of the qubits in the system do need to be entangled.

    Introducing networks of sensors and entangled qubits, Dunningham and Moore find, can further enhance the accuracy and security of the system.

    Dunningham says quantum effects would also amplify the accuracy of the overall system, with a boost that’s proportional to the square root of the number of sensors in the network. “So if you had 100 sensors, you get a factor of 10 improvement,” he says. “And those sort of factors are huge in metrology. People get excited about a few percent. So the advantages are potentially very big.”
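
    A rough numerical illustration of why that scaling matters (a simplified sketch, not the paper’s analysis): averaging N independent sensors already shrinks the statistical error as one over the square root of N, and the quoted quantum boost is an additional square-root-of-N factor on top of that.

    ```python
    # Rough illustration of the sensor-scaling argument above (not the paper's
    # analysis). Averaging N independent noisy sensors shrinks the error as
    # 1/sqrt(N); the quoted quantum boost is an extra factor of sqrt(N) on top.
    import numpy as np

    rng = np.random.default_rng(1)
    true_value, noise = 1.0, 0.1

    for n_sensors in (1, 25, 100):
        trials = rng.normal(true_value, noise, size=(20_000, n_sensors)).mean(axis=1)
        classical_err = trials.std()
        quantum_err = classical_err / np.sqrt(n_sensors)  # claimed sqrt(N) boost
        print(f"N={n_sensors:>4}: classical ~{classical_err:.4f}, "
              f"with sqrt(N) boost ~{quantum_err:.5f}")
    ```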

    Envisioning a networked SQRS system, for instance, Dunningham describes enhanced atomic clocks in orbit providing ultra-high-precision timekeeping with high-security quantum protections ensuring no hacking or spoofing.

    “You can get a big, precision-measurement advantage as well as maintaining the security,” he says.

  • AI Expert Engineers Meta’s Video Recommendations
    by Willie D. Jones on 04. February 2025. at 19:00



    Every time someone views a video on Facebook or Instagram, they engage with Amey Dharwadker’s work. The machine learning engineering manager at Facebook’s parent company, Meta, leads Facebook’s video recommendations quality ranking team.

    Dharwadker’s work focuses on improving the quality and integrity of video recommendations for billions of daily active users globally, balancing technical excellence with strategic vision. The IEEE senior member’s pioneering work addresses the challenges of understanding user interests, optimizing content distribution, and mitigating biases that would otherwise skew recommendation results.

    Amey Dharwadker

    Employer: Meta in Menlo Park, Calif.

    Title: Machine learning engineering manager

    Member grade: Senior member

    Alma maters: National Institute of Technology, Tiruchirappalli, India; Columbia

    He has developed solutions such as the Conformity-Aware Ranking Model, which enhances recommendation accuracy by accounting for individual preferences and the effect that collective influences—such as a video going viral—have on a single user’s choices. Dharwadker and his team were also behind the Personalized Interest Exploration Framework, meant to broaden users’ experiences by showing them diverse content that generally aligns with their individual preferences.

    For his contributions, Dharwadker was named Search Professional of the Year last year by the Information Retrieval Specialist Group of the British Computer Society.

    Dharwadker says he is pleased with the recognition but adds, “The technical complexity of delivering personalized video recommendations to billions of users, combined with the immediate and meaningful impact on how people discover and interact with content, is rewarding in and of itself.”

    An early problem solver

    Born in Goa, India, Dharwadker was inspired by the region’s natural beauty and cultural richness. His father, a surgeon, played a pivotal role in nurturing his interest in mathematics and science, emphasizing logical and analytical problem-solving skills. Dharwadker’s early experiences ignited his passion for engineering and innovation, he says.

    “I was always a curious child, constantly fascinated by how things worked,” he says. “I loved experimenting with simple science projects, like building circuits or creating makeshift models to visualize concepts I learned. This curiosity extended to understanding abstract concepts in science and mathematics, where I enjoyed figuring out the ‘why’ behind the ‘how.’”

    He earned his bachelor’s degree in electronics and communication engineering in 2011 from the National Institute of Technology, Tiruchirappalli, in India’s Tamil Nadu state. His undergraduate studies solidified his passion for applying computer vision and machine learning to solve real-world problems. Motivated to deepen his expertise, he pursued a master’s degree in electrical engineering at Columbia, graduating in 2014.

    A pioneer in social media recommendations ranking

    Dharwadker began his professional journey in 2011 at Analog Devices in Bengaluru, India. During his tenure there as an automotive vision software engineer, he developed real-time computer vision algorithms for advanced driver assistance systems, the technology used in most late-model cars. These systems include adaptive cruise control, automatic emergency braking, and automated lane assistance. His role allowed him to see the tangible impact of AI on safety and efficiency, setting the foundation for his future work.

    While completing his master’s degree in 2014, Dharwadker interned at Facebook in Menlo Park, Calif., focusing on machine learning for Ads Ranking models so that users see the advertisements most relevant to them and advertisers maximize their investment.

    Impressed by the challenges and opportunities of building systems capable of analyzing user preferences and delivering personalized content to billions of users, Dharwadker joined Meta full time in January 2015 as a machine learning engineer working on the Facebook news feed ranking team. The group is behind the algorithms that determine the relevance and order of posts appearing on a user’s feed. The team aims to prioritize the most engaging, meaningful, and important content for each user.


    Dharwadker’s contributions to recommendation systems became transformative. He developed Embeddings for Feed and Pages, a patented and widely cited method for turning the content of a social media feed or page into a string of numbers that captures its essence. This innovation allows algorithms to identify similarities among pages for more accurate content grouping and enhanced personalized content recommendations.
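
    The general idea can be illustrated with a small, generic sketch; this is not Meta’s model or data, just the standard recipe of representing items as vectors and comparing them with cosine similarity.

    ```python
    # Generic illustration of the embedding idea described above: representing
    # items as vectors and comparing them with cosine similarity. This is not
    # Meta's model or data; the vectors here are made up for the example.
    import numpy as np

    page_embeddings = {
        "cycling_club":     np.array([0.9, 0.1, 0.3]),
        "mountain_biking":  np.array([0.8, 0.2, 0.4]),
        "sourdough_baking": np.array([0.1, 0.9, 0.2]),
    }

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = "cycling_club"
    for name, vec in page_embeddings.items():
        if name != query:
            print(name, round(cosine_similarity(page_embeddings[query], vec), 3))
    # mountain_biking scores much higher than sourdough_baking, so it would be
    # grouped with (and recommended alongside) the cycling page.
    ```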

    Later, as a technical leader, Dharwadker led the Facebook Watch ranking team, which works on technologies similar to those of the news feed ranking team but focuses specifically on video content. The group drove advances in video recommendations for more than 1.25 billion monthly users.

    In 2023 Dharwadker transitioned to his current position. His typical workday includes reviewing key metrics on video recommendation quality and engagement, guiding his team of machine learning engineers through architectural design reviews, and brainstorming solutions to technical challenges.

    “Another important aspect of my job,” Dharwadker says, “is mentoring engineers on my team via regular one-on-ones in which I focus on supporting their career growth.”

    Leadership recognition and IEEE involvement

    Dharwadker’s work has earned global acclaim. In addition to the recognition from the British Computer Society, he was named the India Youth Engineering Icon of the Year by the Institution of Engineering and Technology in 2023. Also that year, he was presented the Outstanding Leadership Award at the Internet 2.0 Conference in Las Vegas.

    While continuing to advance his career, Dharwadker actively contributes to the engineering community as a passionate mentor. He says he aims to help foster a culture of engineering excellence and develop the next generation of experts in artificial intelligence and recommendation systems.

    His involvement with IEEE began in 2009 during his undergraduate studies.

    “I presented my first research paper at the IEEE India Council International Conference in 2010,” he says. “That experience introduced me to the profound impact of IEEE’s technical platforms and inspired my continued involvement.”

    Since then, he says, he has viewed the organization as a platform for collaboration, learning, and professional growth.

    He participates in the IEEE Computer Society and reviews research papers for the IEEE Transactions on Pattern Analysis and Machine Intelligence and IEEE Access journals.

    The experience he’s gained by mentoring up-and-coming professionals in his field through IEEE, he says, has helped him become a more effective manager at Meta.

    Dharwadker’s career is driven by curiosity, innovation, and a commitment to make a positive impact. He sums up his leadership philosophy this way: “Leadership is more than just a title. It’s about uplifting those around you, amplifying voices that deserve to be heard, breaking down barriers for others to thrive, and multiplying the scale and impact of leaders by creating more of them.”

  • The Lost Story of Alan Turing’s Secret “Delilah” Project
    by Jack Copeland on 04. February 2025. at 14:00



    It was 8 May 1945, Victory in Europe Day. With the German military’s unconditional surrender, the European part of World War II came to an end. Alan Turing and his assistant Donald Bayley celebrated victory in their quiet English way, by taking a long walk together. They had been working side by side for more than a year in a secret electronics laboratory, deep in the English countryside. Bayley, a young electrical engineer, knew little about his boss’s other life as a code breaker, only that Turing would set off on his bicycle every now and then to another secret establishment about 10 miles away along rural lanes, Bletchley Park. As Bayley and the rest of the world would later learn, Bletchley Park was the headquarters of a vast, unprecedented code-breaking operation.

    When they sat down for a rest in a clearing in the woods, Bayley said, “Well, the war’s over now—it’s peacetime, so you can tell us all.”

    Donald Bayley (1921-2020) graduated with a degree in electrical engineering and was commissioned into the Royal Electrical and Mechanical Engineers. There, he was selected to work with Alan Turing on the Delilah project. In later life he designed the teletypewriter-based “Piccolo” system for secret diplomatic radio communications, adopted by the British Foreign and Commonwealth Office and used worldwide for decades. Bonhams

    “Don’t be so bloody silly,” Turing replied.

    “That was the end of that conversation,” Bayley recalled 67 years later.

    Turing’s incredible code-breaking work is now no longer secret. What’s more, he is renowned both as a founding father of computer science and as a pioneering figure in artificial intelligence. He is not so well-known, however, for his work in electrical engineering. This may be about to change.

    In November 2023, a large cache of his wartime papers—nicknamed the “Bayley papers”—was auctioned in London for almost half a million U.S. dollars. The previously unknown cache contains many sheets in Turing’s own handwriting, telling of his top-secret “Delilah” engineering project from 1943 to 1945. Delilah was Turing’s portable voice-encryption system, named after the biblical deceiver of men. There is also material written by Bayley, often in the form of notes he took while Turing was speaking. It is thanks to Bayley that the papers survived: He kept them until he died in 2020, 66 years after Turing passed away.


    When the British Government learned about the sale of these papers at auction, it acted swiftly to put a ban on their export, declaring them to be “an important part of our national story,” and saying “It is right that a UK buyer has the opportunity to purchase these papers.” I was lucky enough to get access to the collection prior to the November sale, when the auction house asked for my assistance in identifying some of the technical material. The Bayley papers shine new light on Turing the engineer.

    At the time, Turing was traveling from the abstract to the concrete. The papers offer intriguing snapshots of his journey from his prewar focus on mathematical logic and number theory, into a new world of circuits, electronics, and engineering math.

    Alan Turing’s Delilah Project

    During the war, Turing realized that cryptology’s new frontier was going to be the encryption of speech. The existing wartime cipher machines—such as the Japanese “Purple” machine, the British Typex, and the Germans’ famous Enigma and teletypewriter-based SZ42—were all for encrypting typewritten text. Text, though, is scarcely the most convenient way for commanders to communicate, and secure voice communication was on the military wish list.

    Bell Labs’ pioneering SIGSALY speech-encryption system was constructed in New York City, under a U.S. Army contract, during 1942 and 1943. It was gigantic, weighing over 50 thousand kilograms and filling a room. Turing was familiar with SIGSALY and wanted to miniaturize speech encryption. The result, Delilah, consisted of three small units, each roughly the size of a shoebox. Weighing just 39 kg, including its power pack, Delilah would be at home in a truck, a trench, or a large backpack.

    Bell Labs’ top-secret installation of the SIGSALY voice-encryption system was a room-size machine that weighed over 50,000 kilograms. NSA

    In 1943, Turing set up bench space in a Nissen hut and worked on Delilah in secret. The hut was at Hanslope Park, a military-run establishment in the middle of nowhere, England. Today, Hanslope Park is still an ultrasecret intelligence site known as His Majesty’s Government Communications Centre. In the Turing tradition, HMGCC engineers supply today’s British intelligence agents with specialized hardware and software.

    Turing seems to have enjoyed the two years he spent at Hanslope Park working on Delilah. He made an old cottage his home and took meals in the Army mess. The commanding officer recalled that he “soon settled down and became one of us.” In 1944, Turing acquired his young assistant, Bayley, who had recently graduated from the University of Birmingham with a bachelor’s degree in electrical engineering. The two became good friends, working together on Delilah until the autumn of 1945. Bayley called Turing simply “Prof,” as everyone did in the Bletchley-Hanslope orbit.

    “I admired the originality of his mind,” Bayley told me when I interviewed him in the 1990s. “He taught me a great deal, for which I have always been grateful.”

    In return, Bayley taught Turing bench skills. When he first arrived at Hanslope Park, Bayley found Turing wiring together circuits that resembled a “spider’s nest,” he said. He took Turing firmly by the hand and dragged him through breadboarding boot camp.

    Alan Turing and his assistant Donald Bayley created this working prototype of their voice-encryption system, called Delilah. The National Archives, London

    A year later, as the European war ground to a close, Turing and Bayley got a prototype system up and running. This “did all that could be expected of it,” Bayley said. He described the Delilah system as “one of the first to be based on rigorous cryptographic principles.”

    How Turing’s Voice-Encryption System Worked

    Turing drew inspiration for the voice-encryption system from existing cipher machines for text. Teletypewriter-based cipher machines such as the Germans’ sophisticated SZ42—broken by Turing and his colleagues at Bletchley Park—worked differently from the better known Enigma machine. Enigma was usually used for messages transmitted over radio in Morse code. It encrypted the letters A through Z by lighting up corresponding letters on a panel, called the lampboard, whose electrical connections with the keyboard were continually changing. The SZ42, by contrast, was attached to a regular teletypewriter that used a 5-bit telegraph code and could handle not just letters, but also numbers and a range of punctuation. Morse code was not involved. (This 5-bit telegraph code was a forerunner of ASCII and Unicode and is still used by some ham radio operators.)

    The SZ42 encrypted the teletypewriter’s output by adding a sequence of obscuring telegraph characters, called key (the singular form “key” was used by the codebreakers and codemakers as a mass noun, like “footwear” or “output”), to the plain message. For example, if the German plaintext was ANGREIFEN UM NUL NUL UHR (Attack at zero hundred hours), and the obscuring characters being used to encrypt these words (and the spaces between them) were Y/RABV8WOUJL/H9VF3JX/D5Z, then the cipher machine would first add “Y” to “A”—that is to say, add the 5-bit code of the first letter of the key to the 5-bit code of the first letter of the plaintext—then add “/” to “N”, then “R” to “G”, and so on. Under the SZ42’s rules for character addition (which were hardwired into the machine), these 24 additions would produce PNTDOOLLHANC9OAND9NK9CK5, which was the encrypted message. This principle of generating the obscuring key and then adding it to the plain message was the concept that Turing extended to the new territory of speech encryption.
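
    In modern terms, the SZ42’s character addition was a bitwise XOR of the two characters’ 5-bit codes, which is why adding the identical key a second time at the receiving end undoes the encryption. Here is a minimal sketch of that round trip, using arbitrary stand-in codes rather than the real teletypewriter alphabet.

    ```python
    # Sketch of the SZ42-style character addition described above. Each character
    # is a 5-bit teletype code, and "adding" two characters means combining their
    # codes bit by bit with no carries -- an XOR, in modern terms. Because XOR is
    # its own inverse, adding the identical key stream a second time at the
    # receiving end recovers the plaintext. The 5-bit values below are arbitrary
    # stand-ins, not the real teletypewriter alphabet.

    def add_5bit(plain_code: int, key_code: int) -> int:
        """Non-carrying 'addition' of two 5-bit character codes (bitwise XOR)."""
        return (plain_code ^ key_code) & 0b11111

    plaintext_codes = [0b00011, 0b01100, 0b11010]   # stand-ins for, e.g., "A N G"
    key_codes       = [0b10101, 0b11101, 0b01010]   # stand-ins for, e.g., "Y / R"

    cipher_codes = [add_5bit(p, k) for p, k in zip(plaintext_codes, key_codes)]
    recovered    = [add_5bit(c, k) for c, k in zip(cipher_codes, key_codes)]

    assert recovered == plaintext_codes   # the same key, applied twice, cancels out
    print([f"{c:05b}" for c in cipher_codes])
    ```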

    The Delilah voice-encryption machine contained a key unit that generated the pseudorandom numbers used to obscure messages. This blueprint of the key unit features 8 multivibrators (labeled “v1,” “v2,” and so forth). The National Archives, London

    Inside the SZ42, the key was produced by a key generator, consisting of a system of 12 wheels. As the wheels turned, they churned out a continual stream of seemingly random characters. The wheels in the receiver’s machine were synchronized with the sender’s, and so produced the same characters—Y/RABV8WOUJL/H9VF3JX/D5Z in our example. The receiving machine subtracted the key from the incoming ciphertext PNTDOOLLHANC9OAND9NK9CK5, revealing the plaintext ANGREIFEN9UM9NUL9NUL9UHR (a space was always typed as “9”).

    Applying a similar principle, Delilah added the obscuring key to spoken words. In Delilah’s case, the key was a stream of pseudorandom numbers—that is, random-seeming numbers that were not truly random. Delilah’s key generator contained five rotating wheels and some fancy electronics concocted by Turing. As with the SZ42, the receiver’s key generator had to be synchronized with the sender’s, so that both machines produced identical key. In their once highly secret but now declassified report, Turing and Bayley commented that the problem of synchronizing the two key generators had presented them with “formidable difficulties.” But they overcame these and other problems, and eventually demonstrated Delilah using a recording of a speech given by Winston Churchill, successfully encrypting, transmitting, and decrypting it.

    This loose-leaf sheet shows a circuit used by Turing in an experiment to measure the cut-off voltage at a triode tube, most likely in connection with the avalanche effect basic to a multivibrator. Multivibrators were an essential component of Delilah’s key-generation module. Bonhams

    The encryption-decryption process began with discretizing the audio signal, which today we’d call analog-to-digital conversion. This produced a sequence of individual numbers, each corresponding to the signal’s voltage at a particular point in time. Then numbers from Delilah’s key were added to these numbers. During the addition, any digits that needed to be carried over to the next column were left out of the calculation—called “noncarrying” addition, this helped scramble the message. The resulting sequence of numbers was the encrypted form of the speech signal. This was transmitted automatically to a second Delilah at the receiving end. The receiving Delilah subtracted the key from the incoming transmission, and then converted the resulting numbers to voltages to reproduce the original speech.
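
    Here is a minimal sketch of that arithmetic, assuming decimal digits purely for illustration (the article does not specify Delilah’s number format): non-carrying addition adds each digit column and discards the carry, and digit-wise subtraction at the receiver undoes it.

    ```python
    # Sketch of the encrypt/decrypt arithmetic described above, assuming decimal
    # digits purely for illustration (the article does not specify Delilah's
    # number format). "Non-carrying" addition adds each digit column modulo 10
    # and throws the carry away; subtracting the same key digit-wise modulo 10
    # undoes it at the receiving end.

    def noncarrying_add(sample: int, key: int, width: int = 3) -> int:
        s, k = f"{sample:0{width}d}", f"{key:0{width}d}"
        return int("".join(str((int(a) + int(b)) % 10) for a, b in zip(s, k)))

    def noncarrying_sub(cipher: int, key: int, width: int = 3) -> int:
        c, k = f"{cipher:0{width}d}", f"{key:0{width}d}"
        return int("".join(str((int(a) - int(b)) % 10) for a, b in zip(c, k)))

    samples = [512, 487, 530, 505]   # discretized speech samples (made up)
    key     = [739, 104, 868, 221]   # pseudorandom key from the generator (made up)

    cipher    = [noncarrying_add(s, k) for s, k in zip(samples, key)]
    recovered = [noncarrying_sub(c, k) for c, k in zip(cipher, key)]
    assert recovered == samples
    print(cipher)   # e.g. 512 + 739 -> 241, since 5+7=12 -> 2 (carry dropped), etc.
    ```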

    The result was “whistly” and full of background noise, but usually intelligible—although if things went wrong, there could be “a sudden crack like a rifle shot,” Turing and Bayley reported cheerfully.

    But the war was winding down, and the military was not attracted to the system. Work on the Delilah project stopped not long after the war ended, when Turing was hired by the British National Physical Laboratory to design and develop an electronic computer. Delilah “had little potential for further development,” Bayley said, and “was soon forgotten.” Yet it offered a very high level of security, and was the first successful demonstration of a compact portable device for voice encryption.

    What’s more, Turing’s two years of immersion in electrical engineering stood him in good stead, as he moved on to designing electronic computers.

    Turing’s Lab Notebook

    The two years Turing spent on Delilah produced the Bayley papers. The papers comprise a laboratory notebook, a considerable quantity of loose sheets (some organized into bundles), and—the jewel of the collection—a looseleaf ring binder bulging with pages.

    The greenish-gray quarto-size lab notebook, much of it in Turing’s handwriting, details months of work. The first experiment Turing recorded involved measuring a pulse emitted by a multivibrator, which is a circuit that can be triggered to produce a single voltage pulse or a chain of pulses. In the experiment, the pulse was fed into an oscilloscope and its shape examined. Multivibrators were crucial components of Turing’s all-important key generator, and the next page of the notebook, labeled “Measurement of ‘Heaviside function,’ ” shows the voltages measured in part of the same multivibrator circuit.

    A key item in the Bayley papers is this lab notebook, whose first 24 pages are in Turing’s handwriting. These detail Turing’s work on the Delilah project prior to Bayley’s arrival in March 1944. Bonhams

    Today, there is intense interest in the use of multivibrators in cryptography. Turing’s key generator, the most original part of Delilah, contained eight multivibrator circuits, along with the five-wheel assembly mentioned previously. In effect the multivibrators were eight more very complicated “wheels,” and there was additional circuitry for enhancing the random appearance of the numbers the multivibrators produced.

    Subsequent experiments recorded in the lab book tested the performance of all the main parts of Delilah—the pulse modulator, the harmonic analyzer, the key generator, the signal and oscillator circuits, and the radio frequency and aerial circuits. Turing worked alone for approximately the first six months of the project, before Bayley’s arrival in March 1944, and the notebook is in Turing’s handwriting up to and including the testing of the key generator. After this, the job of recording experiments passed to Bayley.

    The Bandwidth Theorem

    Two loose pages, in Turing’s handwriting, explain the so-called bandwidth theorem, now known as the Nyquist-Shannon sampling theorem. This was likely written out for Bayley’s benefit. Bonhams

    Among the piles of loose sheets covered with Turing’s riotously untidy handwriting, one page is headed “Bandwidth Theorem.” Delilah was in effect an application of a bandwidth theorem that today is known as the Nyquist-Shannon sampling theorem. Turing’s proof of the theorem is scrawled over two sheets. Most probably he wrote the proof out for Bayley’s benefit. The theorem—which expresses what the sampling rate needs to be if sound waves are to be reproduced accurately—governed Delilah’s conversion of sound waves into numbers, done by sampling vocal frequencies several thousand times a second.
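
    A minimal numerical illustration of the theorem’s requirement (the frequencies below are illustrative, not Delilah’s actual sampling parameters): a tone sampled at more than twice its frequency is uniquely determined, while a tone sampled too slowly becomes indistinguishable from a lower-frequency alias.

    ```python
    # Minimal numerical illustration of the sampling theorem described above:
    # a tone can be captured faithfully only if it is sampled at more than twice
    # its frequency. Sampled too slowly, a 3 kHz tone becomes indistinguishable
    # from a 1 kHz "alias." The specific frequencies are illustrative.
    import numpy as np

    n = np.arange(32)

    def sample_tone(freq_hz: float, rate_hz: float) -> np.ndarray:
        return np.cos(2 * np.pi * freq_hz * n / rate_hz)

    # Sampled at 8 kHz (> 2 x 3 kHz), the 3 kHz tone is uniquely determined.
    ok = sample_tone(3000, 8000)

    # Sampled at only 4 kHz (< 2 x 3 kHz), it produces exactly the same samples
    # as a 1 kHz tone -- the information needed to tell them apart is gone.
    undersampled = sample_tone(3000, 4000)
    alias = sample_tone(1000, 4000)
    print(np.allclose(undersampled, alias))   # True
    ```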

    At Bell Labs, Claude Shannon had written a paper sketching previous work on the theorem and then proving his own formulation of it. Shannon wrote the paper in 1940, although it was not published until 1949. Turing worked at Bell Labs for a time in 1943, in connection with SIGSALY, before returning to England and embarking on Delilah. It seems likely that he and Shannon would have discussed sampling rates.

    Turing’s “Red Form” Notes

    During the war, Hanslope Park housed a large radio-monitoring section. Shifts of operators continuously searched the airwaves for enemy messages. Enigma transmissions, in Morse code, were identified by their stereotypical military format, while the distinctive warble of the SZ42’s radioteletype signals was instantly recognizable. After latching onto a transmission, an operator filled out an Army-issue form (preprinted in bright red ink). The frequency, the time of interception, and the letters of ciphertext were noted down. This “red form” was then rushed to the code breakers at Bletchley Park.

    Writing paper was in short supply in wartime Britain, and Turing used the blank reverse sides of these “red forms,” designed for radio operators to note down information about intercepted signals. Bonhams

    Writing paper was in short supply in wartime Britain. Turing evidently helped himself to large handfuls of red forms, scrawling out screeds of notes about Delilah on the blank reverse sides. In one bundle of red forms, numbered by Turing at the corners, he considered a resistance-capacitance network into which a “pulse of area A at time 0” is input. He calculated the charge as the pulse passes through the network, and then calculated the “output volts with pulse of that area.” The following sheets are covered with integral equations involving time, resistance, and charge. Then a scribbled diagram appears, in which a wavelike pulse is analyzed into discrete “steps”—a prelude to several pages of Fourier-type analysis. Turing appended a proof of what he termed the “Fourier theorem,” evidence that these pages may have been a tutorial for Bayley.

    The very appearance of these papers speaks to the character and challenging nature of the Delilah project. The normally top-secret Army red forms, the evidence of wartime shortages, the scribbled formulas, the complexity of the mathematics, the tutorials for Bayley—all contribute to the picture of the Prof and his young assistant working closely together at a secret military establishment on a device that pushed the engineering envelope.

    Turing’s Lectures for Electrical Engineers

    The cover of the looseleaf ring binder is embossed in gilt letters “Queen Mary’s School, Walsall,” where Bayley had once been a pupil. It is crammed with handwritten notes taken by Bayley during a series of evening lectures that Turing gave at Hanslope Park. The size of Turing’s audience is unknown, but there were numerous young engineers like Bayley at Hanslope.

    These notes can reasonably be given the title Turing’s Lectures on Advanced Mathematics for Electrical Engineers. Running to 180 pages, they are the most extensive noncryptographic work by Turing currently known, vying in length with his 1940 write-up about Enigma and the Bombe, affectionately known at Bletchley Park as “Prof’s Book.”

    Stepping back a little helps to put this important discovery into context. The traditional picture of Turing held him to be a mathematician’s mathematician, dwelling in a realm far removed from practical engineering. In 1966, for instance, Scientific American ran an article by the legendary computer scientist and AI pioneer John McCarthy, in which he stated that Turing’s work did not play “any direct role in the labors of the men who made the computer a reality.” It was a common view at the time.

    A binder filled with Bayley’s notes of Turing’s lectures is the jewel of the recently sold document collection. Bonhams

    As we now know, though, after the war Turing himself designed an electronic computer, called the Automatic Computing Engine, or ACE. What’s more, he designed the programming system for the Manchester University “Baby” computer, as well as the hardware for its punched-tape input/output. Baby came to life in mid-1948. Although small, it was the first truly stored-program electronic computer. Two years later, the prototype of Turing’s ACE ran its first program. The prototype was later commercialized as the English Electric DEUCE (Digital Electronic Universal Computing Engine). Dozens of DEUCEs were purchased—big sales in those days—and so Turing’s computer became a major workhorse during the first decades of the Digital Age.

    Yet the image has persisted of Turing as someone who made fundamental yet abstract contributions, rather than as someone whose endeavors sometimes fit onto the spectrum from bench electronics through to engineering theory. The Bayley papers bring a different Turing into focus: Turing the creative electrical engineer, with blobs of solder all over his shoes—even if his soldered joints did have a tendency to come apart, as Bayley loved to relate.

    Turing’s lecture notes are in effect a textbook, terse and selective, on advanced math for circuit engineers, although now very out-of-date, of course.

    There is little specifically about electronics in the lectures, aside from passing mentions, such as a reference to cathode followers. When talking about the Delilah project, Bayley liked to say that Turing had only recently taught himself elementary electronics, by studying an RCA vacuum tube manual while he crossed the Atlantic from New York to Liverpool in March 1943. This cannot be entirely accurate, however, because in 1940 Turing’s “Prof’s Book” described the use of some electronics. He detailed an arrangement of 26 thyratron tubes powered by a 26-phase supply, with each tube controlling a double-coil relay “which only trips if the thyratron fails to fire.”

    Turing’s knowledge of practical electronics was probably inferior to his assistant’s, initially anyway, since Bayley had studied the subject at university and then was involved with radar before his transfer to Hanslope Park. When it came to the mathematical side of things, however, the situation was very different. The Bayley papers demonstrate the maturity of Turing’s knowledge of the mathematics of electrical circuit design—knowledge that was essential to the success of the Delilah project.

    The unusual breadth of Turing’s intellectual talents—mathematician, logician, code breaker, philosopher, computer theoretician, AI pioneer, and computational biologist—is already part and parcel of his public persona. To these must now also be added an appreciation of his idiosyncratic prowess in electrical engineering.

    Some of the content in this story originally appeared in Jack Copeland’s report for the Bonhams auction house.

  • Better AI Is a Matter of Timing
    by Dina Genkina on 03. February 2025. at 16:00



    AI is changing everything in data centers: New AI-specific chips, new cooling techniques, and new storage drives. Now even the method for keeping time is starting to change, with an announcement from SiTime that the company has developed a new clock that is optimized for AI workloads.

    The company says the development will lead to significant energy savings and lower costs for AI training and inference. SiTime was able to achieve these savings by using microelectromechanical systems (MEMS) as the core timekeeping component instead of traditional quartz crystals.

    Almost every part of a computer has some kind of clock: CPUs, GPUs, network interface cards, switches, and sometimes even active interconnects contain their own timekeeping component. For more traditional computing workloads, these clocks usually fall into one of two categories: fast, precisely timed clocks, or clocks that are well synchronized across multiple GPUs (or CPUs), says Ian Cutress, chief analyst at More Than Moore, who works with SiTime.

    “The problem with AI is that it’s doing both,” says Cutress. “You want your chip to go as fast as possible, but then you also want to synchronize across 100,000 chips.”

    SiTime’s Super-TCXO clock combines the functionality of ultra-stable and well-synchronized clocks into a single device, providing synchronization that is 3 times as good as a comparable quartz-based component at a bandwidth of 800 gigabits per second, in a chip that’s one-fourth the size.

    Better Timing Leads to Energy Savings

    AI is a data-hungry beast. And yet, expensive and power-guzzling GPUs sit idle up to 57 percent of the time waiting for their next batch of data. If data could be served up more quickly, fewer GPUs would be needed, and they could be used more efficiently.

    “You need faster bandwidth. Because you need faster bandwidth, you need better timing,” says Piyush Sevalia, executive vice president of marketing at SiTime.

    In addition, one can save a lot of power if GPUs can be put into sleep mode while they’re waiting for more data to load, Cutress says. This, too, requires more precise timing, such that the sleep-wake cycle can happen quickly enough to keep up with the data stream.

    For AI, clocks not only need to be more precise, but also synchronized perfectly across many GPUs. Large AI models split their tasks among many GPUs, with each one doing a small chunk of the calculation. Then, their results are stitched back together. If one GPU lags behind the others, the whole calculation will have to wait for that node. In other words, the computation is only as fast as the weakest link. All of the GPUs remain turned on while they wait, so any such delay results in energy losses.
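
    To make that weakest-link effect concrete, here is a minimal, purely illustrative Python sketch (our own toy model, not SiTime’s analysis, with made-up numbers): each GPU finishes its chunk at a slightly different moment, the synchronized step lasts as long as the slowest GPU, and every other GPU burns idle time waiting.

    ```python
    # Toy model of synchronized training steps; all numbers are illustrative.
    import random

    def synchronized_step(num_gpus=8, mean_ms=10.0, jitter_ms=1.0, seed=0):
        rng = random.Random(seed)
        # Each GPU finishes its chunk at a slightly different time.
        finish_times = [rng.gauss(mean_ms, jitter_ms) for _ in range(num_gpus)]
        step_time = max(finish_times)            # everyone waits for the slowest GPU
        idle_time = sum(step_time - t for t in finish_times)
        return step_time, idle_time

    step, idle = synchronized_step()
    print(f"step time: {step:.2f} ms, total GPU idle time: {idle:.2f} ms")

    # Tighter timing (smaller jitter) shrinks both the step time and the idle waste.
    tight_step, tight_idle = synchronized_step(jitter_ms=0.1)
    print(f"with tighter timing: {tight_step:.2f} ms step, {tight_idle:.2f} ms idle")
    ```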

    High Time for MEMS Time

    The timing must be precise, well synchronized, and robust—any mechanical vibrations or temperature swings have to be compensated for to ensure they don’t throw off the computation. SiTime’s Super-TCXO aims to combine all three requirements in a single device.

    Sevalia says using a MEMS oscillator rather than traditional quartz makes that combination possible. Quartz oscillators use the vibrations of precisely machined quartz crystals—similar to a tuning fork. In contrast, MEMS oscillators are manufactured, not machined, to resonate at a specific frequency. MEMS devices can be smaller, which makes them less sensitive to mechanical strains. They can also be more precise.

    “Crystal oscillators have been around since the beginning of time, since compute was a thing,” says Dave Altavilla, president and principal analyst at HotTech Vision & Analysis, who also works with SiTime. “We’ve improved that technology dramatically since its inception. But MEMS takes it another step further beyond what a crystal is capable of. So that’s what I think is being displaced in the market by this new technology is the old way of doing things.”

    SiTime’s MEMS-based solutions are already having some success—Nvidia’s Spectrum-X Switch silicon contains a SiTime device.

    Sevalia says he expects the need for MEMS-based timing devices to continue. The company is already planning even higher-bandwidth devices, and it hopes those innovations will result in even more energy savings. “We’re just scratching the surface right now in terms of figuring out how much energy efficiency we can bring,” Sevalia says.

  • These Graphene Tattoos Are Actually Biosensors
    by Dmitry Kireev on 03. February 2025. at 14:00



    Imagine it’s the year 2040, and a 12-year-old kid with diabetes pops a piece of chewing gum into his mouth. A temporary tattoo on his forearm registers the uptick in sugar in his bloodstream and sends that information to his phone. Data from this health-monitoring tattoo is also uploaded to the cloud so his mom can keep tabs on him. She has her own temporary tattoos—one for measuring the lactic acid in her sweat as she exercises and another for continuously tracking her blood pressure and heart rate.

    Right now, such tattoos don’t exist, but the key technology is being worked on in labs around the world, including my lab at the University of Massachusetts Amherst. The upside is considerable: Electronic tattoos could help people track complex medical conditions, including cardiovascular, metabolic, immune system, and neurodegenerative diseases. Almost half of U.S. adults may be in the early stages of one or more of these disorders right now, although they don’t yet know it.

    Technologies that allow early-stage screening and health tracking long before serious problems show up will lead to better outcomes. We’ll be able to look at factors involved in disease, such as diet, physical activity, environmental exposure, and psychological circumstances. And we’ll be able to conduct long-term studies that track the vital signs of apparently healthy individuals as well as the parameters of their environments. That data could be transformative, leading to better treatments and preventative care. But monitoring individuals over not just weeks or months but years can be achieved only with an engineering breakthrough: affordable sensors that ordinary people will use routinely as they go about their lives.

    Building this technology is what’s motivating the work at my 2D bioelectronics lab, where we study atomically thin materials such as graphene. I believe these materials’ properties make them uniquely suited for advanced and unobtrusive biological monitors. My team is developing graphene electronic tattoos that anyone can place on their skin for chemical or physiological biosensing.

    The Rise of Epidermal Electronics

    The idea of a peel-and-stick sensor comes from the groundbreaking work of John Rogers and his team at Northwestern University. Their “epidermal electronics” embed state-of-the-art silicon chips, sensors, light-emitting diodes, antennas, and transducers into thin epidermal patches, which are designed to monitor a variety of health factors. One of Rogers’s best-known inventions is a set of wireless stick-on sensors for newborns in the intensive care unit that make it easier for nurses to care for the fragile babies—and for parents to cuddle them. Rogers’s wearables are typically less than a millimeter thick, which is thin enough for many medical applications. But to make a patch that people would be willing to wear all the time for years, we’ll need something much less obtrusive.

    In search of thinner wearable sensors, Deji Akinwande and Nanshu Lu, professors at the University of Texas at Austin, created graphene electronic tattoos (GETs) in 2017. Their first GETs, about 500 nanometers thick, were applied just like the playful temporary tattoos that kids wear: The user simply wets a piece of paper to transfer the graphene, supported by a polymer, onto the skin.

    Graphene is a wondrous material composed of a single layer of carbon atoms. It’s exceptionally conductive, transparent, lightweight, strong, and flexible. When used within an electronic tattoo, it’s imperceptible: The user can’t even feel its presence on the skin. Tattoos using 1-atom-thick graphene (combined with layers of other materials) are roughly one-hundredth the thickness of a human hair. They’re soft and pliable, and conform perfectly to the human anatomy, following every groove and ridge.

    A close-up photo shows an area of skin with a nearly invisible clear shape adhering to the skin. The ultrathin graphene tattoos are soft and pliable, conforming to the skin’s grooves and ridges. Dmitry Kireev/The University of Texas at Austin

    Some people mistakenly think that graphene isn’t biocompatible and can’t be used in bioelectronic applications. More than a decade ago, during the early stages of graphene development, some preliminary reports found that graphene flakes are toxic to live cells, mainly because of their size and the chemical doping used in the fabrication of certain types of graphene. Since then, however, the research community has realized that there are at least a dozen functionally different forms of graphene, many of which are not toxic, including oxidized sheets, graphene grown via chemical vapor deposition, and laser-induced graphene. For example, a 2024 paper in Nature Nanotechnology reported no toxicity or adverse effects when graphene oxide nanosheets were inhaled.

    We know that the 1-atom-thick sheets of graphene being used to make e-tattoos are completely biocompatible. This type of graphene has already been used for neural implants without any sign of toxicity, and can even encourage the proliferation of nerve cells. We’ve tested graphene-based tattoos on dozens of subjects, who have experienced no side effects, not even minor skin irritation.

    When Akinwande and Lu created the first GETs in 2017, I had just finished my Ph.D. in bioelectronics at the German research institute Forschungszentrum Jülich. I joined Akinwande’s lab, and more recently have continued the work at my own lab in Amherst. My collaborators and I have made substantial progress in improving the GETs’ performance; in 2022 we published a report on version 2.0, and we’ve continued to push the technology forward.

    Electronic Tattoos for Heart Disease

    According to the World Health Organization, cardiovascular diseases are the leading cause of death worldwide, with causal factors including diet, lifestyle, and environmental pollution. The long-term tracking of people’s cardiac activity—specifically their heart rate and blood pressure—would be a straightforward way to keep tabs on people who show signs of trouble. Our e-tattoos would be ideal for this purpose.

    Measuring heart rate is the easier task, as the cardiac tissue produces obvious electrical signals when the muscles depolarize and repolarize to produce each heartbeat. To detect such electrocardiogram signals, we place a pair of GETs on a person’s skin, either on the chest near the heart or on the two arms. A third tattoo is placed elsewhere and used as a reference point. In what’s known as a differential amplification process, an amplifier takes in signals from all three electrodes but ignores signals that appear in both the reference and the measuring electrodes, and only amplifies the signal that represents the difference between the two measuring electrodes. This way, we isolate the relevant cardiac electrical activity from the surrounding electrophysiological noise of the human body. We’ve been using off-the-shelf amplifiers from companies like OpenBCI that are packaged into wireless devices.
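
    As a rough illustration of that differential measurement, here is a toy Python sketch with synthetic signals (our simplification, not the OpenBCI amplifier’s actual signal chain): interference that appears identically on both measuring electrodes cancels, and only the cardiac difference signal survives.

    ```python
    # Toy illustration of differential amplification with synthetic signals.
    import numpy as np

    fs = 500                                   # sample rate in Hz (assumed)
    t = np.arange(0, 2, 1 / fs)
    ecg = 0.5 * np.sin(2 * np.pi * 1.2 * t)    # stand-in for the cardiac signal
    noise = 0.3 * np.sin(2 * np.pi * 60 * t)   # powerline hum, identical on all electrodes

    electrode_a = ecg / 2 + noise              # measuring electrode near the heart
    electrode_b = -ecg / 2 + noise             # second measuring electrode
    reference = noise                          # reference tattoo elsewhere on the body

    # The amplifier rejects what the measuring electrodes share with the reference
    # and amplifies only the difference between the two measuring electrodes.
    gain = 1000
    output = gain * ((electrode_a - reference) - (electrode_b - reference))
    # `output` is proportional to `ecg`; the 60 Hz interference has cancelled out.
    ```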

    Continuously measuring blood pressure via tattoo is much more difficult. We started that work with Akinwande of UT Austin in collaboration with Roozbeh Jafari of Texas A&M University (now at MIT’s Lincoln Laboratory). Surprisingly, the blood pressure monitors that doctors use today aren’t significantly different from the ones that doctors were using 100 years ago. You almost certainly have encountered such a device yourself. The machine uses a cuff, usually placed around the upper arm, that inflates to apply pressure on an artery until it briefly stops the flow of blood; the cuff then slowly deflates. While deflating, the machine records the beats as the heart pushes blood through the artery and measures the highest (systolic) and lowest (diastolic) pressure. While the cuff works well in a doctor’s office, it can’t provide a continuous reading or take measurements when a person is on the move. In hospital settings, nurses wake up patients at night to take blood pressure readings, and at-home devices require users to be proactive about monitoring their levels.

    A diagram shows an arm with electrodes on the wrist above the site of an underlying artery. Two simplified charts show an inverse relationship between blood pressure and bioimpedance. Graphene electronic tattoos (GETs) can be used for continuous blood pressure monitoring. Two GETs placed on the skin act as injecting electrodes [red] and send a tiny current through the arm. Because blood conducts electricity better than tissue, the current moves through the underlying artery. Four GETs acting as sensing electrodes [blue] measure the bioimpedance—the body’s resistance to electric current—which changes according to the volume of blood moving through the artery with every heartbeat. We’ve trained a machine learning model to understand the correlation between bioimpedance readings and blood pressure. Chris Philpot

    We developed a new system that uses only stick-on GETs to measure blood pressure continuously and unobtrusively. As we described in a 2022 paper, the GET doesn’t measure pressure directly. Instead, it measures electrical bioimpedance—the body’s resistance to an electric current. We use several GETs to inject a small-amplitude current (50 microamperes at present), which goes through the skin to the underlying artery; GETs on the other side of the artery then measure the impedance of the tissue. The rich ionic solution of the blood within the artery acts as a better conductor than the surrounding fat and muscle, so the artery is the lowest-resistance path for the injected current. As blood flows through the artery, its volume changes slightly with each heartbeat. These changes in blood volume alter the impedance levels, which we then correlate to blood pressure.
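
    At its core, the measurement is an application of Ohm’s law: the sensing electrodes record a tiny voltage, and dividing by the known injected current gives the impedance. The sketch below is a bare-bones illustration with hypothetical numbers; the real system uses AC excitation and far more careful signal processing.

    ```python
    # Bare-bones bioimpedance calculation (hypothetical numbers): Z = V / I.
    injected_current = 50e-6                   # 50 microamperes, as described above
    measured_voltages = [2.50e-3, 2.48e-3, 2.46e-3, 2.49e-3]   # volts, made-up samples

    impedances = [v / injected_current for v in measured_voltages]   # ohms
    pulse_swing = max(impedances) - min(impedances)                  # change over one beat
    print(f"baseline ~{impedances[0]:.1f} ohms, beat-to-beat swing ~{pulse_swing:.2f} ohms")
    ```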

    While there is a clear correlation between bioimpedance and blood pressure, it’s not a linear relationship—so this is where machine learning comes in. To train a model to understand the correlation, we ran a set of experiments while carefully monitoring our subjects’ bioimpedance with GETs and their blood pressure with a finger-cuff device. We recorded data as the subjects performed hand grip exercises, dipped their hands into ice-cold water, and did other tasks that altered their blood pressure.
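
    A minimal sketch of that per-subject calibration step, under our own assumptions (hypothetical features, synthetic data, and a generic nonlinear regressor rather than the authors’ actual model), might look like this:

    ```python
    # Sketch of per-subject calibration: regress cuff blood pressure on
    # bioimpedance-derived features. Data and feature choices are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Each row: features extracted from the bioimpedance waveform (e.g., pulse
    # amplitude, timing intervals); targets: simultaneous finger-cuff systolic readings.
    X_train = np.random.rand(200, 4)
    y_train = 110 + 30 * X_train[:, 0] + 5 * np.random.randn(200)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Later, features from new GET recordings of the same subject are fed back in.
    X_new = np.random.rand(5, 4)
    print(model.predict(X_new))     # estimated systolic pressure, mmHg
    ```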

    Our graphene tattoos were indispensable for these model-training experiments. Bioimpedance can be recorded with any kind of electrode—a wristband with an array of aluminum electrodes could do the job. However, the correlation between the measured bioimpedance and blood pressure is so precise and delicate that moving the electrodes by just a few millimeters (like slightly shifting a wristband) would render the data useless. Our graphene tattoos kept the electrodes at exactly the same location during the entire recording.

    Once we had the trained model, we used GETs to again record those same subjects’ bioimpedance data and then derive from that data their systolic, diastolic, and mean blood pressure. We tested our system by continuously measuring their blood pressure for more than 5 hours, a tenfold longer period than in previous studies. The measurements were very encouraging. The tattoos produced more accurate readings than blood-pressure-monitoring wristbands did, and their performance met the criteria for the highest accuracy ranking under the IEEE standard for wearable cuffless blood-pressure monitors.

    While we’re pleased with our progress, there’s still more to do. Each person’s biometric patterns are unique—the relationship between a person’s bioimpedance and blood pressure is uniquely their own. So at present we must calibrate the system anew for each subject. We need to develop better mathematical analyses that would enable a machine learning model to describe the general relationship between these signals.

    Tracking Other Cardiac Biomarkers

    With the support of the American Heart Association, my lab is now working on another promising GET application: measuring arterial stiffness and plaque accumulation within arteries, which are both risk factors for cardiovascular disease. Today, doctors typically check for arterial stiffness and plaque using diagnostic tools such as ultrasound and MRI, which require patients to visit a medical facility, involve expensive equipment, and rely on highly trained professionals to perform the procedures and interpret the results.

    A photo shows a forearm and hand with the palm facing up. On both the left and right sides of the forearm, a line of six small shapes adheres to the skin. Graphene tattoos can be used to continuously measure a person’s bioimpedance, or the body’s resistance to an electric current, which is correlated to the person’s blood pressure. Dmitry Kireev/The University of Texas at Austin and Kaan Sel/Texas A&M University

    With GETs, doctors could easily and quickly take measurements at multiple locations on the body, getting both local and global perspectives. Since we can stick the tattoos anywhere, we can get measurements from major arteries that are otherwise difficult to reach with today’s tools, such as the carotid artery in the neck. The GETs also provide an extremely fast readout of electrical measurements. And we believe we can use machine learning to correlate bioimpedance measurements with both arterial stiffness and plaque—it’s just a matter of conducting the tailored set of experiments and gathering the necessary data.

    Using GETs for these measurements would allow researchers to look deeper into how stiffening arteries and the buildup of plaque are related to the development of high blood pressure. Tracking this information for a long time in a large population would help clinicians understand the problems that eventually lead to major heart diseases—and perhaps help them find ways to prevent those diseases.

    What Can You Learn from Sweat?

    In a different area of work, my lab has just begun developing graphene tattoos for sweat biosensing. When people sweat, the liquid carries salts and other compounds onto the skin, and sensors can detect markers of good health or disease. We’re initially focusing on cortisol, a hormone associated with stress, stroke, and several disorders of the endocrine system. Down the line, we hope to use our tattoos to sense other compounds in sweat, such as glucose, lactate, estrogen, and inflammation markers.

    Several labs have already introduced passive or active electronic patches for sweat biosensing. The passive systems use a chemical indicator that changes color when it reacts with specific components in sweat. The active electrochemical devices, which typically use three electrodes, can detect substances across a wide range of concentrations and yield accurate data, but they require bulky electronics, batteries, and signal processing units. And both types of patches use cumbersome microfluidic chambers for sweat collection.

    In our GETs for sweat, we use the graphene as a transistor. We modify the graphene’s surface by adding certain molecules, such as antibodies, that are designed to bind to specific targets. When a target substance binds to the antibody, it changes the resistance of the graphene transistor, producing a measurable electrical signal. That resistance change is converted into a readout that indicates the presence and concentration of the target molecule.
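
    As an illustration of that last conversion step, here is a hedged sketch that inverts a Langmuir-style binding curve to map a resistance shift to a concentration; the curve shape and all parameter values are assumptions for demonstration, not measured GET characteristics.

    ```python
    # Hypothetical calibration curve: invert a Langmuir-style binding model,
    # delta_R = delta_r_max * C / (k_d + C), to estimate concentration C.
    import math

    def concentration_from_resistance(r_measured, r_baseline,
                                      delta_r_max=120.0, k_d=5.0):
        """Resistances in ohms; k_d and the returned concentration are in
        arbitrary units. All parameters are illustrative assumptions."""
        delta_r = r_measured - r_baseline
        if delta_r <= 0:
            return 0.0
        if delta_r >= delta_r_max:
            return math.inf          # sensor saturated
        return k_d * delta_r / (delta_r_max - delta_r)

    print(concentration_from_resistance(r_measured=1060.0, r_baseline=1000.0))
    ```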

    We’ve already successfully developed standalone graphene biosensors that can detect food toxins, measure ferritin (a protein that stores iron), and distinguish between the COVID-19 and flu viruses. Those standalone sensors look like chips, and we place them on a tabletop and drip liquid onto them for the experiments. With support from the U.S. National Science Foundation, we’re now integrating this transistor-based sensing approach into GET wearable biosensors that can be stuck on the skin for direct contact with the sweat.

    We’ve also improved our GETs by adding microholes to allow for water transport, so that sweat doesn’t accumulate under the GET and interfere with its function. Now we’re working to ensure that enough sweat is coming from the sweat ducts and into the tattoo, so that the target substances can react with the graphene.

    The Way Forward for Graphene Tattoos

    Turning our technology into user-friendly products presents a few engineering challenges. Most importantly, we need to figure out how to integrate these smart e-tattoos into an existing electronic network. At the moment, we have to connect our GETs to standard electronic circuits to deliver the current, record the signal, and transmit and process the information. That means the person wearing the tattoo must be wired to a tiny computing chip that then wirelessly transmits the data. Over the next five to ten years, we hope to integrate the e-tattoos with smartwatches. This integration will require a hybrid interconnect to join the flexible graphene tattoo to the smartwatch’s rigid electronics.

    In the long term, I envision 2D graphene materials being used for fully integrated electronic circuits, power sources, and communication modules. Microelectronic giants such as Imec and Intel are already pursuing electronic circuits and nodes made from 2D materials instead of silicon.

    Perhaps in 20 years, we’ll have 2D electronic circuits that can be integrated with soft human tissue. Imagine electronics embedded in the skin that continuously monitor health-related biomarkers and provide real-time feedback through subtle, user-friendly displays. This advancement would offer everyone a convenient and noninvasive way to stay informed and proactively manage their own health, beginning a new era of human self-knowledge.

  • The Starting Line for Self-Driving Cars
    by Allison Marsh on 01. February 2025. at 14:00



    The 2004 DARPA Grand Challenge was a spectacular failure. The Defense Advanced Research Projects Agency had offered a US $1 million prize for the team that could design an autonomous ground vehicle capable of completing an off-road course through sometimes flat, sometimes winding and mountainous desert terrain. As IEEE Spectrum reported at the time, it was “the motleyest assortment of vehicles assembled in one place since the filming of Mad Max 2: The Road Warrior.” Not a single entrant made it across the finish line. Some didn’t make it out of the parking lot.

    Videos of the attempts are comical, although any laughter comes at the expense of the many engineers who spent countless hours and millions of dollars to get to that point.

    So it’s all the more remarkable that in the second DARPA Grand Challenge, just a year and a half later, five vehicles crossed the finish line. Stanley, developed by the Stanford Racing Team, eked out a first-place win to claim the $2 million purse. This modified Volkswagen Touareg [shown at top] completed the 212-kilometer course in 6 hours, 54 minutes. Carnegie Mellon’s Sandstorm and H1ghlander took second and third place, respectively, with times of 7:05 and 7:14.

    Kat-5, sponsored by the Gray Insurance Co. of Metairie, La., came in fourth with a respectable 7:30. The vehicle was named after Hurricane Katrina, which had just pummeled the Gulf Coast a month and a half earlier. Oshkosh Truck’s TerraMax also finished the circuit, although its time of 12:51 exceeded the 10-hour time limit set by DARPA.

    So how did the Grand Challenge go from a total bust to having five robust finishers in such a short period of time? It’s definitely a testament to what can be accomplished when engineers rise to a challenge. But the outcome of this one race was built on a much longer path of research, and that research, plus a little bit of luck, is what ultimately led to victory.

    Before Stanley, there was Minerva

    Let’s back up to 1998, when computer scientist Sebastian Thrun was working at Carnegie Mellon and experimenting with a very different robot: a museum tour guide. For two weeks in the summer, Minerva, which looked a bit like a Dalek from “Doctor Who,” navigated an exhibit at the Smithsonian National Museum of American History. Its main task was to roll around and dispense nuggets of information about the displays.

    Minerva was a museum tour-guide robot developed by Sebastian Thrun.

    In an interview at the time, Thrun acknowledged that Minerva was there to entertain. But Minerva wasn’t just a people pleaser; it was also a machine learning experiment. It had to learn where it could safely maneuver without taking out a visitor or a priceless artifact. Visitor, nonvisitor; display case, not-display case; open floor, not-open floor. It had to react to humans crossing in front of it in unpredictable ways. It had to learn to “see.”

    Fast-forward five years: Thrun transferred to Stanford in July 2003. Inspired by the first Grand Challenge, he organized the Stanford Racing Team with the aim of fielding a robotic car in the second competition.

    In a vast oversimplification of Stanley’s main task, the autonomous robot had to differentiate between road and not-road in order to navigate the route successfully. The Stanford team decided to focus its efforts on developing software and used as much off-the-shelf hardware as it could, including a laser to scan the immediate terrain and a simple video camera to scan the horizon. Software overlapped the two inputs, adapted to the changing road conditions on the fly, and determined a safe driving speed. (For more technical details on Stanley, check out the team’s paper.) A remote-control kill switch, which DARPA required on all vehicles, would deactivate the car before it could become a danger. About 100,000 lines of code did that and much more.
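
    To give a flavor of that fusion logic, here is a deliberately toy Python sketch (a few lines standing in for Stanley’s roughly 100,000; the thresholds and speeds are invented): the laser vouches for the ground immediately ahead, and the camera’s longer view decides whether it is safe to accelerate.

    ```python
    # Toy speed selector combining a near-range laser check with a long-range
    # camera road classification. Thresholds and speeds are invented.
    def choose_speed(laser_clear_m, camera_sees_road_ahead,
                     v_min=8.0, v_cruise=25.0, v_max=38.0):
        """Return a target speed in km/h.
        laser_clear_m: distance (meters) of drivable terrain confirmed by the laser.
        camera_sees_road_ahead: True if the vision system classifies the horizon as road."""
        if laser_clear_m < 10:
            return v_min                    # obstacle or rough ground close by: crawl
        if camera_sees_road_ahead and laser_clear_m > 25:
            return v_max                    # long, clear stretch: open it up
        return v_cruise                     # otherwise hold a moderate speed

    print(choose_speed(laser_clear_m=30, camera_sees_road_ahead=True))   # 38.0
    print(choose_speed(laser_clear_m=12, camera_sees_road_ahead=False))  # 25.0
    ```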

    The Stanford team hadn’t entered the 2004 Grand Challenge and wasn’t expected to win the 2005 race. Carnegie Mellon, meanwhile, had two entries—a modified 1986 Humvee and a modified 1999 Hummer—and was the clear favorite. In the 2004 race, CMU’s Sandstorm had gone furthest, completing 12 km. For the second race, CMU brought an improved Sandstorm as well as a new vehicle, H1ghlander.

    Many of the other 2004 competitors regrouped to try again, and new ones entered the fray. In all, 195 teams applied to compete in the 2005 event. Teams included students, academics, industry experts, and hobbyists.

    After site visits in the spring, 43 teams made it to the qualifying event, held 27 September through 5 October at the California Speedway, in Fontana. Each vehicle took four runs through the course, navigating through checkpoints and avoiding obstacles. A total of 23 teams were selected to attempt the main course across the Mojave Desert. Competing was a costly endeavor—CMU’s Red Team spent more than $3 million in its first year—and the names of sponsors were splashed across the vehicles like the logos on race cars.

    In the early hours of 8 October, the finalists gathered for the big race. Each team had a staggered start time to help avoid congestion along the route. About two hours before a team’s start, DARPA gave them a CD containing approximately 3,000 GPS coordinates representing the course. Once the team hit go, it was hands off: The car had to drive itself without any human intervention. PBS’s NOVA produced an excellent episode on the 2004 and 2005 Grand Challenges that I highly recommend if you want to get a feel for the excitement, anticipation, disappointment, and triumph.

    Photo of a red SUV covered with instruments and company logos driving along a dirt road in the desert. In the 2005 Grand Challenge, Carnegie Mellon University’s H1ghlander was one of five autonomous cars to finish the race. Damian Dovarganes/AP

    H1ghlander held the pole position, having placed first in the qualifying rounds, followed by Stanley and Sandstorm. H1ghlander pulled ahead early and soon had a substantial lead. That’s where luck, or rather the lack of it, came in.

    About two hours into the race, H1ghlander slowed down and started rolling backward down a hill. Although it eventually resumed moving forward, it never regained its top speed, even on long, straight, level sections of the course. The slower but steadier Stanley caught up to H1ghlander at the 163-km (101.5-mile) marker, passed it, and never let go of the lead.

    What went wrong with H1ghlander remained a mystery, even after extensive postrace analysis. It wasn’t until 12 years after the race—and once again with a bit of luck—that CMU discovered the problem: Pressing on a small electronic filter between the engine control module and the fuel injector caused the engine to lose power and even turn off. Team members speculated that an accident a few weeks before the competition had damaged the filter. (To learn more about how CMU finally figured this out, see Spectrum Senior Editor Evan Ackerman’s 2017 story.)

    The Legacy of the DARPA Grand Challenge

    Regardless of who won the Grand Challenge, many success stories came out of the contest. A year and a half after the race, Thrun had already made great progress on adaptive cruise control and lane-keeping assistance, which are now readily available on many commercial vehicles. He then worked on Google’s Street View and its initial self-driving cars. CMU’s Red Team worked with NASA to develop rovers for potentially exploring the moon or distant planets. Closer to home, they helped develop self-propelled harvesters for the agricultural sector.

    Photo of a smiling man sitting on the hood of a dusty blue SUV that is covered with company logos and has instruments on the roof. Stanford team leader Sebastian Thrun holds a $2 million check, the prize for winning the 2005 Grand Challenge. Damian Dovarganes/AP

    Of course, there was also a lot of hype, which tended to overshadow the race’s militaristic origins—remember, the “D” in DARPA stands for “defense.” Back in 2000, a defense authorization bill had stipulated that one-third of the U.S. ground combat vehicles be “unmanned” by 2015, and DARPA conceived of the Grand Challenge to spur development of these autonomous vehicles. The U.S. military was still fighting in the Middle East, and DARPA promoters believed self-driving vehicles would help minimize casualties, particularly those caused by improvised explosive devices.

    DARPA sponsored more contests, such as the 2007 Urban Challenge, in which vehicles navigated a simulated city and suburban environment; the 2012 Robotics Challenge for disaster-response robots; and the 2022 Subterranean Challenge for—you guessed it—robots that could get around underground. Despite the competitions, continued military conflicts, and hefty government contracts, actual advances in autonomous military vehicles and robots did not take off to the extent desired. As of 2023, robotic ground vehicles made up only 3 percent of the global armored-vehicle market.

    Today, there are very few fully autonomous ground vehicles in the U.S. military; instead, the services have forged ahead with semiautonomous, operator-assisted systems, such as remote-controlled drones and ship autopilots. The one Grand Challenge finisher that continued to work for the U.S. military was Oshkosh Truck, the Wisconsin-based sponsor of the TerraMax. The company demonstrated a palletized loading system to transport cargo in unmanned vehicles for the U.S. Army.

    Much of the contemporary reporting on the Grand Challenge predicted that self-driving cars would take us closer to a “Jetsons” future, with a self-driving vehicle to ferry you around. But two decades after Stanley, the rollout of civilian autonomous cars has been confined to specific applications, such as Waymo robotaxis transporting people around San Francisco or the GrubHub Starships struggling to deliver food across my campus at the University of South Carolina.

    I’ll be watching to see how the technology evolves outside of big cities. Self-driving vehicles would be great for long distances on empty country roads, but parts of rural America still struggle to get adequate cellphone coverage. Will small towns and the spaces that surround them have the bandwidth to accommodate autonomous vehicles? As much as I’d like to think self-driving autos are nearly here, I don’t expect to find one under my carport anytime soon.

    A Tale of Two Stanleys

    Not long after the 2005 race, Stanley was ready to retire. Recalling his experience testing Minerva at the National Museum of American History, Thrun thought the museum would make a nice home. He loaned it to the museum in 2006, and since 2008 it has resided permanently in the museum’s collections, alongside other remarkable specimens in robotics and automobiles. In fact, it isn’t even the first Stanley in the collection.

    Photo of an early 20th-century open-top car. Stanley now resides in the collections of the Smithsonian Institution’s National Museum of American History, which also houses another Stanley—this 1910 Stanley Runabout. Behring Center/National Museum of American History/Smithsonian Institution

    That distinction belongs to a 1910 Stanley Runabout, an early steam-powered car introduced at a time when it wasn’t yet clear that the internal-combustion engine was the way to go. Despite clear drawbacks—steam engines had a nasty tendency to explode—“Stanley steamers” were known for their fine craftsmanship. Fred Marriott set the land speed record while driving a Stanley in 1906. It clocked in at 205.5 kilometers per hour, which was significantly faster than the 21st-century Stanley’s average speed of 30.7 km/hr. To be fair, Marriott’s Stanley was racing over a flat, straight course rather than the off-road terrain navigated by Thrun’s Stanley.

    Across the century that separates the two Stanleys, it’s easy to trace a narrative of progress. Both are clearly recognizable as four-wheeled land vehicles, but I suspect the science-fiction dreamers of the early 20th century would have been hard-pressed to imagine the suite of technologies that would propel a 21st-century self-driving car. What will the vehicles of the early 22nd century be like? Will they even have four tires, or will they run on something entirely new?

    Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

    An abridged version of this article appears in the February 2025 print issue as “Slow and Steady Wins the Race.”

    References


    Sebastian Thrun and his colleagues at the Stanford Artificial Intelligence Laboratory, along with members of the other groups that sponsored Stanley, published “Stanley: The Robot That Won the DARPA Grand Challenge.” This paper, from the Journal of Field Robotics, explains the vehicle’s development.

    The NOVA PBS episode “The Great Robot Race” provides interviews and video footage from both the failed first Grand Challenge and the successful second one. I personally liked the side story of GhostRider, an autonomous motorcycle that competed in both competitions but didn’t quite cut it. (GhostRider also now resides in the Smithsonian’s collection.)

    Smithsonian curator Carlene Stephens kindly talked with me about how she collected Stanley for the National Museum of American History and where she sees artifacts like this fitting into the stream of history.

  • William Ratcliff, Former IEEE Region 3 Director, Dies at 80
    by Amanda Davis on 31. January 2025. at 19:00

    William Ratcliff
    Former IEEE Region 3 director
    Life senior member, 80; died 20 June

    Ratcliff was the 2008–2009 director of IEEE Region 3 (Southeastern United States).

    An active IEEE volunteer, he led efforts to change the IEEE Regional Activities board to the IEEE Member and Geographic Activities board.

    He also helped develop and launch the IEEE MOVE (Mobile Outreach using Volunteer Engagement) program. The three vehicles in the IEEE-USA initiative provide U.S. communities with power and communications capabilities in areas affected by widespread outages due to natural disasters.

    Ratcliff began his career in 1965 as an electrical engineer at Public Service Indiana, an electric utility based in Indianapolis. There he helped design bulk power systems and developed engineering software. He left in 1985 to join manufacturer Gulfstream Aerospace, in Savannah, Ga., where he was an engineering manager until 1994.

    He earned a bachelor’s degree in electrical engineering from Purdue University, in West Lafayette, Ind.

    Lembit Salasoo
    GE senior research scientist
    Life member, 68; died 17 August

    Salasoo was a scientist for 36 years at the General Electric Global Research Center, in Niskayuna, N.Y.

    He earned two bachelor’s degrees, one in computer science in 1976 and the other in electrical engineering in 1978, both from the University of Sydney.

    He joined the Electricity Commission of New South Wales, an Australian utility, as a power engineer.

    In 1982 he moved to the United States after being accepted into Rensselaer Polytechnic Institute, in New York. He earned a master’s degree in engineering in 1983 and a Ph.D. in electric power engineering in 1986.

    After graduating, he joined GE Research’s superconducting magnet group, where he focused initially on researching conduction-cooled MRI magnets.

    He later worked on computed tomography equipment, including the Gemini tube used in CT scanners.

    He and his team developed a tool to analyze secondary electron emission heat transfer in the tubes. For their work, they received GE’s 1998 Dushman Award, which recognizes contributions to the company.

    In the early 2000s, Salasoo shifted his focus to developing technology for clean energy transportation—namely for hybrid-electric buses, locomotives, and mine trucks. He was part of the research team that conducted a proof-of-concept demonstration of a hybrid locomotive at Union Station in Los Angeles as part of GE’s Ecomagination initiative, a clean-energy R&D program.

    His area of research changed again in the 2010s to developing financial systems for GE’s Applied Statistics Lab. His work involved making GE Capital, the company’s financial services subsidiary, compliant as a systemically important financial institution (SIFI). Should a SIFI fail, it could trigger a financial crisis, so it must adhere to strict regulations. For his work, he received the 2015 Dushman Award.

    From 2015 to 2020 he developed defect detection models at GE used in metal additive manufacturing. In 2023 he led GE’s climate research team in developing technology that predicts and mitigates the formation of long-lasting cirrus clouds, commonly known as contrails, produced by aircraft emissions. Under his leadership, the team won a grant from the Advanced Research Projects Agency–Energy.

    Karl Kay Womack
    Computer engineer
    Life Fellow, 90; died 10 July

    Womack spent his career working on early computers at IBM in New York City. He earned a master’s degree in electrical engineering from Syracuse University, in New York.

    He was an avid science fiction and fantasy reader, according to his obituary.

    Thomas M. Kurihara
    Chair of IEEE Standards Association working groups
    Life member, 89; died 24 May

    Kurihara was an active volunteer for the IEEE Standards Association. He was chair of the working group that developed the IEEE 1512 series of standards for incident management message sets used by emergency management centers. He also chaired the IEEE 1609 working group, which developed standards for next-generation V2X (vehicle-to-everything) communications.

    A member of the IEEE Vehicular Technology Society, he chaired its intelligent transportation systems standards committee from 2017 to 2022.

    After graduating with a bachelor’s degree in 1957 from Stanford, he joined the U.S. Navy. By the time his active duty ended in 1969, he had attained the rank of lieutenant commander.

    He then worked as an engineer for the U.S. government and in private industry before becoming a consultant.

    Kurihara and his family were sent to Japanese-American internment camps during World War II. After the war ended, they resettled in St. Paul. He was a lifetime supporter of the Twin Cities chapter of the Japanese American Citizens League, a national organization that advocates for civil rights and seeks to preserve the heritage of Asian Americans. He was a member of the St. Paul–Nagasaki Sister City Committee, which promotes beneficial relationships between citizens of the two cities and encourages peace between the United States and Japan.

    In honor of his parents, in 2010 Kurihara established the Earl K. and Ruth N. Tanbara Fund for Japanese American History at the Minnesota Historical Society. The money is used to document and preserve the group’s history, particularly in Minnesota.

    Robert A. Reilly
    Former IEEE Division VI director
    Senior member, 76; died 21 May

    Reilly served as the 2015–2016 director of IEEE Division VI. He was a former president of the IEEE Education Society and a member of numerous IEEE boards and committees.

    He enlisted in the U.S. Army in 1965 and served as a medic in Japan for five years. After returning to the United States in 1970 as a major in the Army Reserve, he enrolled at the University of Massachusetts in Amherst. He received a bachelor’s degree in health and physical education in 1974. Two years later he earned a master’s degree in education from Springfield College, in Massachusetts. He later returned to the University of Massachusetts and in 1996 received a Ph.D. in education.

    Reilly began his career in 1972 as a physical education teacher at St. Matthew’s Parish School in Indian Orchard, Mass. After two years there, he left to teach social studies, math, and science at Our Lady of the Sacred Heart School in Springfield. He worked at the school, which closed in 2006, for three years.

    From 1979 to 1982 he was an instructor at North Adams State College (now the Massachusetts College of Liberal Arts), training educators.

    In 1985 he joined Lanesborough Elementary School as a computer teacher, and he taught there until he retired in 2011.

    In 1992 he founded and served as director of K12 Net, an online communication network for teachers that predated widespread Internet access. From 1995 to 2001 he was director of EdNet@UMass, a Web-based professional development network at the University of Massachusetts’s College of Education.

    Reilly was a visiting scientist in the early 2000s at MIT, where he researched computer-based applications and cognitive learning theories.

    He was a member of the American Society of Engineering Education and served as the 2009–2010 chair of its electrical and computer engineering division. He was president of the National Education Association’s Lanesborough chapter three times.

    He received several IEEE awards including the 2010 IEEE Sayle Award for Achievement in Education from the IEEE Education Society and the 2006 Wilson Transnational Award from IEEE Member and Geographic Activities.

    Ron B. Schroer
    Aerospace engineer
    Life senior member, 92; died 9 May

    Schroer was an aerospace engineer at Martin Marietta (now part of Lockheed Martin) in Denver for more than 30 years.

    After receiving bachelor’s degrees in chemistry and physical science from the University of Wisconsin in La Crosse in 1953, he enlisted in the U.S. Air Force. After his active duty ended in 1957, he earned a master’s degree in instrumentation engineering from the University of Michigan in Detroit and an MBA from the University of Colorado in Denver.

    During his career at Martin Marietta, he worked on the Titan missile program, the NASA Space Shuttle, and a number of Federal Aviation Administration air traffic control systems.

    An active IEEE volunteer, he was editor in chief of IEEE Aerospace and Electronic Systems Magazine and served on the Aerospace and Electronic Systems Society’s board of governors.

  • Video Friday: Aibo Foster Parents
    by Evan Ackerman on 31. January 2025. at 16:30



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
    German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
    European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
    RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
    ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
    ICRA 2025: 19–23 May 2025, ATLANTA
    London Humanoids Summit: 29–30 May 2025, LONDON
    IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
    2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
    RSS 2025: 21–25 June 2025, LOS ANGELES

    Enjoy today’s videos!

    This video about ‘foster’ Aibos helping kids at a children’s hospital is well worth turning on auto-translated subtitles for.

    [ Aibo Foster Program ]

    Hello everyone, let me introduce myself again. I am Unitree H1 “Fuxi”. I am now a comedian at the Spring Festival Gala, hoping to bring joy to everyone. Let’s push boundaries every day and shape the future together.

    [ Unitree ]

    Happy Chinese New Year from PNDbotics!

    [ PNDbotics ]

    In celebration of the upcoming Year of the Snake, TRON 1 swishes into three little lions, eager to spread hope, courage, and strength to everyone in 2025. Wishing you a Happy Chinese New Year and all the best, TRON TRON TRON!

    [ LimX Dynamics ]

    Designing planners and controllers for contact-rich manipulation is extremely challenging as contact violates the smoothness conditions that many gradient-based controller synthesis tools assume. We introduce natural baselines for leveraging contact smoothing to compute (a) open-loop plans robust to uncertain conditions and/or dynamics, and (b) feedback gains to stabilize around open-loop plans.

    Mr. Bucket is my favorite.

    [ Boston Dynamics AI Institute ]

    Thanks, Yuki!

    What do you get when you put three aliens in a robotaxi? The first-ever Zoox commercial! We hope you have as much fun watching it as we had creating it and can’t wait for you to experience your first ride in the not-too-distant future.

    [ Zoox ]

    The Humanoids Summit at the Computer History Museum in December was successful enough (either because of or in spite of my active participation) that it’s not only happening again in 2025: There’s also going to be a spring version of the conference in London in May!

    [ Humanoids Summit ]

    I’m not sure it’ll ever be practical at scale, but I do really like JSK’s musculoskeletal humanoid work.

    [ Paper ]

    In November 2024, as part of the CRS-31 mission, flight controllers remotely maneuvered Canadarm2 and Dextre to extract a payload from the SpaceX Dragon cargo ship’s trunk (CRS-31) and install it on the International Space Station. This animation was developed in preparation for the operation and shows just how complex robotic tasks can be.

    [ Canadian Space Agency ]

    Staci Americas, a third-party logistics provider, addressed its inventory challenges by implementing the Corvus One™ Autonomous Inventory Management System in its Georgia and New Jersey facilities. The system uses autonomous drones for nightly, lights-out inventory scans, identifying discrepancies and improving workflow efficiency.

    [ Corvus Robotics ]

    Thanks, Joan!

    I would have said that this controller was too small to be manipulated with a pinch grasp. I would be wrong.

    [ Pollen ]

    How does NASA plan to use resources on the surface of the Moon? One method is the ISRU Pilot Excavator, or IPEx! Designed by Kennedy Space Center’s Swamp Works team, the primary goal of IPEx is to dig up lunar soil, known as regolith, and transport it across the Moon’s surface.

    [ NASA ]

    The TBS Mojito is an advanced forward-swept FPV flying wing platform that delivers unmatched efficiency and flight endurance. By focusing relentlessly on minimizing drag, the wing reaches speeds upwards of 200 km/h (125 mph), while cruising at 90-120 km/h (60-75 mph) with minimal power consumption.

    [ Team BlackSheep ]

    At Zoox, safety is more than a priority—it’s foundational to our mission and one of the core reasons we exist. Our System Design & Mission Assurance (SDMA) team is responsible for building the framework for safe autonomous driving. Our Co-Founder and CTO, Jesse Levinson, and Senior Director of System Design and Mission Assurance (SDMA), Qi Hommes, hosted a LinkedIn Live to provide an insider’s overview of the teams responsible for developing the metrics that ensure our technology is safe for deployment on public roads.

    [ Zoox ]

  • AIs and Robots Should Sound Robotic
    by Bruce Schneier on 30. January 2025. at 17:00



    Most people know that robots no longer sound like tinny trash cans. They sound like Siri, Alexa, and Gemini. They sound like the voices in labyrinthine customer support phone trees. And even those robot voices are being made obsolete by new AI-generated voices that can mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can now clone someone’s specific voice.

    This technology will replace humans in many areas. Automated customer support will save money by cutting staffing at call centers. AI agents will make calls on our behalf, conversing with others in natural language. All of that is happening, and will be commonplace soon.

    But there is something fundamentally different about talking with a bot as opposed to a person. A person can be a friend. An AI cannot be a friend, despite how people might treat it or react to it. AI is at best a tool, and at worst a means of manipulation. Humans need to know whether we’re talking with a living, breathing person or a robot with an agenda set by the person who controls it. That’s why robots should sound like robots.

    You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity.

    We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors’ voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technique to make AI speech that is otherwise indistinguishable from a human voice sound robotic again.

    A ring modulator has several advantages: It is computationally simple, can be applied in real time, does not affect the intelligibility of the voice, and, most importantly, is universally “robotic sounding” because of its historical usage for depicting robots.

    Responsible AI companies that provide voice synthesis or AI voice assistants in any form should add a ring modulator of some standard frequency (say, between 30 and 80 Hz) and of a minimum amplitude (say, 20 percent). That’s it. People will catch on quickly.

    Here are a few clips you can listen to that illustrate what we’re suggesting. The first clip is an AI-generated “podcast” of this article made by Google’s NotebookLM featuring two AI “hosts.” Google’s NotebookLM created the podcast script and audio given only the text of this article. The next two clips feature that same podcast with the AIs’ voices modulated more and less subtly by a ring modulator:

    Raw audio sample generated by Google’s NotebookLM

    Audio sample with added ring modulator (30 Hz, 25 percent depth)

    Audio sample with added ring modulator (30 Hz, 40 percent depth)

    We were able to generate the audio effect with a 50-line Python script generated by Anthropic’s Claude. Some of the most well-known robot voices were those of the Daleks from Doctor Who in the 1960s. Back then robot voices were difficult to synthesize, so the audio was actually an actor’s voice run through a ring modulator. It was set to around 30 Hz, as we did in our example, with different modulation depth (amplitude) depending on how strong the robotic effect is meant to be. Our expectation is that the AI industry will test and converge on a good balance of such parameters and settings, and will use better tools than a 50-line Python script, but this highlights how simple it is to achieve.
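
    For readers who want to try it, here is a hedged, minimal sketch of such a modulator in Python. It is not the authors’ script; the WAV file names are placeholders, and the use of NumPy and SciPy, the 16-bit PCM assumption, and the exact depth formula are our own choices.

    ```python
    # Minimal ring-modulator sketch (our own, not the authors' 50-line script).
    # Assumes a 16-bit PCM mono or stereo WAV file as input.
    import numpy as np
    from scipy.io import wavfile

    def ring_modulate(in_path, out_path, freq_hz=30.0, depth=0.25):
        rate, audio = wavfile.read(in_path)           # sample rate and int16 samples
        samples = audio.astype(np.float64)
        t = np.arange(samples.shape[0]) / rate
        carrier = np.sin(2 * np.pi * freq_hz * t)     # the low-frequency "robot" carrier
        if samples.ndim > 1:                          # stereo: apply to every channel
            carrier = carrier[:, None]
        # Mix dry and modulated signal; depth=0 leaves the voice untouched,
        # depth=1 is full ring modulation.
        modulated = samples * ((1.0 - depth) + depth * carrier)
        wavfile.write(out_path, rate, modulated.astype(np.int16))

    # Placeholder file names, roughly matching the 30 Hz / 25 percent example above.
    ring_modulate("podcast.wav", "podcast_robotic.wav", freq_hz=30.0, depth=0.25)
    ```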

    Of course there will also be nefarious uses of AI voices. Scams that use voice cloning have been getting easier every year, but they’ve been possible for many years with the right know-how. Just like we’re learning that we can no longer trust images and videos we see because they could easily have been AI-generated, we will all soon learn that someone who sounds like a family member urgently requesting money may just be a scammer using a voice-cloning tool.

    We don’t expect scammers to follow our proposal: They’ll find a way no matter what. But that’s always true of security standards, and a rising tide lifts all boats. We think the bulk of the uses will be with popular voice APIs from major companies--and everyone should know that they’re talking with a robot.

  • Sony Kills Recordable Blu-Ray And Other Vintage Media
    by Gwendolyn Rak on 30. January 2025. at 12:00



    Physical media fans need not panic yet—you’ll still be able to buy new Blu-Ray movies for your collection. But for those who like to save copies of their own data onto the discs, the remaining options just became more limited: Sony announced last week that it’s ending all production of several recordable media formats—including Blu-Ray discs, MiniDiscs, and MiniDV cassettes—with no successor models.

    “Considering the market environment and future growth potential of the market, we have decided to discontinue production,” a representative of Sony said in a brief statement to IEEE Spectrum.

    Though availability is dwindling, most Blu-Ray discs are unaffected. The discs being discontinued are currently only available to consumers in Japan and some professional markets elsewhere, according to Sony. Many consumers in Japan use blank Blu-Ray discs to save TV programs, Sony separately told Gizmodo.

    Sony, which prototyped the first Blu-Ray discs in 2000, has been selling commercial Blu-Ray products since 2006. Development of Blu-Ray was started by Philips and Sony in 1995, shortly after Toshiba’s DVD was crowned the winner of the battle to replace the VCR, notes engineer Kees Immink, whose channel-coding work was instrumental in developing optical formats such as CDs, DVDs, and Blu-Ray discs. “Philips [and] Sony were so frustrated by that loss that they started a new disc format, using a blue laser,” Immink says.

    Blu-Ray’s Short-Lived Media Dominance

    The development took longer than expected, but when it was finally introduced a decade later, Blu-Ray was on its way to becoming the medium for distributing video, as DVD discs and VHS tapes had done in their heydays. In 2008, Spectrum covered the moment when Blu-Ray’s major competitor, HD-DVD, surrendered. But the timing was unfortunate, as the rise of streaming made it an empty victory. Still, Blu-Rays continue to have value as collector’s items for many film buffs who want high-quality recordings not subject to compression artifacts that can arise with streaming, not to mention those wary of losing access to movies due to the vagaries of streaming services’ licensing deals.

    Sony’s recent announcement does, however, cement the death of the MiniDV cassette and the MiniDisc. MiniDV cassettes, magnetic tapes meant to replace VHS at one-fifth the size, were once a popular digital video format. The MiniDisc, an erasable magneto-optical disc that can hold up to 80 minutes of digitized audio, still has a small following. The 64-millimeter (2.5-inch) discs, held in a plastic cartridge similar to a floppy disk, were developed in the mid-1980s as a replacement for analog cassette tapes. Sony finally released the product in 1992, and it was popular in Japan into the 2000s.

    To record data onto optical storage like CDs and Blu-Rays, lasers etch microscopic pits into the surface of the disc to represent ones and zeros. Lasers are also used to record data onto MiniDiscs, but instead of making indentations, they change the magnetic polarization of the material: the lasers heat up one side of the disc, making the material susceptible to a magnetic field, which can then alter the polarity of the heated area. In playback, that magnetization shifts the polarization of the reflected light, which translates to a one or zero.

    When the technology behind media storage formats like the MiniDisc and Blu-Ray was first being developed, the engineers involved believed the technology would be used well into the future, says optics engineer Joseph Braat. His research at Philips with Immink served as the basis of the MiniDisc.

    Despite that optimism, “the density of information in optical storage was limited from the very beginning,” Braat says. Even with the compact wavelengths of blue light, Blu-Ray soon hit a limit on how much data could be stored. Dual-layer Blu-Ray discs hold only 50 gigabytes per side; that amount of data will give you 50 hours of standard-definition streaming on Netflix, or about seven hours of 4K video content.
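
    As a quick back-of-the-envelope check of those figures (the per-hour streaming sizes are rough assumptions, not Netflix’s published numbers):

    ```python
    # Rough capacity arithmetic; the GB-per-hour figures are assumptions.
    disc_gb = 50                        # dual-layer Blu-Ray capacity
    sd_gb_per_hour = 1.0                # approximate standard-definition stream
    uhd_gb_per_hour = 7.0               # approximate 4K stream

    print(disc_gb / sd_gb_per_hour)     # ~50 hours of SD video
    print(disc_gb / uhd_gb_per_hour)    # ~7 hours of 4K video
    ```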

    MiniDiscs still have a small, dedicated niche of enthusiasts, with active social media communities and in-person disc swaps. But since Sony stopped production of MiniDisc devices in 2013, the retro format has effectively been on technological hospice care, with the company only offering blank discs and repair services. Now, it seems, it’s officially over.

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 09. February 2022. at 15:31



    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
    —Andrew Ng, CEO & Founder, Landing AI

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
    —Andrew Ng

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
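
    A minimal sketch of that kind of consistency check is shown below. It is not Landing AI’s actual tooling, and the annotations are invented; it simply flags images whose labels disagree across annotators so they can be reviewed and relabeled first.

        # A minimal sketch, not Landing AI's tooling: flag images whose labels
        # disagree across annotators so they can be reviewed and relabeled.
        # The annotations below are invented for illustration.
        from collections import Counter

        annotations = {   # image_id -> labels from different annotators
            "img_001": ["scratch", "scratch", "scratch"],
            "img_002": ["pit_mark", "dent", "pit_mark"],      # inconsistent
            "img_003": ["discoloration", "discoloration"],
            "img_004": ["dent", "scratch", "pit_mark"],       # inconsistent
        }

        def flag_inconsistent(annotations, min_agreement=1.0):
            """Return images whose majority label falls below the agreement threshold."""
            flagged = []
            for image_id, labels in annotations.items():
                top_count = Counter(labels).most_common(1)[0][1]
                agreement = top_count / len(labels)
                if agreement < min_agreement:
                    flagged.append((image_id, agreement, labels))
            return flagged

        for image_id, agreement, labels in flag_inconsistent(annotations):
            print(f"{image_id}: {agreement:.0%} agreement -> {labels}")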

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
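
    The sketch below illustrates that kind of slice-based error analysis with invented records: group results by a metadata tag (here, the background condition) and rank the slices by error rate to see where targeted data collection would pay off.

        # A minimal sketch of slice-based error analysis with invented data:
        # group results by a metadata tag and rank slices by error rate to
        # decide where collecting more data is worth the cost.
        from collections import defaultdict

        records = [  # (background_condition, prediction_was_correct)
            ("quiet", True), ("quiet", True), ("quiet", True), ("quiet", False),
            ("car_noise", False), ("car_noise", False), ("car_noise", True),
            ("cafe", True), ("cafe", True), ("cafe", False),
        ]

        totals, errors = defaultdict(int), defaultdict(int)
        for background, correct in records:
            totals[background] += 1
            if not correct:
                errors[background] += 1

        for background in sorted(totals, key=lambda b: errors[b] / totals[b], reverse=True):
            rate = errors[background] / totals[background]
            print(f"{background:10s} error rate {rate:.0%} ({totals[background]} samples)")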

    What about using synthetic data? Is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
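
    A toy version of that targeted approach is sketched below. It is not a real synthetic-data pipeline: it just generates extra examples for the one flagged class (pit_mark here) by flipping and adding noise to placeholder arrays, standing in for whatever generator or simulator would be used in practice.

        # A toy sketch of targeted data generation: create extra examples only
        # for the class that error analysis flagged. The arrays are random
        # placeholders, not real inspection images or a real generator.
        import numpy as np

        rng = np.random.default_rng(0)
        dataset = [
            (rng.random((64, 64)), "scratch"),
            (rng.random((64, 64)), "pit_mark"),
            (rng.random((64, 64)), "dent"),
        ]

        def augment_class(dataset, target_class, copies=4, noise_scale=0.02):
            """Return extra (image, label) pairs for one underperforming class only."""
            extra = []
            for image, label in dataset:
                if label != target_class:
                    continue
                for _ in range(copies):
                    flipped = np.flip(image, axis=int(rng.integers(0, 2)))
                    noisy = np.clip(flipped + rng.normal(0, noise_scale, image.shape), 0, 1)
                    extra.append((noisy, label))
            return extra

        extra = augment_class(dataset, "pit_mark")
        print(f"Added {len(extra)} extra pit_mark examples")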

    “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
    —Andrew Ng

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 08. February 2022. at 14:00



    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Heather Gorr. Photo: MathWorks

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
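
    The sketch below illustrates the surrogate idea in miniature, with a stand-in function playing the role of the expensive physics-based simulation: fit a cheap polynomial to a handful of true runs, then do the Monte Carlo sweep on the surrogate instead.

        # A minimal sketch of the surrogate-model workflow: fit a cheap model to
        # a few expensive simulation runs, then sweep on the surrogate.
        # expensive_simulation is a stand-in, not a real chip model.
        import numpy as np

        def expensive_simulation(x):
            """Placeholder for a slow physics-based model of some design metric."""
            return np.sin(3 * x) + 0.5 * x**2

        rng = np.random.default_rng(42)

        # A small, affordable set of true simulation runs
        x_train = np.linspace(0.0, 2.0, 15)
        y_train = expensive_simulation(x_train)

        # Cheap polynomial surrogate fitted to those runs
        surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

        # Monte Carlo sweep over the design parameter, run on the surrogate
        samples = rng.uniform(0.0, 2.0, 100_000)
        print(f"Surrogate mean metric: {surrogate(samples).mean():.3f}")
        print(f"Max surrogate error at training points: "
              f"{np.max(np.abs(surrogate(x_train) - y_train)):.2e}")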

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
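
    The sketch below shows that kind of preparation on synthetic signals: interpolate a slower sensor onto a faster sensor’s time base so the two channels are synchronized, then look at the combined signal in the frequency domain.

        # A minimal sketch with synthetic signals: synchronize two sensors
        # sampled at different rates, then inspect the frequency content.
        import numpy as np

        t_fast = np.linspace(0, 1, 1000, endpoint=False)   # 1 kHz sensor
        t_slow = np.linspace(0, 1, 200, endpoint=False)    # 200 Hz sensor
        fast = np.sin(2 * np.pi * 50 * t_fast)             # 50 Hz component
        slow = 0.5 * np.sin(2 * np.pi * 10 * t_slow)       # 10 Hz component

        # Synchronize: interpolate the slow channel onto the fast time base
        slow_resampled = np.interp(t_fast, t_slow, slow)
        combined = fast + slow_resampled

        # Frequency-domain view of the synchronized signal
        spectrum = np.abs(np.fft.rfft(combined))
        freqs = np.fft.rfftfreq(combined.size, d=t_fast[1] - t_fast[0])
        print(f"Dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")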

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 07. February 2022. at 16:12



    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

    IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

    Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

    “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

    Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

    Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Photo: Nathan Fiske/MIT

    In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have enough defects to make them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

    As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
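
    A rough parallel-plate estimate shows why a few-nanometer dielectric changes the picture. The sketch below uses the textbook formula C = eps0 * eps_r * A / d with assumed values for hBN’s permittivity, the stack thickness, and a transmon-scale target capacitance; the result, a plate only a few micrometers on a side, is far smaller than the roughly 100-by-100-micrometer coplanar plates, though the actual gain depends on the full device layout.

        # Back-of-envelope estimate, not values from the MIT work: the
        # parallel-plate formula C = eps0 * eps_r * A / d shows why a
        # few-nanometer hBN dielectric shrinks the required plate area.
        # Permittivity, thickness, and target capacitance are assumptions.
        EPS0 = 8.854e-12        # vacuum permittivity, F/m
        EPS_R_HBN = 3.5         # assumed out-of-plane relative permittivity of hBN
        THICKNESS_M = 5e-9      # assumed few-monolayer hBN stack, ~5 nm
        TARGET_C = 100e-15      # assumed qubit shunt capacitance, ~100 fF

        area_m2 = TARGET_C * THICKNESS_M / (EPS0 * EPS_R_HBN)
        side_um = (area_m2 ** 0.5) * 1e6
        print(f"Required plate area: {area_m2 * 1e12:.0f} square micrometers "
              f"(~{side_um:.1f} micrometers on a side)")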

    In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

    On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

    “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

    This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

    “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

    Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.