IEEE Spectrum
-
Video Friday: Tiny Robot Bug Hops and Jumps
by Evan Ackerman on 11. April 2025. at 15:30
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
IEEE Humanoids: 30 September–2 October 2025, SEOUL
CoRL 2025: 27–30 September 2025, SEOUL
Enjoy today’s videos!
MIT engineers developed an insect-sized jumping robot that can traverse challenging terrains while using far less energy than an aerial robot of comparable size. This tiny, hopping robot can leap over tall obstacles and jump across slanted or uneven surfaces carrying about 10 times more payload than a similar-sized aerial robot, opening the door to many new applications.
[ MIT ]
CubiX is a wire-driven robot that connects to the environment through wires, with drones used to establish these connections. By integrating with various tools and a robot, it performs tasks beyond the limitations of its physical structure.
[ JSK Lab ]
Thanks, Shintaro!
It’s a game a lot of us played as children—and maybe even later in life: unspooling measuring tape to see how far it would extend before bending. But to engineers at the University of California San Diego, this game was an inspiration, suggesting that measuring tape could become a great material for a robotic gripper.
[ University of California San Diego ]
I enjoyed the Murderbot books, and the trailer for the TV show actually looks not terrible.
[ Murderbot ]
For service robots, being able to operate an unmodified elevator is much more difficult (and much more important) than you might think.
[ Pudu Robotics ]
There’s a lot of buzz around impressive robotics demos — but taking Physical AI from demo to real-world deployment is a journey that demands serious engineering muscle. Hammering out the edge cases and getting to scale is 500x the effort of getting to the first demo. See our process for building this out for the singulation and induction Physical AI solution trusted by some of the world’s leading parcel carriers. Here’s to the teams likewise committed to the grind toward reliability and scale.
I am utterly charmed by the design of this little robot.
[ RoMeLa ]
This video shows a shortened version of Issey Miyake’s Fly With Me runway show from 2025 Paris Men’s Fashion Week. My collaborators and I brought two industrial robots to life to be the central feature of the minimalist scenography for the Japanese brand.
Each ABB IRB 6640 robot held a two-meter-square piece of fabric and moved synchronously in flowing motions to match the emotional timing of the runway show. With only three weeks of development time and three days on-site, I built custom live-coding tools that opened up the industrial robots to more improvisational workflows. This level of reliable, real-time control unlocked the flexibility the Issey Miyake team needed to make last-minute creative decisions for the show.
[ Atonaton ]
Meet Clone’s first musculoskeletal android: Protoclone, the most anatomically accurate robot in the world. Based on a natural human skeleton, Protoclone is actuated with over 1,000 Myofibers, Clone’s proprietary artificial muscle technology.
[ Clone Robotics ]
There are a lot of heavily produced humanoid robot videos from the companies selling them, but now that these platforms are entering the research space, we should start getting a more realistic sense of their capabilities.
Here’s a bit more footage from RIVR on their home delivery robot.
[ RIVR ]
And now, this.
[ EngineAI ]
Robots are at the heart of sci-fi visions of the future, but what if that future is now? And what if those robots, helping us at work and at home, are simply an extension of the tools we’ve used for millions of years? That’s what artist and engineer Catie Cuan thinks, and it’s part of the reason she teaches robots to dance. In this episode we meet the people at the frontiers of the future of robotics, and Astro Teller introduces two groundbreaking projects, Everyday Robots and Intrinsic, that have advanced how robots could work not just for us but with us.
[ Moonshot Podcast ]
-
Climb the Career Ladder with Focused Expertise
by Rahul Pandey on 10. April 2025. at 18:00
This article is crossposted from IEEE Spectrum’s rebooted careers newsletter! In partnership with tech career development company Taro, every issue will be bringing you deeper insight into how to pursue your goals and navigate professional challenges. Sign up now to get insider tips, expert advice, and practical strategies delivered to your inbox for free.
One of the key strategies for gaining seniority is expertise. Whether you’re trying to get promoted or land a new job at a higher level, you need to demonstrate mastery over a valuable skill or domain.
Here’s what most job seekers get wrong about this: They think that being an “expert” is reserved for senior or principal engineers who have decades of experience. Nothing could be further from the truth.
Instead of assuming that expertise is a distant goal, realize that you can become more knowledgeable than anyone as long as you narrow the scope appropriately. For example, in one afternoon, you can become the go-to person in your team of 10 for anything related to configuring logs for your company’s version control software.
In a company with any amount of sophistication, each person’s knowledge is incomplete. There will always be problems that fall into the category of “If we had more time, we’d look into that.” Your goal is to identify which of these gaps could make a meaningful business impact. It need not be purely technical; it could be about search engine optimization (SEO), launch processes, or improving the developer experience.
This is actionable advice if you’re on the job market. If you’re looking for a job, especially as a junior engineer, your #1 goal is to demonstrate mastery over a technology or domain.
This means you should be selective about how much you claim to know on your resume. If you mention every programming language or analysis tool you’ve ever touched, you are making it impossible for someone to identify your level of expertise. This is especially true when you have less than 4 years of experience.
When you claim to know everything, I’ll assume you actually suck at everything. You should be able to teach me something about each of the projects or technologies you mention—for example, by discussing tradeoffs or interesting technical decisions you made.
Yes, you do disqualify yourself from certain jobs where you didn’t list the technologies they were looking for. But those jobs weren’t a good fit anyway.
-Rahul
ICYMI: Top Programming Languages
If you’re taking our advice and looking to develop expertise in a programming language your team needs, check out Spectrum’s Top Programming Languages interactive. There you’ll find out what programming languages are the most important in your field, and which are most in demand by employers.
ICYMI: Data Centers Seek Engineers Amid a Talent Shortage
The rapid development of AI is fueling a data center boom, unlocking billions of dollars in investments to build the infrastructure needed to support data- and energy-hungry models. This surge in construction has created a strong demand for certain electrical engineers, whose expertise in power systems and energy efficiency is essential for designing, building, and maintaining energy-intensive AI infrastructure.
ICYMI: In Praise of “Normal” Engineers
You don’t have to be a superhero to develop valuable skills either. In one of the most popular articles on IEEE Spectrum this month, Charity Majors breaks down the dangers of lionizing the “10x engineer,” writing “Individual engineers don’t own software; engineering teams own software. It doesn’t matter how fast an individual engineer can write software. What matters is how fast the team can collectively write, test, review, ship, maintain, refactor, extend, architect, and revise the software that they own.”
-
IEEE TryEngineering STEM Grants Fund Over 50 Projects
by Robert Schneider on 10. April 2025. at 17:00
IEEE TryEngineering, a program within Educational Activities, fosters outreach to school-age children worldwide by equipping teachers and IEEE volunteers with tools for engaging activities. The science, technology, engineering, and math resources include peer-reviewed lesson plans, games, and activities that are designed to captivate and inspire—all provided at no cost.
The TryEngineering STEM grant program provides financial support to IEEE volunteers to start, sustain, or scale up selected outreach projects in their communities. Since its inception in 2021, 144 projects have been funded, totaling more than US $176,000. At least 1,000 IEEE volunteers have led programs, engaging with more than 19,000 students.
Last year the grant program received 462 applications from nine IEEE regions and awarded more than $70,379 to 58 volunteer-led projects. IEEE members involved in preuniversity outreach programs, including STEM Champions and members of the preuniversity education coordinating committee, reviewed the submissions using a criteria-based rubric.
The full list of funded projects can be found here. What follows is a sampling.
Eight donor-supported projects
STEM Education Workshop 2024: Introducing the Internet of Things to High School Students, funded by the Taenzer Memorial Fund with support from the IEEE Foundation, featured a hands-on activity that provided an introduction to the IoT, programming, and basic microcontroller concepts. Forty high school and vocational school students and nine teachers from the Itenas Bandung electrical engineering study program in Indonesia attended. Twelve IEEE volunteers facilitated the program. Through experimentation with microelectronics, students were able to be creative, spurring increased interest and a desire to further explore technology.
The Taenzer Fund subsidized seven additional proposals to support engineering in developing countries. They totaled $10,000 and reached more than 300 students. The programs included:
- Exploring the Future: An IEEE STEM Industry Tour in Indonesia. This event in Jakarta engaged 20 students with hands-on workshops, networking opportunities, and visits to cutting-edge facilities involved with 5G, AI, and ocean engineering.
- IEEE STEM Empowerment: Student Workshop Series. In these workshops, also held in Indonesia, 20 students tackled hands-on projects in communications technology, AI, and ocean engineering.
- IEEE STEM Teacher Workshop: Empowering Educators for Future Innovators. Fifty participants attended the event in Sukabumi, Indonesia. It offered hands-on sessions on pedagogy, cutting-edge technologies, and ways to increase gender inclusivity.
- IEEE Women in Engineering Day. A three-day session in Kenya included STEM competitions, interactive workshops, mentorship sessions, and hands-on activities to empower young women.
- WIE Impact. A series of workshops and events held in Zaghouan, Tunisia, engaged 160 students with activities in coding, cybersecurity, robotics, space exploration, sustainability, and first aid.
- Exploring Sustainable Futures: Empowering Students With IoT-driven Aquaponics Systems for STEM Enthusiasts. This hands-on program in Kuala Lumpur, Malaysia, taught 28 students about coding basics, real-time system monitoring, and IoT-driven aquaponics systems. Aquaponics, a food-production method, combines aquaculture with hydroponics.
- Development of a Game—Multiplayer Kuis (Gamukis)—for Students of Senior High School 1 Cangkringan in Increasing Learning Enthusiasm. Held at Amikom University, in Yogyakarta, Indonesia, this program taught 150 students how to design educational video games.
Students who participated in the program held at the Atal Tinkering Lab in Narasaraopet, Andhra Pradesh, learned about the fundamentals of the Internet of Things and its applications. Atal Tinkering Lab Program Team
IEEE society-sponsored programs
The generous support from the Taenzer Fund was supplemented by financial assistance from IEEE groups including the Communications, Oceanic Engineering, and Signal Processing societies, as well as IEEE Women in Engineering.
The IEEE Signal Processing Society funded three projects including Train the STEM Trainers in Secondary Schools-Multiplier Effect STEM Outreach. The “train the trainers” program involved 350 students and 10 parents, more than 80 teachers, and 30 volunteers. The teachers were trained in robotics and coding using mBlock and Python. The students got experience with calculators, digital counters, LED displays, robotic cars, smart dustbins, and more.
A focus on Internet of Things
Another notable project was the ConnectXperience: The Journey into the World of IoT. Held at the Atal Tinkering Lab in Narasaraopet, Andhra Pradesh, India, this program engaged more than 400 students who learned about IoT fundamentals, robotics, programming, electronics, data analytics, networking, cybersecurity, and other innovative applications of IoT.
2025 grants
TryEngineering recently announced its 2025 STEM grant recipients. Out of more than 410 applications received, funding was awarded to 58 programs from nine sections, for a total of $70,379. The list of recipients can be found here.
To contribute to the program, visit the TryEngineering Fund donation page.
-
The Great Chatbot Debate: Do They Really Understand?
by Eliza Strickland on 10. April 2025. at 15:01
The large language models (LLMs) that power today’s chatbots have gotten so astoundingly capable that AI researchers are hard-pressed to assess those capabilities—it seems that no sooner is there a new test than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and calculations that simulates true understanding?
To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I was the moderator of the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full.
Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and she has emerged over the past decade as one of the fiercest critics of today’s leading AI companies and their approach to AI. She’s also known as a coauthor of the seminal 2021 paper “On the Dangers of Stochastic Parrots,” which laid out the possible risks of LLMs (and caused Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the “no” position.
Taking the “yes” position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was VP of AI. During his time at Microsoft he coauthored the influential preprint “Sparks of Artificial General Intelligence,” which described his early experiments with OpenAI’s GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that made him feel that the model had reached a new level of comprehension.
With no further ado, we bring you the matchup that I call “Parrots vs. Sparks.”
-
First Supercritical CO2 Circuit Breaker Debuts
by Emily Waltz on 09. April 2025. at 16:36
Researchers this month will begin testing a high-voltage circuit breaker that can quench an arc and clear a fault with supercritical carbon dioxide fluid. The first-of-its-kind device could replace conventional high-voltage breakers, which use the potent greenhouse gas sulfur hexafluoride, or SF6. Such equipment is scattered widely throughout power grids as a way to stop the flow of electrical current in an emergency.
“SF6 is a fantastic insulator, but it’s very bad for the environment—probably the worst greenhouse gas you can think of,” says Johan Enslin, a program director at the U.S. Advanced Research Projects Agency–Energy (ARPA-E), which funded the research. The greenhouse warming potential of SF6 is nearly 25,000 times as high as that of carbon dioxide, he notes.
If successful, the invention, developed by researchers at the Georgia Institute of Technology, could have a big impact on greenhouse gas emissions. Hundreds of thousands of circuit breakers dot power grids globally, and nearly all of the high voltage ones are insulated with SF6.
A high-voltage circuit breaker interrupter, like this one made by GE Vernova, stops current by mechanically creating a gap and an arc, and then blasting high-pressure gas through the gap. This halts the current by absorbing free electrons and quenching the arc as the dielectric strength of the gas is increased. GE Vernova
On top of that, SF6 byproducts are toxic to humans. After the gas quenches an arc, it can decompose into substances that can irritate the respiratory system. People who work on SF6-insulated equipment have to wear full respirators and protective clothing. The European Union and California are phasing out the use of SF6 and other fluorinated gases (F-gases) in electrical equipment, and several other regulators are following suit.
In response, researchers globally are racing to develop alternatives. Over the last five years, ARPA-E has funded 15 different early-stage circuit breaker projects. And GE Vernova has developed products for the European market that use a gas mixture that includes an F-gas, but at a fraction of the concentration of conventional SF6 breakers.
Reinventing Circuit Breakers With Supercritical CO2
The job of a grid-scale circuit breaker is to interrupt the flow of electrical current when something goes wrong, such as a fault caused by a lightning strike. These devices are placed throughout substations, power generation plants, transmission and distribution networks, and industrial facilities where equipment operates at tens to hundreds of kilovolts.
Unlike home circuit breakers, which can isolate a fault with a small air gap, grid-scale breakers need something more substantial. Most high-voltage breakers rely on a mechanical interrupter housed in an enclosure containing SF6, which is a non-conductive insulating gas. When a fault occurs, the device breaks the circuit by mechanically creating a gap and an arc, and then blasts the high-pressure gas through the gap, absorbing free electrons and quenching the arc as the dielectric strength of the gas is increased.
In Georgia Tech’s design, supercritical carbon dioxide quenches the arc. The fluid is created by putting CO2 under very high pressure and temperature, turning it into a substance that’s somewhere between a gas and a liquid. Because supercritical CO2 is quite dense, it can quench an arc and avoid reignition of a new arc by reducing the momentum of electrons—or at least that’s the theory.
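For readers who want to put numbers on “supercritical,” here is a minimal sketch (not from the Georgia Tech team) that checks whether CO2 at a given temperature and pressure sits above its critical point, using the widely published critical constants of about 31 °C and 7.38 megapascals; the helper function and the example state are illustrative.

```python
# Minimal sketch: is CO2 above its critical point at a given state?
# The critical constants are widely published values; the helper and the
# example state are illustrative, not part of the Georgia Tech design.
CO2_CRITICAL_TEMP_C = 31.0        # degrees Celsius (~304.1 K)
CO2_CRITICAL_PRESSURE_MPA = 7.38  # megapascals (~72.8 atm)

def is_supercritical_co2(temp_c: float, pressure_mpa: float) -> bool:
    """Return True if both temperature and pressure exceed CO2's critical point."""
    return temp_c > CO2_CRITICAL_TEMP_C and pressure_mpa > CO2_CRITICAL_PRESSURE_MPA

# Illustrative example: a warm state at roughly 120 atmospheres (~12.2 MPa).
print(is_supercritical_co2(temp_c=40.0, pressure_mpa=12.2))  # True
```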
Led by Lukas Graber, head of Georgia Tech’s plasma and dielectrics lab, the research group will run its 72-kV prototype AC breaker through a synthetic test circuit at the University of Wisconsin-Milwaukee beginning in late April. The group is also building a 245-kV version.
The use of supercritical CO2 isn’t new, but designing a circuit breaker around it is. The challenge was to build the breaker with components that can withstand the high pressure needed to sustain supercritical CO2, says Graber.
The team turned to the petroleum industry to find the parts, and found all but one: the bushing. This crucial component serves as a feed-through to carry current through equipment enclosures. But a bushing that can withstand 120 atmospheres of pressure didn’t exist. So Georgia Tech made its own using mineral-filled epoxy resins, copper conductors, steel pipes, and blank flanges.
“They had to go back to the fundamentals of the bushing design to make the whole breaker work,” says Enslin. “That’s where they are making the biggest contribution, in my eyes.” The compact design of Georgia Tech’s breaker will also allow it to fit in tighter spaces without sacrificing power density, he says.
Replacing a substation’s existing circuit breakers with this design will require some adjustments, including the addition of a heat pump in the vicinity for thermal management of the breaker.
If the tests on the synthetic circuit go well, Graber plans to run the breaker through a battery of real-world simulations at KEMA Laboratories’ Chalfont, Penn., location—a gold-standard certification facility.
The Georgia Tech team built its circuit breaker with parts that can withstand the very high pressures of supercritical CO2. Alfonso Jose Cruz
GE Vernova Markets SF6-alternative Circuit Breaker
If Georgia Tech’s circuit breaker makes it to the market, it will have to compete with GE Vernova, which had a 20-year head start on developing SF6-free circuit breakers. In 2018, the company installed its first SF6-free gas-insulated substation in Europe, which included a 145-kV-class AC circuit breaker that’s insulated with a gas mixture it calls g3. It’s composed of CO2, oxygen, and a small amount of C4F7N, or heptafluoroisobutyronitrile.
This fluorinated greenhouse gas isn’t good for the environment either. But it makes up less than 5 percent of the gas mixture, so it lowers the greenhouse warming potential by up to 99 percent compared with SF6. That still leaves a warming potential far greater than that of CO2 or methane, but it’s a start.
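To see why such a small admixture of a fluorinated gas still cuts the warming potential so sharply, here is a back-of-envelope sketch that weights each component’s 100-year global warming potential by its share of the mixture. The composition and the GWP figure for C4F7N (roughly 2,100, a value reported in the literature) are assumptions for illustration, not GE Vernova’s published specification for g3.

```python
# Back-of-envelope: fraction-weighted GWP of an SF6-free gas mixture.
# Component shares and the C4F7N GWP below are illustrative assumptions,
# not GE Vernova's published g3 specification.
GWP = {"CO2": 1, "O2": 0, "C4F7N": 2100, "SF6": 25000}

mixture = {"CO2": 0.82, "O2": 0.13, "C4F7N": 0.05}  # fractions sum to 1

mixture_gwp = sum(frac * GWP[gas] for gas, frac in mixture.items())
reduction_vs_sf6 = 1 - mixture_gwp / GWP["SF6"]

print(f"Mixture GWP: ~{mixture_gwp:.0f}")             # ~106
print(f"Reduction vs. SF6: ~{reduction_vs_sf6:.1%}")  # ~99.6 percent
```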
“One of the reasons we’re using this technology is because we can make an SF6-free circuit breaker that will actually bolt onto the exact foundation of our equivalent SF6 breaker,” says Todd Irwin, a high-voltage circuit breaker senior product specialist at GE Vernova. It’s a drop-in replacement that will “slide right into a substation,” he says. Workers must still wear full protective gear when they maintain or fix the machine like they do for SF6 equipment, Irwin says. The company also makes a particular type of breaker called a live tank circuit breaker without the fluorinated component, he says.
All of these approaches, including Georgia Tech’s supercritical CO2, depend on mechanical action to open and close the circuit. This takes up precious time in the event of a fault. That’s inspired many researchers to turn to semiconductors, which can do the switching a lot faster, and don’t need a gas to turn off the current.
“With mechanical, it can take up to four or five cycles to clear the fault and that’s so much energy that you have to absorb,” says Enslin at ARPA-E. A semiconductor can potentially do it in a millisecond or less, he says. But commercial development of these solid state circuit breakers is still in early stages, and is focused on medium voltages. “It will take some time to get them to the required high voltages,” Enslin says.
The work may be niche, but the impact could be high. About 1 percent of SF6 leaks from electrical equipment. In 2018, that translated to 9,040 tons (8,200 tonnes) of SF6 emitted globally, accounting for about 1 percent of the global warming value that year.
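The arithmetic behind that figure is short. Using the quantities quoted in this article—8,200 tonnes leaked and a warming potential of nearly 25,000 times that of CO2—a two-line calculation gives the CO2-equivalent burden; treat the result as a rough estimate.

```python
# Rough CO2-equivalent of the SF6 leaked from electrical equipment in 2018,
# using the figures quoted in this article (treat as approximate).
sf6_leaked_tonnes = 8200   # ~9,040 short tons
gwp_sf6 = 25000            # "nearly 25,000 times" that of CO2

co2_equivalent_tonnes = sf6_leaked_tonnes * gwp_sf6
print(f"~{co2_equivalent_tonnes / 1e6:.0f} million tonnes CO2-equivalent")  # ~205 million
```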
-
Airbus Is Working on a Superconducting Electric Aircraft
by Glenn Zorpette on 09. April 2025. at 15:10
One of the greatest climate-related engineering challenges right now is the design and construction of a large, zero-emission, passenger airliner. And in this massive undertaking, no airplane maker is as invested as Airbus.
At the Airbus Summit, a symposium for journalists on 24 and 25 March, top executives sketched out a bold, tech-forward vision for the company’s next couple of generations of aircraft. The highlight, from a tech perspective, is a superconducting, fuel-cell powered airliner.
Airbus’s strategy is based on parallel development efforts. While undertaking the enormous R&D projects needed to create the large, fuel-cell aircraft, the company said it will also work aggressively on an airliner designed to wring the most possible efficiency out of combustion-based propulsion. For this plane, the company is targeting a 20-to-30 percent reduction in fuel consumption, according to Bruno Fichefeux, head of future programmes at Airbus. The plane would be a single-aisle airliner, designed to succeed Airbus’s A320 family of aircraft, the highest-selling passenger jet aircraft on the market, with nearly 12,000 delivered. The company expects the new plane to enter service some time in the latter half of the 2030s.
Airbus hopes to achieve such a large efficiency gain by exploiting emerging advances in jet engines, wings, lightweight, high-strength composite materials, and sustainable aviation fuel. For example, Airbus disclosed that it is now working on a pair of advanced jet engines, the more radical of which would have an open fan whose blades would spin without a surrounding nacelle. Airbus is evaluating such an engine in a project with partner CFM International, a joint venture between GE Aerospace and Safran Aircraft Engines.
Without a nacelle to enclose them, an engine’s fan blades can be very large, permitting higher levels of “bypass air”—the air drawn in by the fan that bypasses the combustion core and is expelled to provide thrust. The ratio of bypass air to combustion air is an important measure of engine performance, with higher ratios indicating higher efficiencies, according to Mohamed Ali, chief technology and operating officer for GE Aerospace. Typical bypass ratios today are around 11 or 12, but the open-fan design could enable ratios as high as 60, according to Ali.
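As a quick illustration of what those figures mean, the sketch below computes a bypass ratio from assumed mass flows; the flow numbers are invented examples chosen to land near the ratios Ali cites, not CFM or GE data.

```python
# Bypass ratio = mass flow of air that bypasses the combustor / mass flow through the core.
# The flow values are invented examples chosen to land near the ratios quoted above
# (about 11-12 for today's engines, up to ~60 for an open-fan design).
def bypass_ratio(bypass_flow_kg_s: float, core_flow_kg_s: float) -> float:
    return bypass_flow_kg_s / core_flow_kg_s

print(bypass_ratio(bypass_flow_kg_s=660, core_flow_kg_s=60))   # 11.0, typical nacelled turbofan
print(bypass_ratio(bypass_flow_kg_s=3600, core_flow_kg_s=60))  # 60.0, open-fan target
```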
The partners have already tested open-fan engines in two different series of wind-tunnel tests in Europe, Ali added. “The results have been extremely encouraging, not only because they are really good in terms of performance and noise validation, but also [because] they’re validating the computational analysis that we have done,” Ali said at the Airbus event.
A scale model of an open-fan aircraft engine was tested last year in a wind tunnel in Modane, France. The tests were conducted by France’s national aerospace research agency and Safran Aircraft Engines, which is working on open-fan engines with GE Aerospace. Safran Aircraft Engines
Fuel-cell airliner is a cornerstone of zero-emission goals
In parallel with this advanced combustion-powered airliner, Airbus has been developing a fuel-cell aircraft for five years under a program called ZEROe. At the Summit, Airbus CEO Guillaume Faury backed off a goal to fly such a plane by 2035, citing the lack of a regulatory framework for certifying such an aircraft as well as the slow pace of the build-out of infrastructure needed to produce “green” hydrogen at commercial scale and at competitive prices. “We would have the risk of a sort of ‘Concorde of hydrogen’ where we would have a solution, but that would not be a commercially viable solution at scale,” Faury explained.
That said, he took pains to reaffirm the company’s commitment to the project. “We continue to believe in hydrogen,” he declared. “We’re absolutely convinced that this is an energy for the future for aviation, but there’s just more work to be done. More work for Airbus, and more work for the others around us to bring that energy to something that is at scale, that is competitive, and that will lead to a success, making a significant contribution to decarbonization.” Many of the world’s major industries, including aviation, have pledged to achieve zero net greenhouse gas emissions by the year 2050, a fact that Faury and other Airbus officials repeatedly invoked as a key driver of the ZEROe project.
Later in the event, Glenn Llewellyn, Airbus’s vice president in charge of the ZEROe program, described the project in detail, indicating an effort of breathtaking technological ambition. The envisioned aircraft would seat at least 100 people and have a range of 1,000 nautical miles (1,850 kilometers). It would be powered by four fuel-cell “engines” (two on each wing), each with a power output of 2 megawatts.
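A couple of unit conversions put those targets in context; the arithmetic below simply restates the figures Llewellyn gave.

```python
# Restating the ZEROe targets quoted above as simple arithmetic.
engines = 4
power_per_engine_mw = 2.0
total_power_mw = engines * power_per_engine_mw   # 8 MW of fuel-cell propulsion

range_nmi = 1000
range_km = range_nmi * 1.852                     # 1 nautical mile = 1.852 km

print(f"Total propulsive power: {total_power_mw:.0f} MW")
print(f"Range: {range_km:.0f} km")               # ~1,852 km (rounded to 1,850 above)
```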
According to Hauke Luedders, head of fuel cell propulsion systems development at Airbus, the company has already done extensive tests in Munich on a 1.2 MW system built with partners including Liebherr Group, ElringKlinger, Magna Steyr, and Diehl. Luedders said the company is focusing on low-temperature proton-exchange-membrane fuel cells, although it has not yet settled on the technology.
But the real stunner was Llewellyn’s description of a comprehensive program at Airbus to design and test a complete superconducting electrical powertrain for the fuel-cell aircraft. “As the hydrogen stored on the aircraft is stored at a very cold temperature, minus 253 degrees Celsius, we can use this temperature and the cryogenic technology to also efficiently cool down the electrics in the full system,” Llewellyn explained. “It significantly improves the energy efficiency and the performance. And even if this is an early technology, with the right efforts and the right partnerships, this could be a game changer for our fuel-cell aircraft, for our fully electric aircraft, enabling us to design larger, more powerful, and more efficient aircraft.”
In response to a question from IEEE Spectrum, Llewellyn elaborated that all of the major components of the electric propulsion system would be cryo-cooled: “electric distribution system, electronic controls, power converters, and the motors”—specifically, the coils in the motors. “We’re working with partners on every single component,” he added. The cryo-cooling system would chill a refrigerant that would circulate to keep the components cold, he explained.
A fuel cell aircraft “engine,” as envisioned by Airbus, would include a 2-megawatt electric motor and associated motor control unit (MCU), a fuel-cell system to power the motor, and associated systems for supplying air, hydrogen fuel, liquid refrigerant, and other necessities. The ram air system would capture cold air flowing over the aircraft for use in the cooling systems. Airbus SAS
Could aviation be the killer app for superconductors?
Llewellyn did not specify which superconductors and refrigerants the team was working with. But high temperature superconductors are a good bet, because of the drastically reduced requirements on the cooling system that would be needed to sustain superconductivity.
Copper-oxide-based ceramic superconductors were discovered at IBM in 1986, and various forms of them can superconduct at temperatures between –238 °C (35 K) and –140 °C (133 K) at ambient pressure. These temperatures are higher than those of traditional superconductors, which need temperatures below about 25 K. Nevertheless, commercial applications for the high-temperature superconductors have been elusive.
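The Celsius and kelvin figures above differ by the usual offset of 273.15; a one-line helper makes the conversions easy to check.

```python
# Kelvin-to-Celsius conversions for the superconductor temperatures quoted above.
def kelvin_to_celsius(t_k: float) -> float:
    return t_k - 273.15

for t_k in (35, 133, 25):
    print(f"{t_k} K = {kelvin_to_celsius(t_k):.0f} °C")
# 35 K = -238 °C, 133 K = -140 °C, 25 K = -248 °C
```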
But a superconductivity expert, applied physicist Yu He at Yale University, was heartened by the news from Airbus. “My first reaction was, ‘really?’ And my second reaction was, wow, this whole line of research, or application, is indeed growing and I’m very delighted” about Airbus’s ambitious plans.
Copper-oxide superconductors have been used in a few applications, almost all of them experimental. These included wind-turbine generators, magnetic-levitation train demonstrations, short electrical transmission cables, magnetic-resonance imaging machines and, notably, in the electromagnet coils for experimental tokamak fusion reactors.
The tokamak application, at a fusion startup called Commonwealth Fusion Systems, is particularly relevant because to make coils, engineers had to invent a process for turning the normally brittle copper-oxide superconducting material into a tape that could be used to form donut-shaped coils capable of sustaining very high current flow and therefore very intense magnetic fields.
“Having a superconductor to provide such a large current is desirable because it doesn’t generate heat,” says He. “That means, first, you have much less energy lost directly from the coils themselves. And, second, you don’t require as much cooling power to remove the heat.”
Still, the technical hurdles are substantial. “One can argue that inside the motor, intense heat will still need to be removed due to aerodynamic friction,” He says. “Then it becomes, how do you manage the overall heat within the motor?”
An engineer at Air Liquide Advanced Technologies works on a test of a hydrogen storage and distribution system at the Liquid Hydrogen Breadboard in November 2024. The “Breadboard” was established last year in Grenoble, France, by Air Liquide and Airbus. Céline Sadonnet/Master Films
For this challenge, engineers will at least have a favorable environment with cold, fast-flowing air. Engineers will be able to tap into the “massive air flow” over the motors and other components to assist the cooling, He suggests. Smart design could “take advantage of this kinetic energy of flowing air.”
To test the evolving fuel-cell propulsion system, Airbus has built a unique test center in Grenoble called the “Liquid Hydrogen Breadboard,” Llewellyn disclosed at the Summit. “We partnered with Air Liquide Advanced Technologies” to build the facility, he said. “This Breadboard is a versatile test platform designed to simulate key elements of future aircraft architecture: tanks, valves, pipes, and pumps, allowing us to validate different configurations at full scale. And this test facility is helping us gain critical insight into safety, hydrogen operations, tank design, refueling, venting, and gauging.”
“Throughout 2025, we’re going to continue testing the complete liquid-hydrogen and distribution system,” Llewellyn added. “And by 2027, our objective is to take an even further major step forward, testing the complete end-to-end system, including the fuel-cell engine and the liquid hydrogen storage and distribution system together, which will allow us to assess the full system in action.”
Glenn Zorpette traveled to Toulouse as a guest of Airbus.
-
Get Ready for the Stellarator Showdown
by Tom Clynes on 09. April 2025. at 14:33
For decades, nuclear fusion—the reaction that powers the sun—has been the ultimate energy dream. If harnessed on Earth, it could provide endless, carbon-free power. But the challenge is huge. Fusion requires temperatures hotter than the sun’s core and a mastery of plasma—the superheated gas in which atoms that have been stripped of their electrons collide, their nuclei fusing. Containing that plasma long enough to generate usable energy has remained elusive.
Now, two companies—Germany’s Proxima Fusion and Tennessee-based Type One Energy—have taken a major step forward, publishing peer-reviewed blueprints for their competing stellarator designs. Two weeks ago, Type One released six technical papers in a special issue of the Journal of Plasma Physics. Proxima detailed its fully integrated stellarator power plant concept, called Stellaris, in the journal Fusion Engineering and Design. Both firms say the papers demonstrate that their machines can deliver commercial fusion energy.
At the heart of both approaches is the stellarator, a mesmerizingly complex machine that uses twisted magnetic fields to hold the plasma steady. This configuration, first dreamed up in the 1950s, promises a crucial advantage: Unlike its more popular cousin, the tokamak, a stellarator can operate continuously, without the need for a strong internal plasma current. Instead, stellarators use external magnetic coils. This design reduces the risk of sudden disruptions to the plasma field that can send high-energy particles crashing into reactor walls.
The downside? Stellarators, while theoretically simpler to operate, are notoriously difficult to design and build. Recent advances in computational power, high-temperature superconducting (HTS) magnets, and AI-enhanced optimization of magnet geometries are changing the game, helping researchers to uncover patterns that lead to simpler, faster, and cheaper stellarator designs.
Two Visions of Fusion with One Goal
While both firms are racing toward the same destination—practical, commercial fusion power—the Proxima paper’s focus leans more toward the engineering integration of its reactor, while Type One’s papers reveal details of its plasma physics design and key components of its reactor.
Proxima, a spinoff from Germany’s Max Planck Institute for Plasma Physics, aims to build a 1-gigawatt stellarator power plant. The design uses HTS magnets and AI optimization to generate more power per unit volume than earlier stellarators, while also significantly reducing the overall size. Proxima has applied for a patent on an innovative liquid-metal breeding blanket, which will be used to breed tritium fuel for the fusion reaction, via the reaction of neutrons with lithium.
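To get a feel for why tritium breeding matters, here is a rough back-of-envelope estimate (not Proxima’s figures) of how much tritium a deuterium-tritium plant consumes in a year. It assumes 17.6 megaelectronvolts released per fusion reaction, one tritium nucleus consumed per reaction, and a full gigawatt-year of fusion (thermal) output, and it deliberately ignores thermal-to-electric conversion efficiency, downtime, and breeding margin.

```python
# Rough back-of-envelope: tritium consumed per gigawatt-year of D-T fusion power.
# Not Proxima's numbers. Assumes 17.6 MeV released per reaction, one tritium nucleus
# consumed per reaction, and 1 GW of continuous *fusion* (thermal) output; it ignores
# conversion efficiency, availability, and breeding margin.
MEV_TO_J = 1.602e-13
TRITIUM_MASS_KG = 3.016 * 1.6605e-27   # atomic mass (u) times kg per u

fusion_power_w = 1e9
seconds_per_year = 3.156e7
energy_per_reaction_j = 17.6 * MEV_TO_J

reactions_per_year = fusion_power_w * seconds_per_year / energy_per_reaction_j
tritium_kg_per_year = reactions_per_year * TRITIUM_MASS_KG

print(f"~{tritium_kg_per_year:.0f} kg of tritium per gigawatt-year of fusion")  # ~56 kg
```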
Proxima Fusion’s Stellaris design is significantly smaller than other stellarators of the same power. Proxima Fusion
“This is the first time anyone has put all the elements together in a single, fully integrated concept,” says Proxima cofounder and chief scientist Jorrit Lion. The design builds on the Wendelstein 7-X stellarator, a €1.4 billion (US $1.5 billion) project funded by the German government and the European Union, which set records for electron temperature, plasma density, and energy confinement time.
Type One’s stellarator design incorporates three key innovations: an optimized magnetic field for plasma stability, advanced manufacturing techniques, and cutting-edge HTS magnets. The plant it has dubbed Infinity Two is designed to generate 350 megawatts of electricity.
Like Proxima’s plant, Infinity Two will use deuterium-tritium fuel and build on lessons learned from W7-X, as well as Wisconsin’s HSX project, where many of Type One’s founders worked before forming the company. In partnership with the Tennessee Valley Authority, Type One aims to build Infinity Two at TVA’s Bull Run Fossil Plant by the mid-2030s.
“Why are we the first private fusion company with an agreement to develop a fusion power plant with a utility? Because we have a design based in reality,” says Christofer Mowry, CEO of Type One Energy. “This isn’t about building a science experiment. This is about delivering energy.”
AI Points to an Ideal 3D Magnetic-Field Structure
Both firms have relied heavily on AI and supercomputing to help them place the magnetic coils to more precisely shape their magnetic fields. Type One relied on a range of high-performance computing resources, including the U.S. Department of Energy’s cutting-edge exascale Frontier supercomputer at Oak Ridge National Laboratory, to power its highly detailed simulations.
That research led to one of the more intriguing developments buried in these papers: a possible move toward consensus in the stellarator physics community about the ideal three-dimensional magnetic-field structure.
Proxima’s team has always embraced the quasi-isodynamic (QI) approach, used in W7-X, which prioritizes deep particle trapping for superior plasma confinement. Type One, on the other hand, built its early designs around quasi-symmetry (QS), inspired by the HSX stellarator, which aimed to streamline particle motion. Now, based on its optimization research, Type One is changing course.
“We were champions of quasi-symmetry,” says Type One’s lead theorist Chris Hegna. “But the surprise was that we couldn’t make quasi-symmetry work as well as we thought we could. We will continue doing studies of quasi-symmetry, but primarily it looks like QI is the prominent optimization choice we are going to pursue.”
Type One Energy is working with the Tennessee Valley Authority to build a commercial stellarator by the mid-2030s. Type One Energy
The Road Ahead for Stellarators
According to Hegna, Type One’s partnership with TVA could put a stellarator fusion plant on the grid by the mid-2030s. But before it builds Infinity Two, the company plans to validate key technologies with its Infinity One test platform, set for construction in 2026 and operation by 2029.
Proxima, meanwhile, plans to bring its Stellaris design to life by the 2030s, first with a demo stellarator, dubbed Alpha. The company claims Alpha will be the first stellarator to demonstrate net energy production in a steady state. It’s targeted to debut in 2031, after the 2027 completion of a demonstration set of the complex magnetic coils.
Both companies face a common challenge: funding. Type One has raised $82 million and, according to Axios, is preparing for more than $200 million in Series A financing, which the company declined to confirm. Proxima has secured about $65 million in public and private capital.
If the recent papers succeed in building confidence in stellarators, investors may be more willing to fund these ambitious projects. The coming decade will determine whether both companies’ confidence in their own designs is justified, and whether producing fusion energy from stellarators transitions from scientific ambition to commercial reality.
-
IEEE-HKN Marks 120th Anniversary With Hackathon
by Amy Michael on 08. April 2025. at 18:00
Among the many events that marked the IEEE–Eta Kappa Nu (IEEE-HKN) honor society’s 120th anniversary last year was its first international hackathon. Organized by a group of 10 HKN students and led by recent graduate Christian Winingar from the Gamma Theta chapter at Missouri University of Science and Technology, in Rolla, the hackathon required more than seven months of planning.
The idea originated from students who attended the society’s 2023 Student Leadership Conference to continue fostering international collaboration among the society’s chapters.
“It seemed a natural fit to organize it as a way to celebrate the society’s anniversary,” says Serena Canavero, one of the event’s organizers and the 2025 HKN student governor.
To tie the hackathon to the establishment of the society, the organizing committee created mathematical and engineering problems around saving the eight founders of HKN from those who would oppose their commitment to its foundational tenets of scholarship, character, and attitude.
“It was a valuable experience for IEEE-HKN members both at the professional and student member levels to connect with each other and to foster a community focused on problem-solving and innovation.” —Christian Winingar
“Our founders, especially Maurice L. Carr, envisioned a society that would eventually become international, as it is today, more than a century into the future,” Canavero says. “To capture this spirit, our team combed through HKN’s historical records, seeking insights into the visionary students who founded it. We imagined them facing various challenges, often misunderstood by school leaders who didn’t yet see the value in an organization dedicated to the professional growth of young, bright, and philanthropic engineers.”
The hackathon succeeded in capturing the imagination of students around the world, ultimately attracting 62 participants collaborating in 12 teams from 11 to 22 October. The students worked together to face obstacles, using mathematical and engineering principles. In the process, they learned to appreciate each other’s problem-solving approaches, how to contribute to a team, and how to surmount logistical challenges such as working across time zones.
On 28 October, the hackathon culminated with teams presenting their solutions virtually to 16 IEEE members who judged their work based on its completeness, accuracy, and timeliness. The presentation was a part of HKN’s Founders Day celebration, which included a virtual fireside chat by two eminent members and IEEE Medal of Honor winners, Vint Cerf and Robert Kahn.
These are the top five IEEE-HKN teams and their chapters:
Shockingly Efficient. Mu Nu chapter at Politecnico di Torino, in Italy; Sigma chapter at Carnegie Mellon; Mu Kappa chapter at the University of Queensland, in Brisbane, Australia; and Iota Kappa chapter at Montana State University, in Bozeman.
Light Emitting Resistor. Lambda Omega chapter at the National University of Singapore.
Thetastic Coders. Theta chapter at the University of Wisconsin in Madison.
Jumbos. Epsilon Delta chapter at Tufts University, in Massachusetts.
Leo. Nu Theta chapter at Purdue University Northwest in Hammond, Ind.
TestEquity, a test and measurement product distributor, provided prizes for the teams. They included a gift card, a soldering station, handheld industrial and digital multimeters, a voltage detector, and multi-tools.
The organizing committee was pleased with the hackathon.
“It was a valuable experience for IEEE-HKN members both at the professional and student member levels to connect with each other and foster a community focused on problem-solving and innovation,” Winingar says.
“The international hackathon brought together motivated young IEEE-HKN engineers from both computer and electrical engineering backgrounds,” Canavero adds. “It blended chapters into mixed teams, sparking creativity and problem-solving, bridging time zones, and fostering our community at an international level. It was a testament to how IEEE-HKN empowers young leaders to dream big, enabling us to collaborate on ambitious engineering endeavors together.”
Because of the enthusiastic response to the hackathon, plans are underway to hold another one this year.
-
This Alphabet Spin-off Brings “Fishal Recognition” to Aquaculture
by Rajesh Jadhav on 07. April 2025. at 13:00
Deep within a rugged fjord in Norway, our team huddled around an enclosed metal racetrack, full of salt water, that stood about a meter off the ground on stilts. We called the hulking metal contraption our “fish run.” Inside, a salmon circled the 3-meter-diameter loop, following its instincts and swimming tirelessly against the current. A stopwatch beeped, and someone yelled “Next fish!” We scooped up the swimmer to weigh it and record its health data before returning it to the school of salmon in the nearby pen. The sun was high in the sky as the team loaded the next fish into the racetrack. We kept working well into the evening, measuring hundreds of fish.
This wasn’t some bizarre fish Olympics. Rather, it was a pivotal moment in the journey of our company, TidalX AI, which brings artificial intelligence and advanced robotics to aquaculture.
Tidal’s AI systems track the salmon and estimate their biomass. TidalX AI
Tidal emerged from X, the Moonshot Factory at Alphabet (the parent company of Google), which seeks to create technologies that make a difference to millions if not billions of people. That was the mission that brought a handful of engineers to a fish farm near the Arctic Circle in 2018. Our team was learning how to track visible and behavioral metrics of fish to provide new insights into their health and growth and to measure the environmental impact of fish farms. And aquaculture is just our beginning: We think the modular technologies we’ve developed will prove useful in other ocean-based industries as well.
To get started, we partnered with Mowi ASA, the largest salmon-aquaculture company in the world, to develop underwater camera and software systems for fish farms. For two weeks in 2018, our small team of Silicon Valley engineers lived and breathed salmon aquaculture, camping out in an Airbnb on a small Norwegian island and commuting to and from the fish farm in a small motorboat. We wanted to learn as much as we could about the problems and the needs of the farmers. The team arrived with laptops, cords, gadgets, and a scrappy camera prototype cobbled together from off-the-shelf parts, which eventually became our window into the underwater world.
Mowi, the world’s largest producer of Atlantic salmon, operates this fish farm in the waters off Norway. Viken Kantarci/AFP/Getty Images
Still, that early trip armed us with our first 1,000 fish data points and a growing library of underwater images (since then, our datasets have grown by a factor of several million). That first data collection allowed us to meticulously train our first AI models to discern patterns invisible to the human eye. The moment of truth arrived two months later, when our demo software successfully estimated fish weights from images alone. It was a breakthrough, a validation of our vision, and yet only the first step on a multiyear journey of technology development.
Weight estimation was the first of a suite of features we would go on to develop, to increase the efficiency of aquaculture farms and help farmers take early action for the benefit of the salmon. Armed with better data about how quickly their fish are growing, farmers can more precisely calculate feeding rates to minimize both wasted food and fish waste, which can have an impact on the surrounding ocean. With our monitoring systems, farmers can catch pest outbreaks before they spread widely and require expensive and intensive treatments.
The Origins of Tidal
The ocean has long fascinated engineers at Alphabet’s Moonshot Factory, which has a mandate to create both novel technologies and profitable companies. X has explored various ocean-based projects over the past decade, including an effort to turn seawater into fuel, a project exploring whether underwater robots could farm seaweed for carbon sequestration and food, and a test of floating solar panels for clean energy.
In some ways, building technologies for the seas is an obvious choice for engineers who want to make a difference. About two-thirds of our planet is covered in water, and more than 3 billion people rely on seafood for their protein. The ocean is also critical for climate regulation, life-giving oxygen, and supporting the livelihoods of billions of people. Despite those facts, the United Nations Sustainable Development Goal No. 14, which focuses on “life below water,” is the least funded of all the 17 goals.
One of the most pressing challenges facing humanity is ensuring ongoing access to sustainable and healthy protein sources as the world’s population continues to grow. With the global population projected to reach 9.7 billion by 2050, the demand for seafood will keep rising, and it offers a healthier and lower-carbon alternative to other animal-based proteins such as beef and pork. However, today’s wild-fishing practices are unsustainable, with almost 90 percent of the world’s fisheries now considered either fully exploited (used to their full capacity) or overfished.
Aquaculture offers a promising solution. Fish farming has the potential to alleviate pressure on wild fish stocks, provide a more sustainable way to produce protein, and support the livelihoods of millions. Fish is also a much more efficient protein source than land-based protein. Salmon have a “feed conversion ratio” of roughly one to one; that means they produce about one kilogram of body mass for every kilogram of feed consumed. Cows, on the other hand, require 8 to 12 kilograms of feed to gain a kilogram of mass.
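A feed conversion ratio is simply kilograms of feed consumed per kilogram of body mass gained, so the comparison above reduces to a couple of lines; the ratios come from the figures in this article.

```python
# Feed conversion ratio (FCR) = kg of feed consumed per kg of body mass gained.
# Ratios below come from the comparison in this article: salmon ~1:1, cattle 8-12:1.
def feed_required_kg(target_gain_kg: float, fcr: float) -> float:
    return target_gain_kg * fcr

gain = 1000.0  # raise one tonne of body mass
print(f"Salmon: ~{feed_required_kg(gain, 1.0):,.0f} kg of feed")
print(f"Cattle: ~{feed_required_kg(gain, 8):,.0f} to {feed_required_kg(gain, 12):,.0f} kg of feed")
```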
Tidal’s AI platform tracks both fish and food pellets [top] and can then automatically adjust feed rates to limit waste and reduce costs. The system’s sensors can detect sea lice on the salmon [center], which enables farmers to intervene early and track trends. The real-time estimation of biomass [bottom] gives farmers information about both average weight and population distribution, helping them plan the timing of harvests. TidalX AI
However, the aquaculture industry faces growing challenges, including rising water temperatures, changing ocean conditions, and the pressing need for improved efficiency and sustainability. Farmers are accountable for pollution from excess feed and waste, and are grappling with fish diseases that can spread quickly among farmed populations.
At Tidal, our team is developing technology that will both protect the oceans and address global food-security challenges. We’ve visited aquaculture farms in Norway, Japan, and many other countries to test our technology, which we hope will transform aquaculture practices and serve as a beneficial force for fish, people, and the planet.
The Data Behind AI for Aquaculture
Salmon aquaculture is the most technologically advanced sector within the ocean farming industry, so that’s where we began. Atlantic salmon are a popular seafood, with a global market of nearly US $20 billion in 2023. That year, 2.87 million tonnes of Atlantic salmon were farmed; globally, farmed salmon accounts for nearly three-quarters of all salmon sold.
Our partnership with Mowi combined their deep aquaculture knowledge with our expertise in AI, underwater robotics, and data science. Our initial goal was to estimate biomass, a critical task in fish farming that involves accurately assessing the weight and distribution of fish within a pen in real time. Mastering this task established a baseline for improvement, because better measurements can unlock better management.
Tidal’s imaging platform, which includes lights, multiple cameras, and other sensors, moves through the fish pen to gather data. TidalX AI
We quickly realized that reliable underwater computer-vision models didn’t exist, even from cutting-edge AI. State-of-the-art computer-vision models weren’t trained on underwater images and often misidentified salmon, sometimes with comic results—one model confidently classified a fish as an umbrella. In addition, we had to estimate the average weight of up to 200,000 salmon within a pen, but the reference data available—based on weekly manual sampling by farmers of just 20 to 30 salmon—didn’t represent the variability across the population. We had internalized the old computing adage “garbage in, garbage out,” and so we realized that our model’s performance would be only as good as the quality and quantity of the data we used to train it. Developing models for Mowi’s desired accuracy required a drastically larger dataset.
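The sampling problem is easy to reproduce. The quick simulation below, which uses made-up weight statistics rather than Tidal’s data, shows how much a pen-average estimate based on 25 fish swings from one sample to the next compared with far larger samples.

```python
# Why 20-30 manually sampled fish poorly represent a pen of ~200,000:
# a quick simulation with made-up weight statistics (not Tidal's data).
import numpy as np

rng = np.random.default_rng(0)
pen = rng.normal(loc=4.5, scale=1.2, size=200_000)   # hypothetical fish weights, kg

for n in (25, 1_000, 50_000):
    sample_means = [rng.choice(pen, size=n, replace=False).mean() for _ in range(200)]
    spread = np.std(sample_means)
    print(f"n={n:>6}: pen-average estimate varies by roughly ±{spread:.3f} kg across samples")
```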
We therefore set out to create a high-quality dataset of images from marine pens. In our earliest experiments on estimating fish weight from images, we had worked with realistic-looking rubber fish in our own lab. But the need for better data sent us to Norway in 2018 to collect footage. First, we tried taking photos of individual fish in small enclosures, but this method proved inefficient because the fish didn’t reliably swim in front of our camera.
That’s when we designed our fish-run racetrack to capture images of individual fish from all angles. We then paired this footage with corresponding weight and health measurements to train our models. A second breakthrough came when we got access to data from the fish farms’ harvests, when every fish is individually weighed. That addition expanded our dataset a thousandfold and improved our model performance. Soon we had a model capable of making highly precise and accurate estimates of fish weight distributions for the entire population within a given enclosure.
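Once a model produces a weight estimate for every fish it observes, turning those into pen-level numbers is a matter of aggregation. The sketch below, with invented values, shows the kind of summary involved—mean weight, weight percentiles, and total biomass—and is a simplified stand-in rather than Tidal’s pipeline.

```python
# Aggregating per-fish weight estimates into pen-level statistics.
# The estimates and the population count are invented; this is a simplified
# stand-in for the aggregation described, not Tidal's actual pipeline.
import numpy as np

per_fish_weight_kg = np.array([3.8, 4.1, 4.4, 4.9, 5.2, 4.0, 4.6, 5.5, 3.6, 4.3])  # model outputs
population_count = 180_000                                                          # fish in the pen

mean_weight = per_fish_weight_kg.mean()
p10, p90 = np.percentile(per_fish_weight_kg, [10, 90])
estimated_biomass_tonnes = mean_weight * population_count / 1000

print(f"Mean weight: {mean_weight:.2f} kg (10th-90th percentile: {p10:.2f}-{p90:.2f} kg)")
print(f"Estimated pen biomass: {estimated_biomass_tonnes:.0f} tonnes")
```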
Crafting Resilient Hardware for an Unforgiving Ocean
As we were building a precise and accurate AI model, we were simultaneously creating a comprehensive hardware package. The system included underwater cameras, an autonomous winch to move the cameras within the pen, and an integrated software platform.
Tidal’s autonomous winch systems move the cameras on horizontal and vertical axes within the fish pen. TidalX AI
Our initial field experiments had taught us the stark reality of operating technology in extreme environmental conditions, including freezing temperatures, high waves, and strong currents. To meet this challenge, we spent several years putting the Tidal technology through rigorous testing: We simulated extreme conditions, pushed the equipment to its breaking point, and even used standards typically reserved for military gear. We tested how well it worked under pressures intense enough to implode most electronics. Once satisfied with the lab results, we tested our technology on farms above the Arctic Circle.
The result is a remarkably resilient system that features highly responsive top, stereo, and bottom cameras, with efficient lighting that minimizes stress on the fish. The smart winch moves the camera autonomously through the pen around the clock on horizontal and vertical axes, collecting tens of thousands of fish observations daily. The chief operating officer of Mowi Farming Norway, Oyvind Oaland, called our commercial product “the most advanced sensing and analysis platform in aquaculture, and undoubtedly the one with the greatest potential.”
The Tidal system today provides farmers with real-time data on fish growth, health, and feeding, enabling them to make data-driven decisions to optimize their operations. One of our key innovations was the development and integration of the industry’s first AI-powered autonomous feeding system. By feeding fish just the amount that they need to grow, the system minimizes wasted food and fish excrement, therefore improving fish farms’ environmental impact. Merging our autonomous feeding system with our camera platform meant that farmers could save on cost and clutter by deploying a single all-in-one system in their pens.
Developing the autonomous feeding system presented new challenges—not all of them technical. We initially aimed for an ideal feeding strategy based on the myriad factors influencing fish appetite, which would work seamlessly for every user straight out of the box. But we faced resistance from farmers when the strategy differed from their feeding policies, which were often based on decades of experience.
Tidal’s AI systems identify food pellets. TidalX AI
This response forced us to rethink our approach and pivot from a one-size-fits-all solution to a modular system that farmers could customize. This allowed them to adjust the system to their specific feeding preferences first, building trust and acceptance. Farmers could initially set their preferred maximum and minimum feed rates and their tolerance for feed fall-through; over time, as they began to trust the technology more, they could let it run more autonomously. Once deployed within a pen, the system gathers data on fish behavior and how many feed pellets fall through the net, which improves the system’s estimate of fish appetite. These ongoing revisions not only improve feeding efficiency—thus optimizing growth, reducing waste, and minimizing environmental impact—but also build confidence among farmers.
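As a rough illustration of that customizable loop (a sketch with assumed parameter names and units, not Tidal’s actual control logic), a feed controller might nudge the rate up while pellet fall-through stays within the farmer’s tolerance, back off when it doesn’t, and never leave the farmer-set bounds:

```python
def next_feed_rate(current_rate_kg_per_min: float,
                   observed_fallthrough_ratio: float,
                   min_rate: float,
                   max_rate: float,
                   fallthrough_tolerance: float,
                   step: float = 0.5) -> float:
    """Return the next feed rate, clamped to the farmer-set bounds.

    All names and units are illustrative assumptions: the controller raises
    the rate while uneaten-pellet fall-through stays under the farmer's
    tolerance and lowers it once fall-through exceeds that tolerance.
    """
    if observed_fallthrough_ratio > fallthrough_tolerance:
        proposed = current_rate_kg_per_min - step   # fish appear satiated
    else:
        proposed = current_rate_kg_per_min + step   # appetite is still there
    return max(min_rate, min(max_rate, proposed))


# Example: the farmer allows 2 to 12 kg/min and tolerates 2% pellet fall-through.
rate = 6.0
for fallthrough in (0.005, 0.01, 0.03, 0.04, 0.01):
    rate = next_feed_rate(rate, fallthrough, min_rate=2.0, max_rate=12.0,
                          fallthrough_tolerance=0.02)
    print(f"fall-through {fallthrough:.1%} -> feed rate {rate:.1f} kg/min")
```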
Tidal’s Impact on Sustainable Aquaculture
Tidal’s technology has demonstrated multiple benefits. With the automated feed system, farmers are improving production efficiency, reducing costs, and reducing environmental impact. Our software can also detect health issues early on, such as sea-lice infestations and wounds, allowing farmers to promptly intervene with more-targeted treatments. When farmers have accurate biomass and fish welfare estimates, they can optimize the timing of harvests and minimize the risk that the harvested fish will be in poor health or too small to fetch a good market price. By integrating AI into every aspect of its system, we have created a powerful tool that enables farmers to make better-informed and sustainable decisions.
The platform approach also fosters collaboration between technology experts and aquaculture professionals. We’re currently working with farmers and fish-health experts on new applications of machine learning, such as fish-behavior detection and ocean-simulation modeling. That modeling can help farmers predict and respond to serious challenges, such as harmful algal blooms caused by nutrient pollution and warming water temperatures.
To date, we have installed systems in more than 700 pens around the globe, collected over 30 billion data points, processed 1.5 petabytes of video footage, and monitored over 50 million fish throughout their growth cycle. Thanks to years of research and development, commercial validation, and scaling, our company has now embarked on its next phase. In July 2024, Tidal graduated from Alphabet’s X and launched as an independent company, with investors including U.S. and Norwegian venture-capital firms and Alphabet.
Tidal’s journey from a moon shot idea to a commercially viable company is just the start of what we hope to accomplish. With never-ending challenges facing our planet, leveraging cutting-edge technology to survive and thrive in a quickly adapting world will be more critical than ever before. Aquaculture is Tidal’s first step, but there is so much potential within the ocean that can be unlocked to support a sustainable future with economic and food security.
We’re proud that our technology is already making salmon production more sustainable and efficient, thus contributing to the health of our oceans and the growing global population that depends upon seafood for protein.
Tidal’s underwater perception technology has applications far beyond aquaculture, offering transformative potential across ocean-based industries, collectively referred to as the “blue economy.” While our roots are in “blue food,” our tools can be adapted for “blue energy” by monitoring undersea infrastructure like offshore wind farms, “blue transportation” by improving ocean simulations for more-efficient shipping routes, and “blue carbon” by mapping and quantifying the carbon storage capacity of marine ecosystems such as sea grasses.
For example, we have already demonstrated that we can adapt our salmon biomass-estimation models to create detailed three-dimensional maps of sea-grass beds in eastern Indonesia, enabling us to estimate the amount of carbon stored below the water’s surface. We’re aiming to address a critical knowledge gap: Scientists have limited data on how much carbon sea-grass ecosystems can sequester, which undermines the credibility of marine-based carbon credit markets. Adapting our technology could advance scientific understanding and drive investment in protecting and conserving these vital ocean habitats.
What started with fish swimming through a racetrack on one small Norwegian fish farm may become a suite of technologies that help humanity protect and make the most of our ocean resources. With its robust, AI-powered systems designed to withstand the harshest oceanic conditions, Tidal is well equipped to revolutionize the blue economy, no matter how rough the seas get.
-
12 Graphs That Explain the State of AI in 2025
by Eliza Strickland on 07. April 2025. at 10:00
If you read the news about AI, you may feel bombarded with conflicting messages: AI is booming. AI is a bubble. AI’s current techniques and architectures will keep producing breakthroughs. AI is on an unsustainable path and needs radical new ideas. AI is going to take your job. AI is mostly good for turning your family photos into Studio Ghibli-style animated images.
Cutting through the confusion is the 2025 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence. The 400+ page report is stuffed with graphs and data on the topics of R&D, technical performance, responsible AI, economic impacts, science and medicine, policy, education, and public opinion. As IEEE Spectrum does every year (see our coverage from 2021, 2022, 2023, and 2024), we’ve read the whole thing and plucked out the graphs that we think tell the real story of AI right now.
1. U.S. Companies Are Out Ahead
While there are many different ways to measure which country is “ahead” in the AI race (journal articles published or cited, patents awarded, etc.), one straightforward metric is who’s putting out models that matter. The research institute Epoch AI has a database of influential and important AI models that extends from 1950 to the present, from which the AI Index drew the information shown in this chart.
Last year, 40 notable models came from the United States, while China had 15 and Europe had 3 (incidentally, all from France). Another chart, not shown here, indicates that almost all of those 2024 models came from industry rather than academia or government. As for the decline in notable models released from 2023 to 2024, the index suggests it may be due to the increasing complexity of the technology and the ever-rising costs of training.
2. Speaking of Training Costs...
Yowee, but it’s expensive! The AI Index doesn’t have precise data, because many leading AI companies have stopped releasing information about their training runs. But the researchers partnered with Epoch AI to estimate the costs of at least some models based on details gleaned about training duration, type and quantity of hardware, and the like. The most expensive model for which they were able to estimate the costs was Google’s Gemini 1.0 Ultra, with a breathtaking cost of about US $192 million. The general scale-up in training costs coincided with other findings of the report: Models are also continuing to scale up in parameter count, training time, and amount of training data.
Not included in this chart is the Chinese upstart DeepSeek, which rocked financial markets in January with its claim of training a competitive large language model for just $6 million—a claim that some industry experts have disputed. AI Index steering committee co-director Yolanda Gil tells IEEE Spectrum that she finds DeepSeek “very impressive,” and notes that the history of computer science is rife with examples of early inefficient technologies giving way to more elegant solutions. “I’m not the only one who thought there would be a more efficient version of LLMs at some point,” she says. “We just didn’t know who would build it and how.”
3. Yet the Cost of Using AI Is Going Down
The ever-increasing costs of training (most) AI models risk obscuring a few positive trends that the report highlights: Hardware costs are down, hardware performance is up, and energy efficiency is up. That means inference costs, or the expense of querying a trained model, are falling dramatically. This chart, which is on a logarithmic scale, shows the trend in terms of AI performance per dollar. The report notes that the blue line represents a drop from $20 per million tokens to $0.07 per million tokens; the pink line shows a drop from $15 to $0.12 in less than a year’s time.
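To put those per-token prices in everyday terms, here is a back-of-the-envelope calculation using the start and end prices quoted for the blue line (the 700-token answer length is an assumption):

```python
# Price per million tokens, per the AI Index figures quoted above.
old_price_per_million = 20.00   # U.S. dollars
new_price_per_million = 0.07

tokens_per_answer = 700  # assumed length of one chatbot-style answer

def cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of generating a given number of tokens."""
    return tokens / 1_000_000 * price_per_million

print(f"old: ${cost(tokens_per_answer, old_price_per_million):.4f} per answer")
print(f"new: ${cost(tokens_per_answer, new_price_per_million):.6f} per answer")
print(f"ratio: {old_price_per_million / new_price_per_million:.0f}x cheaper")
```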
4. AI’s Massive Carbon Footprint
While energy efficiency is a positive trend, let’s whipsaw back to a negative: Despite gains in efficiency, overall power consumption is up, which means that the data centers at the center of the AI boom have an enormous carbon footprint. The AI Index estimated the carbon emissions of select AI models based on factors such as training hardware, cloud provider, and location, and found that the carbon emissions from training frontier AI models have steadily increased over time—with DeepSeek being the outlier.
The worst offender included in this chart, Meta’s Llama 3.1, resulted in an estimated 8,930 tonnes of CO2 emitted, which is the equivalent of about 496 Americans living a year of their American lives. That massive environmental impact explains why AI companies have been embracing nuclear as a reliable source of carbon-free power.
5. The Performance Gap Narrows
The United States may still have a commanding lead on the quantity of notable models released, but Chinese models are catching up on quality. This chart shows the narrowing performance gap on a chatbot benchmark. In January 2024, the top U.S. model outperformed the best Chinese model by 9.26 percent; by February 2025, this gap had narrowed to just 1.70 percent. The report found similar results on other benchmarks relating to reasoning, math, and coding.
6. Humanity’s Last Exam
This year’s report highlights the undeniable fact that many of the benchmarks we use to gauge AI systems’ capabilities are “saturated”—the AI systems get such high scores on the benchmarks that they’re no longer useful. It has happened in many domains: general knowledge, reasoning about images, math, coding, and so on. Gil says she has watched with surprise as benchmark after benchmark has been rendered irrelevant. “I keep thinking [performance] is going to plateau, that it’s going to reach a point where we need new technologies or radically different architectures” to continue making progress, she says. “But that has not been the case.”
In light of this situation, determined researchers have been crafting new benchmarks that they hope will challenge AI systems. One of those is Humanity’s Last Exam, which consists of extremely challenging questions contributed by subject-matter experts hailing from 500 institutions worldwide. So far, it’s still hard for even the best AI systems: OpenAI’s reasoning model, o1, has the top score so far with 8.8 percent correct answers. We’ll see how long that lasts.
7. A Threat to the Data Commons
Today’s generative AI systems get their smarts by training on vast amounts of data scraped from the Internet, leading to the oft-stated idea that “data is the new oil” of the AI economy. As AI companies keep pushing the limits of how much data they can feed into their models, people have started worrying about “peak data,” and when we’ll run out of the stuff. One issue is that websites are increasingly restricting bots from crawling their sites and scraping their data (perhaps due to concerns that AI companies are profiting from the websites’ data while simultaneously killing their business models). Websites state these restrictions in machine-readable robots.txt files.
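For a sense of how those machine-readable restrictions work in practice, the short sketch below runs Python’s standard-library robots.txt parser on a hypothetical file that blocks one AI crawler while leaving other crawlers alone (the bot names and site are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks one AI crawler but not others.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in ("ExampleAIBot", "OrdinarySearchBot"):
    allowed = parser.can_fetch(agent, "https://example.com/articles/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```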
This chart shows that 48 percent of data from top web domains is now fully restricted. But Gil says it’s possible that new approaches within AI may end the dependence on huge data sets. “I would expect that at some point the amount of data is not going to be as critical,” she says.
8. Here Comes the Corporate Money
The corporate world has turned on the spigot for AI funding over the past five years. And while overall global investment in 2024 didn’t match the giddy heights of 2021, it’s notable that private investment has never been higher. Of the $150 billion in private investment in 2024, another chart in the index (not shown here) indicates that about $33 billion went to investments in generative AI.
9. Waiting for That Big ROI
Presumably, corporations are investing in AI because they expect a big return on investment. This is the part where people talk in breathless tones about the transformative nature of AI and about unprecedented gains in productivity. But it’s fair to say that corporations haven’t yet seen a transformation that results in significant savings or substantial new profits. This chart, with data drawn from a McKinsey survey, shows that of those companies that reported cost reductions, most had savings of less than 10 percent. Of companies that had a revenue increase due to AI, most reported gains of less than 5 percent. That big payoff may still be coming, and the investment figures suggest that a lot of corporations are betting on it. It’s just not here yet.
10. Dr. AI Will See You Soon, Maybe
AI for science and medicine is a mini-boom within the AI boom. The report lists a variety of new foundation models that have been released to help researchers in fields such as materials science, weather forecasting, and quantum computing. Many companies are trying to turn AI’s predictive and generative powers into profitable drug discovery. And OpenAI’s o1 reasoning model recently scored 96 percent on a benchmark called MedQA, which has questions from medical board exams.
But overall, this seems like another area of vast potential that hasn’t yet translated into significant real-world impact—in part, perhaps, because humans still haven’t figured out quite how to use the technology. This chart shows the results of a 2024 study that tested whether doctors would make more accurate diagnoses if they used GPT-4 in addition to their typical resources. They did not, and it also didn’t make them faster. Meanwhile, GPT-4 on its own outperformed both the human-AI teams and the humans alone.
11. U.S. Policy Action Shifts to the States
As this chart shows, there has been plenty of talk about AI in the halls of the U.S. Congress, and very little action. The report notes that action in the United States has shifted to the state level, where 131 bills were passed into law in 2024. Of those state bills, 56 related to deepfakes, prohibiting either their use in elections or for spreading nonconsensual intimate imagery.
Beyond the United States, Europe did pass its AI Act, which places new obligations on companies making AI systems that are deemed high risk. But the big global trend has been countries coming together to make sweeping and non-binding pronouncements about the role that AI should play in the world. So there’s plenty of talk all around.
12. Humans Are Optimists
Whether you’re a stock photographer, a marketing manager, or a truck driver, there’s been plenty of public discourse about whether or when AI will come for your job. But in a recent global survey on attitudes about AI, the majority of people did not feel threatened by AI. While 60 percent of respondents from 32 countries believe that AI will change how they do their jobs, only 36 percent expected to be replaced. “I was really surprised” by these survey results, says Gil. “It’s very empowering to think, ‘AI is going to change my job, but I will still bring value.’” Stay tuned to find out if we all bring value by managing eager teams of AI employees.
-
How Ukraine’s Drones Are Beating Russian Jamming
by Tereza Pultarova on 06. April 2025. at 14:00
After the Estonian startup KrattWorks dispatched the first batch of its Ghost Dragon ISR quadcopters to Ukraine in mid-2022, the company’s officers thought they might have six months or so before they’d need to reconceive the drones in response to new battlefield realities. The 46-centimeter-wide flier was far more robust than the hobbyist-grade UAVs that came to define the early days of the drone war against Russia. But within a scant three months, the Estonian team realized their painstakingly fine-tuned device had already become obsolete.
Rapid advances in jamming and spoofing—the only efficient defense against drone attacks—set the team on an unceasing marathon of innovation. Its latest technology is a neural-network-driven optical navigation system, which allows the drone to continue its mission even when all radio and satellite-navigation links are jammed. It began tests in Ukraine in December, part of a trend toward jam-resistant, autonomous UAVs (uncrewed aerial vehicles). The new fliers herald yet another phase in the unending struggle that pits drones against the jamming and spoofing of electronic warfare, which aims to sever links between drones and their operators. There are now tens of thousands of jammers straddling the front lines of the war, defending against drones that are not just killing soldiers but also destroying armored vehicles, other drones, industrial infrastructure, and even tanks.
Ukrainian troops tested KrattWorks’ Ghost Dragon drone in Estonia last year. KrattWorks
“The situation with electronic warfare is moving extremely fast,” says Martin Karmin, KrattWorks’ cofounder and chief operations officer. “We have to constantly iterate. It’s like a cat-and-mouse game.”
I met Karmin at the company’s headquarters in the outskirts of Estonia’s capital, Tallinn. Just a couple of hundred kilometers to the east is the tiny nation’s border with Russia, its former oppressor. At 38, Karmin is barely old enough to remember what life was like under Russian rule, but he’s heard plenty. He and his colleagues, most of them volunteer members of the Estonian Defense League, have “no illusions” about Russia, he says with a shrug.
His company is as much about arming Estonia as it is about helping Ukraine, he acknowledges. Estonia is not officially at war with Russia, of course, but regions around the border between the two countries have for years been subjected to persistent jamming of satellite-based navigation systems, such as the European Union’s Galileo satellites, forcing occasional flight cancellations at Tartu airport. In November, satellite imagery revealed that Russia is expanding its military bases along the Baltic states’ borders.
“We are a small country,” Karmin says. “Innovation is our only chance.”
Navigating by Neural Network
In KrattWorks’ spacious, white-walled workshop, a handful of engineers are testing software. On the large ocher desk that dominates the room, a selection of KrattWorks’ devices is on display, including a couple of fixed-wing, smoke-colored UAVs designed to serve as aerial decoys, and the Ghost Dragon ISR quadcopter, the company’s flagship product.
Now in its third generation, the Ghost Dragon has come a long way since 2022. Its original command-and-control-band radio was quickly replaced with a smart frequency-hopping system that constantly scans the available spectrum, looking for bands that aren’t jammed. It allows operators to switch among six radio-frequency bands to maintain control and also send back video even in the face of hostile jamming.
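A minimal sketch of that kind of band selection (band names, noise readings, and the usability threshold are invented for illustration; the real radio’s logic is certainly more sophisticated) would scan each candidate band, estimate interference, and hop to the quietest band that is still usable:

```python
from typing import Dict, Optional

def pick_band(noise_floor_dbm: Dict[str, float],
              max_usable_noise_dbm: float = -85.0) -> Optional[str]:
    """Return the quietest candidate band that is still usable, or None.

    noise_floor_dbm maps a band name to its measured interference level in
    dBm; a heavily jammed band shows a much higher (less negative) reading.
    All values and the threshold here are invented for illustration.
    """
    usable = {band: level for band, level in noise_floor_dbm.items()
              if level <= max_usable_noise_dbm}
    if not usable:
        return None  # every band is jammed; fall back to autonomous flight
    return min(usable, key=usable.get)

# Example scan of six bands, two of which are being jammed hard.
scan = {"band_1": -97.0, "band_2": -60.0, "band_3": -92.0,
        "band_4": -55.0, "band_5": -101.0, "band_6": -90.0}
print(pick_band(scan))   # band_5, the quietest usable band
```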
The Ghost Dragon reconnaissance drone from KrattWorks can navigate autonomously by detecting landmarks as it flies over them. KrattWorks
The drone’s dual-band satellite-navigation receiver can switch among the four main satellite positioning services: GPS, Galileo, China’s BeiDou, and Russia’s GLONASS. It’s been augmented with a spoof-proof algorithm that compares the satellite-navigation input with data from onboard sensors. The system provides protection against sophisticated spoofing attacks that attempt to trick drones into self-destruction by persuading them they’re flying at a much higher altitude than they actually are.
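One way to picture that cross-check is to compare the altitude reported by the satellite receiver against an onboard barometric estimate and reject the fix when the two diverge too far. The snippet below is a simplified sketch with an assumed threshold, not KrattWorks’ actual algorithm:

```python
def gnss_fix_is_plausible(gnss_altitude_m: float,
                          baro_altitude_m: float,
                          max_disagreement_m: float = 30.0) -> bool:
    """Reject a satellite fix that disagrees too much with onboard sensors.

    The 30 m threshold is an illustrative assumption; a real system fuses
    many sensors and tracks disagreement over time rather than per sample.
    """
    return abs(gnss_altitude_m - baro_altitude_m) <= max_disagreement_m

# A spoofer claiming the drone is 400 m higher than the barometer suggests:
print(gnss_fix_is_plausible(gnss_altitude_m=520.0, baro_altitude_m=120.0))  # False
print(gnss_fix_is_plausible(gnss_altitude_m=123.0, baro_altitude_m=120.0))  # True
```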
At the heart of the quadcopter’s matte grey body is a machine-vision-enabled computer running a 1-gigahertz Arm processor that provides the Ghost Dragon with its latest superpower: the ability to navigate autonomously, without access to any global navigation satellite system (GNSS). To do that, the computer runs a neural network that, like an old-fashioned traveler, compares views of landmarks with positions on a map to determine its position. More precisely, the drone uses real-time views from a downward-facing optical camera, comparing them against stored satellite images, to determine its position.
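Here is a highly simplified sketch of the underlying idea, using classical template matching as a stand-in (KrattWorks’ system relies on a trained neural network and is far more robust to lighting, season, and perspective changes): locate the live downward-facing camera frame inside a stored, georeferenced satellite tile and convert the pixel offset into a position.

```python
import cv2
import numpy as np

def locate_frame(satellite_tile: np.ndarray,
                 camera_frame: np.ndarray,
                 meters_per_pixel: float) -> tuple[float, float]:
    """Estimate where a camera frame sits inside a stored satellite tile.

    A toy stand-in for learned visual navigation: normalized cross-correlation
    finds the best-matching pixel offset, which is then scaled into meters
    from the tile's top-left corner. All parameters are illustrative.
    """
    result = cv2.matchTemplate(satellite_tile, camera_frame, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_xy = cv2.minMaxLoc(result)           # (x, y) of the best match
    east_m = best_xy[0] * meters_per_pixel
    south_m = best_xy[1] * meters_per_pixel            # image y increases downward
    return east_m, south_m

# Toy demo: cut a 64 x 64 patch out of a synthetic 512 x 512 "satellite" image
# and confirm the matcher recovers where it came from.
rng = np.random.default_rng(0)
tile = rng.integers(0, 255, size=(512, 512), dtype=np.uint8)
patch = tile[200:264, 300:364].copy()
print(locate_frame(tile, patch, meters_per_pixel=0.5))   # (150.0, 100.0)
```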
A promotional video from KrattWorks depicts scenarios in which the company’s drones augment soldiers on offensive maneuvers.
“Even if it gets lost, it can recognize some patterns, like crossroads, and update its position,” Karmin says. “It can make its own decisions, somewhat, either to return home or to fly through the jamming bubble until it can reestablish the GNSS link again.”
Designing Drones for High Lethality per Cost
Just as machine guns and tanks defined the First World War, drones have become emblematic of Ukraine’s struggle against Russia. It was besieged Ukraine that first turned the concept of a military drone on its head. Instead of Predators and Reapers worth tens of millions of dollars each, Ukraine began purchasing huge numbers of off-the-shelf fliers worth a few hundred dollars apiece—the kind used by filmmakers and enthusiasts—and turned them into highly lethal weapons. A recent New York Times investigation found that drones account for 70 percent of deaths and injuries in the ongoing conflict.
“We have much less artillery than Russia, so we had to compensate with drones,” says Serhii Skoryk, commercial director at Kvertus, a Kyiv-based electronic-warfare company. “A missile is worth perhaps a million dollars and can kill maybe 12 or 20 people. But for one million dollars, you can buy 10,000 drones, put four grenades on each, and they will kill 1,000 or even 2,000 people or destroy 200 tanks.”
Near the Russian border in Kharkiv Oblast, a Ukrainian soldier prepared first-person-view drones for an attack on 16 January 2025. Jose Colon/Anadolu/Getty Images
Electronic warfare techniques such as jamming and spoofing aim to neutralize the drone threat. A drone that gets jammed and loses contact with its pilot and also loses its spatial bearings will either crash or fly off randomly until its battery dies. According to the Royal United Services Institute, a U.K. defense think tank, Ukraine may be losing about 10,000 drones per month, mostly due to jamming. That number includes explosives-laden kamikaze drones that don’t reach their targets, as well as surveillance and reconnaissance drones like KrattWorks’ Ghost Dragon, meant for longer service.
“Drones have become a consumable item,” says Karmin. “You will get maybe 10 or 15 missions out of a reconnaissance drone, and then it has to be already paid off because you will lose it sooner or later.”
Tech minds on both sides of the conflict have therefore been working hard to circumvent electronic defenses. Russia took an unexpected step starting in early 2024, deploying hard-wired drones fitted with spools of optical fiber. Like a twisted variation on a child’s kite, the lethal UAVs can venture 20 or more kilometers away from the controller, the hair-thin fiber floating behind them, providing an unjammable connection.
“Right now, there is no protection against fiber-optic drones,” Vadym Burukin, cofounder of the Ukrainian drone startup Huless, tells IEEE Spectrum. “The Russians scaled this solution pretty fast, and now they are saturating the battle front with these drones. It’s a huge problem for Ukraine.”
One way that drone operators can defeat electronic jamming is by communicating with their drone via a fiber optic line that pays out of a spool as the drone flies. This is a tactic favored by Russian units, although this particular first-person-view drone is Ukrainian. It was demonstrated near Kyiv on 29 January 2025. Efrem Lukatsky/AP
Ukraine, too, has experimented with optical fiber, but the technology didn’t take off, as it were. “The optical fiber costs upwards from $500, which is, in many cases, more than the drone itself,” Burukin says. “If you use it in a drone that carries explosives, you lose some of that capacity because you have the weight of the cable.” The extra weight also means less capacity for better-quality cameras, sensors, and computers in reconnaissance drones.
Small Drones May Soon Be Making Kill-or-No-Kill Decisions
Instead, Ukraine sees the future in autonomous navigation. This past July, kamikaze drones equipped with an autonomous navigation system from U.S. supplier Auterion destroyed a column of Russian tanks fitted with jamming devices.
“It was really hard to strike these tanks because they were jamming everything,” says Burukin. “The drones with the autopilot were the only equipment that could stop them.”
Auterion’s “terminal guidance” system uses known landmarks to orient a drone as it seeks out a target. Auterion
The technology used to hit those tanks is called terminal guidance and is the first step toward smart, fully autonomous drones, according to Auterion’s CEO, Lorenz Meier. The system allows the drone to directly overcome the jamming whether the protected target is a tank, a trench, or a military airfield.
“If you lock on the target from, let’s say, a kilometer away and you get jammed as you approach the target, it doesn’t matter,” Meier says in an interview. “You’re not losing the target as a manual operator would.”
The visual navigation technology trialed by KrattWorks is the next step and an innovation that has only reached the battlefield this year. Meier expects that by the end of 2025, firms including his own will introduce fully autonomous solutions encompassing visual navigation to overcome GPS jamming, as well as terminal guidance and smart target recognition.
“The operator would only decide the area where to strike, but the decision about the target is made by the drone,” Meier explains. “It’s already done with guided shells, but with drones you can do that at mass scale and over much greater distances.”
Auterion, founded in 2017 to produce drone software for civilian applications such as grocery delivery, threw itself into the war effort in early 2024, motivated by a desire to equip democratic countries with technologies to help them defend themselves against authoritarian regimes. Since then, the company has made rapid strides, working closely with Ukrainian drone makers and troops.
But purchasing Western equipment is, in the long term, not affordable for Ukraine, a country with a per capita GDP of US $5,760—much lower than the European average of $38,270. Fortunately, Ukraine can tap its engineering workforce, which is among the largest in Europe. Before the war, Ukraine was a go-to place for Western companies looking to set up IT- and software-development centers. Many of these workers have since joined Ukraine’s DIY military-technician (“miltech”) development movement.
An engineer and founder at a Ukrainian startup that produces long-range kamikaze drones, who didn’t want to be named because of security concerns, told Spectrum that the company began developing its own computers and autonomous navigation software for target tracking “just to keep the price down.” The engineer said Ukrainian startups offer advanced military-drone technology at a price that is a small fraction of what established competitors in the West are charging.
Within three years of the February 2022 Russian invasion, Ukraine produced a world-class defense-tech ecosystem that is not only attracting Western innovators into its fold, but also regularly surpassing them. The keys to Ukraine’s success are rapid iterations and close cooperation with frontline troops. It’s a formula that’s working for Auterion as well. “If you want to build a leading product, you need to be where the product is needed the most,” says Meier. “That’s why we’re in Ukraine.”
Burukin, from Ukrainian startup Huless, believes that autonomy will play a bigger role in the future of drone warfare than Russia’s optical fibers will. Autonomous drones not only evade jamming, but their range is limited only by their battery storage. They also can carry more explosives or better cameras and sensors than the wired drones can. On top of that, they don’t place high demands on their operators.
“In the perfect world, the drone should take off, fly, find the target, strike it, and report back on the task,” Burukin says. “That’s where the development is heading.”
The cat-and-mouse game is nowhere near over. Companies including KrattWorks are already thinking about the next innovation that would make drone warfare cheaper and more lethal. By creating a drone mesh network, for example, they could send a sophisticated intelligence, surveillance, and reconnaissance drone followed by a swarm of simpler kamikaze drones to find and attack a target using visual navigation.
“You can send, like, 10 drones, but because they can fly themselves, you don’t need a superskilled operator controlling every single one of these,” notes KrattWorks’ Karmin, who keeps tabs on tech developments in Ukraine with a mixture of professional interest, personal empathy, and foreboding. Rarely does a day go by that he does not think about the expanding Russian military presence near Estonia’s eastern borders.
“We don’t have a lot of people in Estonia,” he says. “We will never have enough skilled drone pilots. We must find another way.”
-
Video Friday: RIVR Delivers Your Package
by Evan Ackerman on 04. April 2025. at 16:00
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA
IEEE Humanoids: 30 September–2 October 2025, SEOUL
CoRL 2025: 27–30 September 2025, SEOUL
Enjoy today’s videos!
I love the platform and I love the use case, but this particular delivery method is...odd?
[ RIVR ]
This is just the beginning of what people and physical AI can accomplish together. To recognize business value from collaborative robotics, you have to understand what people do well, what robots do well, and how they best come together to create productivity. DHL and Robust.AI are partnering to define the future of human–robot collaboration.
[ Robust AI ]
Teleoperated robotic characters can perform expressive interactions with humans, relying on the operators’ experience and social intuition. In this work, we propose to create autonomous interactive robots by training a model to imitate operator data. Our model is trained on a dataset of human–robot interactions, where an expert operator is asked to vary the interactions and mood of the robot, while the operator commands and the poses of the human and robot are recorded.
Introducing THEMIS V2, our all-new full-size humanoid robot. Standing at 1.6 meters with 40 degrees of freedom, THEMIS V2 now features enhanced 6 DoF arms and advanced 7 DoF end-effectors, along with an additional body-mounted stereo camera and up to 200 Tera Operations per Second (TOPS) of onboard AI computing power. These upgrades deliver exceptional capabilities in manipulation, perception, and navigation, pushing humanoid robotics to new heights.
[ Westwood ]
BMW x Figure Update: This isn’t a test environment—it’s real production operations. Real-world robots are advancing our Helix AI and strengthening our end-to-end autonomy to deploy millions of robots.
[ Figure ]
On 13 March, at WorldMinds 2025, in the Kaufleuten Theater of Zurich, our team demonstrated for the first time two autonomous vision-based racing drones. It was an epic journey to prepare for this event, given the poor lighting conditions and the safety constraints of a theater filled with more than 500 people! The background screen visualizes in real time the observations of the AI algorithm of each drone. No map, no IMU, no SLAM!
[ University of Zurich (UZH) ]
Unitree releases Dex5 dexterous hand. Single hand with 20 degrees of freedom (16 active plus 4 passive). Enable smooth backdrivability (direct force control). Equipped with 94 highly sensitive touch points (optional).
[ Unitree ]
You can say “real-world manipulation” all you want, but until it’s actually in the real world, I’m not buying it.
[ 1X ]
Developed by Pudu X-Lab, FlashBot Arm elevates the capabilities of our flagship FlashBot by blending advanced humanoid manipulation and intelligent delivery capabilities, powered by cutting-edge embodied AI. This powerful combination allows the robot to autonomously perform a wide range of tasks across diverse settings, including hotels, office buildings, restaurants, retail spaces, and health care facilities.
[ Pudu Robotics ]
If you ever wanted to manipulate a trilby with 25 robots, a solution now exists.
[ Paper ] via [ EPFL Reconfigurable Robotics Lab ] published by [ IEEE Robotics and Automation Letters ]
We’ve been sharing videos from the Suzumori Endo Robotics Lab at the Institute of Science Tokyo for many years, and Professor Suzumori is now retiring.
Best wishes to Professor Suzumori!
No matter the vehicle, traditional control systems struggle when unexpected challenges—like damage, unforeseen environments, or new missions—push them beyond their design limits. Our Learning Introspective Control (LINC) program aims to fundamentally improve the safety of mechanical systems, such as ground vehicles, ships, and robotics, using various machine learning methods that require minimal computing power.
[ DARPA ]
NASA’s Perseverance rover captured new images of multiple dust devils while exploring the rim of the Jezero crater on Mars. The largest dust devil was approximately 210 feet wide (65 meters). In this Mars Report, atmospheric scientist Priya Patel explains what dust devils can teach us about weather conditions on the Red Planet.
[ NASA ]
-
A Guide to IEEE Education Week’s Events
by Angelique Parashis on 03. April 2025. at 18:00
As technology evolves, staying current with the latest advancements and skills remains crucial. Continuous learning is essential for maintaining a competitive edge in the tech industry.
IEEE Education Week, taking place from 6 to 12 April, emphasizes the importance of lifelong learning. During the week, technical professionals, students, educators, and STEM enthusiasts can access a variety of events, resources, and special offers from IEEE organizational units, societies, and councils. Whether you’re a seasoned professional or just starting your career, participating in IEEE Education Week can help you reassess and realign your skills to meet market demands.
Here are some of the offerings:
- On 7 April, the keynote presentation by Kathleen Kramer, 2025 IEEE president and CEO, kicks off the week. Kramer is expected to discuss the future of education and the importance of engaging younger members and industry professionals in IEEE.
- The “Creating Impactful Community Service Learning Projects” webinar from EPICS in IEEE on 7 April discusses how to develop engineering solutions for community challenges, as well as the grant application process.
- IEEE-USA is holding the live stream webinar “Job Search Trends for 2025: What You Need to Know to Stay Ahead” on 7 April. Participants can learn what they need to focus on to stand out in an evolving landscape, including the rise of AI-powered hiring tools and the growing importance of soft skills.
- The “Inspiring Tomorrow’s Engineers with TryEngineering eBooks” webinar on 8 April is designed to provide insights from the publication series on how to explain complex concepts to school-age children. The free TryEngineering books cover artificial intelligence, engineering, semiconductors, and more.
- On 8 April, IEEE Educational Activities is presenting the “Learning in the Digital Age: Comparing Online Course Delivery Styles” interactive webinar. Jill Bagley, senior manager of the IEEE continuing education platform, plans to lead a discussion about the needs of today’s learners, touching on content design, delivery methods, and flexible learning formats.
- IEEE Women in Engineering’s “Conquer Impostor Syndrome to Advance Your Career” webinar on 9 April aims to explain the syndrome and its impact on individuals and businesses. The session is expected to present actionable strategies to build confidence and achieve professional goals.
- The IEEE–Eta Kappa Nu honor society is holding its virtual TechX Conference from 9 to 11 April. Experts including IEEE Fellow Dejan Milojicic, a Hewlett Packard Fellow and vice president at HP Labs, plan to discuss the latest trends in AI. Also planned are a job recruitment fair and networking opportunities. The conference is open to all.
- Tune in on 9 April at 1 p.m. ET for the hour-long virtual event “Investing in Your Future: The Importance of Continuing Education for Engineers.” Learn about the importance of continuing education for engineers and how the IEEE Professional Development Suite can empower technical professionals at all levels of their careers.
- The “Planning and Delivering Effective Oral Presentations” webinar from the IEEE Professional Communication Society on 10 April is being presented by Alan Chong, director of the Engineering Communication Program at the University of Toronto. He is set to explain the basic tools needed to deliver impactful technical presentations and how to tailor them to diverse audiences. Participants can expect to leave with practical strategies for creating and delivering presentations.
The IEEE Education Week website lists special offers and discounts. The IEEE Learning Network, for example, is offering a 25 percent discount on some of its popular course programs in technical areas including artificial intelligence, communications, and IEEE standards; use the code ILNIEW25, available until 30 April.
Be sure to complete the IEEE Education Week quiz by noon ET on 11 April for a chance to earn a digital badge, which can be displayed on social media.
Don’t miss this opportunity to invest in your future and explore IEEE’s vast educational offerings. To learn more about IEEE Education Week, watch this video and follow the event on Facebook or X.
-
Discover the Role of Filter Technologies in Advanced Communication Systems
by Cadence on 02. April 2025. at 18:37
Learn about carrier aggregation, microcell overlapping, and massive MIMO implementation. Delve into the world of surface acoustic wave (SAW) and bulk acoustic wave (BAW) filters and understand their strengths, limitations, and applications in the evolving 5G/6G landscape.
Key highlights:
- Uncover the design challenges of new technologies in the mobile ecosystem
- Explore the field of SAW and BAW filters and discover their roles and performance nuances
- Take a look at the impact of temperature on filter technologies and how it shapes their applications
- Learn how simulation technology can bridge the gap between design concepts and real-world implementation
Stay ahead in 5G/6G innovation and learn how filters shape seamless communication.
-
Nvidia Blackwell Ahead in AI Inference, AMD Second
by Samuel K. Moore on 02. April 2025. at 15:00
In the latest round of machine learning benchmark results from MLCommons, computers built around Nvidia’s new Blackwell GPU architecture outperformed all others. But AMD’s latest spin on its Instinct GPUs, the MI325, proved a match for the Nvidia H200, the product it was meant to counter. The comparable results were mostly on tests of one of the smaller-scale large language models, Llama2 70B (for 70 billion parameters). However, in an effort to keep up with a rapidly changing AI landscape, MLPerf added three new benchmarks to better reflect where machine learning is headed.
MLPerf runs benchmarking for machine learning systems in an effort to provide an apples-to-apples comparison between computer systems. Submitters use their own software and hardware, but the underlying neural networks must be the same. There are a total of 11 benchmarks for servers now, with three added this year.
It has been “hard to keep up with the rapid development of the field,” says Miro Hodak, the cochair of MLPerf Inference. ChatGPT appeared only in late 2022, OpenAI unveiled its first large language model (LLM) that can reason through tasks last September, and LLMs have grown exponentially—GPT3 had 175 billion parameters, while GPT4 is thought to have nearly 2 trillion. As a result of the breakneck innovation, “we’ve increased the pace of getting new benchmarks into the field,” says Hodak.
The new benchmarks include two LLMs. The popular and relatively compact Llama2 70B is already an established MLPerf benchmark, but the consortium wanted something that mimicked the responsiveness people are expecting of chatbots today. So the new benchmark “Llama2-70B Interactive” tightens the requirements. Computers must produce at least 25 tokens per second under any circumstance and cannot take more than 450 milliseconds to begin an answer.
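In effect, the interactive benchmark adds two pass/fail gates on top of raw throughput. A hypothetical checker using the limits quoted above might look like this:

```python
def meets_interactive_limits(tokens_per_second: float,
                             time_to_first_token_ms: float,
                             min_tps: float = 25.0,
                             max_ttft_ms: float = 450.0) -> bool:
    """Check a measured run against the Llama2-70B Interactive limits quoted
    above: at least 25 tokens per second under any circumstance, and no more
    than 450 milliseconds to begin an answer. (Hypothetical helper, not
    MLPerf's actual harness.)"""
    return tokens_per_second >= min_tps and time_to_first_token_ms <= max_ttft_ms

print(meets_interactive_limits(tokens_per_second=31.0, time_to_first_token_ms=380.0))  # True
print(meets_interactive_limits(tokens_per_second=31.0, time_to_first_token_ms=600.0))  # False
```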
Seeing the rise of “agentic AI”—networks that can reason through complex tasks—MLPerf sought to test an LLM that would have some of the characteristics needed for that. They chose Llama3.1 405B for the job. That LLM has what’s called a wide context window. That’s a measure of how much information—documents, samples of code, et cetera—it can take in at once. For Llama3.1 405B, that’s 128,000 tokens, more than 30 times as much as Llama2 70B.
The final new benchmark, called RGAT, is what’s called a graph attention network. It acts to classify information in a network. For example, the dataset used to test RGAT consists of scientific papers, which all have relationships between authors, institutions, and fields of study, making up 2 terabytes of data. RGAT must classify the papers into just under 3,000 topics.
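For readers unfamiliar with graph attention, the toy sketch below shows the core operation for a single node; this is the generic graph-attention math, not MLPerf’s RGAT benchmark implementation, and the tiny feature sizes are arbitrary. Each neighbor is scored, the scores are normalized with a softmax, and the neighbors’ features are mixed using those weights:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def attend(node_feat, neighbor_feats, W, a):
    """One graph-attention step for a single node (toy, generic math).

    node_feat: (F,) features of the node being updated
    neighbor_feats: (N, F) features of its neighbors (including itself)
    W: (F, F_out) shared linear projection; a: (2 * F_out,) attention vector
    """
    h_i = node_feat @ W                       # (F_out,)
    h_j = neighbor_feats @ W                  # (N, F_out)
    pair = np.concatenate([np.tile(h_i, (len(h_j), 1)), h_j], axis=1)
    scores = leaky_relu(pair @ a)             # (N,) unnormalized attention
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over neighbors
    return weights @ h_j                      # attention-weighted mixture

rng = np.random.default_rng(0)
F, F_out, n_neighbors = 8, 4, 5               # arbitrary toy sizes
out = attend(rng.normal(size=F), rng.normal(size=(n_neighbors, F)),
             rng.normal(size=(F, F_out)), rng.normal(size=2 * F_out))
print(out.shape)   # (4,)
```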
Blackwell, Instinct Results
Nvidia continued its domination of MLPerf benchmarks through its own submissions and those of some 15 partners, such as Dell, Google, and Supermicro. Both its first- and second-generation Hopper architecture GPUs—the H100 and the memory-enhanced H200—made strong showings. “We were able to get another 60 percent performance over the last year” from Hopper, which went into production in 2022, says Dave Salvator, director of accelerated computing products at Nvidia. “It still has some headroom in terms of performance.”
But it was Nvidia’s Blackwell architecture GPU, the B200, that really dominated. “The only thing faster than Hopper is Blackwell,” says Salvator. The B200 packs in 36 percent more high-bandwidth memory than the H200, but, even more important, it can perform key machine learning math using numbers with a precision as low as 4 bits instead of the 8 bits Hopper pioneered. Lower-precision compute units are smaller, so more fit on the GPU, which leads to faster AI computing.
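The appeal of lower precision is easy to see in a toy example. The sketch below uses plain symmetric rounding, not Nvidia’s actual FP4 format or calibration pipeline, to squeeze a handful of weights into a 4-bit signed-integer range:

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric 4-bit quantization: map floats onto integers in [-7, 7].

    A toy illustration of why 4-bit math packs more compute onto a chip; real
    accelerators use dedicated low-precision formats and careful calibration.
    """
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=6).astype(np.float32)
q, scale = quantize_int4(w)
print("original   :", np.round(w, 4))
print("4-bit codes:", q)
print("recovered  :", np.round(q * scale, 4))   # each value now costs 4 bits, not 16 or 32
```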
In the Llama3.1 405B benchmark, an eight-B200 system from Supermicro delivered nearly four times the tokens per second of an eight-H200 system by Cisco. And the same Supermicro system was three times as fast as the quickest H200 computer at the interactive version of Llama2 70B.
Nvidia used its combination of Blackwell GPUs and Grace CPU, called GB200, to demonstrate how well its NVL72 data links can integrate multiple servers in a rack, so they perform as if they were one giant GPU. In an unverified result the company shared with reporters, a full rack of GB200-based computers delivers 869,200 tokens per second on Llama2 70B. The fastest system reported in this round of MLPerf was an Nvidia B200 server that delivered 98,443 tokens per second.
AMD is positioning its latest Instinct GPU, the MI325X, as providing performance competitive with Nvidia’s H200. MI325X has the same architecture as its predecessor, MI300, but it adds even more high-bandwidth memory and memory bandwidth—256 gigabytes and 6 terabytes per second (a 33 percent and 13 percent boost, respectively).
Adding more memory is a play to handle larger and larger LLMs. “Larger models are able to take advantage of these GPUs because the model can fit in a single GPU or a single server,” says Mahesh Balasubramanian, director of data-center GPU marketing at AMD. “So you don’t have to have that communication overhead of going from one GPU to another GPU or one server to another server. When you take out those communications, your latency improves quite a bit.” AMD was able to take advantage of the extra memory through software optimization to boost the inference speed of DeepSeek-R1 eightfold.
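A rough sizing calculation shows why capacity matters so much here; the parameter counts and byte widths below are illustrative assumptions, and the math ignores activation and key-value-cache overhead:

```python
def weight_memory_gb(parameters: float, bytes_per_parameter: float) -> float:
    """Memory needed just to hold a model's weights, in gigabytes."""
    return parameters * bytes_per_parameter / 1e9

hbm_gb = 256  # MI325X high-bandwidth memory, per the figure quoted above

for params, bytes_pp, label in [
    (70e9, 2, "70B-parameter model at 16-bit"),
    (70e9, 1, "70B-parameter model at 8-bit"),
    (405e9, 1, "405B-parameter model at 8-bit"),
]:
    need = weight_memory_gb(params, bytes_pp)
    fits = "fits" if need <= hbm_gb else "needs multiple GPUs"
    print(f"{label}: ~{need:.0f} GB of weights -> {fits} in {hbm_gb} GB")
```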
On the Llama2 70B test, an eight-GPU MI325X computer came within 3 to 7 percent of the speed of a similarly tricked-out H200-based system. And on image generation the MI325X system was within 10 percent of the Nvidia H200 computer.
AMD’s other noteworthy mark this round was from its partner, Mangoboost, which showed nearly fourfold performance on the Llama2 70B test by doing the computation across four computers.
Intel has historically put forth CPU-only systems in the inference competition to show that for some workloads you don’t really need a GPU. This time around saw the first data from Intel’s Xeon 6 chips, which were formerly known as Granite Rapids and are made using Intel’s 3-nanometer process. At 40,285 samples per second, the best image-recognition result for a dual-Xeon 6 computer was about one-third the performance of a Cisco computer with two Nvidia H100s.
Compared with Xeon 5 results from October 2024, the new CPU provides about an 80 percent boost on that benchmark and an even bigger boost on object detection and medical imaging. Since it first started submitting Xeon results in 2021 (the Xeon 3), the company has achieved an elevenfold boost in performance on ResNet.
For now, it seems Intel has quit the field in the AI accelerator-chip battle. Its alternative to the Nvidia H100, Gaudi 3, did not make an appearance in the new MLPerf results, nor in version 4.1, released last October. Gaudi 3 got a later-than-planned release because its software was not ready. In the opening remarks at Intel Vision 2025, the company’s invite-only customer conference, newly minted CEO Lip-Bu Tan seemed to apologize for Intel’s AI efforts. “I’m not happy with our current position,” he told attendees. “You’re not happy either. I hear you loud and clear. We are working toward a competitive system. It won’t happen overnight, but we will get there for you.”
Google’s TPU v6e chip also made a showing, though the results were restricted to the image-generation task. At 5.48 queries per second, the 4-TPU system saw a 2.5-times boost over a similar computer using its predecessor TPU v5e in the October 2024 results. Even so, 5.48 queries per second was roughly in line with a similarly sized Lenovo computer using Nvidia H100s.
This post was corrected on 2 April 2025 to give the right value for high-bandwidth memory in the MI325X. It was corrected again on 7 April, to make the chart easier to read.
-
Four Ways Engineers Are Trying to Break Physics
by Dan Garisto on 02. April 2025. at 14:00
In particle physics, the smallest problems often require the biggest solutions.
Along the border of France and Switzerland, around a hundred meters underneath the countryside, protons speed through a 27-kilometer ring—about seven times the length of the Indy 500 circuit—until they crash into protons going in the opposite direction. These particle pileups produce a petabyte of data every second, the most interesting of which is poured into data centers, accessible to thousands of physicists worldwide.
The Large Hadron Collider (LHC), arguably the largest experiment ever engineered, is needed to probe the universe’s smallest constituents. In 2012, two teams at the LHC discovered the elusive Higgs boson, the particle whose existence confirmed 50-year-old theories about the origins of mass. It was a scientific triumph that led to a Nobel Prize and worldwide plaudits.
Since then, experiments at the LHC have focused on better understanding how the newfound Higgs fits into the Standard Model, particle physicists’ best theoretical description of matter and forces—minus gravity. “The Standard Model is beautiful,” says Victoria Martin, an experimental physicist at the University of Edinburgh. “Because it’s so precise, all the little niggles stand out.”
The Large Hadron Collider lives in a 27-kilometer tunnel ring, about 100 meters underneath France and Switzerland. It was used to discover the Higgs boson, but further research may require something larger still. Maximilien Brice/CERN
The minor quibbles physicists have about the Standard Model could be explained by new particles: Dark matter, the invisible material whose gravity shapes the universe, is thought to be made of heretofore undiscovered particles. But such new particles may be out of reach for the LHC, even after it undergoes upgrades that are set to be completed later this decade. To address these lingering questions, particle physicists have been planning its successors. These next-generation colliders will improve on the LHC by smashing protons at higher energies or by making more precise collisions with muons, antimuons, electrons, and positrons. In doing so, they’ll allow researchers to peek into a whole new realm of physics.
Martin herself is particularly interested in learning more about the Higgs, and learning exactly how the particle responsible for mass behaves. One possible find: Properties of the Higgs suggest that the universe might not be stable in the long, long term. [Editor’s note: About 10⁷⁹⁰ years. Other problems may be more pressing.] “We don’t really know exactly what we’re going to find,” Martin says. “But that’s okay, because it’s science, it’s research.”
There are four main proposals for new colliders, and each one comes with its own slew of engineering challenges. To build them, engineers would need to navigate tricky regional geology, design accelerating cavities, handle the excess heat within the cavities, and develop powerful new magnets to whip the particles through these cavities. But perhaps more daunting are the geopolitical obstacles: coordinating multinational funding commitments and slogging through bureaucratic muck.
Collider projects take years to plan and billions of dollars to finance. The fastest that any of the four machines would come on line is the late 2030s. But now is when physicists and engineers are making key scientific and engineering decisions about what’s coming next.
Supercolliders at a glance
Large Hadron Collider
Size (circumference): 27 kilometers
Collision energy: 13,600 giga-electron volts
Colliding particles: protons and ions
Luminosity: 2 × 10³⁴ collisions per square centimeter per second (5 × 10³⁴ for high-luminosity upgrade)
Location: Switzerland–France border
Start date: 2008–
International Linear Collider
Size (length): 31 km
Collision energy: 500 GeV
Colliding particles: electrons and positrons
Luminosity (at peak energy): 3 × 10³⁴ collisions per cm² per second
Location: Iwate, Japan
Earliest start date: 2038
Muon collider
Size (circumference): 4.5 km (or 10 km)
Collision energy: 3,000 GeV (or 10,000 GeV)
Colliding particles: muons and antimuons
Luminosity: 2 × 10³⁵ collisions per cm² per second
Location: possibly Fermilab
Earliest start date: 2045 (or in the mid-2050s)
Future Circular Collider-ee | FCC-hh
Size (circumference): 91 km
Collision energy: 240 GeV | 85,000 GeV
Colliding particles: electrons and positrons | protons
Luminosity: 8.5 × 10³⁴ | 30 × 10³⁴ collisions per cm² per second
Location: Switzerland–France border
Earliest start date: 2046 | 2070
Circular Electron Positron Collider | Super proton–proton Collider (SPPC)
Size (circumference): 100 km
Collision energy: 240 GeV | 100,000 GeV
Colliding particles: electrons and positrons | protons
Luminosity: 8.3 × 10³⁴ | 13 × 10³⁴ collisions per cm² per second
Location: China
Earliest start date: 2035 | 2060s
Possible supercolliders of the future
The LHC collides protons and other hadrons. Hadrons are like beanbags, full of quarks and gluons, that spray around everywhere upon collision.
Next-generation colliders have two ways to improve on the LHC: They can go to higher energies or higher precision. Higher energies provide more data by producing more particles—potentially new, heavy ones. Higher-precision collisions give physicists cleaner data with a better signal-to-noise ratio because the particle crash produces less debris. Either approach could reveal new physics beyond the Standard Model.
Three of the new colliders would improve on the LHC’s precision by colliding electrons and their antimatter counterparts, positrons, instead of hadrons. These particles are more like individual marbles—much lighter, and not made up of any smaller constituents. Compared with the collisions between messy, beanbag-like hadrons, a collision between electrons and positrons is much cleaner. After taking data for years, some of those colliders could be converted to smash protons as well, though at energies about eight times as high as those of the LHC.
These new colliders range from technically mature to speculative. One such speculative option is to smash muons, electrons’ heavier cousins, which have never been collided before. In 2023, an influential panel of particle physicists recommended that the United States pursue development of such a machine, in a so-called “muon shot.” If it is built, a muon collider would likely be based at Fermilab, the center of particle physics in the United States.
A muon collider “can bring us outside of the world that we know,” says Daniele Calzolari, a physicist working on muon collider design at CERN, the European Organization for Nuclear Research. “We don’t know exactly how everything will look like, but we believe we can make it work.”
While muon colliders have remained conceptual for more than 50 years, their potential has long excited and intrigued physicists. Muons are heavy compared with electrons, almost as heavy as protons, but they lack the mess of quarks and gluons, so collisions between muons could be both high energy and high precision.
Superconducting radio-frequency cavities are used in particle colliders to apply electric fields to charged particles, speeding them up toward each other until they smash together. Newer methods of making these cavities are seamless, providing more-precise steering and, presumably, better collisions. Reidar Hahn/Fermi
The trouble is that muons decay rapidly—in a mere 2.2 microseconds while at rest—so they have to be cooled, accelerated, and collided before they expire. Preliminary studies suggest a muon collider is possible, but key technologies, like powerful high-field solenoid magnets used for cooling, still need to be developed. In March 2025, Calzolari and his colleagues submitted an internal proposal for a preliminary demonstration of the cooling technology, which they hope will happen before the end of the decade.
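As a rough, back-of-the-envelope illustration of how much breathing room relativity buys, the sketch below takes the 2.2-microsecond lifetime quoted above, a standard value for the muon mass, and a 1,500 GeV beam energy (half of the 3,000 GeV collision energy listed in the table) and computes the lifetime in the lab frame:

```python
# Back-of-the-envelope check of how much time dilation buys a muon collider.
# The 2.2-microsecond rest-frame lifetime comes from the article; the 105.7 MeV
# muon mass is a standard value; the 1,500 GeV beam energy assumes the 3,000 GeV
# machine from the table above (each beam carries half the collision energy).

MUON_MASS_GEV = 0.1057          # muon rest mass, in GeV
REST_LIFETIME_S = 2.2e-6        # muon lifetime at rest, in seconds
BEAM_ENERGY_GEV = 1500.0        # half of a 3,000 GeV collision

gamma = BEAM_ENERGY_GEV / MUON_MASS_GEV      # Lorentz factor, E / (m c^2)
lab_lifetime_s = gamma * REST_LIFETIME_S     # dilated lifetime in the lab frame

print(f"Lorentz factor: {gamma:,.0f}")
print(f"Lab-frame lifetime: {lab_lifetime_s * 1e3:.1f} ms")
# Roughly 14,000-fold dilation stretches 2.2 microseconds to about 31 milliseconds,
# enough for many turns around a ring, but still a race against the clock.
```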
The accelerator that could theoretically come online the soonest would be the International Linear Collider (ILC) in Iwate, Japan. The ILC would send electrons and positrons down straight tunnels where the particles would collide to produce Higgs bosons that are easier to detect than at the LHC. The collider’s design is technically mature, so if the Japanese government officially approved the project, construction could begin almost immediately. But after multiple delays by the government, the ILC remains in a sort of planning purgatory, looking more and more unlikely.
The Standard Model of particle physics is the current best theory of all the understood matter and forces in our universe (except gravity). The model works extremely well, but scientists also know that it is incomplete. The next generation of supercolliders might give a glimpse at what’s beyond the Standard Model.
So the two colliders with perhaps the clearest path to construction, both technically mature, are China’s Circular Electron Positron Collider (CEPC) and CERN’s Future Circular Collider (FCC-ee).
CERN’s FCC-ee would be a 91-km ring, designed to initially collide electrons and positrons to study the parameters of particles like the Higgs in fine detail (the “ee” indicates collisions between electrons and positrons). Compared with the LHC’s collisions of protons or heavy ions, those between electrons and positrons “are much cleaner, so you can have a more precise measurement,” says Michael Benedikt, the head of the FCC-ee effort. After about a decade of operation—enough time to gather data and develop the needed magnets—it would be upgraded to collide protons and search for new physics at much higher energies (and then become known as the FCC-hh, for hadrons). The FCC-ee’s feasibility study recently concluded, and CERN’s member states are now deciding whether to pursue the project.
China’s CEPC would similarly be a 100-km ring designed to collide electrons and positrons for the first 18 years or so. And much like the FCC, a proton or other hadron upgrade is in the works after that. Later this year, Chinese researchers plan to submit the CEPC for official approval by the Chinese government as part of the next five-year plan. As the two colliders (and their proton upgrades) are considered for construction in the next few years, policymakers will be thinking about more than just their potential for discovery.
CEPC and FCC-ee are, in this sense, less abstract physics experiments and more engineering projects with concrete design challenges.
Laying the groundwork
When particles zip around the curve of a collider, they lose energy—much like a car braking on a racetrack. The effect is particularly pronounced for lightweight particles like electrons and positrons. To reduce this energy loss from sharp turns, CEPC and FCC-ee are both planned to have enormous tunnels, which, if built, would be among the longest in the world. The construction cost of such an enormous tunnel would be several billion U.S. dollars, roughly one-third of the total collider price.
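For readers who want a feel for the numbers, the sketch below uses the standard formula for the energy an electron radiates per turn; the 120 GeV beam energy corresponds to the 240 GeV collisions discussed here, while the bending radii are illustrative assumptions rather than design values:

```python
# A rough look at why light particles need such a big ring. For electrons, the
# standard synchrotron-radiation formula gives the energy lost per turn as
#   U0 [GeV] ~= 8.85e-5 * E^4 [GeV^4] / rho [m],
# where rho is the bending radius. The 120 GeV beam energy corresponds to the
# 240 GeV collisions described in the article; the bending radii below are
# illustrative assumptions (the real lattice includes straight sections, so the
# exact numbers differ).

C_GAMMA = 8.85e-5   # radiation constant for electrons, in m / GeV^3

def loss_per_turn_gev(beam_energy_gev: float, bending_radius_m: float) -> float:
    """Energy radiated per turn by an electron or positron, in GeV."""
    return C_GAMMA * beam_energy_gev**4 / bending_radius_m

for radius_m in (3_000.0, 10_000.0):   # roughly LEP-sized vs. FCC-ee-sized bends
    print(f"rho = {radius_m / 1000:.0f} km -> "
          f"{loss_per_turn_gev(120.0, radius_m):.2f} GeV lost per turn")
# The fourth-power dependence on energy, and the inverse dependence on radius,
# is why a 240 GeV electron-positron machine wants the biggest ring it can get.
```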
Finding a place to bury a 91-km ring is not easy, especially in Switzerland. The proposed path of the FCC-ee has an average depth of 200 meters, with a dip to 500 meters under Lake Geneva, fit snugly between the Jura Mountains to the northwest and the Prealps to the east. The land there was once covered by a sea, which left behind sedimentary rock—a mixture of sandstone and shale known as molasse. “We’ve done so much tunneling at CERN before. We were quite confident about the molasse rock,” says Liam Bromiley, a civil engineer at CERN.
But the FCC-ee’s path also takes it through deposits of limestone, which is permeable and can hold karsts, or cavities, full of water. “If you hit one of those, you could end up flooding the tunnel,” Bromiley says. During the next two years, if the project is green-lit, engineers will drill boreholes into the limestone to determine whether there are karsts that can be avoided.
FCC-ee would be a 91-km ring spanning underneath Switzerland and France, near the current Large Hadron Collider. One of the proposed locations for the CEPC is near the northern port city of Qinhuangdao, where the 100-km-circumference collider would be buried underground. Chris Philpot
CEPC, in contrast, has a much looser spatial constraint, and can choose from nearly anywhere in China. Three main sites are being considered: Qinhuangdao (a northern port city), Changsha (a metropolis in central China), and Huzhou (a city near Shanghai). According to Jie Gao, a particle physicist at the Institute of High Energy Physics, in Beijing, the ideal location will have hard rock, like granite, and low seismic activity. Additionally, Gao says, they want a site with good infrastructure to create a “science city” ideal for an international community of physicists.
The colliders’ carbon footprints are also on the minds of physicists. One potential energy-saving measure: redirecting excess heat from operations. “In the past we used to throw it into the atmosphere,” Benedikt says. In recent years, heated water from one of the LHC’s cooling stations has kept part of the commune of Ferney-Voltaire warm during the winters, and Benedikt says the FCC-ee would expand these environmental efforts.
Getting up to speed
If the civil-engineering challenges are met, physicists will rely on a suite of technologies to accelerate, focus, and collide electrons and positrons at CEPC and FCC-ee more precisely and efficiently than they could at the LHC.
When both types of particles are first produced from their sources, they start off at a comparatively low energy, around 4 giga-electron volts. To get them up to speed, electrons and positrons are sent through superconducting radio-frequency (SRF) cavities—gleaming metal bubbles strung together like beads of a necklace, which apply an electric field that pushes the charged particles forward.
Both China’s Circular Electron Positron Collider (CEPC) [bottom] and CERN’s Future Circular Collider (FCC-ee) [top] have preliminary designs of the insides of their tunnels, including the collider itself, associated vacuum and control equipment, and detectors. Chris Philpot
In the past, SRF cavities were welded together, which inherently left imperfections that led to beam instabilities. “You can never obtain a perfect surface along this weld,” Benedikt says. FCC-ee researchers have explored several techniques to create cavities without seams, including hydroforming, which is widely used for the components of high-end sports cars. A metal tube is placed in a pressurized cell and compressed against a die by liquid. The resulting cavity has no seams and is smooth as blown glass.
To improve efficiency, engineers are focusing on the klystrons, the machines that power the SRF cavities. Klystrons have historically had efficiencies that peak around 65 percent, but design advances, such as the machines’ ability to bunch electrons together, are on track to reach efficiencies of 80 percent. “The efficiency of the klystron is becoming very important,” Gao says. Over 10 years of operation, these savings could amount to 1 terawatt-hour—about enough electricity to power all of China for an hour.
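As a rough plausibility check on that terawatt-hour figure, the sketch below assumes about 100 megawatts of RF power and roughly 5,000 operating hours per year, neither of which is quoted in the article, and compares the grid power needed at 65 percent and 80 percent klystron efficiency:

```python
# A rough sanity check on the "1 terawatt-hour over 10 years" figure. The 65 and
# 80 percent efficiencies come from the article; the ~100 MW of RF power and
# ~5,000 operating hours per year are illustrative assumptions, not quoted specs.

RF_POWER_MW = 100.0        # assumed RF power delivered to the beams
HOURS_PER_YEAR = 5_000.0   # assumed hours of physics operation per year
YEARS = 10

def wall_plug_mw(rf_power_mw: float, efficiency: float) -> float:
    """Grid power needed to produce a given RF power at a given klystron efficiency."""
    return rf_power_mw / efficiency

saving_mw = wall_plug_mw(RF_POWER_MW, 0.65) - wall_plug_mw(RF_POWER_MW, 0.80)
saving_twh = saving_mw * HOURS_PER_YEAR * YEARS / 1e6   # MW * h -> TWh

print(f"Power saved: {saving_mw:.0f} MW")
print(f"Energy saved over {YEARS} years: {saving_twh:.1f} TWh")
# With these assumptions the savings land within a factor of about 1.5 of the
# terawatt-hour figure quoted in the article.
```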
Another efficiency boost comes from focusing on the tunnel design. As electrons and positrons follow the curve of the ring, they will lose a considerable amount of energy, so SRF cavities will be placed around the ring to boost particle energies. The lost energy will be emitted as potent synchrotron radiation—about 10,000 times as much radiation as is emitted by protons circling the LHC today. “You do not want to send the synchrotron radiation into the detectors,” Benedikt says. To avoid this fate, neither FCC-ee nor CEPC will be perfectly circular. Shaped a bit like a racetrack, both colliders will have about 1.5-km-long straight sections before an interaction point. Other options are also on the table—in the past, researchers have even used repurposed steel from scrapped World War II battleships to shield particle detectors from radiation.
Both CEPC and FCC-ee will be massive data-generating machines. Unlike the LHC, which is regularly stopped to insert new particles, the next-generation colliders will be fed with a continuous stream of particles, allowing them to stay in “collision mode” and take more data.
At a collider, data is a function of “luminosity,” a measure of how many collisions are packed into each square centimeter every second. The more particle collisions, the “brighter” the collider. Firing particles at each other is a little like trying to get two bullets to collide—they often miss each other, which limits the luminosity. But physicists have a variety of strategies to squeeze more electrons and positrons into smaller areas to achieve more of these unlikely collisions. Compared with the Large Electron-Positron (LEP) collider of the 1990s, the new machines will produce 100,000 times as many Z bosons—carriers of the weak force, which is responsible for radioactive decay. More Z bosons means more data. “The FCC-ee can produce all the data that were accumulated in operation over 10 years of LEP within minutes,” Benedikt says.
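The arithmetic behind that statement is simple: the expected event rate is the luminosity multiplied by the cross-section of the process you care about. In the sketch below, the luminosity comes from the FCC-ee entry in the table above, while the roughly 0.2-picobarn Higgs-production cross-section at 240 GeV is an approximate value supplied for illustration, not a figure from this article:

```python
# How luminosity turns into data: the expected event rate is luminosity times
# cross-section (rate = L * sigma). The 8.5e34 cm^-2 s^-1 comes from the FCC-ee
# entry above; the ~0.2 picobarn Higgs-production (e+e- -> ZH) cross-section at
# 240 GeV is an approximate illustrative value, not a figure from the article.

PB_TO_CM2 = 1e-36                 # 1 picobarn in cm^2

luminosity = 8.5e34               # collisions per cm^2 per second
sigma_zh_pb = 0.2                 # assumed Higgs-production cross-section, in pb

rate_per_s = luminosity * sigma_zh_pb * PB_TO_CM2
print(f"Higgs events per second: {rate_per_s:.3f}")
print(f"Higgs events per day:    {rate_per_s * 86_400:.0f}")
# On the order of a thousand Higgs bosons a day, which is why machines like
# this are often called Higgs factories.
```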
Back to protons
While both the FCC-ee and CEPC would start with electrons and positrons, they are designed to eventually collide protons. These upgrades are called FCC-hh and Super proton-proton Collider (SPPC). Using protons, FCC-hh and SPPC would reach a collision energy of 100,000 GeV, roughly an order of magnitude higher than the LHC’s 13,600 GeV. Though the collisions would be messy, their high energy would allow physicists to “explore fully new territory,” Benedikt says. While there’s no guarantee, physicists hope that territory teems with discoveries-in-waiting, such as dark-matter particles, or strange new collisions where the Higgs recursively interacts with itself many times.
One pro of protons is that they are over 1,800 times as heavy as electrons, so they emit far less radiation as they follow the curve of the collider ring. But this extra heft comes with a substantial cost: Bending protons’ paths requires even stronger superconducting magnets.
Magnet development has been the downfall of colliders before. In the early 1980s, a planned collider named Isabelle was scrapped because magnet technology was not far enough along. The LHC’s magnets are made from a strong alloy of niobium-titanium, wound together into a coil that produces magnetic fields when subjected to a current. These coils can produce field strengths over 8 teslas. The strength of the magnet pushes its two halves apart with a force of nearly 600 tons per meter. “If you have an abrupt movement of the turns in the coil by as little as 10 micrometers,” the entire magnet can fail, says Bernhard Auchmann, an expert on magnets at CERN.
Future magnets for FCC-hh and SPPC will need to have at least twice the magnetic field strength, about 16 to 20 T, pushing the limits of materials and physics. Auchmann points to three possible paths forward. The most straightforward option might be “niobium three tin” (Nb3Sn). Substituting tin for titanium allows the metal to host magnetic fields up to 16 T but makes it quite brittle, so you can’t “clamp the hell out of it,” Auchmann says. One possible solution involves placing Nb3Sn into a protective steel endoskeleton that prevents it from crushing itself.
Then there are high-temperature superconductors. Some copper oxide–based magnets can exceed 20 T, but they are either too fragile or don’t produce magnetic fields that are constant enough. Currently, these materials are expensive, but demand from fusion startups, which also require these types of magnets, may push the price down, Auchmann says.
Finally, there is a class of iron-based high-temperature superconductors that is being championed by physicists in China, thanks to the low price of iron and manufacturing-process improvements. “It’s cheap,” Gao says. “This technology is very promising.” Over the next decade or so, physicists will work on each of these materials, and hope to settle on one direction for next-generation magnets.
Time and money
While FCC-ee and CEPC (as well as their proton upgrades) share many of the same technical specifications, they differ dramatically in two critical factors: timelines and politics.
Construction for CEPC could begin in two years; the FCC-ee would need to wait about another decade. The difference comes down largely to the fact that CERN has a planned upgrade to the LHC—enabling it to collect 10 times as much data—which will consume resources until nearly 2040. China, by contrast, is investing heavily in basic research and has the funds immediately at hand.
The abstruse physics that happens at colliders is never as far from political realities on Earth as it seems. Japan’s ILC is in limbo because of budget issues. The muon collider is subject to the whims of the highly divided 119th U.S. Congress. Last year, a representative for Germany criticized the FCC-ee for being unaffordable, and CERN continues to struggle with the politics of including Russian scientists. Tensions between China and the United States are similarly on the rise following the Trump administration’s tariffs.
How physicists plan to tackle these practical problems remains to be seen. But it is unlikely that any collider—whether based in China, at CERN, the United States, or Japan—will be able to go it alone. In addition to the tens of billions of dollars for construction and operation of the new facility, the physics expertise needed to run it and perform complex experiments at scale must be global. “By definition, it’s an international project,” Gao says. “The door is wide open.”
This article was updated on 7 April 2025.
-
Complex Haptics Deliver a Pinch, a Stretch, or a Tap
by Gwendolyn Rak on 02. April 2025. at 13:00
Most haptic interfaces today are limited to simple vibrations. While visual displays and audio systems have continued to progress, those using our sense of touch have largely stagnated. Now, researchers have developed a haptics system that creates more complex tactile feedback. Beyond just buzzing, the device simulates sensations like pinching, stretching, and tapping for a more realistic experience.
“The sensation of touch is the most personal connection that you can have with another individual,” says John Rogers, a professor at Northwestern University in Evanston, Ill., who led the project. “It’s really important, but it’s much more difficult than audio or video.”
Co-led by Rogers and Yonggang Huang, also a professor at Northwestern, the work is largely geared toward medical applications. But the technology could serve a wide range of uses, including virtual or augmented reality and letting shoppers feel the texture of clothing fabric or other items while shopping online. The research was published in the journal Science on 27 March.
A Nuanced Sense of Touch
Today’s haptic interfaces mostly rely on vibrating actuators, which are fairly simple to construct. “It’s a great place to start,” says Rogers. But going beyond vibration could help add the vibrancy of real-world interactions to the technology, he adds.
These types of interactions require more-sophisticated mechanical forces, which include a combination of both normal forces directed perpendicular to the skin’s surface and shear forces directed parallel to the skin. Whether through vibration or applying pressure, forces directed vertically into the skin have been the main focus of haptic designs, according to Rogers. But these don’t fully engage the many receptors embedded in our skin.
The researchers aimed to build an actuator that offers full freedom of motion, which they achieved with “very old physics,” Rogers says—namely, electromagnetism. The basic design of the device consists of three nested copper coils and a small magnet. Running current through the coils generates a magnetic field that then moves the magnet, which delivers force to the skin.
“What we’ve put together is an engineering embodiment [of the physics] that provides a very compact force delivery system and offers full programmability in direction, amplitude, and temporal characteristics,” says Rogers. For a more elaborate setup, the researchers also developed a version that uses a collection of four magnets with different orientations of north and south poles. This creates even more complex sensations of pinching, stretching, and twisting.
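To make the idea concrete, here is a toy model, not the researchers’ published one, that treats each coil as pushing the magnet along its own axis with a force proportional to its current, so that a small matrix maps three coil currents to a force vector; all coupling values are invented for illustration:

```python
# A toy model of the three-coil actuator: assume, for illustration, that each
# coil pushes the magnet along its own axis with a force proportional to its
# current, so a 3x3 coupling matrix maps coil currents to a force vector. This
# linearization is a simplification for the sketch, not the researchers'
# published model, and the coupling values are made up.

import numpy as np

# Assumed newtons of force per ampere for each (coil, axis) pair.
COUPLING = np.diag([0.08, 0.08, 0.12])   # x and y coils weaker than the z (normal) coil

def coil_currents_for(force_target: np.ndarray) -> np.ndarray:
    """Solve for the coil currents that would produce a desired force vector."""
    return np.linalg.solve(COUPLING, force_target)

# A shear "stretch" cue: mostly sideways force, plus a little pressure into the skin.
target_force_n = np.array([0.05, 0.0, 0.02])
currents_a = coil_currents_for(target_force_n)
print("Coil currents (A):", np.round(currents_a, 3))
# Modulating these currents over time (for example, ramping the x term back and
# forth) is what gives the programmability in direction, amplitude, and timing
# described above, at least in this simplified picture.
```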
Haptics at Your Fingertips—or Anywhere
Because fingertips are highly sensitive, only small forces are needed for this application. John A. Rogers/Northwestern University
Although much of the previous work in haptics has focused on fingertips and the hands, these devices could be placed elsewhere on the body, including the back, chest, or arms. However, these applications may have different requirements. Compared with places like the back, the fingertips are highly sensitive—both in terms of the force needed and the spatial density of receptors.
“The fingertips are probably the most challenging in terms of density, but they’re easiest in terms of the forces that you need to deliver,” says Rogers. In other use cases, delivering enough power may be a challenge, he acknowledges.
The achievable force may also be limited by the size of the coils, says Gregory Gerling, a systems engineering professor at the University of Virginia and former chair of the IEEE Technical Committee on Haptics. The coil size dictates how much force you can generate, and at a certain point, the device won’t be wearable. However, he believes the forces are sufficient for VR applications.
Gerling, an IEEE senior member, finds the use of magnetism in multiple directions interesting. Compared with other approaches that are based on hydraulics or air pressure, this system doesn’t require pumping fluids or gases. “You can be kind of untethered,” Gerling says. “Overall, it’s a very interesting, novel device, and maybe it takes the field in a slightly new direction.”
Applications in VR, Neuropathy, and More
The clearest application of the device is probably in virtual or augmented reality, says Rogers. These environments now have highly sophisticated audio and video inputs, “but the tactile component of that experience is still a work in progress,” he says.
Their lab, however, is primarily focused on medical applications, including sensory substitution for patients who have lost sensation in a part of the body. A complex haptics interface could reproduce the sensation in another part of the body.
For example, nerve damage in people with diabetic neuropathy makes it difficult for them to walk without looking at their feet. The lab is experimenting with placing an array of pressure sensors into the base of these patients’ shoes, then reproducing the pattern of pressure using a haptic array mounted on their upper thighs, where they still have sensation. The researchers are working with a rehabilitation facility in Chicago to test the approach, mainly with this population.
Continuing to develop these medical applications will be a focus moving forward, says Rogers. In terms of engineering, he would like to further miniaturize the actuators to make dense arrays possible in regions of the body like the fingertips.
Feeling the Music
Additionally, the researchers explored the possibility of using the device to increase engagement in musical performances. Apart from perhaps feeling vibrations of the bass line, performances usually rely on sight and sound. Adding a tactile element could make for a more immersive experience, or help people with hearing impairment engage with the music.
With the current tech, basic vibrating actuators can change the frequency of vibration to match the notes being played. While this can convey a simple melody, it lacks the richness of different instruments and musical components.
The researchers’ full-freedom-of-motion actuator can convey a more vibrant sound. Voice, guitar, and drums, for instance, can each be converted into a delivery mechanism for a particular force. As with vibration alone, the frequency of each force can be modulated to match the music. The experiment was exploratory, Rogers says, but it exploits the advanced capabilities of the system.
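A minimal sketch of that mapping might look like the code below, which assigns each instrument an assumed force direction and carrier frequency and lets a made-up loudness envelope modulate the amplitude; a real system would derive the envelopes from separated audio stems:

```python
# A toy version of the music-to-touch mapping described above: each instrument
# stem gets its own force direction and carrier frequency, and the stem's loudness
# modulates the amplitude. The envelopes here are synthetic; a real system would
# derive them from separated audio tracks.

import numpy as np

SAMPLE_RATE = 1_000                       # haptic update rate, in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / SAMPLE_RATE)  # two seconds of signal

# Each stem: (force direction, carrier frequency in Hz, loudness envelope in [0, 1]).
stems = {
    "drums":  (np.array([0.0, 0.0, 1.0]), 60.0,
               (np.sin(2 * np.pi * 2 * t) > 0.9).astype(float)),   # short pulses
    "guitar": (np.array([1.0, 0.0, 0.0]), 180.0,
               0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)),           # slow swell
    "voice":  (np.array([0.0, 1.0, 0.0]), 120.0,
               np.clip(np.sin(2 * np.pi * 0.25 * t), 0, 1)),       # phrase-like rise
}

force = np.zeros((len(t), 3))
for direction, carrier_hz, envelope in stems.values():
    force += np.outer(envelope * np.sin(2 * np.pi * carrier_hz * t), direction)

print("Force waveform shape:", force.shape)            # (samples, xyz)
print("Peak force components:", np.round(force.max(axis=0), 2))
```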
-
How Dairy Robots Are Changing Work for Cows (and Farmers)
by Evan Ackerman on 01. April 2025. at 20:00
“Mooooo.”
This dairy barn is full of cows, as you might expect. Cows are being milked, cows are being fed, cows are being cleaned up after, and a few very happy cows are even getting vigorously scratched behind the ears. “I wonder where the farmer is,” remarks my guide, Jan Jacobs. Jacobs doesn’t seem especially worried, though—the several hundred cows in this barn are being well cared for by a small fleet of fully autonomous robots, and the farmer might not be back for hours. The robots will let him know if anything goes wrong.
At one of the milking robots, several cows are lined up, nose to tail, politely waiting their turn. The cows can get milked by robot whenever they like, which typically means more frequently than the twice a day at a traditional dairy farm. Not only is more frequent milking more comfortable for the cows, but they also produce about 10 percent more milk when the milking schedule is completely up to them.
“There’s a direct correlation between stress and milk production,” Jacobs says. “Which is nice, because robots make cows happier and therefore, they give more milk, which helps us sell more robots.”
Jan Jacobs is the human-robot interaction design lead for Lely, a maker of agricultural machinery. Founded in 1948 in Maassluis, Netherlands, Lely deployed its first Astronaut milking robot in the early 1990s. The company has since developed other robotic systems that assist with cleaning, feeding, and cow comfort, and the Astronaut milking robot is on its fifth generation. Lely is now focused entirely on robots for dairy farms, with around 135,000 of them deployed around the world.
Essential Jobs on Dairy Farms
The weather outside the barn is miserable. It’s late fall in the Netherlands, and a cold rain is gusting in from the sea, which is probably why the cows have quite sensibly decided to stay indoors and why the farmer is still nowhere to be found. Lely requires that dairy farmers who adopt its robots commit to letting their cows move freely between milking, feeding, and resting, as well as inside and outside the barn, at their own pace. “We believe that free cow traffic is a core part of the future of farming,” Jacobs says as we watch one cow stroll away from the milking robot while another takes its place. This is possible only when the farm operates on the cows’ schedule rather than a human’s.
A conventional dairy farm relies heavily on human labor. Lely estimates that repetitive daily tasks represent about a third of the average workday of a dairy farmer. In the morning, the cows are milked for the first time. Most dairy cows must be milked at least twice a day or they’ll become uncomfortable, and so the herd will line up on their own. Traditional milking parlors are designed to maximize human milking efficiency. A milking carousel, for instance, slowly rotates cows as they’re milked so that the dairy worker doesn’t have to move between stalls.
“We were spending 6 hours a day milking,” explains dairy farmer Josie Rozum, whose 120-cow herd at Takes Dairy Farm uses a pair of Astronaut A5 milking robots. “Now that the robots are handling all of that, we can focus more on animal care and comfort.” Lely
An experienced human using well-optimized equipment can attach a milking machine to a cow in just 20 to 30 seconds. The actual milking takes only a few minutes, but with the average small dairy farm in North America providing a home for several hundred cows, milking typically represents a time commitment of 4 to 6 hours per day.
There are other jobs that must be done every day at a dairy. Cows are happier with continuous access to food, which means feeding them several times a day. The feed is a mix of roughage (hay), silage (grass), and grain. The cows will eat all of this, but they prefer the grain, and so it’s common to see cows sorting their food by grabbing a mouthful and throwing it up into the air. The lighter roughage and silage flies farther than the grain does, leaving the cow with a pile of the tastier stuff as the rest gets tossed out of reach. This makes “feed pushing” necessary to shove the rest of the feed back within reach of the cow.
And of course there’s manure. A dairy cow produces an average of 68 kilograms of manure a day. All that manure has to be collected and the barn floors regularly cleaned.
Dairy Industry 4.0
The amount of labor needed to operate a dairy meant that until the early 1900s, most family farms could support only about eight cows. The introduction of the first milking machines, called bucket milkers, helped farmers milk 10 cows per hour instead of 4 by the mid-1920s. Rural electrification furthered dairy automation starting in the 1950s, and since then, both farm size and milk production have increased steadily. In the 1930s, a good dairy cow produced 3,600 kilograms of milk per year. Today, it’s almost 11,000 kilograms, and Lely believes that robots are what will enable small dairy farms to continue to scale sustainably.
But dairy robots are expensive. A milking robot can cost several hundred thousand dollars, plus an additional US $5,000 to $10,000 per year in operating costs. The Astronaut A5, Lely’s latest milking robot, uses a laser-guided robot arm to clean the cow’s udder before attaching teat cups one at a time. While the cow munches on treats, the Astronaut monitors her milk output, collecting data on 32 parameters, including indicators of the quality of the milk and the health of the cow. When milking is complete, the robot cleans the udder again, and the cow is free to leave as the robot steam cleans itself in preparation for the next cow.
Lely argues that although the initial cost is higher than that of a traditional milking parlor, the robots pay for themselves over time through higher milk production (due primarily to increased milking frequency) and lower labor costs. Lely’s other robots can also save on labor. The Vector mobile robot handles continuous feeding and feed pushing, and the Discovery Collector is a robotic manure vacuum that keeps the floors clean.
At Takes Dairy Farm, Rozum and her family used to spend several hours per day managing food for the cows. “The feeding robot is another amazing piece of the puzzle for our farm that allows us to focus on other things.” Takes Family Farm
For most dairy farmers, though, making more money is not the main reason to get a robot, explains Marcia Endres, a professor in the department of animal science at the University of Minnesota. Endres specializes in dairy-cattle management, behavior, and welfare, and studies dairy robot adoption. “When we first started doing research on this about 12 years ago, most of the farms that were installing robots were smaller farms that did not want to hire employees,” Endres says. “They wanted to do the work just with family labor, but they also wanted to have more flexibility with their time. They wanted a better lifestyle.”
Flexibility was key for the Takes family, who added Lely robots to their dairy farm in Ely, Iowa, four years ago. “When we had our old milking parlor, everything that we did as a family was always scheduled around milking,” says Josie Rozum, who manages the farm and a creamery along with her parents—Dan and Debbie Takes—and three brothers. “With the robots, we can prioritize our personal life a little bit more—we can spend time together on Christmas morning and know that the cows are still getting milked.”
Takes Family Dairy Farm’s 120-cow herd is milked by a pair of Astronaut A5 robots, with a Vector and three Discovery Collectors for feeding and cleaning. “They’ve become a crucial part of the team,” explains Rozum. “It would be challenging for us to find outside help, and the robots keep things running smoothly.” The robots also add sustainability to small dairy farms, and not just in the short term. “Growing up on the farm, we experienced the hard work, and we saw what that commitment did to our parents,” Rozum explains. “It’s a very tough lifestyle. Having the robots take over a little bit of that has made dairy farming more appealing to our generation.”
Of the 25,000 dairy farms in the United States, Endres estimates about 10 percent have robots. This is about a third of the adoption rate in Europe, where farms tend to be smaller, so the cost of implementing the robots is lower. Endres says that over the last five years, she’s seen a shift toward robot adoption at larger farms with over 500 cows, due primarily to labor shortages. “These larger dairies are having difficulty finding employees who want to milk cows—it’s a very tedious job. And the robot is always consistent. The farmers tell me, ‘My robot never calls in sick, and never shows up drunk.’ ”
Endres is skeptical of Lely’s claim that its robots are responsible for increased milk production. “There is no research that proves that cows will be more productive just because of robots,” she says. It may be true that farms that add robots do see increased milk production, she adds, but it’s difficult to measure the direct effect that the robots have. “I have many dairies that I work with where they have both a robotic milking system and a conventional milking system, and if they are managing their cows well, there isn’t a lot of difference in milk production.”
The Lely Luna cow brush helps to keep cows’ skin healthy. It’s also relaxing and enjoyable, so cows will brush themselves several times a day. Lely
The robots do seem to improve the cows’ lives, however. “Welfare is not just productivity and health—it’s also the affective state, the ability to have a more natural life,” Endres says. “Again, it’s hard to measure, but I think that on most of these robot farms, their affective state is improved.” The cows’ relationship with humans changes too, comments Endres. When the cows no longer associate humans with being told where to go and what to do all the time, they’re much more relaxed and friendly toward people they meet. Rozum agrees. “We’ve noticed a tremendous change in our cows’ demeanor. They’re more calm and relaxed, just doing their thing in the barn. They’re much more comfortable when they can choose what to do.”
Cows Versus Robots
Cows are curious and clever animals, and have the same instinct that humans have when confronted with a new robot: They want to play with it. Because of this, Lely has had to cow-proof its robots, modifying their design and programming so that the machines can function autonomously around cows. Like many mobile robots, Lely’s dairy robots include contact-sensing bumpers that will pause the robot’s motion if it runs into something. On the Vector feeding robot, Lely product engineer René Beltman tells me, they had to add a software option to disable the bumper. “The cows learned that, ‘oh, if I just push the bumper, then the robot will stop and put down more feed in my area for me to eat.’ It was a free buffet. So you don’t want the cows to end up controlling the robot.” Emergency stop buttons had to be relocated so that they couldn’t be pressed by questing cow tongues.
There’s also a social component to cow-robot interaction. Within their herd, cows have a well-established hierarchy, and the robots need to work within this hierarchy to do their jobs. For example, a cow won’t move out of the way if it thinks that another cow is lower in the hierarchy than it is, and it will treat a robot the same way. The engineers had to figure out how the Discovery Collector could drive back and forth to vacuum up manure without getting blocked by cows. “In our early tests, we’d use sensors to have the robot stop to avoid running into any of the cows,” explains Jacobs. “But that meant that the robot became the weakest one in the hierarchy, and it would just end up crying in the corner because the cows wouldn’t move for it. So now, it doesn’t stop.”
One of the dirtiest jobs on a dairy farm is handled by the Discovery Collector, an autonomous manure vacuum. The robot relies on wheel odometry and ultrasonic sensors for navigation because it’s usually covered in manure. Evan Ackerman
“We make the robot drive slower for the first week, when it’s being introduced to a new herd,” adds Beltman. “That gives the cows time to figure out that the robot is at the top of the hierarchy.”
Besides maintaining their dominance at the top of the herd, the current generation of Lely robots doesn’t interact much with the cows, but that’s changing, Jacobs tells me. Right now, when a robot is driving through the barn, it makes a beeping sound to let the cows know it’s coming. Lely is looking into how to make these sounds more enjoyable for the cows. “This was a recent revelation for me,” Jacobs says. “We’re not just designing interactions for humans. The cows are our users, too.”
Human-Robot Interaction
Last year, Jacobs and researchers from Delft University of Technology, in the Netherlands, presented a paper at the IEEE Human-Robot Interaction (HRI) Conference exploring this concept of robot behavior development on working dairy farms. The researchers visited robotic dairies, interviewed dairy farmers, and held workshops within Lely to establish a robot code of conduct—a guide that Lely’s designers and engineers use when considering how their robots should look, sound, and act, for the benefit of both humans and cows. On the engineering side, this includes practical things like colors and patterns for lights and different types of sounds so that information is communicated consistently across platforms.
But there’s much more nuance to making a robot seem “reliable” or “friendly” to the end user, since such things are not only difficult to define but also difficult to implement in a way that’s appropriate for dairy farmers, who prioritize functionality.
Jacobs doesn’t want his robots to try to be anyone’s friend—not the cow’s, and not the farmer’s. “The robot is an employee, and it should have a professional relationship,” he says. “So the robot might say ‘Hi,’ but it wouldn’t say, ‘How are you feeling today?’ ” What’s more important is that the robots are trustworthy. For Jacobs, instilling trust is simple: “You cannot gain trust by doing tricks. If your robot is reliable and predictable, people will trust it.”
The electrically driven, pneumatically balanced robotic arm that the Lely Astronaut uses to milk cows is designed to withstand accidental (or intentional) kicks. Lely
The real challenge, Jacobs explains, is that Lely is largely on its own when it comes to finding the best way of integrating its robots into the daily lives of people who may have never thought they’d have robot employees. “There’s not that much knowledge in the robot world about how to approach these problems,” Jacobs says. “We’re working with almost 20,000 farmers who have a bigger robot workforce than a human workforce. They’re robot managers. And I don’t know that there necessarily are other companies that have a customer base of normal people who have strategic dependence on robots for their livelihood. That is where we are now.”
From Dairy Farmers to Robot Managers
With the additional time and flexibility that the robots enable, some dairy farmers have been able to diversify. On our way back to Lely’s headquarters, we stop at Farm Het Lansingerland, owned by a Lely customer who has added a small restaurant and farm shop to his dairy. Large windows look into the barn so that restaurant patrons can watch the robots at work, caring for the cows that produce the cheese that’s on the menu. A self-guided tour takes you right up next to an Astronaut A5 milking robot, while signs on the floor warn of Vector feeding robots on the move. “This farmer couldn’t expand—this was as many cows as he’s allowed to have here,” Jacobs explains to me over cheese sandwiches. “So, he needs to have additional income streams. That’s why he started these other things. And the robots were essential for that.”
The farmer is an early adopter—someone who’s excited about the technology and actively interested in the robots themselves. But most of Lely’s tens of thousands of customers just want a reliable robotic employee, not a science project. “We help the farmer to prepare not just the environment for the robots, but also the mind,” explains Jacobs. “It’s a complete shift in their way of working.”
Besides managing the robots, the farmer must also learn to manage the massive amount of data that the robots generate about the cows. “The amount of data we get from the robots is a game changer,” says Rozum. “We can track milk production, health, and cow habits in real time. But it’s overwhelming. You could spend all day just sitting at the computer, looking at data and not get anything else done. It took us probably a year to really learn how to use it.”
The most significant advantages to farmers come from using the data for long-term optimization, says the University of Minnesota’s Endres. “In a conventional barn, the cows are treated as a group,” she says. “But the robots are collecting data about individual animals, which lets us manage them as individuals.” By combining data from a milking robot and a feeding robot, for example, farmers can close the loop, correlating when and how the cows are fed with their milk production. Lely is doing its best to simplify this type of decision making, says Jacobs. “You need to understand what the data means, and then you need to present it to the farmer in an actionable way.”
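A minimal sketch of that loop-closing, with entirely hypothetical data and column names rather than real Lely exports, would simply join the two robots’ logs per cow and per day and look at the relationship:

```python
# A minimal sketch of "closing the loop" between the feeding and milking robots.
# The data, column names, and values below are hypothetical; real robot exports
# will have different schemas. The idea is simply to join the two data streams
# per cow and per day and look for a relationship between intake and yield.

import pandas as pd

feed = pd.DataFrame({
    "cow_id": [101, 101, 102, 102],
    "date":   ["2025-03-01", "2025-03-02", "2025-03-01", "2025-03-02"],
    "feed_kg": [22.5, 24.0, 19.8, 20.5],
})
milk = pd.DataFrame({
    "cow_id": [101, 101, 102, 102],
    "date":   ["2025-03-01", "2025-03-02", "2025-03-01", "2025-03-02"],
    "milk_kg": [31.0, 33.5, 27.2, 28.0],
})

# Join the two streams per cow and per day, then look at the relationship.
daily = feed.merge(milk, on=["cow_id", "date"])
print(daily)
print(daily[["feed_kg", "milk_kg"]].corr())   # herd-wide; per-cow models come next
```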
A Robotic Dairy
All dairy farms are different, and farms that decide to give robots a try will often start with just one or two. A highly roboticized dairy barn might look something like this illustration, with a team of many different robots working together to keep the cows comfortable and happy.
A: One Astronaut A5 robot can milk up to 60 cows. After the Astronaut cleans the teats, a laser sensor guides a robotic arm to attach the teat cups. Milking takes just a few minutes.
B: In the feed kitchen, the Vector robot recharges itself while different ingredients are loaded into its hopper and mixed together. Mixtures can be customized for different groups of cows.
C: The Vector robot dispenses freshly mixed food in small batches throughout the day. A laser measures the height of leftover food to make sure that the cows are getting the right amounts.
D: The Discovery Collector is a mop and vacuum for cow manure. It navigates the barn autonomously and returns to its docking station to remove waste, refill water, and wirelessly recharge.
E: As it milks, the Astronaut is collecting a huge amount of data—32 different parameters per teat. If it detects an issue, the farmer is notified, helping to catch health problems early.
F: Automated gates control meadow access and will keep a cow inside if she’s due to be milked soon. Cows are identified using RFID collars, which also track their behavior and health.
A Sensible Future for Dairy Robots
After lunch, we stop by Lely headquarters, where bright red life-size cow statues guard the entrance and all of the conference rooms are dairy themed. We get comfortable in Butter, and I ask Jacobs and Beltman what the future holds for their dairy robots.
In the near term, Lely is focused on making its existing robots more capable. Its latest feed-pushing robot is equipped with lidar and stereo cameras, which allow it to autonomously navigate around large farms without needing to follow a metal strip bolted to the ground. A new overhead camera system will leverage AI to recognize individual cows and track their behavior, while also providing farmers with an enormous new dataset that could allow Lely’s systems to help farmers make more nuanced decisions about cow welfare. The potential of AI is what Jacobs seems most excited about, although he’s cautious as well. “With AI, we’re suddenly going to take away an entirely different level of work. So, we’re thinking about doing research into the meaningfulness of work, to make sure that the things that we do with AI are the things that farmers want us to do with AI.”
“The idea of AI is very intriguing,” comments Rozum. “I think AI could help to simplify things for farmers. It would be a tool, a resource. But we know our cows best, and a farmer’s judgment has to be there too. There’s just some component of dairy farming that you cannot take the human out of. Robots are not going to be successful on a farm unless you have good farmers.”
Lely is aware of this and knows that its robots have to find the right balance between being helpful and taking over. “We want to make sure not to take away the kinds of interactions that give dairy farmers joy in their work,” says Beltman. “Like feeding calves—every farmer likes to feed the calves.” Lely does sell an automated calf feeder that many dairy farmers buy, which illustrates the point: What’s the best way of designing robots to give humans the flexibility to do the work that they enjoy?
“This is where robotics is going,” Jacobs tells me as he gives me a lift to the train station. “As a human, you could have two other humans and six robots, and that’s your company.” Many industries, he says, look to robots with the objective of minimizing human involvement as much as possible so that the robots can generate the maximum amount of value for whoever happens to be in charge.
Dairy farms are different. Perhaps that’s because the person buying the robot is the person who most directly benefits from it. But I wonder if the concern over automation of jobs would be mitigated if more companies chose to emphasize the sustainability and joy of work equally with profit. Automation doesn’t have to be zero-sum—if implemented thoughtfully, perhaps robots can make work easier, more efficient, and more fun, too.
Jacobs certainly thinks so. “That’s my utopia,” he says. “And we’re working in the right direction.”
-
How Digital Archivists Are Saving Public Information from the Memory Hole
by Harry Goldstein on 01. April 2025. at 17:26
In the three decades since Brewster Kahle spun up the nonprofit Internet Archive’s Wayback Machine, it has scaled up to include government websites and datasets—many of which are essential to the engineering and scientific communities. U.S. government agencies like the National Science Foundation, Department of Energy, and NASA are critical sources of research data, technical specifications, and standards documentation in pretty much every area where IEEE Spectrum’s audience works—AI & computer science, biomedical devices, power and energy, semiconductors, telecommunications…the list goes on.
Access to that governmental data directly affects the reproducibility of experiments, the validation of models, and the integrity of the scholarly record.
So what happens if an entire dataset vanishes? Among other things, it can invalidate years of research built upon that foundation.
Until recently, wholesale deletion of data was rare. In the United States, presidential transitions typically involve some changes to government websites to reflect new policy priorities. And after 9/11, the George W. Bush administration removed “millions of bytes” of information from government sites for security reasons, as well as hundreds of Department of Defense documents and “tens of thousands” of Federal Energy Regulatory Commission files.
The Obama and Biden administrations likewise made changes to government websites but didn’t engage in large-scale removal of Web pages or datasets. Obama, in fact, expanded public access to government data in 2009 by launching Data.gov, whose stated mission is in part “to unleash the power of government open data to inform decisions by the public and policymakers.”
During President Donald J. Trump’s first term, researchers at the Environmental Data & Governance Initiative found that some government sites became inaccessible, and the phrase “climate change” was purged from several government Web pages.
The second term has been different. In February, a few weeks after Trump was sworn in for his second term, The New York Times reported that his administration took down more than 8,000 Web pages and databases. Many of those pages have since reappeared, but some of the restored pages and files have had changes, including the erasure of terms like “climate change” (again) and “clean energy,” Grist reports. These moves have faced multiple court challenges; on 11 February, for instance, a federal judge ordered that public access to Web pages and datasets belonging to the Centers for Disease Control and Prevention and the Food and Drug Administration be restored.
In our April issue, Spectrum Assistant Editor Gwendolyn Rak reports on efforts to preserve public access to information. In addition to the ongoing work at the Internet Archive, she describes how archivists at the Library Innovation Lab at Harvard Law School amassed a copy of the 16-terabyte archive of Data.gov, which includes more than 311,000 public datasets. That copied archive is being updated daily with new data hoovered up via automated queries to application programming interfaces (APIs).
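A minimal sketch of that kind of automated harvesting, assuming the standard CKAN package_search endpoint that catalog.data.gov exposes and making no claim to match the Harvard lab’s actual pipeline, might look like this:

```python
# A minimal sketch of harvesting dataset metadata via an API, in the spirit of
# the automated queries described above; it is not the Harvard lab's actual
# pipeline. It assumes the standard CKAN package_search endpoint that
# catalog.data.gov exposes; check the current API docs before relying on it.

import requests

API = "https://catalog.data.gov/api/3/action/package_search"
PAGE_SIZE = 100

def fetch_page(start: int) -> dict:
    """Fetch one page of dataset metadata from the catalog."""
    resp = requests.get(API, params={"rows": PAGE_SIZE, "start": start}, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

first = fetch_page(0)
print("Datasets listed in the catalog:", first["count"])
for ds in first["results"][:5]:
    print("-", ds["title"])
# A real archiving job would loop over `start`, store the raw JSON, and then
# download the distribution files each record points to.
```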
Archivists are the guardians of memory. We depend on them to help us stay in touch with our history, maintain our knowledge base, and provide context, allowing us to understand how we came to be where we are and to light the way forward. In the fields of science, engineering, and medicine, where today’s innovations stand on the shoulders of yesterday’s discoveries, these digital preservationists ensure that the circuit of human knowledge remains unbroken.
This article appears in the April 2025 print issue as “Lots of Copies Keep Stuff Safe.”
Editor’s note: This post was revised to match the print version.
-
Protecting Robots in Harsh Environments with Advanced Sealing Systems
by Hunter Cheng on 01. April 2025. at 15:00
This is a sponsored article brought to you by Freudenberg Sealing Technologies.
The increasing deployment of collaborative robots (cobots) in outdoor environments presents significant engineering challenges, requiring highly advanced sealing solutions to ensure reliability and durability. Unlike industrial robots that operate in controlled indoor environments, outdoor cobots are exposed to extreme weather conditions that can compromise their mechanical integrity. Maintenance robots used in servicing wind turbines, for example, must endure intense temperature fluctuations, high humidity, prolonged UV radiation exposure, and powerful wind loads. Similarly, agricultural robots operate in harsh conditions where they are continuously exposed to abrasive dust, chemically aggressive fertilizers and pesticides, and mechanical stresses from rough terrains.
To ensure these robotic systems maintain long-term functionality, sealing solutions must offer effective protection against environmental ingress, mechanical wear, corrosion, and chemical degradation. Outdoor robots must perform flawlessly in temperature ranges spanning from scorching heat to freezing cold while withstanding constant exposure to moisture, lubricants, solvents, and other contaminants. In addition, sealing systems must be resilient to continuous vibrations and mechanical shocks, which are inherent to robotic motion and can accelerate material fatigue over time.
Comprehensive Technical Requirements for Robotic Sealing Solutions
The development of sealing solutions for outdoor robotics demands an intricate balance of durability, flexibility, and resistance to wear. Robotic joints, particularly those in high-mobility systems, experience multidirectional movements within confined installation spaces, making the selection of appropriate sealing materials and geometries crucial. Traditional elastomeric O-rings, widely used in industrial applications, often fail under such extreme conditions. Exposure to high temperatures can cause thermal degradation, while continuous mechanical stress accelerates fatigue, leading to early seal failure. Chemical incompatibility with lubricants, fuels, and cleaning agents further contributes to material degradation, shortening operational lifespans.
Friction-related wear is another critical concern, especially in robotic joints that operate at high speeds. Excessive friction not only generates heat but can also affect movement precision. In collaborative robotics, where robots work alongside humans, such inefficiencies pose safety risks by delaying response times and reducing motion accuracy. Additionally, prolonged exposure to UV radiation can cause conventional sealing materials to become brittle and crack, further compromising their performance.
Advanced IPSR Technology: Tailored for Cobots
To address these demanding conditions, Freudenberg Sealing Technologies has developed a specialized sealing solution: Ingress Protection Seals for Robots (IPSR). Unlike conventional seals that rely on metallic springs for mechanical support, the IPSR design features an innovative Z-shaped geometry that dynamically adapts to the axial and radial movements typical in robotic joints.
Numerous seals are required in cobots, and these are exposed to high speeds and forces. Freudenberg Sealing Technologies
This unique structural design distributes mechanical loads more efficiently, significantly reducing friction and wear over time. While traditional spring-supported seals tend to degrade due to mechanical fatigue, the IPSR configuration eliminates this limitation, ensuring long-lasting performance. Additionally, the optimized contact pressure reduces frictional forces in robotic joints, thereby minimizing heat generation and extending component lifespans. This results in lower maintenance requirements, a crucial factor in applications where downtime can lead to significant operational disruptions.
Optimized Through Advanced Simulation Techniques
The development of IPSR technology relied extensively on Finite Element Analysis (FEA) simulations to optimize seal geometries, material selection, and surface textures before physical prototyping. These advanced computational techniques allowed engineers to predict and enhance seal behavior under real-world operational conditions.
FEA simulations focused on key performance factors such as frictional forces, contact pressure distribution, deformation under load, and long-term fatigue resistance. By iteratively refining the design based on simulation data, Freudenberg engineers were able to develop a sealing solution that balances minimal friction with maximum durability.
Furthermore, these simulations provided insights into how IPSR seals would perform under extreme conditions, including exposure to humidity, rapid temperature changes, and prolonged mechanical stress. This predictive approach enabled early detection of potential failure points, allowing for targeted improvements before mass production. By reducing the need for extensive physical testing, Freudenberg was able to accelerate the development cycle while ensuring high-performance reliability.
Material Innovations: Superior Resistance and Longevity
The effectiveness of a sealing solution is largely determined by its material composition. Freudenberg utilizes advanced elastomeric compounds, including Fluoroprene XP and EPDM, both selected for their exceptional chemical resistance, mechanical strength, and thermal stability.
Fluoroprene XP, in particular, offers superior resistance to aggressive chemicals, including solvents, lubricants, fuels, and industrial cleaning agents. Additionally, its resilience against ozone and UV radiation makes it an ideal choice for outdoor applications where continuous exposure to sunlight could otherwise lead to material degradation. EPDM, on the other hand, provides outstanding flexibility at low temperatures and excellent aging resistance, making it suitable for applications that require long-term durability under fluctuating environmental conditions.
To further enhance performance, Freudenberg applies specialized solid-film lubricant coatings to IPSR seals. These coatings significantly reduce friction and eliminate stick-slip effects, ensuring smooth robotic motion and precise movement control. This friction management not only improves energy efficiency but also enhances the overall responsiveness of robotic systems, an essential factor in high-precision automation.
Extensive Validation Through Real-World Testing
While advanced simulations provide critical insights into seal behavior, empirical testing remains essential for validating real-world performance. Freudenberg subjected IPSR seals to rigorous durability tests, including prolonged exposure to moisture, dust, temperature cycling, chemical immersion, and mechanical vibration.
Throughout these tests, IPSR seals consistently achieved IP65 certification, demonstrating their ability to effectively prevent environmental contaminants from compromising robotic components. Real-world deployment in maintenance robotics for wind turbines and agricultural automation further confirmed their reliability, with extensive wear analysis showing significantly extended operational lifetimes compared to traditional sealing technologies.
Safety Through Advanced Friction Management
In collaborative robotics, sealing performance plays a direct role in operational safety. Excessive friction in robotic joints can delay emergency-stop responses and reduce motion precision, posing potential hazards in human-robot interaction. By incorporating low-friction coatings and optimized sealing geometries, Freudenberg ensures that robotic systems respond rapidly and accurately, enhancing workplace safety and efficiency.
Tailored Sealing Solutions for Various Robotic Systems
Freudenberg Sealing Technologies provides customized sealing solutions across a wide range of robotic applications, ensuring optimal performance in diverse environments.
Automated Guided Vehicles (AGVs) operate in industrial settings where they are exposed to abrasive contaminants, mechanical vibrations, and chemicals. Freudenberg employs reinforced PTFE composites to enhance durability and protect internal components.
Delta robots can perform complex movements at high speed. This requires seals that meet the high dynamic and acceleration requirements. Freudenberg Sealing Technologies
Delta robots, commonly used in food processing, pharmaceuticals, and precision electronics, require FDA-compliant materials that withstand rigorous cleaning procedures such as Cleaning-In-Place (CIP) and Sterilization-In-Place (SIP). Freudenberg utilizes advanced fluoropolymers that maintain structural integrity under aggressive sanitation processes.
Seals for SCARA robots must have high chemical resistance, compressive strength, and thermal resistance to function reliably in a variety of industrial environments. Freudenberg Sealing Technologies
SCARA robots benefit from Freudenberg’s Modular Plastic Sealing Concept (MPSC), which integrates sealing, bearing support, and vibration damping within a compact, lightweight design. This innovation optimizes robot weight distribution and extends component service life.
Six-axis robots used in automotive, aerospace, and electronics manufacturing require sealing solutions capable of withstanding high-speed operations, mechanical stress, and chemical exposure. Freudenberg’s Premium Sine Seal (PSS), featuring reinforced PTFE liners and specialized elastomer compounds, ensures maximum durability and minimal friction losses.
Continuous Innovation for Future Robotic Applications
Freudenberg Sealing Technologies remains at the forefront of innovation, continuously developing new materials, sealing designs, and validation methods to address evolving challenges in robotics. Through strategic customer collaborations, cutting-edge material science, and state-of-the-art simulation technologies, Freudenberg ensures that its sealing solutions provide unparalleled reliability, efficiency, and safety across all robotic platforms.
-
The Rise and Fall of Inflection's Emotionally Intelligent Chatbot
by Gary Rivlin on 01. April 2025. at 13:00
In the past few years, AI has set Silicon Valley on fire. The new book AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash in on Artificial Intelligence chronicles those blazing high times, telling the stories of the startups, venture capital firms, and legacy tech companies that are burning bright—and those that have already flamed out.
In the excerpt below, author Gary Rivlin tells the inside story of the startup Inflection, which was established in 2022 by LinkedIn cofounder Reid Hoffman and DeepMind cofounder Mustafa Suleyman. Inflection hoped to differentiate itself by building a chatbot with high emotional intelligence, and the company was at one point valued at US $4 billion. But its chatbot, Pi, failed to gain market share, and in March 2024 Microsoft acquired most of the company’s workforce, leaving what was left of Pi to be licensed for use as a foundation for customer service bots.
Pi was not human and therefore could never have a personality. Yet it would fall on Inflection’s “personality team” to imbue Pi with a set of characteristics and traits that might make it seem like it did. The team’s ranks included several engineers, two linguists, and also Rachel Taylor, who had been the creative director of a London-based ad agency prior to going to work for Inflection.
“Mustafa gave me a little bit of an overview of what they were working on, and I couldn’t stop thinking about it,” Taylor said. “I thought maybe it would be the most impactful thing I ever worked on.”
Humans develop a personality through a complex interplay of genetics and environmental influences, including upbringing, culture, and life experiences. Pi’s personality began with the team listing traits. Some were positives. Be kind, be supportive. Others were negative traits to avoid, like irritability, arrogance, and combativeness.
“You’re showing the model lots of comparisons that show it the difference between good and bad instances of that behavior,” Mustafa Suleyman said—“reinforcement learning with human feedback,” in industry parlance, or RLHF. Sometimes teams working on RLHF just label behavior they want a model to avoid (sexual, violent, homophobic). But Inflection had people assigning a numerical score to a machine’s responses. “That way the model basically learns, ‘Oh, this was a really good answer, I’m going to do more of that,’ or ‘That was terrible, I’m going to do less of that,’” said Anusha Balakrishnan, an Inflection engineer focused on fine-tuning. The scores were fed into an algorithm that adjusted the weighting of the model accordingly, and the process was repeated.
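For readers who want a feel for what that scoring loop looks like in code, here is a deliberately simplified sketch in Python. It is not Inflection’s pipeline; the sample data, the ToyPolicy class, and the update rule are invented for illustration, but the shape is the same: numerically scored responses nudge the system toward the answers humans rated highly.

```python
# A minimal, invented sketch of score-based feedback (not Inflection's actual
# training code): human raters attach a 1-to-5 score to candidate replies, and
# a toy "policy" nudges per-reply preference weights up or down accordingly.
import random
from collections import defaultdict

# (prompt, candidate reply, human score) triples; 1 = terrible, 5 = excellent.
labeled_data = [
    ("I feel stressed.", "That sounds hard. Want to talk it through?", 5),
    ("I feel stressed.", "Stop complaining.", 1),
    ("How was your day?", "I'm all ears. Tell me about it.", 4),
    ("How was your day?", "Irrelevant.", 2),
]

class ToyPolicy:
    """Stands in for a language model: higher-scored replies become more
    likely to be sampled, mimicking how scored feedback shifts model weights."""
    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)

    def update(self, reply: str, score: int, lr: float = 0.2) -> None:
        # Scores above the midpoint (3) push the weight up; below, down.
        self.weights[reply] = max(self.weights[reply] + lr * (score - 3), 0.05)

    def choose(self, candidates: list[str]) -> str:
        # Sample replies in proportion to their learned preference weights.
        weights = [self.weights[c] for c in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

policy = ToyPolicy()
for _prompt, reply, score in labeled_data:
    policy.update(reply, score)

# After training, the supportive reply is sampled far more often than the rude one.
print(policy.choose(["That sounds hard. Want to talk it through?", "Stop complaining."]))
```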
Developing Pi’s Personality Traits
Unlike many other AI companies, which outsourced reinforcement learning to third parties, Inflection hired and trained its own people. Applicants were put through a battery of tests, starting with a reading comprehension exercise that Suleyman described as “very nuanced and quite difficult.” Then came another set of exams and several rounds of training before they were put to work. The average “teacher” earned between $16 and $25 an hour, Suleyman said, but as much as $50 if someone was an expert in the right domain. “We try to make sure they come from a wide range of backgrounds and represent a wide range of ages,” Suleyman said.
Inflection had many hundreds of teachers training Pi in the spring of 2023. “In some cases, we paid several hundred dollars an hour for very, very specialist people like behavioral therapists, psychologists, playwrights, and novelists,” Suleyman said. They even hired several comedians at one point, to help give Pi a sense of humor. “Our aim is a much more informal, relaxed, conversational experience,” Suleyman said.
The company met a self-imposed deadline of March 12, 2023, for a beta version of Pi that it shared with thousands of testers. With the beta release, the company emerged from stealth mode. A press announcement described Pi as “a supportive and compassionate AI that is eager to talk about anything at any time” and as a “new kind of AI,” different from other chatbots on the market. By May, the app was free and available to anyone willing to register and sign in to use the service.
The New York Times rarely runs even a short item about the release of a new product, especially one from a small, unknown startup. Yet few companies could boast of founders with the connections and star power of Inflection: Reid Hoffman, the co-founder of LinkedIn, and Suleyman, who was AI royalty as a cofounder of DeepMind. This clout translated into prime real estate on the front page of the Times Business section, including a large, eye-catching illustration and a headline that stretched across multiple columns: “My New BFF: Pi, an Emotional Support Chatbot.” Reporter Erin Griffith was skeptical of the breathing exercises that Pi suggested to help her relieve the stresses in her life. But the bot did help her develop a plan for managing a particularly hectic day, and it certainly left her feeling seen. Pi reassured Griffith that her feelings were “understandable,” “reasonable,” and “totally normal.”
Suleyman posted a manifesto on the Inflection website on the day Pi was released. Social media basically had poisoned the world, he began. Outrage and anger drove engagement, and the lure of profits proved too strong. “Imagine an AI that helps you empathize with or even forgive ‘the other side,’ rather than be outraged by and fearful of them,” Suleyman wrote. “Imagine an AI that optimizes for your long-term goals and doesn’t take advantage of your need for distraction when you’re tired at the end of a long day.” He described the AI they were building as a “personal AI companion with the single mission of making you happier, healthier, and more productive.”
In June 2023, Inflection announced its series A funding round. Suleyman and Hoffman had gone out thinking they would raise between $600 million and $675 million, but after the launch of Pi, Inflection was pegged as one of the hot new startups. A long list of investors wanted a piece. “We were overwhelmed with offers,” Suleyman said. In the end, they raised $1.3 billion on a venture round that valued Inflection at $4 billion.
Inflection’s Technical and Business Challenges
Pi’s willingness to tackle virtually any subject was a point of pride inside Inflection. Where other bots shut down users if they stepped anywhere near a sensitive topic, Pi invited a conversation. “It will try to acknowledge that a topic is sensitive or contentious and then be cautious about giving strong judgments and be led by the user,” Suleyman said. Pi corrected statements of fact that were wrong so as not to perpetuate misinformation but rather than outright reject a view, it offered counterevidence.
Suleyman was particularly proud of Pi in the weeks after Hamas’s attack on Israel and the subsequent bombing campaign Israel waged in Gaza. “It was good in real time while things were unfolding, it’s good now,” he said two months into the hostilities. “It’s very balanced and evenhanded, very respectful.” If it had one bias, it was a deliberate one in favor of “peace and respect for human life,” Suleyman said. A bot that believed at its core in the sanctity of human life did not seem a bad thing.
Taylor deemed the first version of Pi “acceptable.” “It was very, very polite and very formal,” she said. “But there wasn’t the conversationality we wanted.” Pleasant. Positive. Respectful. Those were all admirable traits but didn’t exactly add up to the “fun” experience they were selling. Yet finding that right balance proved difficult. The personality team would turn the dial up on one trait or another, but it was as if they were playing Whac-A-Mole. They would fiddle with the weights and coax the model to use more slang and colloquialisms, but then Pi was “a little bit too friendly and informal in a way people might find rude,” Taylor said.
The wide range of preferences among users was a consistent topic of conversation inside the company. Pi’s default mode was “friendly” but a short list of alternatives was added for people to choose from: casual, witty, compassionate, devoted. Pi would shift modes if a user told it they were looking for a sympathetic ear and not the friend who tries to fix a problem. But the future Pi, as imagined by Suleyman, was a model that read a person’s emotional tone and quickly adjusted on its own, much as someone might do if greeting a friend with a hearty hello but then switching immediately when learning they’re calling with bad news. But bots were not at the point where they could read a person’s preferences without clear instructions. It took at least ten turns of the conversation, Suleyman said, and as many as thirty to discern a user’s mood.
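The mood-reading problem Suleyman describes can be made concrete with a toy example. In the sketch below (invented for this article, not Pi’s actual logic; the keyword lists and the 10-turn window are assumptions), a bot switches modes only after a streak of recent turns points to the same emotional tone.

```python
# Illustrative only: a crude mood tracker that switches a chatbot's "mode"
# once most of a 10-turn window suggests the same emotional tone. The keyword
# lists and the threshold are invented for this sketch.
from collections import deque

MOOD_KEYWORDS = {
    "down": {"sad", "tired", "stressed", "upset", "worried"},
    "upbeat": {"great", "excited", "happy", "awesome", "thrilled"},
}

class MoodTracker:
    def __init__(self, window: int = 10):
        self.recent = deque(maxlen=window)  # tones seen in the last N turns
        self.mode = "friendly"              # default mode

    def classify(self, message: str) -> str:
        # Very naive tone detection: look for any mood keyword in the turn.
        words = set(message.lower().split())
        for tone, keywords in MOOD_KEYWORDS.items():
            if words & keywords:
                return tone
        return "neutral"

    def observe(self, message: str) -> str:
        self.recent.append(self.classify(message))
        # Switch only when the window is full and a clear majority agrees.
        if len(self.recent) == self.recent.maxlen:
            for tone, target_mode in (("down", "compassionate"), ("upbeat", "witty")):
                if self.recent.count(tone) > len(self.recent) // 2:
                    self.mode = target_mode
        return self.mode

tracker = MoodTracker()
for turn in ["I'm so tired today", "feeling stressed about work"] * 5 + ["hello"]:
    mode = tracker.observe(turn)
print(mode)  # -> "compassionate" once enough turns indicate a low mood
```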
“In the future, an AI is going to be many, many things all at once,” Suleyman said. “People ask me, ‘Is it a therapist?’ Well, it has flavors of therapist. It has flavors of a friend. It has flavors of supernerdy expert. It has flavors of coach and confidant.” Among their lofty goals was a Pi that had multiple personalities, like a cyborg Sybil with a dissociative identity disorder. As they saw it, Pi eventually would be able to assume a near-limitless number of modes able to match the moment.
By December 2023, Pi was available for Android and its roughly 3 billion worldwide users. But Suleyman and others at Inflection were vague about user numbers—deliberately so. They were a disappointment. That fall, pollsters asked people who used chatbots which one they turned to most often. Fifty-two percent said ChatGPT and another 20 percent named Claude. Perplexity was third with a 10 percent share, followed by Google’s Bard (9 percent) and Bing (7 percent). Pi was lumped in with the 2 percent of users who selected “other.”
The company had its usual long to-do list. Yet their main challenge was teaching Pi to get better at a wider range of tasks. People thought of Pi as a conversationalist, which was a good thing, but a helper that is good only at talking is limited. “Pi can’t code,” Balakrishnan said that winter. “It needs to get better at reasoning. It can’t take actions. It’s only really useful if you want to talk about your feelings.”
From the book: AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence by Gary Rivlin. Copyright © 2025 by Gary Rivlin. Reprinted courtesy of Harper Business, an imprint of HarperCollins Publishers.
-
IEEE Women in Engineering Membership on the Rise
by Kathy Pretz on 31. March 2025. at 18:00
It’s been a busy three years since IEEE Women in Engineering celebrated its 25th anniversary in 2022.
WIE facilitates the recruitment and retention of women in technical disciplines around the world. It also works to inspire girls to pursue an engineering career. There are student branch affinity groups at universities around the globe. Men may join the affinity group as well.
Women make up less than a third of the world’s workforce in technology-related fields, according to a report by the World Bank Group.
WIE has introduced several programs focused on increasing the number of women in the science, technology, engineering, and math fields. The new initiatives include grants and leadership programs, and several contests have been launched to encourage and support female students who are considering a STEM career.
The group’s hard work is paying off. Winnie Ye, the current WIE chair and an electronics professor at Carleton University, in Ottawa, reports that, as of February, the number of members and student members was up over the same period last year. There are more than 28,800 members, a year-over-year increase greater than 17 percent. Student membership, at nearly 21,000, is up by roughly the same percentage. There are now more than 1,100 affinity groups.
“Despite this growth, attracting and retaining paying members remains a challenge, particularly as students transition to professional membership,” Ye says. “Ensuring long-term engagement requires demonstrating clear value through career development, networking, and leadership opportunities.”
A day of celebration, grants, and leadership training
Many of the new programs were launched under Celia Shahnaz’s leadership. The WIE committee’s first elected chair, Shahnaz served in 2023 and 2024. In 2022 the committee made the chair an elected position.
Shahnaz has been a WIE volunteer for more than 24 years. She established the first WIE group in the IEEE Bangladesh Section in 2010 and became its first chair. The IEEE senior member is a professor of electrical and electronic engineering at the Bangladesh University of Engineering and Technology, in Dhaka.
She says she is most proud of bringing back the annual IEEE WIE Day, held on 29 June, the date the group was formally established in 1997.
“The day is a member-engaging event targeting membership development and membership retention,” Shahnaz says. Activities include networking events, seminars, and workshops.
This year’s theme is Pioneering Safe Cyberspace: Bridging Technology and Light for Security.
For some groups, one day is not enough, so WIE Day events stretch out over several weeks. The inaugural event held in 2023 featured more than 120 activities worldwide. The number nearly doubled last year, with over 230 events. This year’s celebrations are scheduled to kick off on 23 June, to coincide with International Women in Engineering Day, then end on 7 July.
The IEEE WIE Family Cares grant program was established to financially assist members with caregiving duties so they can attend an IEEE conference. A grant provides up to US $1,000 to cover expenses related to caring for children, senior citizens, and family members who are disabled. The grant is sponsored by the IEEE Foundation.
To keep its members updated on industry trends, as well as research results and their practical applications, WIE has partnered with several IEEE societies and councils to offer a virtual Distinguished Lecturers program. More than 20 groups have provided speakers, including the Computer Society, Power Electronics Society, and Sensors Council.
“I want to really strengthen WIE leadership with the local community so that we can provide more targeted resources and support women.” —Winnie Ye
There’s also the new Industry Experts Network, a global database of industry professionals, researchers, and leaders who can be called upon to participate in an event.
“For any of our events, we can access this list of amazing people, who we can ask to give talks and offer workshops and seminars,” Ye says.
Several WIE programs are focused on providing women with leadership skills. Less than 10 percent of women hold positions such as CIO, CTO, and IT manager, or serve as technical team leaders, according to this year’s Women in Tech Stats report.
The WIE International Leadership Summits, which are held around the world, provide opportunities for networking, mentorship, and collaboration. Eight were held last year, and seven are scheduled for this year in countries including Jordan, Pakistan, Poland, and the United States.
The WIE Forum USA East helps its attendees develop and improve their leadership skills through talks by successful leaders. This year’s event is scheduled to be held from 6 to 8 November in Arlington, Va.
Ye says she is most excited about the return of WIE’s International Leadership Conference. After a two-year absence, the event is scheduled to be held on 15 and 16 May in San Jose, Calif. The theme is Accelerating Leadership in an AI-Powered World. Registration is now open.
STEM-related contests
Several contests have been launched to encourage young women to pursue a STEM career, Shahnaz says. Women make up about 35 percent of STEM graduates—a gender disparity that hasn’t changed in the past 10 years, according to UNESCO’s 2024 Global Education Monitoring Report.
Held for the first time in 2023, the WIE Climate Tech Big Idea Pitch competition encourages female engineering students and researchers to be entrepreneurial and is designed to increase the number of technical startups led by women.
“I really wanted to inspire women to build their capacity using their technical knowledge and professional skills to be a startup founder,” Shahnaz says. “We wanted them to come up with a solution to climate-change-related problems and showcase their business ideas and models. We also wanted to nurture the talent of women in all membership grades including senior members, Fellows, and life Fellows.”
In the 2023 contest, a team of engineering students from the Bangladesh University of Engineering and Technology won first place in the impact category for its design of a bamboo filter for diesel engines.
There were 60 submissions that first year, and the number doubled last year. The deadline for submissions for this year’s contest will be announced soon.
Capitalizing on the popularity of manga comics and graphic novels with young people, WIE launched a competition in 2023 to find the best-written manga that centered on Riko-chan, a fictional character who is a preuniversity student. Riko-chan uses STEM tools to help solve everyday problems. More than 40 people participated in the 2023 contest, and there were 81 submissions last year. Their stories are available to read online, and many have been translated into nine languages including Bangla, French, Hindi, and Spanish.
“We see this contest as an opportunity to showcase the diverse role models in engineering technology,” Ye says. “The goal is to spark curiosity among our younger audience and make STEM fields more relatable and more exciting for them.”
This year’s manga competition is now accepting submissions. Check out the rules and deadlines on the WIE website.
Mentoring and outreach programs
A new mentoring program is in the works, Ye says.
“We want to really create an active mentor-mentee matching and engagement platform within the WIE community,” she says.
As part of her vision for a more engaged and inclusive community, she has launched the WIE Ambassador program. The initiative is designed to empower dedicated WIE members to advocate for IEEE’s mission globally. The ambassadors can promote WIE initiatives, organize local events, and inspire more women to pursue STEM careers, Ye says.
She emphasizes the importance of expanding WIE’s presence in underrepresented regions such as China and South Africa.
“During my term,” she says, “I’m committed to expanding our presence in these regions. I want to really strengthen WIE leadership with the local community so that we can provide more targeted resources and support women. We want to make sure that they are aware of us and become more active.”
This article was updated on 11 April 2025.
-
Before the Undo Command, There Was the Electric Eraser
by Allison Marsh on 31. March 2025. at 15:00
I’m fascinated with the early 20th-century zeal for electrifying everyday things. Hand tools, toasters, hot combs—they all obviously benefited from the jolt of electrification. But the eraser? What was so problematic about the humble eraser that it needed electrifying?
A number of things, it turned out. According to Hermann Lukowski in his 1935 patent application for an apparatus for erasing, “Hand held rubbers are clumsy and cover a greater area than may be required.” Aye, there’s the rub, as it were. Lukowski’s cone-tipped electric eraser, he argued, could better handle the fine detail.
An electric eraser could also be a timesaver. In the days before computer-aided drawing and the ease of the delete and undo commands, manually erasing a document could be a delicate operation.
Consider the careful technique Roscoe C. Sloane and John M. Montz suggest in their 1930 book Elements of Topographic Drawing. To make a correction to a map, these civil engineering professors at Ohio State University recommend the following steps:
- With a smooth, sharp knife pick the ink from the paper. This can be done without marring the surface.
- Place a hard, smooth surface, such as a [drafting] triangle, under the erasure before rubbing starts.
- When practically all the ink has been removed with the knife, rub with a pencil eraser.
Erasing was not for the faint of heart!
A Brief History of the Eraser
Where did the eraser get its start? The British scientist Joseph Priestley is celebrated for his discovery of oxygen and not at all celebrated for his discovery of the eraser. Around 1766, while working on The History and Present State of Electricity, he found himself having to draw his own illustrations. First, though, he had to learn to draw, and because any new artist naturally makes mistakes, he also needed to erase.
In 1766 or thereabouts, Joseph Priestley discovered the erasing properties of natural rubber. Alamy
Alas, there weren’t a lot of great options for erasing at the time. For items drawn in ink, he could use a knife to scrape away errors; pumice or other rough stones could also be used to abrade the page and remove the ink. To erase pencil, the customary approach was to use a piece of bread or bread crumbs to gently grind the graphite off the page. All of the methods were problematic. Without extreme care, it was easy to damage the paper. Using bread was also messy, and as the writer and artist John Ruskin allegedly said, a waste of perfectly good bread.
As the story goes, Priestley one day accidentally grabbed a piece of caoutchouc, or natural rubber, instead of bread and discovered that it could erase his mistakes.
Priestley may have discovered this attribute of rubber, but Edward Nairne, an inventor, optician, and scientific-instrument maker, marketed it for sale. For three shillings (about one day’s wages for a skilled tradesman), you could purchase a half-inch (1.27-cm) cube of the material. Priestley acknowledged Nairne in the preface of his 1770 tutorial on how to draw, A Familiar Introduction to the Theory and Practice of Perspective, noting that caoutchouc was “excellently adapted to the purpose of wiping from paper the marks of a black-lead-pencil.” By the late 1770s, cubes of caoutchouc were generally known as rubbers or lead-eaters.
Natural rubber might be good for erasing, but it isn’t necessarily an item you want sitting on your desk. It is extremely sensitive to temperature, becoming hard and brittle in the cold and soft and gummy in the heat. Over time, it inevitably degrades. And worst of all, it becomes stinky.
What was so problematic about the humble eraser that it needed electrifying?
Luckily, there were lots of other people looking for ways to improve natural rubber, and in 1839 Charles Goodyear developed the vulcanization process. By adding sulfur to natural rubber and then heating it, Goodyear discovered how to stabilize rubber in a firm state, what we would call today the thermosetting of polymers. In 1844 Goodyear patented a process to create rubber fabric. He went on to make rubber shoes and other products. (The tire company that bears his name was founded by the brothers Charles and Frank Seiberling several decades later.) Goodyear unfortunately died penniless, but we did get a better eraser out of his discovery.
Who Really Invented the Electric Eraser?
Albert Dremel, who opened his eponymous company in 1932, often gets credit for the invention of the electric eraser, but if that’s true, I can find no definitive proof. Out of more than 50 U.S. patents held by Dremel, none are for an electric eraser. In fact, other inventors may have a better claim, such as Homer G. Coy, who filed a patent for an electrified automatic eraser in 1927, or Ola S. Pugerud, who filed a patent for a rotatable electric eraser in 1906.
The Dremel Moto-Tool, introduced in 1935, came with an array of swappable bits. One version could be used as an electric eraser. Dremel
In 1935 Dremel did come out with the Moto-Tool, the world’s first handheld, high-speed rotary tool that had interchangeable bits for sanding, engraving, burnishing, and sharpening. One version of the Moto-Tool was sold as an electric eraser, although it was held more like a hammer than a pencil.
Regardless of who invented the device, electric erasers were definitely on the market by 1929. Some of the earliest adopters were librarians, specifically those who maintained and had to frequently update the card catalog. Margaret Mann, an associate professor of library science at the University of Michigan, listed an electric eraser as recommended equipment in her 1930 book Introduction to Cataloging and the Classification of Books. She described a flat, round rubber eraser mounted on a motor-driven instrument similar to a dentist’s drill. The eraser could remove typewriting and print from catalog cards without leaving a rough appearance. By 1937, discussions of electric erasers were part of the library science curriculum at Columbia University. Electric erasers had gone mainstream.
To erase pencil, the customary approach was to use a piece of bread to gently grind the graphite off the page.
In 1930, the Charles Bruning Co.’s general catalog had six pages of erasers and accessories, with two pages devoted to the company’s electric erasing machine. Bruning, which specialized in engineering, drafting, and surveying supplies, also offered a variety of nonelectrified eraser products, including steel erasers (also known as desk knives), eraser shields (used to isolate the area to be erased), and a chisel-shaped eraser to put on the end of a pencil.
The Loren Specialty Manufacturing Co. arrived late to the electric eraser game, introducing its first such product in 1953. Held in the hand like a pen or pencil, the Presto electric eraser would vibrate to abrade a small area in need of correction. The company spun off the Presto brand in 1962, about the time the Presto Model 80 [shown at top] was produced. This particular unit was used by office workers at the New York Life Insurance Co. and is now housed at the Smithsonian’s Cooper Hewitt.
The Creativity of the Eraser
When I was growing up, my dad kept an electric eraser next to his drafting table. I loved playing with it, but it wasn’t until I began researching this article that I realized I had been using it all wrong. The pros know you’re supposed to shape the cylindrical rubber into a point in order to erase fine lines.
Today almost all draftsmen, librarians, and office workers have gone digital, but some visual artists still use electric erasers. One of them is artist and educator Darrel Tank, who specializes in pencil drawings. I watched several of his surprisingly fascinating videos comparing various models of electric erasers. Seeing Tank use his favorite electric eraser to create texture on clothing or movement in hair made me realize that drawing is not just an additive process. Sometimes it is what’s removed that makes the difference.
Susan Piedmont-Palladino, an architect and professor at Virginia Tech’s Washington-Alexandria Architecture Center, has also thought a lot about erasing. She curated the exhibit “Tools of the Imagination: Drawing Tools and Technologies from the Eighteenth Century to the Present” at the National Building Museum in 2005 and authored the companion book of the same title. Piedmont-Palladino describes architectural design as a long process of doing, undoing, and redoing, deciding which ideas can stay and which must go.
Piedmont-Palladino writes lovingly of a not-too-distant past, where this design process was captured in a building’s plans. When the architect was drafting by hand, the paper itself became a record of distress, showing where it had been scraped, erased, and redrawn. You could see points of uncertainty and points of decisiveness. But today, when almost all architectural drawing is done on a computer, users delete instead of erase. With a few keystrokes, an object can disappear, move to another spot, or miraculously reappear from the trash can. The design process is no longer etched into the page.
Of course, the pencil, the eraser (electric or not), and the computer are all just tools for transmitting and visualizing ideas. The tools of any age reflect society in ways that aren’t always clear until new tools come to replace them. Both the pencil and the eraser had to be invented, and it is up to historians to make sure they aren’t forgotten.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the April 2025 print issue as “When Electrification Came for the Eraser.”
References
The electric eraser, more than any object I have researched for Past Forward, has the most incorrect information about its history on the Internet—wrong names, bad dates, inaccurate assertions—which get repeated over and over again as fact. It’s a great reminder of the need to go back to original sources.
As always, I enjoyed digging through patents to trace the history of invention and innovation in electric erasers.
Other primary sources I consulted include Margaret Mann’s Introduction to Cataloging and the Classification of Books, a syllabus to Columbia University’s 1937 course on Library Service 201, and the Charles Bruning Co.’s 1930 catalog.
Although Henry Petroski’s The Pencil: A History of Design and Circumstance only has a little bit of information on the history of erasers, it’s a great read about the implement that does the writing that needs to be erased.
-
“The Doctor Will See Your Electronic Health Record Now”
by Robert N. Charette on 30. March 2025. at 11:00
Cheryl Conrad no longer seethes with the frustration that threatened to overwhelm her in 2006. As described in IEEE Spectrum, Cheryl’s husband, Tom, has a rare genetic disease that causes ammonia to accumulate in his blood. At an emergency room visit two decades ago, Cheryl told the doctors Tom needed an immediate dose of lactulose to avoid going into a coma, but they refused to medicate him until his primary doctor confirmed his medical condition hours later.
Making the situation more vexing was that Tom had been treated at that facility for the same problem a few months earlier, and no one could locate his medical records. After Tom’s recovery, Cheryl vowed to always have immediate access to them.
Today, Cheryl says, “Happily, I’m not involved anymore in lugging Tom’s medical records everywhere.” Tom’s two primary medical facilities use the same electronic health record (EHR) system, allowing doctors at both facilities to access his medical information quickly.
In 2004, President George W. Bush set an ambitious goal for U.S. health care providers to transition to EHRs by 2014. Electronic health records, he declared, would transform health care by ensuring that a person’s complete medical information was available “at the time and place of care, no matter where it originates.”
President George W. Bush looks at an electronic medical record system during a visit to the Cleveland Clinic on 27 January 2005. Brooks Kraft/Corbis/Getty Images
Over the next four years, a bipartisan Congress approved more than US $150 million in funding aimed at setting up electronic health record demonstration projects and creating the administrative infrastructure needed.
Then, in 2009, during efforts to mitigate the financial crisis, newly elected President Barack Obama signed the $787 billion economic stimulus bill. Part of it contained the Health Information Technology for Economic and Clinical Health Act, also known as the HITECH Act, which budgeted $49 billion to promote health information technology and EHRs in the United States.
As a result, Tom, like most Americans, now has an electronic health record. However, many millions of Americans now have multiple electronic health records. On average, patients in the United States visit 19 different kinds of doctors throughout their lives. Further, many specialists use separate EHR systems that do not automatically share medical data with one another, so patients must update their medical information for each one. Nevertheless, Tom now has immediate access to all his medical treatment and test information, something not readily available 20 years ago.
Tom’s situation underlines the paradox of how far the United States has come since 2004 and how far it still must go to achieve President Bush’s vision of a complete, secure, easily accessible, and seamlessly interoperable lifetime EHR.
As of 2021, nearly 80 percent of physicians and almost all nonfederal acute-care hospitals deployed an electronic health record system.
For many patients in the United States today, instead of fragmented, paper medical record silos, they have a plethora of fragmented, electronic medical record silos. And thousands of health care providers are burdened with costly, poorly designed, and insecure EHR systems that have exacerbated clinician burnout, led to hundreds of millions of medical records lost in data breaches, and created new sources of medical errors.
EHR’s baseline standardization does help centralize a very fragmented health care system, but in the rush to get EHR systems adopted, key technological and security challenges were overlooked and underappreciated. Subsequently, problems were introduced due to the sheer complexity of the systems being deployed. These still-unresolved issues are now potentially coupled with the unknown consequences of bolting on immature AI-driven technologies. Unless more thought and care are taken now in how to proceed as a fully integrated health care system, we could unintentionally put the entire U.S. health care system in a worse place than when President Bush first declared his EHR goal in 2004.
IT to Correct Health Care Inefficiencies Is a Global Project
Putting government pressure on the health care industry to adopt EHR systems through various financial incentives made sense by the early 2000s. Health care in the United States was in deep trouble. Spending increased from $74.1 billion in 1970 to more than $1.4 trillion by 2000, 2.3 times as fast as the U.S. gross domestic product. Health care costs grew at three times the rate of inflation from 1990 to 2000 alone, surpassing 13 percent of GDP.
Two major studies conducted by the Institute of Medicine in 2000 and 2001, titled To Err Is Human and Crossing the Quality Chasm, found that health care was deteriorating in terms of accessibility, quality, and safety. Inferior quality and needless medical treatments, including overuse or duplication of diagnostic tests, underuse of effective medical practices, misuse of drug therapies, and poor communication between health care providers emerged as particularly frustrating problems.
Administrative waste and unnecessary expenditures were substantial cost drivers, from billing to resolving insurance claims to managing patients’ cases. Health care’s administrative side was characterized as a “monstrosity,” with huge transaction costs associated with an estimated 30 billion communications conducted by mail, fax, or telephone annually at that time.
Both health care experts and policymakers agreed that reductions in health care delivery and its costs were possible only by deploying health information technology such as electronic prescribing and EHR. Early adopters of EHR systems like the Mayo Clinic, Cleveland Clinic, and the U.S. Department of Veterans Affairs proved the case. Governments across the European Union and the United Kingdom reached the same conclusion.
There has been a consistent push, especially in more economically advanced countries, to adopt EHR systems over the past two decades. For example, the E.U. has set a goal of providing 100 percent of its citizens across 27 countries access to electronic health records by 2030. Several countries are well on their way to this achievement, including Belgium, Denmark, Estonia, Lithuania, and Poland. Outside the E.U., countries such as Israel and Singapore also have very advanced systems, and after a rocky start, Australia’s My Health Record system seems to have found its footing. The United Kingdom was hoping to be a global leader in adopting interoperable health information systems, but a disastrous implementation of its National Programme for IT ended in 2011 after nine years and more than £10 billion. Canada, China, India, and Japan also have EHR system initiatives in place at varying levels of maturity. However, it will likely be years before they achieve the same capabilities found in leading digital-health countries.
EHRs Need a Systems-Engineering Approach
When it comes to embracing automation, the health care industry has historically moved at a snail’s pace, and when it does move, money goes to IT automation first. Market forces alone were unlikely to speed up EHR adoption.
Even in the early 2000s, health care experts and government officials were confident that digitalization could reduce total health spending by 10 percent while improving patient care. In a highly influential 2005 study, the RAND Corp. estimated that adopting EHR systems in hospitals and physician offices would cost $98 billion and $17 billion, respectively. The report also estimated that these entities would save at least $77 billion a year after moving to digital records. A highly cited paper in Health Affairs from 2005 also claimed that small physician practices could recoup their EHR system investments in 2.5 years and profit handsomely thereafter.
Moreover, RAND claimed that a fully automated health care system could save the United States $346 billion per year. When Michael O. Leavitt, then the Secretary of Health and Human Services, looked at the projected savings, he saw them as “a key part of saving Medicare.” As baby boomers began retiring en masse in the early 2010s, cutting health care costs was also a political imperative since Medicare funding was projected to run out by 2020.
Some doubted the EHR revolution’s health care improvement and cost reduction claims or that it could be achieved within 20 years. The Congressional Budget Office argued that the RAND report overstated the potential costs and benefits of EHR systems and ignored peer-reviewed studies that contradicted it. The CBO also pointed out that RAND assumed EHR systems would be widely adopted and effectively used, which implied that effective tools already existed, even though very few commercially available systems at the time were. There was also skepticism about whether the benefits seen by early adopters of EHR systems—who had spent decades perfecting their systems—could be replicated once the five-year period of governmental EHR adoption incentives ended.
Even former House Speaker Newt Gingrich, a strong advocate for electronic health record systems, warned that health care was “30 times more difficult to fix than national defense.” The extent of the problem was one reason the 2005 National Academy of Sciences report, Building a Better Delivery System: A New Engineering / Health Care Partnership, forcefully and repeatedly called for innovative systems-engineering approaches to be developed and applied across the entire health care delivery process. The scale, complexity, and extremely short time frame for attempting to transform the totality of the health care environment demanded a robust “system of systems” engineering approach.
This was especially true because of the potential human impacts of automation on health care professionals and patients. Researchers warned that ignoring the interplay of computer-mediated work and existing sociotechnical conditions in health care practices would result in unexpected, unintentional, and undesirable consequences.
Additionally, without standard mechanisms for making EHR systems interoperable, many potential benefits would not materialize. As David Brailer, the first National Health Information Technology Coordinator, stated, “Unless interoperability is achieved…potential clinical and economic benefits won’t be realized, and we will not move closer to badly needed health care reform in the U.S.”
HITECH’s Broken Promises and Unforeseen Consequences
A few years later, policymakers in the Obama administration thought it was unrealistic to prioritize interoperability. They feared that defining interoperability standards too early would lock the health industry into outdated information-sharing approaches. Further, no existing health care business model supported interoperability, and strong business incentives actively discouraged providers from sharing information. If patient information could easily shift to another provider, for example, what incentive did a provider have to readily share it?
Instead, policymakers decided to have EHR systems adopted as widely and quickly as possible during the five years of HITECH incentives. Tackling interoperability would come later. The government’s unofficial operational mantra was that EHR systems needed to become operational before they could become interoperable.
“Researchers have found that doctors spend between 3.5 and 6 hours a day (4.5 hours on average) filling out their digital health records.”
Existing EHR system vendors, making $2 billion annually at the time, viewed the HITECH incentive program as a once-in-a-lifetime opportunity to increase market share and revenue streams. Like fresh chum to hungry sharks, the subsidy money attracted a host of new EHR technology entrants eager for a piece of the action. The resulting feeding frenzy pitted an IT-naïve health care industry rushing to adopt EHR systems against a horde of vendors willing to promise (almost) anything to make a sale.
A few years into the HITECH program, a 2013 report by RAND wryly observed the market distortion caused by what amounted to an EHR adoption mandate: “We found that (EHR system) usability represents a relatively new, unique, and vexing challenge to physician professional satisfaction. Few other service industries are exposed to universal and substantial incentives to adopt such a specific, highly regulated form of technology, which has, as our findings suggest, not yet matured.”
In addition to forcing health care providers to choose quickly among a host of immature EHR solutions, the HITECH program completely undercut the warnings raised about the need for systems engineering or considering the impact of automation on very human-centered aspects of health care delivery by professionals. Sadly, the lack of attention to these concerns affects current EHR systems.
Today, studies like that conducted by Stanford Medicine indicate that nearly 70 percent of health care professionals express some level of satisfaction with their electronic health record system and that more than 60 percent think EHR systems have improved patient care. Electronic prescribing has also been seen as a general success, with the risk of medication errors and adverse drug events reduced.
However, professional satisfaction with EHRs runs shallow. The poor usability of EHR systems surfaced early in the HITECH program and continues as a main driver for physician dissatisfaction. The Stanford Medicine study, for example, also reported that 54 percent of physicians polled felt their EHR systems detracted from their professional satisfaction, and 59 percent felt it required a complete overhaul.
“What we’ve essentially done is created 24/7/365 access to clinicians with no economic model for that: The doctors don’t get paid.” —Robert Wachter, chair of the department of medicine at the University of California, San Francisco
Poor EHR system usability results in laborious and low-value data entry, obstacles to face-to-face patient communication, and information overload, where clinicians have to wade through an excess of irrelevant data when treating a patient. A 2019 study in Mayo Clinic Proceedings comparing EHR system usability to other IT products like Google Search, Microsoft Word, and Amazon placed EHR products in the bottom 10 percent.
Electronic health record systems were supposed to increase provider productivity, but for many clinicians, their EHRs are productivity vampires instead. Researchers have found that doctors spend between 3.5 and 6 hours a day (4.5 hours on average) filling out their patients’ digital health records, with an Annals of Internal Medicine study reporting that doctors in outpatient settings spend only 27 percent of their work time face-to-face with their patients.
In those visits, patients often complain that their doctors spend too much time staring at their computers. They are likely not wrong, as nearly 70 percent of doctors in 2018 felt that EHRs took valuable time away from their patients. To address this issue, health care providers employ more than 100,000 medical scribes today—or about one for every 10 U.S. physicians—to record documentation during office visits, but this only highlights the unacceptable usability problem.
Furthermore, physicians are spending more time dealing with their EHRs because the government, health care managers, and insurance companies are requesting more patient information regarding billing, quality measures, and compliance data. Patient notes are twice as long as they were 10 years ago. This is not surprising, as EHR systems so far have not complemented clinician work as much as directed it.
“A phenomenon of the productivity vampire is that the goalposts get moved,” explains University of Michigan professor emeritus John Leslie King, who coined the phrase “productivity vampire.” King, a student of system–human interactions, continues, “With the ability to better track health care activities, more government and insurance companies are going to ask for that information in order for providers to get paid.”
Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, believes that EHRs “became an enabler of corporate control and outside entity control.”
“It became a way that entities that cared about what the doctor was doing could now look to see in real time what the doctor was doing, and then influence what the doctor was doing and even constrain it,” Wachter says.
Federal law mandates that patients have access to their medical information contained in EHR systems—which is great, says Wachter, but this also adds to clinician workloads, as patients now feel free to pepper their physicians with emails and messages about the information.
“What we’ve essentially done is created 24/7/365 access to clinicians with no economic model for that: The doctors don’t get paid,” Wachter says. His doctors’ biggest complaints are that their EHR system has overloaded email inboxes with patient inquiries. Some doctors report that their inboxes have become the equivalent of a second set of patients.
It is not so much a problem with the electronic information system design per se, notes Wachter, but with EHR systems that “meet the payment system and the workflow system in ways that we really did not think about.”
EHRs also promised to reduce stress among health care professionals. Numerous studies have found, however, that EHR systems worsen clinician burnout, with Stanford Medicine finding that 71 percent of physicians felt the systems contributed to burnout.
Half of U.S. physicians are experiencing burnout, with 63 percent reporting at least one manifestation in 2022. The average physician works 53 hours weekly (19 hours more than the general population) and spends over 4 hours daily on documentation.
Clinical burnout is lowest among clinicians with highly usable EHR systems or in specialties with the least interaction with their EHR systems, such as surgeons and radiologists. Physicians who make, on average, 4,000 EHR system clicks per shift, like emergency room doctors, report the highest levels of burnout.
Aggravating the situation, notes Wachter, was “that decision support is so rudimentary…which means that the doctors feel like they’re spending all this time entering data in the machine, (but) getting relatively little useful intelligence out of it.”
Poorly designed information systems can also compromise patient safety. Evidence suggests that EHR systems with unacceptable usability contribute to low-quality patient care and reduce the likelihood of catching medical errors. According to a study funded by the U.S. Agency for Healthcare Research and Quality, EHR system issues were involved in the majority of malpractice claims over a six-and-a-half-year period of study ending in 2021. Sadly, the situation has not changed today.
Interoperability, Cybersecurity Bite Back
EHR system interoperability closely follows poor EHR system usability as a driver of health care provider dissatisfaction. Recent data from the Assistant Secretary for Technology Policy / Office of the National Coordinator for Health Information Technology indicates that 70 percent of hospitals sometimes exchange patient data, though only 43 percent claim they regularly do. System-affiliated hospitals share the most information, while independent and small hospitals share the least.
Exchanging information using the same EHR system helps. Wachter observes that interoperability among similar EHR systems is straightforward, but across different EHR systems, he says, “it is still relatively weak.”
However, even if two hospitals use the same EHR vendor, communicating patient data can be difficult if each hospital’s system is customized. Studies indicate that patient mismatch rates can be as high as 50 percent, even in practices using the same EHR vendor. This often leads to duplicate patient records that lack vital patient information, which can result in avoidable patient injuries and deaths.
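To see why customization trips up matching, consider a toy example. The two records below describe the same person, exported by systems with different formatting conventions; an exact comparison fails and would spawn a duplicate record, while a comparison that first normalizes casing, punctuation, and date order succeeds. The field choices and normalization rules are invented for illustration and are far simpler than production matching algorithms.

```python
# Toy example (not any vendor's matching algorithm): the same patient,
# exported by two EHR installations with different conventions, fails an
# exact comparison until the fields are normalized.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    family: str
    given: str
    dob: str     # each system uses its own date format
    phone: str

hospital_a = PatientRecord("Garcia", "Maria", "1954-07-02", "(555) 010-2233")
hospital_b = PatientRecord("GARCIA", "Maria", "07/02/1954", "5550102233")

def exact_match(a: PatientRecord, b: PatientRecord) -> bool:
    # Field-for-field string comparison, as a naive system might do.
    return (a.family, a.given, a.dob, a.phone) == (b.family, b.given, b.dob, b.phone)

def normalized_match(a: PatientRecord, b: PatientRecord) -> bool:
    """Normalize casing, punctuation, and date order before comparing."""
    def norm_phone(p: str) -> str:
        return "".join(ch for ch in p if ch.isdigit())
    def norm_date(d: str) -> str:
        parts = d.replace("/", "-").split("-")
        # Put the four-digit year first; month/day order is assumed consistent here.
        return "-".join(sorted(parts, key=len, reverse=True))
    return (
        a.family.lower() == b.family.lower()
        and a.given.lower() == b.given.lower()
        and norm_phone(a.phone) == norm_phone(b.phone)
        and norm_date(a.dob) == norm_date(b.dob)
    )

print(exact_match(hospital_a, hospital_b))       # False -> a duplicate record gets created
print(normalized_match(hospital_a, hospital_b))  # True  -> recognized as the same patient
```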
The ability to share information associated with a unique patient identifier (UPI), as is done in other countries with advanced EHRs, including Estonia, Israel, and Singapore, makes health information interoperability easier, says Christina Grimes, digital health strategist for the Healthcare Information and Management Systems Society (HIMSS).
But in the United States, “Congress has forbidden it since 1998” and steadfastly resists allowing for UPIs, she notes.
Using a single-payer health insurance system, like most other countries with advanced EHR systems, would also make sharing patient information easier, decrease time spent on EHRs, and reduce clinician burnout, but that is also a nonstarter in the United States for the foreseeable future.
Interoperability is even more challenging because an average hospital uses 10 different EHR vendors internally to support more than a dozen different health care functions, and an average health system has 16 different EHR vendors when affiliated providers are included. Grimes notes that only a small percentage of health care systems use fully integrated EHR systems that cover all functions.
EHR systems adoption also promised to bend the national health care cost curve, but these costs continue to rise at the national level. The United States spent an estimated $4.8 trillion on health care in 2023, or 17.6 percent of GDP. While there seems to be general agreement that EHRs can help with cost savings, no rigorous quantitative studies at the national level show the tens of billions of dollars of promised savings that RAND loudly proclaimed in 2005.
However, studies have shown that health care providers, especially those in rural areas, have had difficulty saving money by using EHR systems. A recent study, for example, points out that rural hospitals do not benefit as much from EHR systems as urban hospitals do in terms of reducing operating costs. With 700 rural hospitals at risk of closing due to severe financial pressures, investing in EHR systems has not proved to be the financial panacea those hospitals hoped it would be.
Cybersecurity is a major cost not included in the 2005 RAND study. Even though there were warnings that cybersecurity was being given short shrift, vendors, providers, and policymakers paid scant attention to the cybersecurity implications of EHR systems, especially the multitude of new cyberthreat access points that would be created and potentially exploited. Tom Leary, senior vice president and head of government relations at HIMSS, points out the painfully obvious fact that “security was an afterthought. You have to make sure that security by design is involved from the beginning, so we’re still paying for the decision not to invest in security.”
From 2009 to 2023, a total of 5,887 health care breaches of 500 records or more were reported to the U.S. Department of Health and Human Services Office for Civil Rights, resulting in some 520 million health care records being exposed. Health care breaches have also led to widespread disruption to medical care in various hospital systems, sometimes for over a month.
In 2024, the average cost of a health care data breach was $9.97 million. The cost of these breaches will soon surpass the $27 billion ($44.5 billion in 2024 dollars) provided under HITECH to adopt EHRs.
2025 may see the first major revision since 2013 to the Health Insurance Portability and Accountability Act (HIPAA) Security Rule outlining how electronic protected health information will need to be cybersecured. The proposed rule will likely force health care providers and their EHR vendors to make cybersecurity investment a much higher priority.
$100 Billion Spent on Health Care IT: Was the Juice Worth the (Mega) Squeeze?
The U.S. health care industry has spent more than $100 billion on information technology, but few providers are fully meeting President Bush’s vision of a nation of seamlessly interoperable and secure digital health records.
Many past government policymakers now admit they failed to understand the complex business dynamics, technical scale, complexity, or time needed to create a nationwide system of usable, interoperable EHR systems. The entire process lacked systems-engineering thinking. As Seema Verma, former administrator of the Centers for Medicare and Medicaid Services, told Fortune, “We didn’t think about how all these systems connect with one another. That was the real missing piece.”
Over the past eight years, successive administrations and congresses have taken actions to try to rectify these early oversights. In 2016, the 21st Century Cures Act was passed, which kept EHR system vendors and providers from blocking the sharing of patient data, and spurred them to start working in earnest to create a trusted health information exchange. The Cures Act mandated standardized application programming interfaces (APIs) to promote interoperability. In 2022, the Trusted Exchange Framework and Common Agreement (TEFCA) was published, which aims to facilitate technical principles for securely exchanging health information.
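As a rough illustration of what those standardized APIs look like from a developer’s side, the sketch below queries a FHIR R4 endpoint for patients by family name. The base URL points at the public HAPI FHIR test sandbox and is used purely for demonstration; production endpoints required under the Cures Act live at each provider’s own base URL and typically sit behind OAuth 2.0/SMART on FHIR authorization.

```python
# Minimal sketch of a standardized FHIR R4 API query, assuming the public
# HAPI FHIR test server for illustration (not a real provider's endpoint).
import requests

BASE = "https://hapi.fhir.org/baseR4"  # illustrative sandbox, no authentication required

def find_patients(family_name: str):
    """Search for patients by family name and return (id, display name) pairs."""
    resp = requests.get(f"{BASE}/Patient", params={"family": family_name, "_count": 5})
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR "Bundle" resource containing search results
    results = []
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        name = patient.get("name", [{}])[0]
        display = " ".join(name.get("given", []) + [name.get("family", "")])
        results.append((patient["id"], display.strip()))
    return results

if __name__ == "__main__":
    for pid, name in find_patients("Smith"):
        print(pid, name)
```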
“The EHR venture has proved troublesome thus far. The trouble is far from over.” —John Leslie King, University of Michigan professor emeritus
In late 2023, the first Qualified Health Information Networks (QHINs) were approved to begin supporting the exchange of data governed by TEFCA, and in 2024, updates were made to the APIs to make information interoperability easier. These seven QHINs allow thousands of health providers to more easily exchange information. Combined with the emerging consolidation among hospital systems around three EHR vendors—Epic Systems Corp., Oracle Health, and Meditech—this should improve interoperability in the next decade.
These changes, says HIMSS’s Tom Leary, will help give “all patients access to their data in whatever format they want with limited barriers. The health care environment is starting to become patient-centric now. So, as a patient, I should soon be able to go out to any of my healthcare providers to really get that information.”
HIMSS’s Christina Grimes adds that the patient-centric change is the continuing consolidation of EHR system portals. “Patients really want one portal to interact with instead of the number they have today,” she says.
In 2024, the Assistant Secretary for Technology Policy / Office of the National Coordinator for Health IT, the U.S. government department responsible for overseeing electronic health systems’ adoption and standards, was reorganized to focus more on cybersecurity and advanced technology like AI. In addition to the proposed HIPAA security requirements, Congress is also considering new laws to mandate better cybersecurity. There is hope that AI can help overcome EHR system usability issues, especially clinician burnout and interoperability issues like patient matching.
Wachter states that the new AI scribes are showing real promise. “The way it works is that I can now have a conversation with my patient and look the patient in the eye. I’m actually focusing on them and not my keyboard. And then a note, formatted correctly, just magically appears. Almost ironically, this new set of AI technologies may well solve some of the problems that the last technology created.”
Whether these technologies live up to the hype remains to be seen. More concerning is whether AI will exacerbate the rampant feeling among providers that they have become tools of their tools and not masters of them.
As EHR systems become more usable, interoperable, and patient-friendly, the underlying foundations of medical care can finally be addressed. High-quality evidence backs only about 10 percent of the care patients receive today. One of the great promises of digitizing health records is to discover which treatments work best, and why, and then share that knowledge with the health care community. This is an active research area, but it needs more attention and funding.
Twenty years ago, Tom Conrad, himself a senior computer scientist, told me he was skeptical that having more information would automatically lead to better medical decisions. He pointed out that when doctors’ earnings are tied to the number of patients they see, there is a trade-off between the better care that EHRs enable and the sheer amount of time required to review a more complete medical record. Today, the trade-off favors neither patients nor doctors. Whether it can ever be balanced is one of the great unknowns.
Obviously, no one wants to go back to paper records. However, as John Leslie King says, “The way forward involves multiple moving targets due to advances in technology, care, and administration. Most EHR vendors are moving as fast as they can.”
However, it would be foolish to think it will be smooth sailing from here on, King says: “The EHR venture has proved troublesome thus far. The trouble is far from over.”
-
This Solar Engineer Is Patching Lebanon’s Power Grid
by Edd Gent on 29. March 2025. at 14:00
In Mira Daher’s home country of Lebanon, the national grid provides power for only a few hours a day. The country’s state-owned energy provider, Electricity of Lebanon (EDL), has long struggled to meet demand, and a crippling economic crisis that began in 2019 has worsened the situation. Most residents now rely on privately owned diesel-powered generators for the bulk of their energy needs.
But in recent years, the rapidly falling cost of solar panels has given Lebanese businesses and families a compelling alternative, and the country has seen a boom in private solar-power installations. Total installed solar capacity jumped nearly eightfold between 2020 and 2022 to more than 870 megawatts, primarily as a result of off-grid rooftop installations.
Daher, head of tendering at the renewable-energy company Earth Technologies, in Antelias, Lebanon, has played an important part in this ongoing revolution. She is in charge of bidding for new solar projects, drawing up designs, and ensuring that they are correctly implemented on-site.
“I enjoy the variety and the challenge of managing diverse projects, each with its own unique requirements and technical hurdles,” she says. “And knowing that my efforts also contribute to a sustainable future for Lebanon fills me with pride and motivates me a lot.”
An Early Mentor
Daher grew up in the southern Lebanese city of Saida (also called Sidon) where her father worked as an electrical engineer in the construction sector. His work helped to inspire her interest in technology at a young age, she says. When she was applying for university, he encouraged her to study electrical engineering too.
“My first mentor was my father,” says Daher. “He increased my curiosity and passion for technology and engineering, and when I watched him work and solve complex problems, that motivated me to follow in his footsteps.”
In 2016, she enrolled at Beirut Arab University to study electrical and electronics engineering. When she graduated in 2019, Daher says, the country’s solar boom was just taking off, which prompted her to pursue a master’s degree in power and energy, with a specialization in solar power, at the American University of Beirut.
“My thesis concentrated on the energy situation in Lebanon and explored potential solutions to increase the reliance on renewable resources,” she says. “Five or six years ago, solar systems had high costs. But today the cost [has] decreased a lot because of new technologies, and because there is a lot of production of solar panels in China.”
Entering the Workforce
After graduating in 2021, Daher started a job as a solar-energy engineer at the Beirut-based solar-power company Mashriq Energy, where she was responsible for developing designs and bids for new solar installations, similar to her current role. It was a steep learning curve, Daher says, because she had to quickly pick up business skills, including financial modeling and contract negotiations. She also learned to deal with the practicalities of building large solar developments, such as site constraints and regulations. In 2022, she joined Earth Technologies as a solar project design engineer.
Various organizations, including Lebanese government and nongovernmental agencies such as the United Nations, request bids for solar power installations they want to build—a process known as tendering. Daher’s principal responsibility is to prepare and submit bids for these projects, but she also supervises their implementation.
Daher’s role requires her to maintain a broad base of knowledge about the solar projects she oversees. Mira Daher
“I oversee the entire project cycle, from identifying and managing tenders to designing, pricing, and implementing solar projects across residential, industrial, commercial, and utility sectors,” she says.
The first step in the process is to visit the proposed installation site to determine where solar panels should be positioned based on the landscape and local weather conditions. Once this is done, Daher and her team come up with a design for the plant. This involves figuring out what kinds of solar panels, inverters, and batteries will fit the budget and how to wire all the components together.
The team runs simulations of the proposed plant to ensure that the design meets the client’s needs. Daher is then responsible for negotiating with the client to make sure that the proposal fulfills their technical and budgetary requirements. Once the client has approved the design, other teams oversee construction of the plant, though Daher says she makes occasional site visits to ensure the design is being implemented correctly.
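The arithmetic behind a first-pass design is simple enough to sketch. The following is a rough, illustrative estimate only, not Earth Technologies’ methodology: it converts a target daily energy demand into an array rating and a battery bank using an assumed peak-sun-hours figure, performance ratio, and battery depth of discharge, all chosen purely for illustration.

```python
# First-pass PV system sizing. Illustrative numbers only; a real design would use
# site-specific irradiance data and detailed simulation of panels, inverters, and wiring.

def size_pv_system(daily_load_kwh: float,
                   peak_sun_hours: float = 5.0,    # assumed equivalent full-sun hours per day
                   performance_ratio: float = 0.8, # assumed losses from wiring, inverter, soiling, heat
                   autonomy_days: float = 1.0,     # days the battery should cover on its own
                   battery_dod: float = 0.8):      # usable depth of discharge of the battery
    """Return a rough PV array rating (kWp) and battery capacity (kWh)."""
    array_kwp = daily_load_kwh / (peak_sun_hours * performance_ratio)
    battery_kwh = daily_load_kwh * autonomy_days / battery_dod
    return array_kwp, battery_kwh

# Example: a small commercial site drawing 200 kWh per day.
kwp, kwh = size_pv_system(daily_load_kwh=200)
print(f"Array: ~{kwp:.0f} kWp, battery: ~{kwh:.0f} kWh")
```

A real proposal replaces these constants with measured site irradiance and component-level simulation, which is the step Daher’s team performs before negotiating with the client.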
Daher’s role requires her to have a solid understanding of all the components that go into a solar plant, from the different brands of power electronics to the civil engineering required to build supporting structures for solar panels. “You have to know everything about the project,” she says.
Solar Power for Development
Earth Technologies operates across the Middle East and Africa, but Daher says most of the solar installations she works on are in Lebanon. Some of the most interesting have been development-focused projects funded by the U.N.
Daher led the U.N.-funded installation of solar panels at nine hospitals, as well as a project that uses solar power to pump water to people in remote parts of the country. More recently, she has started work on a solar and battery installation for street lighting in the town of Bourj Hammoud, which will allow shops to stay open later and help to boost the local economy. The projects she has overseen generally cost around US $700,000 to $800,000.
But securing funding for renewable projects is an ongoing challenge in Lebanon, says Daher, given the uncertain economic situation. More recently, the country was also rocked by the conflict between Israel and the Lebanon-based paramilitary group Hezbollah. This resulted in widespread bombing of Beirut, the capital, and the country’s southern regions last October and November.
“The two months of conflict were incredibly challenging,” says Daher. “The environment was unsafe and filled with uncertainty, leaving us constantly anxious about what the future held.”
Safety concerns forced her to relocate from her home in Beirut to a village called Ain El Jdideh. This meant she had to drive about an hour and a half on unsafe roads to get to work. Several of the major projects she was working on were also halted as they were in the areas that bore the brunt of the conflict. One U.N.-funded project she worked on in Ansar, in southern Lebanon, was knocked offline when an adjacent building was destroyed.
“Despite these hardships, we persevered, and I am grateful that the war has ended, allowing us to regain some stability and security,” says Daher.
A Challenging But Fulfilling Career
Despite these difficulties, Daher remains optimistic about the future of renewable energy in Lebanon, and she says it can be a deeply rewarding career. Breaking into the industry requires a strong educational foundation, though, so she recommends first pursuing a degree focused on power systems and renewable technologies.
The energy sector is a male-dominated field, says Daher, which can make it difficult for women to find their footing. “I’ve often encountered biases, stereotypes that can make it more difficult to be taken seriously, or to have my voice heard,” she adds. “Overcoming these obstacles requires resilience, confidence, and a commitment to demonstrating my expertise and capabilities.”
It also requires a commitment to continual learning, due to the continued advances being made in solar-power technology. “It’s very important to stay up to date,” she says. “This field is always evolving. Every day, you can see a lot of new technologies.”
-
Video Friday: Watch this 3D-Printed Robot Escape
by Evan Ackerman on 28. March 2025. at 16:45
Your weekly selection of awesome robot videos
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, NETHERLANDS
Enjoy today’s videos!
This robot can walk right off the 3D printer, without electronics, needing only a cartridge of compressed gas. It can also be printed in one go, from one material. Researchers from the University of California San Diego and BASF describe how they developed the robot in an advance online publication in the journal Advanced Intelligent Systems. They used the simplest technology available: a desktop 3D printer and an off-the-shelf printing material. This design approach is not only robust but also cheap—each robot costs about $20 to manufacture.
And details!
[ Paper ] via [ University of California San Diego ]
Why do you want a humanoid robot to walk like a human? So that it doesn’t look weird, I guess, but it’s hard to imagine that a system that doesn’t have the same arrangement of joints and muscles that we do will move optimally by just trying to mimic us.
[ Figure ]
I don’t know how it manages it, but this little soft robotic worm somehow moves with an incredible amount of personality.
Soft actuators are critical for enabling soft robots, medical devices, and haptic systems. Many soft actuators, however, require power to hold a configuration and rely on hard circuitry for control, limiting their potential applications. In this work, the first soft electromagnetic system is demonstrated for externally-controlled bistable actuation or self-regulated astable oscillation.
[ Paper ] via [ Georgia Tech ]
Thanks, Ellen!
A 180-degree pelvis rotation would put the “break” in “breakdancing” if this were a human doing it.
[ Boston Dynamics ]
My colleagues were impressed by this cooking robot, but that may be because journalists are always impressed by free food.
[ Posha ]
This is our latest work about a hybrid aerial-terrestrial quadruped robot called SPIDAR, which shows unique and complex locomotion styles in both aerial and terrestrial domains including thrust-assisted crawling motion. This work has been presented in the International Symposium of Robotics Research (ISRR) 2024.
[ Paper ] via [ Dragon Lab ]
Thanks, Moju!
This fresh, newly captured video from Unitree’s testing grounds showcases the breakneck speed of humanoid intelligence advancement. Every day brings something thrilling!
[ Unitree ]
There should be more robots that you can ride around on.
[ AgileX Robotics ]
There should be more robots that wear hats at work.
[ Ugo ]
iRobot, which pioneered giant docks for robot vacuums, is now moving away from giant docks for robot vacuums.
[ iRobot ]
There’s a famous experiment showing that if you put a dead fish in a current, it starts swimming, just because of its biomechanical design. Somehow, you can do the same thing with an unactuated quadruped robot on a treadmill.
[ Delft University of Technology ]
Mush! Narrowly!
[ Hybrid Robotics ]
It’s freaking me out a little bit that this couple is apparently wandering around a huge mall that is populated only by robots and zero other humans.
[ MagicLab ]
I’m trying, I really am, but the yellow is just not working for me.
[ Kepler ]
By having Stretch take on the physically demanding task of unloading trailers stacked floor to ceiling with boxes, Gap Inc has reduced injuries, lowered turnover, and watched employees get excited about automation intended to keep them safe.
[ Boston Dynamics ]
Since arriving at Mars in 2012, NASA’s Curiosity rover has been ingesting samples of Martian rock, soil, and air to better understand the past and present habitability of the Red Planet. Of particular interest to its search are organic molecules: the building blocks of life. Now, Curiosity’s onboard chemistry lab has detected long-chain hydrocarbons in a mudstone called “Cumberland,” the largest organics yet discovered on Mars.
[ NASA ]
This University of Toronto Robotics Institute Seminar is from Sergey Levine at UC Berkeley, on Robotics Foundation Models.
General-purpose pretrained models have transformed natural language processing, computer vision, and other fields. In principle, such approaches should be ideal in robotics: since gathering large amounts of data for any given robotic platform and application is likely to be difficult, general pretrained models that provide broad capabilities present an ideal recipe to enable robotic learning at scale for real-world applications.
From the perspective of general AI research, such approaches also offer a promising and intriguing route to some of the grandest AI challenges: if large-scale training on embodied experience can provide diverse physical capabilities, this would shed light not only on the practical questions around designing broadly capable robots but also on the foundations of situated problem-solving, physical understanding, and decision making. However, realizing this potential requires overcoming a number of challenging obstacles. What data shall we use to train robotic foundation models? What will be the training objective? How should alignment or post-training be done? In this talk, I will discuss how we can approach some of these challenges.
-
Listen to Weather Satellites—or the Universe—With the Versatile Discovery Dish
by Stephen Cass on 28. March 2025. at 14:00
The U.S. government recommends that everyone have a disaster kit that includes a weather radio. These radios tune to a nationwide network run by the National Oceanic and Atmospheric Administration (NOAA) and the Federal Communications Commission that provides alerts about hazardous weather and other major emergencies. Such broadcasts can be a lifeline when other communication systems go out. But what if you could step it up and get not just audio information but also images, charts, and written reports, even while completely off the grid?
Turns out you can, thanks to modern geosynchronous weather satellites, and it’s never been easier than with KrakenRF’s new Discovery Dish system. This system involves buying a US $115 70-centimeter-diameter parabolic antenna, and then one of a number of $109 swappable feeds that cover different frequency bands. To try out the system, I got one feed suitable for picking up L-band satellite transmissions, and another tuned for detecting the radio emissions from galactic hydrogen clouds.
The parabolic antenna comes as three metal petals plus some ancillary bits and pieces for holding the feed and mounting the dish on a support. Everything is held together with nuts and bolts, so it can be disassembled and reassembled, and the petals are light and stack together nicely—you could pack them in a suitcase if you ever wanted to travel and sample a different sky.
In addition to KrakenRF’s dish and feed, you’ll also need a software-defined radio (SDR) receiver and a computer with software to decode the signals coming from the feed. Many SDRs can be used, but you’ll need one that comes with what’s known as a bias tee built in, or you’ll need to add a bias tee yourself. The bias tee supplies power to the low-noise amplifiers used in KrakenRF’s feeds. I used the recommended $34 RTL-SDR Blog V3 (which comes as a USB dongle), with my MacBook, but you can use a PC or Raspberry Pi as a host computer as well.
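Before chasing a satellite, it helps to confirm that the SDR and feed are producing a usable spectrum. The minimal sketch below assumes the pyrtlsdr Python package and an RTL-SDR dongle; the bias tee must be switched on through your driver (for example, with the rtl_biast utility that accompanies the RTL-SDR Blog drivers) so the feed’s low-noise amplifier is powered, and the center frequency shown is just a placeholder near the GOES downlink.

```python
# Quick spectrum sanity check with an RTL-SDR dongle (assumes the pyrtlsdr package).
# Enable the dongle's bias tee first (e.g., with the rtl_biast utility) so the
# feed's low-noise amplifier receives power over the coax.
import numpy as np
import matplotlib.pyplot as plt
from rtlsdr import RtlSdr

fs = 2.4e6      # sample rate, samples per second
fc = 1694.1e6   # placeholder center frequency near the GOES L-band downlink

sdr = RtlSdr()
sdr.sample_rate = fs
sdr.center_freq = fc
sdr.gain = 'auto'
samples = sdr.read_samples(256 * 1024)   # complex baseband samples
sdr.close()

# Windowed FFT, plotted as relative power, so any signal hump is visible above the noise floor.
window = np.hanning(len(samples))
spectrum = np.fft.fftshift(np.fft.fft(samples * window))
freqs = fc + np.fft.fftshift(np.fft.fftfreq(len(samples), d=1 / fs))
plt.plot(freqs / 1e6, 20 * np.log10(np.abs(spectrum) + 1e-12))
plt.xlabel("Frequency (MHz)")
plt.ylabel("Relative power (dB)")
plt.show()
```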
The Discovery Dish is formed by three petals [top center] that bolt together with other mounting gear [top left and right]. Feeds are mounted on a pole and adjusted to be level with the dish’s focus [bottom]. Different feeds allow different applications, such as 1680 megahertz for receiving L-band satellite transmissions or 1420 MHz for radio astronomy. A software-defined radio receiver decodes signals. James Provost
After I inserted the L-band feed into the dish, it was time to look for a satellite. Following KrakenRF’s guide, I used Carl Reinemann’s Web app to print out a list of azimuths and elevations for geosynchronous weather satellites based on my location. Then I headed up to the roof of my New York City apartment building with the mast from my portable ham radio antenna to provide a mount. And then I headed straight back down again when I realized that it was too blustery for a temporary mount. The dish is perforated with holes to reduce air resistance, but there was still a real risk of the wind toppling the portable mast and sweeping it over the side of the building.
A couple of days later, I returned to calmer conditions, and with my iPhone employed as a compass and inclinometer, I pointed the dish at the coordinates for the GOES-East weather satellite. This satellite hangs over the equator at a longitude of 75 degrees west, close to that of New York City. A second satellite, GOES-West, sits at 135 degrees west, over the Pacific Ocean.
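The pointing numbers come from standard geostationary look-angle geometry, which is easy to reproduce yourself. The short sketch below computes azimuth (clockwise from true north) and elevation for a northern-hemisphere site; it ignores dish offset and magnetic declination, and the example coordinates are only approximate.

```python
import math

R_EARTH = 6378.137   # km, Earth's equatorial radius
R_GEO = 42164.0      # km, geostationary orbital radius

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Azimuth and elevation (degrees) to a geostationary satellite.
    East longitudes are positive; valid for northern-hemisphere sites."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(site_lon_deg - sat_lon_deg)   # site longitude minus satellite longitude
    cos_g = math.cos(dlon) * math.cos(lat)            # cosine of the central angle
    elevation = math.degrees(math.atan2(cos_g - R_EARTH / R_GEO,
                                        math.sqrt(1.0 - cos_g ** 2)))
    azimuth = 180.0 + math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))
    return azimuth, elevation

# Example: a site near New York City looking toward GOES-East at 75 degrees west.
az, el = look_angles(40.7, -74.0, -75.0)
print(f"Point the dish to azimuth ~{az:.0f} degrees, elevation ~{el:.0f} degrees")
```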
These GOES satellites are fourth-generation spacecraft in a long line of invaluable weather satellites that have been operated by NOAA and the U.S. National Weather Service for 50 years. The first of the current generation, known as GOES-R, launched in 2016 and features a number of upgrades. For radio enthusiasts, the most significant of the upgrades are its downlink broadcast capabilities.
The current GOES satellites transmit images taken in multiple wavelengths and scales. A false-color full-disk image [above] is captured in an infrared band that detects moisture and ash; the image at top shows the eastern United States in an approximation of what you would see with the naked eye. Stephen Cass/NOAA
The GOES-R satellites transmit data at 400 kilobits per second, versus a maximum of 128 kb/s for previous generations, allowing more information to be included, such as images from other weather satellites around the globe. The satellites also merge satellite-image data and emergency and weather information into a single link that can be simultaneously picked up by one receiver, instead of needing two as previously.
For fine dish-pointing adjustments, I was guided by watching the signal in the frequency spectrum analyzer built into SatDump, an open-source software package designed for decoding satellite transmissions picked up by SDR receivers. I groaned when, no matter how I adjusted the dish, I could barely get the signal above the noise. But much to my surprise, I nonetheless started seeing an image of the Earth begin to form on the display.
The original GOES-R design specified that receiving ground dishes would have to be at least one meter in diameter, but the folks at KrakenRF have built their feeds around a new ultralow-noise amplifier that can make the weaker signal gathered by their smaller dish usable. Soon I had pictures of the Earth in multiple wavelengths, both raw and in false color, with and without the superimposed outlines of states and countries, plus a wide assortment of other charts plotting rainfall and wind speeds for different areas.
The GOES satellites also broadcast information uploaded from the U.S. National Weather Service, such as this chart of marine wind speeds. Stephen Cass/National Weather Service
My next test was to do a spot of radio astronomy, swapping out the L-band feed for the galactic hydrogen emission feed. Getting results was a much longer process: First I had to point the dish at a bit of the sky where I knew the Milky Way wasn’t to obtain baseline data (done with the help of the Stellarium astronomy site). Then I pointed the dish straight up and waited for the rotation of the Earth to bring the Milky Way into view. Pulling the signal out of the noise is a slow process—you have to integrate 5 minutes of data from the receiver—but eventually a nice curve formed that indicated I was still safely within the embrace of the spiral arms of our home galaxy. Much more sophisticated radio astronomy can be done, especially if you mount the dish on a scanning platform to generate 2D maps. But I swapped back the L-band feed just to marvel at how our planet looks from 36,000 kilometers away!
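The “integrate and subtract” step is straightforward in code. A rough sketch, assuming you have already captured equal-length blocks of complex samples from the off-source (baseline) pointing and the on-source pointing, centered on the hydrogen line at 1,420.4 MHz: average many short power spectra from each pointing, then divide the two averages so the receiver’s own bandpass shape cancels and the hydrogen bump stands out.

```python
import numpy as np

def averaged_power_spectrum(iq: np.ndarray, fft_size: int = 4096) -> np.ndarray:
    """Average the power spectra of consecutive chunks of complex baseband samples."""
    n_chunks = len(iq) // fft_size
    chunks = iq[: n_chunks * fft_size].reshape(n_chunks, fft_size)
    window = np.hanning(fft_size)
    spectra = np.abs(np.fft.fftshift(np.fft.fft(chunks * window, axis=1), axes=1)) ** 2
    return spectra.mean(axis=0)

def hydrogen_line_profile(on_source_iq: np.ndarray, off_source_iq: np.ndarray) -> np.ndarray:
    """On-source spectrum divided by the off-source baseline: bins above 1.0
    indicate excess emission, which is where the 1,420 MHz hydrogen line appears."""
    return averaged_power_spectrum(on_source_iq) / averaged_power_spectrum(off_source_iq)

# The iq arrays would come from minutes of SDR captures (for example, repeated
# read_samples calls) centered on 1420.405 MHz.
```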
-
Andrew Ng: Unbiggen AI
by Eliza Strickland on 09. February 2022. at 15:31
Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.
Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?
Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
When you say you want a foundation model for computer vision, what do you mean by that?
Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
What needs to happen for someone to build a foundation model for video?
Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.
Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI
I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
I expect they’re both convinced now.
Ng: I think so, yes.
Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
How do you define data-centric AI, and why do you consider it a movement?
Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.
The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?
Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?
Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng
For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
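As a toy illustration of the kind of tooling Ng is describing (not Landing AI’s actual product), the sketch below flags images whose annotators disagree so they can be surfaced for relabeling; the record format and labels are invented for the example.

```python
from collections import defaultdict

# Toy annotation records: (image_id, annotator, label). Invented for illustration.
annotations = [
    ("img_001", "annotator_a", "scratch"),
    ("img_001", "annotator_b", "scratch"),
    ("img_002", "annotator_a", "pit_mark"),
    ("img_002", "annotator_b", "dent"),      # disagreement: flag for review
    ("img_003", "annotator_a", "scratch"),
]

def flag_inconsistent(records):
    """Return IDs of images whose annotators assigned more than one distinct label."""
    labels_by_image = defaultdict(set)
    for image_id, _annotator, label in records:
        labels_by_image[image_id].add(label)
    return sorted(img for img, labels in labels_by_image.items() if len(labels) > 1)

print(flag_inconsistent(annotations))   # ['img_002']
```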
Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?
Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.
One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
When you talk about engineering the data, what do you mean exactly?
Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
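The error analysis behind that decision amounts to measuring accuracy per metadata slice rather than in aggregate. A minimal sketch with invented field names, using plain accuracy as a stand-in for whatever metric the real system tracked:

```python
from collections import defaultdict

def accuracy_by_slice(examples, slice_key):
    """Accuracy per value of a metadata field (e.g., background-noise condition)."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        key = ex[slice_key]
        total[key] += 1
        correct[key] += int(ex["prediction"] == ex["label"])
    return {key: correct[key] / total[key] for key in total}

# Invented evaluation records for illustration.
examples = [
    {"label": "call mom",  "prediction": "call mom",  "noise": "quiet"},
    {"label": "play jazz", "prediction": "play jazz", "noise": "quiet"},
    {"label": "call mom",  "prediction": "call tom",  "noise": "car"},
    {"label": "play jazz", "prediction": "play jars", "noise": "car"},
]
print(accuracy_by_slice(examples, "noise"))   # {'quiet': 1.0, 'car': 0.0}
```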
What about using synthetic data, is that often a good solution?
Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
Do you mean that synthetic data would allow you to try the model on more data sets?
Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng
Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?
Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.
One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.
How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?
Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.
Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?
Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
-
How AI Will Change Chip Design
by Rina Diane Caballar on 08. February 2022. at 14:00
The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.
Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.
But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.
How is AI currently being used to design the next generation of chips?
Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.
Heather Gorr MathWorks
Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.
What are the benefits of using AI for chip design?
Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
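The workflow Gorr describes can be prototyped in a few lines outside MATLAB as well. In this hedged sketch, a toy function stands in for an expensive physics-based solver; a Gaussian-process surrogate from scikit-learn is fit to a handful of solver runs and then swept across thousands of candidate design points almost for free.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a slow physics-based model (e.g., a thermal or field solver)."""
    return np.sin(3 * x) + 0.5 * x ** 2

# A small number of costly solver runs...
x_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

# ...used to fit a cheap surrogate model...
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
surrogate.fit(x_train, y_train)

# ...which can then be evaluated at thousands of candidate design points instantly.
x_sweep = np.linspace(0.0, 2.0, 5000).reshape(-1, 1)
y_pred, y_std = surrogate.predict(x_sweep, return_std=True)
best = x_sweep[np.argmin(y_pred), 0]
print(f"Surrogate-predicted optimum near x = {best:.3f}; y_std gives the uncertainty")
```

The same pattern extends to Monte Carlo studies: sample the surrogate instead of rerunning the solver, then spot-check the most promising candidates against the full physics model.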
So it’s like having a digital twin in a sense?
Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.
So, it’s going to be more efficient and, as you said, cheaper?
Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.
We’ve talked about the benefits. How about the drawbacks?
Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.
Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.
One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.
How can engineers use AI to better prepare and extract insights from hardware or sensor data?
Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
What should engineers and designers consider when using AI for chip design?
Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.
How do you think AI will affect chip designers’ jobs?
Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.
How do you envision the future of AI and chip design?
Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
-
Atomically Thin Materials Significantly Shrink Qubits
by Dexter Johnson on 07. February 2022. at 16:12
Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.
IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.
Now researchers at MIT have managed both to reduce the size of the qubits and to do so in a way that reduces the interference between neighboring qubits. The MIT team has increased the number of superconducting qubits that can be added onto a device by a factor of 100.
“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”
The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.
Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).
Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT
In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.
As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
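Back-of-the-envelope parallel-plate math shows why a thin, low-loss dielectric such as hBN shrinks the footprint so dramatically. The numbers below are illustrative assumptions, not values from the MIT paper: a target shunt capacitance of about 70 femtofarads, a 30-nanometer hBN film, and a relative permittivity of roughly 3.5.

```python
import math

EPS0 = 8.854e-12   # F/m, vacuum permittivity

def plate_side_for_capacitance(c_target_f, eps_r, thickness_m):
    """Side length (m) of a square parallel-plate capacitor with C = eps0 * eps_r * A / d."""
    area = c_target_f * thickness_m / (EPS0 * eps_r)
    return math.sqrt(area)

# Illustrative assumptions only (not from the MIT paper).
side = plate_side_for_capacitance(c_target_f=70e-15, eps_r=3.5, thickness_m=30e-9)
print(f"Plate side needed: ~{side * 1e6:.0f} micrometers")   # roughly 8 micrometers
```

Compared with coplanar plates on the order of 100 micrometers on a side, that is a reduction in plate area of roughly two orders of magnitude, consistent with the density gains described above.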
In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.
On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.
While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.
“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”
This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.
Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.