IEEE News

IEEE Spectrum

  • NASA Made the Hubble Telescope to Be Remade
    by Ned Potter on 02. October 2024. at 13:00



    When NASA decided in the 1970s that the Hubble Space Telescope should be serviceable in space, the engineering challenges must have seemed nearly insurmountable. How could a machine that complex and delicate be repaired by astronauts wearing 130-kilogram suits with thick gloves?

    In the end, spacewalkers not only fixed the telescope, they regularly remade it.

    That was possible because engineers designed Hubble to be modular, its major systems laid out in wedge-shaped equipment bays that astronauts could open from the outside. A series of maintenance workstations on the telescope’s outer surface ensured astronauts had ready access to crucial telescope parts.

    On five space-shuttle servicing missions between 1993 and 2009, 16 spacewalkers replaced every major component except the telescope’s mirrors and outer skin. They increased its electrical power supply by 20 percent. And they tripled its ability to concentrate and sense light, job one of any telescope.

    The now legendary orbital observatory was built to last 15 years in space. But with updates, it has operated for more than 30—a history of invention and re-invention to make any engineering team proud. “Twice the lifetime,” says astronaut Kathryn Sullivan, who flew on Hubble’s 1990 launch mission. “Just try finding something else that has improved with age in space. I dare you.”

  • Partial Automation Doesn't Make Vehicles Safer
    by Willie D. Jones on 02. October 2024. at 11:00



    Early on the morning of 3 September, a multi-car accident occurred on Interstate 95 in Pennsylvania, raising alarms about the dangers of relying too heavily on advanced driver assistance systems (ADAS). Two men were killed when a Ford Mustang Mach-E electric vehicle, traveling at 114 kilometers per hour (71 mph), crashed into a car that had pulled over to the highway’s left shoulder. According to Pennsylvania State Police, the driver of the Mustang mistakenly believed that the car’s BlueCruise hands-free driving feature and adaptive cruise control could take full responsibility for driving.

    The crash is part of a worrying trend involving drivers who overestimate the capabilities of partial automation systems. Ford’s BlueCruise system, while advanced, provides only level 2 vehicle autonomy. This means it can assist with steering, lane-keeping, and speed control on prequalified highways, but the driver must remain alert and ready to take over at any moment.

    State police and federal investigators discovered that the driver of the Mustang involved in the deadly I-95 incident was both intoxicated and texting at the time of the crash, factors that likely contributed to their failure to regain control of the vehicle when necessary. The driver has been charged with vehicular homicide, involuntary manslaughter, and several other offenses.

    Are Self-Driving Cars Safer?

    This incident is the latest in a series of crashes involving Mustang Mach-E vehicles equipped with level 2 partial automation. Similar accidents were reported earlier this year in Texas and Philadelphia, all occurring at night on highways and resulting in fatalities. In response, the National Highway Traffic Safety Administration (NHTSA) launched an investigation into the crashes and the role ADAS systems may have played in them.

    This is not a niche issue. Consulting and analysis firms including Munich-based Roland Berger predict that by 2025, more than one-third of new cars rolling off the world’s assembly lines will be equipped with at least level 2 autonomy. According to a Roland Berger survey of auto manufacturers, only 14 percent of vehicles produced next year will have no ADAS features at all.

    “Unfortunately, there isn’t good data on the proportion of fatal crashes involving vehicles equipped with these partial automation systems,” says David Kidd, a researcher at the Arlington, Va.–based Insurance Institute for Highway Safety (IIHS). The nonprofit agency conducts vehicle safety testing and research, including evaluating vehicle crashworthiness.

    IIHS evaluates whether ADAS provides a safety benefit by combining information about the equipment vehicles come fitted with, data maintained by the Highway Loss Data Institute, and police crash reports. But that record keeping, says Kidd, doesn’t yield hard data on the proportion of vehicles with systems such as BlueCruise or Tesla’s Autopilot that are involved in fatal crashes. Still, he notes, when you compare the incidence of crashes involving vehicles that have level 2 driver assistance systems with the rate for vehicles not so equipped, “there is no significant difference.”

    Asked about the fact that these three Mach-E crashes happened at night, Kidd points out that it’s likely no coincidence. Nighttime presents a very difficult set of conditions for these systems. “All the vehicles [with partial automation] we tested do an excellent job [of picking up the visual cues they need to avoid collisions] during the day, but after dark, they struggle.”

    Automated Systems Make Riskier Drivers

    IIHS released a report in July underscoring the danger of misusing ADAS. The study found that partial automation features like Ford’s BlueCruise are best understood as convenience features rather than safety technologies. According to IIHS President David Harkey, “Everything we’re seeing tells us that partial automation is a convenience feature like power windows or heated seats rather than a safety technology.”

    “Other technologies,” says Kidd, “like automatic emergency braking, lane departure warning, and blind-spot monitoring, which are designed to warn of an imminent crash, are effective at preventing crashes. We look at the partial automation technologies and these collision warning technologies differently because they have very different safety implications.”

    The July IIHS study also highlighted a phenomenon known as risk compensation, where drivers using automated systems tend to engage in riskier behaviors, such as texting or driving under the influence, believing that the technology will save them from accidents. A similar issue arose with the widespread introduction of anti-lock braking systems in the 1980s, when drivers falsely assumed they could brake later or safely come to a stop from higher speeds, often with disastrous results.

    What’s Next for ADAS?

    While automakers like Ford say that ADAS is not designed to take the driver out of the loop, incidents like the Pennsylvania and Texas crashes underscore the need for better education and possibly stricter regulations around the use of these technologies. Until full vehicle autonomy is realized, drivers must remain vigilant, even when using advanced assistance features.

    As partial automation systems become more common, experts warn that robust safeguards are needed to prevent their misuse. The IIHS study concluded that “Designing partial driving automation with robust safeguards to deter misuse will be crucial to minimizing the possibility that the systems will inadvertently increase crash risk.”

    “There are things auto manufacturers can do to help keep drivers involved with the driving task and make them use the technologies responsibly,” says Kidd. “IIHS has a new ratings program, called Safeguards, that evaluates manufacturers’ implementation of driver monitoring technologies.”

    To receive a good rating, Kidd says, “Vehicles with partial automation will need to ensure that drivers are looking at the road, that their hands are in a place where they’re ready to take control if the automation technology makes a mistake, and that they’re wearing their seatbelt.” Kidd admits that no technology can determine whether someone’s mind is focused on the road and the driving task. But by monitoring a person’s gaze, head posture, and hand position, sensors can make sure the person’s actions are consistent with someone who is actively engaged in driving. “The whole sense of this program is to make sure that the [level 2 driving automation] technology isn’t portrayed as being more capable than it is. It does support the driver on an ongoing basis, but it certainly does not replace the driver.”

    The European Commission released a report in March pointing out that progress toward reducing road fatalities is stalling in too many countries. This plateau in roadway deaths is an example of a phenomenon known as risk homeostasis, in which risk compensation counterbalances the intended effects of a safety advance, leaving the net effect unchanged. Asked what it will take to counteract risk compensation and significantly reduce the annual worldwide roadway death toll, IIHS’s Kidd said, “We are still in the early stages of understanding whether automating all of the driving task—like what Waymo and Cruise are doing with their level 4 driving systems—is the answer. It looks like they will be safer than human drivers but it’s still too early to tell.”

  • Defending Taiwan With Chips and Drones
    by Harry Goldstein on 01. October 2024. at 21:00



    The majority of the world’s advanced logic chips are made in Taiwan, and most of those are made by one company: Taiwan Semiconductor Manufacturing Co. (TSMC). While it seems risky for companies like Nvidia, Apple, and Google to depend so much on one supplier, for Taiwan’s leaders, that’s a feature, not a bug.

    In fact, this concentration of chip production is the tentpole of the island’s strategy to defend itself from China, which, under the Chinese Communist Party’s One China Principle, considers Taiwan a renegade province that will be united with the mainland one way or another.

    Taiwan’s Silicon Shield strategy rests on two assumptions. The first is that the United States won’t let China take the island and its chip production facilities (which reportedly have kill switches on the most advanced extreme ultraviolet lithography machines that could render them useless in the event of an attack). The second is that China won’t risk destroying perhaps the most vital part of its own semiconductor supply chain as a consequence of a hostile takeover.

    The U.S. military seems steadfast in its resolve to keep Taiwan out of Chinese hands. In fact, one admiral has declared that the United States is prepared to unleash thousands of aerial and maritime drones in the event China invades.

    “I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities,” Adm. Samuel Paparo, commander of the U.S. Indo-Pacific Command told Washington Post columnist Josh Rogin in June.

    There is now, two and a half years into the Russia-Ukraine war, plenty of evidence that the kinds of drones Paparo referenced can play key roles in logistics, surveillance, and offensive operations. And U.S. war planners are learning a lot from that conflict that applies to Taiwan’s situation.

    As Bryan Clark, a senior fellow at the Hudson Institute and director of the Institute’s Center for Defense Concepts and Technology, points out in this issue, “U.S. naval attack drones positioned on the likely routes could disrupt or possibly even halt a Chinese invasion, much as Ukrainian sea drones have denied Russia access to the western Black Sea.”

    There is no way to know when or even if a conflict over Taiwan will commence. So for now, Taiwan is doubling down on its Silicon Shield by launching more renewable generation projects so that its chipmakers can meet customer demands to minimize the carbon footprint of chip production.

    The island already has 2.4 gigawatts of offshore-wind capacity with another 3 GW planned or under construction, reports IEEE Spectrum contributing editor Peter Fairley, who visited Taiwan earlier this year. In “Powering Taiwan’s Silicon Shield” [p. 22], he notes that the additional capacity will help TSMC meet its goal of having 60 percent of its energy come from renewables by 2030. Fairley also reports on clever power-saving innovations deployed in TSMC’s fabs to reduce the company’s annual electricity consumption by about 175 gigawatt-hours.

    Between bringing more renewable energy online and making their fabs more efficient, the chipmakers of Taiwan hope to keep their customers happy while the island’s leaders hope to deter its neighbor across the strait—if not with its Silicon Shield, then with the silicon brains of the drone hordes that could fly and float into the breach in the island’s defense.

  • Special Events Mark IEEE Honor Society’s 120th Anniversary
    by Amy Michael on 01. October 2024. at 18:00



    On 28 October 1904, on the campus of the University of Illinois Urbana-Champaign, Eta Kappa Nu (HKN) was founded by a group of young men led by Maurice L. Carr. Their vision was to create an honor society that recognized electrical engineers who embodied the ideals of scholarship, character, and attitude, and that promoted the profession.

    From that humble beginning, HKN has established nearly 280 university chapters worldwide. The honor society currently has more than 40,000 members and has inducted roughly 200,000 during its existence, never straying from the core principles espoused by its founders.

    “Eta Kappa Nu grew because there have always been many members who have been willing and eager to serve it loyally and unselfishly,” Carr said in “Dreams That Have Come True,” an article published in the October/November 1939 issue of the honor society’s triannual magazine, The Bridge.

    In 2010, HKN became the honor society of IEEE, resulting in global expansion, establishing chapters outside the United States, and inducting students and professionals from the IEEE fields of interest.

    Today the character portion of HKN’s creed translates into its students collectively providing more than 100,000 hours of service each year to their communities and campuses through science, technology, engineering, and mathematics outreach programs and tutoring.

    Hackathons, fireside chats, and more

    In honor of its 120th anniversary, IEEE-HKN is celebrating with several exciting events.

    HKN’s first online hackathon is scheduled for 11 to 22 October. Students around the world will compete to solve engineering problems for prizes and bragging rights.


    On 28 October, IEEE-HKN’s Founders Day, 2019 IEEE-HKN President Karen Panetta is hosting a virtual fireside chat with HKN Eminent Members Vint Cerf and Bob Kahn. The two Internet pioneers are expected to share the inside story of how they conceived and designed the Internet’s architecture.

    The fireside chat will take place after a presentation honoring the hackathon winners and will be followed by an online networking session for participants to share their HKN stories and brainstorm how to continue the forward momentum for the next generation of engineers.

    The three events are open to everyone. Register now to attend any or all of them.

    HKN has set up a dedicated page showcasing how members and nonmembers can participate in the celebrations. The page also honors the society’s proud history with a time line of its impressive growth during the past 120 years.

    IEEE-HKN alumni have been gathering at events across the United States to celebrate the anniversary. They include IEEE Region 3’s SoutheastCon, the IEEE Life Members Conference, the IEEE Communication Society Conference, the IEEE Power & Energy Society General Meeting, the IEEE World Forum on Public Safety Technology, and the Frontiers in Education Conference.

    A gathering of the alumni of the Eta Kappa Nu San Francisco Chapter in October 1925 on the top floor of the Pacific Telephone and Telegraph Company’s building. IEEE-HKN

    IEEE-HKN’s reach and impact

    The honor society’s leaders attribute its success to never straying from its core founding principles while remaining relevant. It offers support throughout its members’ career journeys and provides a vibrant network of like-minded professionals. Today that translates to annual conferences, webinars, podcasts, alumni networking opportunities, professional and leadership development services, mentoring initiatives, an awards program, and scholarships.

    “When I joined HKN as a student, my chapter meant a great deal to me, as a community of friends, as leadership and professional development, and as inspiration to keep working toward my next goal,” says Sean Bentley, 2024 IEEE-HKN president-elect. “As I moved through my career and looked for ways to give back, I was happy to answer the call for service with HKN.”

    HKN’s success is made possible by the commitment of its volunteers and donors, who give their time, expertise, and resources guided by a zeal to nurture the next generation of engineering professionals.

    Matteo Alasio, an IEEE-HKN alum and former president of the Mu Nu chapter, at the Politecnico di Torino in Italy, says, “My favorite part of being an HKN member is the sense of community. Being part of a big family that works together to help students and promote professional development is incredibly fulfilling. It’s inspiring to collaborate with others who are dedicated to making a positive impact.”

    HKN is a lifelong designation. If you are inducted into Eta Kappa Nu, your membership never expires. Visit the IEEE-HKN website to reconnect with the society or to learn more about its programs, chapters, students, and opportunities. You also can sign up for its 2024 Student Leadership Conference, to be held in November in Charlotte, N.C.

  • Brain-like Computers Tackle the Extreme Edge
    by Dina Genkina on 01. October 2024. at 16:00



    Neuromorphic computing draws inspiration from the brain, and Steven Brightfield, chief marketing officer for Sydney-based startup BrainChip, says that makes it perfect for use in battery-powered devices doing AI processing.

    “The reason for that is evolution,” Brightfield says. “Our brain had a power budget.” Similarly, the market BrainChip is targeting is power constrained. “You have a battery and there’s only so much energy coming out of the battery that can power the AI that you’re using.”

    Today, BrainChip announced that its chip design, the Akida Pico, is now available. Akida Pico, which was developed for use in power-constrained devices, is a stripped-down, miniaturized version of BrainChip’s Akida design, introduced last year. Akida Pico consumes 1 milliwatt of power, or even less depending on the application. The chip design targets the extreme edge, which comprises small user devices such as mobile phones, wearables, and smart appliances that typically have severe limits on power and wireless communications capacity. Akida Pico joins similar neuromorphic devices on the market designed for the edge, such as Innatera’s T1 chip, announced earlier this year, and SynSense’s Xylo, announced in July 2023.

    Neuron Spikes Save Energy

    Neuromorphic computing devices mimic the spiking nature of the brain. Instead of traditional logic gates, computational units—referred to as ‘neurons’—send out electrical pulses, called spikes, to communicate with each other. If a spike reaches a certain threshold when it hits another neuron, that one is activated in turn. Different neurons can create spikes independent of a global clock, resulting in highly parallel operation.

    A particular strength of this approach is that power is only consumed when there are spikes. In a regular deep learning model, each artificial neuron simply performs an operation on its inputs: It has no internal state. In a spiking neural network architecture, in addition to processing inputs, a neuron has an internal state. This means the output can depend not only on the current inputs, but on the history of past inputs, says Mike Davies, director of the neuromorphic computing lab at Intel. These neurons can choose not to output anything if, for example, the input hasn’t changed sufficiently from previous inputs, thus saving energy.
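
    To make that contrast concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It is a generic textbook model, not BrainChip’s Akida or any other vendor’s implementation, and all of its constants are arbitrary illustrative values; it simply shows how a neuron with internal state fires only when its accumulated input crosses a threshold, so a steady, unremarkable input stream produces few spikes and little downstream work.

        # Minimal leaky integrate-and-fire (LIF) neuron: a generic illustration of
        # spiking behavior, not BrainChip's design. All constants are arbitrary.
        def lif_neuron(input_currents, threshold=1.0, leak=0.9):
            """Return a list of 0/1 spikes for a stream of input values."""
            potential = 0.0          # internal state carried between time steps
            spikes = []
            for current in input_currents:
                potential = leak * potential + current  # integrate the input, with leakage
                if potential >= threshold:              # fire only when the threshold is crossed
                    spikes.append(1)
                    potential = 0.0                     # reset after a spike
                else:
                    spikes.append(0)                    # no spike, so no downstream work
            return spikes

        print(lif_neuron([0.2, 0.2, 0.2, 0.2, 0.2]))  # steady weak input: no spikes at all
        print(lif_neuron([0.2, 0.2, 1.5, 0.2, 0.2]))  # a spike appears only when the input jumps

    In a network of such neurons, downstream units do work only when a spike arrives, which is the event-driven property that lets neuromorphic hardware idle between meaningful changes in its input.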

    “Where neuromorphic really excels is in processing signal streams when you can’t afford to wait to collect the whole stream of data and then process it in a delayed, batched manner. It’s suited for a streaming, real-time mode of operation,” Davies says. Davies’ team recently published a result showing their Loihi chip’s energy use was one-thousandth of a GPU’s use for streaming use cases.

    Akida Pico includes the neural processing engine, along with SRAM units for event processing and model weight storage, direct memory units for spike conversion and configuration, and optional peripherals. Brightfield says in some devices, such as simple detectors, the chip can be used as a stand-alone device, without a microcontroller or any other external processing. For other use cases that require further on-device processing, it can be combined with a microcontroller, CPU, or any other processing unit.

    BrainChip’s Akida Pico design includes a miniaturized version of their neuromorphic processing engine, suitable for small, battery-operated devices. BrainChip

    BrainChip has also worked to develop AI model architectures that are optimized for minimal power use on its device. The company showed off its techniques with an application that detects keywords in speech. This is useful for voice assistants like Amazon’s Alexa, which listen for a wake word before springing into action.

    The BrainChip team used their recently developed model architecture to reduce power use to one-fifth of the power consumed by traditional models running on a conventional microprocessor, as demonstrated in their simulator. “I think Amazon spends $200 million a year in cloud computing services to wake up Alexa,” Brightfield says. “They do that using a microcontroller and a neural processing unit (NPU), and it still consumes hundreds of milliwatts of power.” If BrainChip’s solution indeed provides the claimed power savings for each device, the effect would be significant.

    In a second demonstration, they used a similar machine learning model to demonstrate audio de-noising, for use in hearing aids or noise canceling headphones.

    To date, neuromorphic computers have not found widespread commercial use, and it remains to be seen if these miniature edge devices will take off, in part because of the diminished capabilities of such low-power AI applications. “If you’re at the very tiny neural network level, there’s just a limited amount of magic you can bring to a problem,” Intel’s Davies says.

    BrainChip’s Brightfield, however, is hopeful that the application space is there. “It could be speech wake up. It could just be noise reduction in your earbuds or your AR glasses or your hearing aids. Those are all the kind of use cases that we think are targeted. We also think there’s use cases that we don’t know that somebody’s going to invent.”

  • Happy IEEE Day!
    by IEEE on 01. October 2024. at 13:10



    Happy IEEE Day!

    IEEE Day commemorates the first gathering of IEEE members to share their technical ideas in 1884.

    First celebrated in 2009, IEEE Day marks its 15th anniversary this year.

    Worldwide celebrations demonstrate the ways thousands of IEEE members in local communities join together to collaborate on ideas that leverage technology for a better tomorrow.


    Celebrate IEEE Day with colleagues from IEEE Sections, Student Branches, Affinity groups, and Society Chapters. Events happen both virtually and in person all around the world.

    Join the celebration around the world!

    Every year, IEEE members from IEEE Sections, Student Branches, Affinity groups, and Society Chapters join hands to celebrate IEEE Day, with events held both virtually and in person.

    Special Activities & Offers for Members

    Check out our special offers and activities for IEEE members and future members. And share these with your friends and colleagues.

    Compete in contests and win prizes!

    Have some fun and compete in the photo and video contests. Get your phone and camera ready when you attend one of the events. This year we will have both Photo and Video Contests. You can submit your entries in STEM, technical, humanitarian and social categories.

  • Even Gamma Rays Can't Stop This Memory
    by Kohava Mendelsohn on 01. October 2024. at 11:00



    This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

    In space, high-energy gamma radiation can change the properties of semiconductors, altering how they work or rendering them completely unusable. Finding devices that can withstand radiation is important not just to keep astronauts safe but also to ensure that a spacecraft lasts the many years of its mission. Constructing a device that can easily measure radiation exposure is just as valuable. Now, a globe-spanning group of researchers has found that a type of memristor, a device that stores data as resistance even in the absence of a power supply, can not only measure gamma radiation but also heal itself after being exposed to it.

    Memristors have demonstrated the ability to self-heal under radiation before, says Firman Simanjuntak, a professor of materials science and engineering at the University of Southampton whose team developed this memristor. But until recently, no one really understood how they healed—or how best to apply the devices. Lately there’s been “a new space race,” he says, with more satellites in orbit and more deep-space missions on the launchpad, so “everyone wants to make their devices … tolerant towards radiation.” Simanjuntak’s team has been exploring the properties of different types of memristors since 2019, and it now wanted to test how the devices change when exposed to blasts of gamma radiation.

    Normally, memristors set their resistance according to their exposure to high-enough voltage. One voltage boosts the resistance, which then remains at that level when subject to lower voltages. The opposite voltage decreases the resistance, resetting the device. The relationship between voltage and resistance depends on the previous voltage, which is why the devices are said to have a memory.

    The hafnium oxide device used by Simanjuntak is a type of memristor that cannot be reset, called a WORM (write once, read many times) device, suitable for permanent storage. Once it is set with a negative or positive voltage, the opposing voltage does not change the device. It consists of several layers of material: first conductive platinum, then aluminum-doped hafnium oxide (an insulator), then a layer of titanium, and finally a layer of conductive silver on top.

    When voltage is applied to these memristors, a bridge of silver ions forms in the hafnium oxide, which allows the current to flow through, setting its conductance value. Unlike in other memristors, this device’s silver bridge is stable and fixes in place, which is why once the device is set, it usually can’t be returned to a rest state.

    That is, unless radiation is involved. The first discovery the researchers made was that under gamma radiation, the device acts as a resettable switch. They believe that the gamma rays break the bond between the hafnium and oxygen atoms, causing a layer of titanium oxide to form at the top of the memristor, and a layer of platinum oxide to form at the bottom. The titanium oxide layer creates an extra barrier for the silver ions to cross, so a weaker bridge is formed, one that can be broken and reset by a new voltage.

    The extra platinum oxide layer caused by the gamma rays also serves as a barrier to incoming electrons. This means a higher voltage is required to set the memristor. Using this knowledge, the researchers were able to create a simple circuit that measured amounts of radiation by checking the voltage that was required to set the memristor. A higher voltage meant the device had encountered more radiation.
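
    The dose-readout idea lends itself to a simple numerical sketch. The toy Python model below assumes a made-up baseline set voltage and a made-up linear shift of that voltage with accumulated dose (neither number comes from the paper), purely to illustrate the measurement logic described above: sweep the applied voltage upward, note where the device finally sets, and map that voltage back to a dose.

        # Toy model of the radiation-sensing scheme described above. Gamma dose
        # raises the voltage needed to "set" the WORM memristor, so the set
        # voltage found by a sweep reveals the absorbed dose. The constants are
        # illustrative placeholders, not parameters from the published device.
        BASE_SET_VOLTAGE = 1.0     # assumed set voltage of an unirradiated device, in volts
        VOLTS_PER_MEGARAD = 0.1    # assumed shift in set voltage per megarad of dose

        class WormMemristor:
            def __init__(self):
                self.dose_mrad = 0.0
                self.is_set = False

            def irradiate(self, dose_mrad):
                self.dose_mrad += dose_mrad  # oxide barrier layers grow with accumulated dose

            def apply_voltage(self, volts):
                """Try to set the device; returns True once the silver bridge has formed."""
                needed = BASE_SET_VOLTAGE + VOLTS_PER_MEGARAD * self.dose_mrad
                if not self.is_set and volts >= needed:
                    self.is_set = True
                return self.is_set

        def measure_dose(device, step=0.05, max_volts=10.0):
            """Sweep the voltage upward and infer the dose from where the device sets."""
            for n in range(int(max_volts / step) + 1):
                volts = n * step
                if device.apply_voltage(volts):
                    return (volts - BASE_SET_VOLTAGE) / VOLTS_PER_MEGARAD
            return None  # the device never set within the sweep range

        d = WormMemristor()
        d.irradiate(5.0)                  # 5 megarads, the dose used in the experiment
        print(round(measure_dose(d), 1))  # prints roughly 5.0, read back from the higher set voltage

    The principle matches what the researchers’ circuit does: the harder the memristor is to set, the more radiation it has seen.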

    From a regular state, the memristor forms a stable conductive bridge. Under radiation, a thicker layer of titanium oxide creates a slower-forming, weaker conductive bridge. OM Kumar et al./IEEE Electron Device Letters

    But the true marvel of these hafnium oxide memristors is their ability to self-heal after a big dose of radiation. The researchers treated the memristor with 5 megarads of radiation—500 times more than a lethal dose in humans. Once the gamma radiation was removed, the titanium oxide and platinum oxide layers gradually dissipated, the oxygen atoms returning to form hafnium oxide again. After 30 days, instead of still requiring a higher-than-normal voltage to form, the devices that were exposed to radiation required the same voltage to form as untouched devices.

    “It’s quite exciting what they’re doing,” says Pavel Borisov, a researcher at Loughborough University in the UK who studies how to use memristors to mimic the synapses in the human brain. His team conducted similar experiments with a silicon oxide based memristor, and also found that radiation changed the behavior of the device. In Borisov’s experiments, however, the memristors did not heal after the radiation.

    Memristors are simple, lightweight, and low power, which already makes them ideal for use in space applications. In the future, Simanjuntak hopes to use memristors to develop radiation-proof memory devices that would enable satellites in space to do onboard calculations. “You can use a memristor for data storage, but also you can use it for computation,” he says, “So you could make everything simpler, and reduce the costs as well.”

    This research was accepted for publication in a future issue of IEEE Electron Device Letters.

  • Leading Educator Weighs in on University DEI Program Cuts
    by Kathy Pretz on 30. September 2024. at 18:00



    Many U.S. university students returning to campus this month will find their school no longer has a diversity, equity, and inclusion program. More than 200 universities in 30 states so far this year have eliminated, cut back, or changed their DEI efforts, according to an article in The Chronicle of Higher Education.

    It is happening mostly at publicly funded universities, because state legislators and governors are enacting laws that prohibit or defund DEI programs. They’re also cutting budgets and sometimes implementing other measures that restrict diversity efforts. Some colleges have closed their DEI programs altogether to avoid political pressure.

    The Institute asked Andrea J. Goldsmith, a top educator and longtime proponent of diversity efforts within the engineering field and society, to weigh in.

    Goldsmith shared her personal opinions about DEI with The Institute, speaking for herself rather than as Princeton’s dean of engineering and applied sciences. A wireless communications pioneer, she is an IEEE Fellow who launched the IEEE Board of Directors Diversity and Inclusion Committee in 2019 and once served as its chair.

    She received this year’s IEEE Mulligan Education Medal for educating, mentoring, and inspiring generations of students, and for authoring pioneering textbooks in advanced digital communications.

    “For the longest time,” Goldsmith says, “there was so much positive momentum toward improving diversity and inclusion. And now there’s a backlash, which is really unfortunate, but it’s not everywhere.” She says she is proud of her university’s president, who has been vocal that diversity is about excellence and that Princeton is better because its students and faculty are diverse.

    In the interview, Goldsmith spoke about why she thinks the topic has become so controversial, what measures universities can take to ensure their students have a sense of belonging, and what can be done to retain female engineers—a group that has been underrepresented in the field.

    The Institute: What do you think is behind the movement to dissolve DEI programs?

    Goldsmith: That’s a very complex question, and I certainly don’t have the answer.

    It has become a politically charged issue because there’s a notion that DEI programs are really about quotas or advancing people who are not deserving of the positions they have been given. Part of the backlash also was spurred by the Oct. 7 attack on Israel, the war in Gaza, and the protests. One notion is that Jewish students are also a minority that needs protection, and why is it that DEI programs are only focused on certain segments of the population as opposed to diversity and inclusion for everyone, for people with all different perspectives, and those who are victims or subject to explicit bias, implicit bias, or discrimination? I think that these are legitimate concerns, and that programs around diversity and inclusion should be addressing them.

    The goal of diversity and inclusion is that everybody should be able to participate and reach their full potential. That should go for every profession and, in particular, every segment of the engineering community.

    Also in the middle of this backlash is the U.S. Supreme Court’s 2023 decision that ended race-conscious affirmative action in college admissions—which means that universities cannot take diversity into account explicitly in their admission of students. The decision in and of itself only affects undergraduate admissions, but it has raised concerns about broadening the decision to faculty hiring or for other kinds of programs that promote diversity and inclusion within universities and private companies.

    I think the Supreme Court’s decision, along with the political polarization and the recent protests at universities, have all been pieces of a puzzle that have come together to paint all DEI programs with a broad brush of not being about excellence and lowering barriers but really being about promoting certain groups of people at the expense of others.

    How might the elimination of DEI programs impact the engineering profession specifically?

    Goldsmith: I think it depends on what it means to eliminate DEI programs. Programs to promote the diversity of ideas and perspectives in engineering are essential for the success of the profession. As an optimist, I believe we should continue to have programs that ensure our profession can bring in people with diverse perspectives and experiences.

    Does that mean that every DEI program in engineering companies and universities needs to evolve or change? Not necessarily. Maybe some programs do because they aren’t necessarily achieving the goal of ensuring that diverse people can thrive.

    “My work in the profession of engineering to enhance diversity and inclusion has really been about excellence for the profession.”

    We need to be mindful of the concerns that have been raised about DEI programs. I don’t think they are completely unfounded.

    If we do the easy thing—which is to just eliminate the programs without replacing them with something else or evolving them—then it will hurt the engineering profession.

    The metrics being used to assess whether these programs are achieving their goals need to be reviewed. If the programs are falling short, they need to be improved. If we do that, I think DEI programs will continue to positively impact the engineering profession.

    For universities that have cut or reduced their programs, what are some other ways to make sure all students have a sense of belonging?

    Goldsmith: I would look at what other initiatives could be started that would have a different name but still have the goal of ensuring that students have a sense of belonging.

    Long before DEI programs, there were other initiatives within universities that helped students figure out their place within the school, initiated them into what it means to be a member of the community, and created a sense of belonging through various activities. These include prefreshman and freshman orientation programs, student groups and organizations, student-led courses (with or without credit), eating clubs, fraternities, and sororities, to name just a few. I am referring here to any program within a university that creates a sense of community for those who participate—which is a pretty broad category of programs.

    These continue, but they aren’t called DEI programs. They’ve been around for decades, if not since the university system was founded.

    How can universities and companies ensure that all people have a good experience in school and the workplace?

    Goldsmith: This year has been a huge challenge for universities, with protests, sit-ins, arrests, and violence.

    One of the things I said in my opening remarks to freshmen at the start of this semester is that you will learn more from people around you who have different viewpoints and perspectives than you will from people who think like you. And that engaging with people who disagree with you in a respectful and scholarly way and being open to potentially changing your perspective will not only create a better community of scholars but also better prepare you for postgraduation life, where you may be interacting with a boss, coworkers, family, and friends who don’t agree with you.

    Finding ways to engage with people who don’t agree with you is essential for engaging with the world in a positive way. I know we don’t think about that as much in engineering because we’re going about building our technologies, doing our equations, or developing our programs. But so much of engineering is collaboration and understanding other people, whether it’s your customers, your boss, or your collaborators.

    I would argue everyone is diverse. There’s no such thing as a nondiverse person, because no two people have the exact same set of experiences. Figuring out how to engage with people who are different is essential for success in college, grad school, your career, and your life.

    I think it’s a bit different in companies, because you can fire someone who does a sit-in in the boss’s office. You can’t do that in universities. But I think workplaces also need to create an environment where diverse people can engage with each other beyond just what they’re working on in a way that’s respectful and intellectual.

    Reports show that half of female engineers leave the high-tech industry because they have a poor work experience. Why is that, and what can be done to retain women?

    Goldsmith: That is one of the harder questions facing the engineering profession. The challenges that women face include implicit and sometimes explicit bias. In extreme cases, there are sexual and other kinds of harassment, and bullying. These egregious behaviors have decreased some. The Me Too movement raised a lot of awareness, but [poor behavior] still is far more prevalent than we want it to be. It’s very difficult for women who have experienced that kind of egregious and illegal behavior to speak up. For example, if it’s their Ph.D. advisor, what does that mean if they speak up? Do they lose their funding? Do they lose all the research they’ve done? This powerful person can bad-mouth them for job applications and potential future opportunities.

    So, it’s very difficult to curb these behaviors. However, there has been a lot of awareness raised, and universities and companies have put protections in place against them.

    Then there’s implicit bias, where a qualified woman is passed over for a promotion, or women are asked to take meeting notes but not the men. Or a woman leader gets a bad performance review because she doesn’t take no for an answer, is too blunt, or too pushy. All these are things that male leaders are actually lauded for.

    There is data on the barriers and challenges that women face and what universities and employers can do to mitigate them. These are the experiences that hurt women’s morale and upward mobility and, ultimately, make them leave the profession.

    One of the most important things for a woman to be successful in this profession is to have mentors and supporters. So it is important to make sure that women engineers are assigned mentors at every stage, from student to senior faculty or engineer and everything in between, to help them understand the challenges they face and how to deal with them, as well as to promote and support them.

    I also think having leaders in universities and companies recognize and articulate the importance of diversity helps set the tone from the top down and tends to mitigate some of the bias and implicit bias in people lower in the organization.

    I think the backlash against DEI is going to make it harder for leaders to articulate the value of diversity, and to put in place some of the best practices around ensuring that diverse people are considered for positions and reach their full potential.

    We have definitely taken a step backward in the past year on the understanding that diversity is about excellence and implementing best practices that we know work to mitigate the challenges that diverse people face. But that just means we need to redouble our efforts.

    Although this isn’t the best time to be optimistic about diversity in engineering, if we take the long view, I think that things are certainly better than they were 20 or 30 years ago. And I think 20 or 30 years from now they’ll be even better.

  • The Incredible Story Behind the First Transistor Radio
    by Allison Marsh on 30. September 2024. at 14:00



    Imagine if your boss called a meeting in May to announce that he’s committing 10 percent of the company’s revenue to the development of a brand-new mass-market consumer product, made with a not-yet-ready-for-mass-production component. Oh, and he wants it on store shelves in less than six months, in time for the holiday shopping season. Ambitious, yes. Kind of nuts, also yes.

    But that’s pretty much what Pat Haggerty, vice president of Texas Instruments, did in 1954. The result was the Regency TR-1, the world’s first commercial transistor radio, which debuted 70 years ago this month. The engineers delivered on Haggerty’s audacious goal, and I certainly hope they received a substantial year-end bonus.

    Why did Texas Instruments make the Regency TR-1 transistor radio?

    But how did Texas Instruments come to make a transistor radio in the first place? TI traces its roots to a company called Geophysical Service Inc. (GSI), which made seismic instrumentation for the oil industry as well as electronics for the military. In 1945, GSI hired Patrick E. Haggerty as the general manager of its laboratory and manufacturing division and its electronics work. By 1951, Haggerty’s division was significantly outpacing GSI’s geophysical division, and so the Dallas-based company reorganized as Texas Instruments to focus on electronics.

    Meanwhile, on 30 June 1948, Bell Labs announced John Bardeen and Walter Brattain’s game-changing invention of the transistor. No longer would electronics be dependent on large, hot vacuum tubes. The U.S. government chose not to classify the technology because of its potentially broad applications. In 1951, Bell Labs began licensing the transistor for US $25,000 through the Western Electric Co.; Haggerty bought a license for TI the following year.

    TI was still a small company, with not much in the way of R&D capacity. But Haggerty and the other founders wanted it to become a big and profitable company. And so they established research labs to focus on semiconductor materials and a project-engineering group to develop marketable products.

    The TR-1 was the first transistor radio, and it ignited a desire for portable gadgets that continues to this day. Bettmann/Getty Images

    Haggerty made a good investment when he hired Gordon Teal, a 22-year veteran of Bell Labs. Although Teal wasn’t part of the team that invented the germanium transistor, he recognized that it could be improved by using a single grown crystal, such as silicon. Haggerty was familiar with Teal’s work from a 1951 Bell Labs symposium on transistor technology. Teal happened to be homesick for his native Texas, so when TI advertised for a research director in the New York Times, he applied, and Haggerty offered him the job of assistant vice president instead. Teal started at TI on 1 January 1953.

    Fifteen months later, Teal gave Haggerty a demonstration of the first silicon transistor, and he presented his findings three and a half weeks later at the Institute of Radio Engineers’ National Conference on Airborne Electronics, in Dayton, Ohio. His innocuously titled paper, “Some Recent Developments in Silicon and Germanium Materials and Devices,” completely understated the magnitude of the announcement. The audience was astounded to hear that TI had not just one but three types of silicon transistors already in production, as Michael Riordan recounts in his excellent article “The Lost History of the Transistor” (IEEE Spectrum, October 2004).

    And fun fact: The TR-1 shown at top once belonged to Willis Adcock, a physical chemist hired by Teal to perfect TI’s silicon transistors as well as transistors for the TR-1. (The radio is now in the collections of the Smithsonian’s National Museum of American History.)

    The TR-1 became a product in less than six months

    This advancement in silicon put TI on the map as a major player in the transistor industry, but Haggerty was impatient. He wanted a transistorized commercial product now, even if that meant using germanium transistors. On 21 May 1954, Haggerty challenged a research group at TI to have a working prototype of a transistor radio by the following week; four days later, the team came through, with a breadboard containing eight transistors. Haggerty decided that was good enough to commit $2 million—just under 10 percent of TI’s revenue—to commercializing the radio.

    Of course, a working prototype is not the same as a mass-production product, and Haggerty knew TI needed a partner to help manufacture the radio. That partner turned out to be Industrial Development Engineering Associates (IDEA), a small company out of Indianapolis that specialized in antenna boosters and other electronic goods. They signed an agreement in June 1954 with the goal of announcing the new radio in October. TI would provide the components, and IDEA would manufacture the radio under its Regency brand.

    Germanium transistors at the time cost $10 to $15 apiece. With eight transistors, the radio was too expensive to be marketed at the desired price point of $50 (more than $580 today, which is coincidentally about what it’ll cost you to buy one in good condition on eBay). Vacuum-tube radios were selling for less, but TI and IDEA figured early adopters would pay that much to try out a new technology. Part of Haggerty’s strategy was to increase the volume of transistor production to eventually lower the per-transistor cost, which he managed to push down to about $2.50.
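
    A rough back-of-the-envelope calculation using the figures above shows the squeeze: at $10 to $15 apiece, the prototype’s eight germanium transistors alone would have cost $80 to $120, well above the $50 target retail price, whereas at Haggerty’s eventual price of about $2.50 per transistor, eight of them would come to roughly $20.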

    By the time TI met with IDEA, the breadboard was down to six transistors. It was IDEA’s challenge to figure out how to make the transistorized radio at a profit. According to an oral history with Richard Koch, IDEA’s chief engineer on the project, TI’s real goal was to make transistors, and the radio was simply the gimmick to get there. In fact, part of the TI–IDEA agreement was that any patents that came out of the project would be in the public domain so that TI was free to sell more transistors to other buyers.

    At the initial meeting, Koch, who had never seen a transistor before in real life, suggested substituting a germanium diode for the detector (which extracted the audio signal from the desired radio frequency), bringing the transistor count down to five. After thinking about the configuration a bit more, Koch eliminated another transistor by using a single transistor for the oscillator/mixer circuit.

    TI’s original prototype used eight germanium transistors, which engineers reduced to six and, ultimately, four for the production model. Division of Work and Industry/National Museum of American History/Smithsonian Institution

    The final design was four transistors set in a superheterodyne design, a type of receiver that combines two frequencies to produce an intermediate frequency that can be easily amplified, thereby boosting a weak signal and decreasing the required antenna size. The TR-1 had two transistors as intermediate-frequency amplifiers and one as an audio amplifier, plus the oscillator/mixer. Koch applied for a patent for the circuitry the following year.
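
    As a rough illustration of the superheterodyne principle, with generic numbers for the AM broadcast band and an assumed intermediate frequency of 455 kilohertz (a common choice in AM receivers; the TR-1’s actual IF is not specified here): the local oscillator tracks the desired station at a fixed offset, f_LO = f_RF + f_IF, so a station at f_RF = 1,000 kHz is mixed with f_LO = 1,455 kHz to produce the difference frequency f_IF = 1,455 − 1,000 = 455 kHz. Because every station gets shifted to that same fixed frequency, the two intermediate-frequency transistor stages can be optimized for a single frequency instead of the entire broadcast band.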

    The radio ran on a 22.5-volt battery, which offered a playing life of 20 to 30 hours and cost about $1.25. (Such batteries were also used in the external power and electronics pack for hearing aids, the only other consumer product to use transistors up until this point.)

    While IDEA’s team was working on the circuitry, they outsourced the design of the TR-1’s packaging to the Chicago firm of Painter, Teague, and Petertil. Their first design didn’t work because the components didn’t fit. Would their second design be better? As Koch later recalled, IDEA’s purchasing agent, Floyd Hayhurst, picked up the molding dies for the radio cases in Chicago and rushed them back to Indianapolis. He arrived at 2:00 in the morning, and the team got to work. Fortunately, everything fit this time. The plastic case was a little warped, but that was simple to fix: They slapped a wooden piece on each case as it came off the line so it wouldn’t twist as it cooled.

    A video accompanying this article shows how each radio was assembled by hand.

    On 18 October 1954, Texas Instruments announced the first commercial transistorized radio. It would be available in select outlets in New York and Los Angeles beginning 1 November, with wider distribution once production ramped up. The Regency TR-1 Transistor Pocket Radio initially came in black, gray, red, and ivory. They later added green and mahogany, as well as a run of pearlescents and translucents: lavender, pearl white, meridian blue, powder pink, and lime.

    The TR-1 got so-so reviews, faced competition

    Consumer Reports was not enthusiastic about the Regency TR-1. In its April 1955 review, it found that transmission of speech was “adequate” under good conditions, but music transmission was unsatisfactory under any conditions, especially on a noisy street or crowded beach. The magazine used words such as whistle, squeal, thin, tinny, and high-pitched to describe various sounds—not exactly high praise for a radio. It also found fault with the on/off switch. Its recommendation: Wait for further refinement before buying one.

    A newspaper ad touted the $49.95 TR-1 as “the first transistor radio ever built!” More than 100,000 TR-1s were sold in the radio’s first year, but it was never very profitable. Archive PL/Alamy

    The engineers at TI and IDEA didn’t necessarily disagree. They knew they were making a sound-quality trade-off by going with just four transistors. They also had quality-control problems with the transistors and other components, with initial failure rates up to 50 percent. Eventually, IDEA got the failure rate down to 12 to 15 percent.

    Unbeknownst to TI or IDEA, Raytheon was also working on a transistorized radio—a tabletop model rather than a pocket-size one. That gave them the space to use six transistors, which significantly upped the sound quality. Raytheon’s radio came out in February 1955. Priced at $79.95, it weighed 2 kilograms and ran on four D-cell batteries. That August, a small Japanese company called Tokyo Telecommunications Engineering Corp. released its first transistor radio, the TR-55. A few years later, the company changed its name to Sony and went on to dominate the world’s consumer radio market.

    The legacy of the Regency TR-1

    The Regency TR-1 was a success by many measures: It sold 100,000 in its first year, and it helped jump-start the transistor market. But the radio was never very profitable. Within a few years, both Texas Instruments and IDEA left the commercial AM radio business, TI to focus on semiconductors, and IDEA to concentrate on citizens band radios. Yet Pat Haggerty estimated that this little pocket radio pushed the market in transistorized consumer goods ahead by two years. It was a leap of faith that worked out, thanks to some hardworking engineers with a vision.

    Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

    An abridged version of this article appears in the October 2024 print issue as “The First Transistor Radio.”

    References


    In 1984, Michael Wolff conducted oral histories with IDEA’s lead engineer Richard Koch and purchasing agent Floyd Hayhurst. Wolff drew on them the following year for his IEEE Spectrum article “The Secret Six-Month Project,” which includes some great references at the end.

    Robert J. Simcoe wrote “The Revolution in Your Pocket” for the fall 2004 issue of Invention and Technology to commemorate the 50th anniversary of the Regency TR-1.

    As with many collectibles, the Regency TR-1 has its champions who have gathered together many primary sources. For example, Steve Reyer, a professor of electrical engineering at the Milwaukee School of Engineering before he passed away in 2018, organized his efforts in a webpage that’s now hosted by https://www.collectornet.net.

  • Disabling a Nuclear Weapon in Midflight
    by John R. Allen on 29. September 2024. at 13:00



    In 1956 Henry Kissinger speculated in Foreign Affairs about how the nuclear stalemate between the United States and the Soviet Union could force national security officials into a terrible dilemma. His thesis was that the United States risked sending a signal to potential aggressors that, faced with conflict, defense officials would have only two choices: settle for peace at any price, or retaliate with thermonuclear ruin. Not only had “victory in an all-out war become technically impossible,” Kissinger wrote, but in addition, it could “no longer be imposed at acceptable cost.”

    His conclusion was that decision-makers needed better options between these catastrophic extremes. And yet this gaping hole in nuclear response policy persists to this day. With Russia and China leading an alliance actively opposed to Western and like-minded nations, with war in Europe and the Middle East, and with tensions spiraling in Asia, it would not be histrionic to suggest that the future of the planet is at stake. It is time to find a way past this dead end.

    Seventy years ago only the Soviet Union and the United States possessed nuclear weapons. Today there are eight or nine countries with nuclear arsenals. Three of them—Russia, China, and North Korea—have publicly declared irreconcilable opposition to American-style liberal democracy.

    Their antagonism creates an urgent security challenge. During its war with Ukraine, now in its third year, Russian leadership has repeatedly threatened to use tactical nuclear weapons. Then, earlier this year, the Putin government blocked United Nations enforcement of North Korea’s compliance with international sanctions, enabling the Hermit Kingdom to more easily circumvent access restrictions on nuclear technology.

    Thousands of nuclear missiles can be in the air within minutes of a launch command; the consequence of an operational mistake or security miscalculation would be the obliteration of global society. Considered in this light, there is arguably no more urgent or morally necessary imperative than devising a means of neutralizing nuclear-equipped missiles midflight, should such a mistake occur.

    Today the delivery of a nuclear package is irreversible once the launch command has been given. It is impossible to recall or deactivate a land-based, sea-based, or cruise missile once it is on its way. This is a deliberate policy-and-design choice born of concern that electronic sabotage, for example in the form of hostile radio signals, could disable the weapons once they are in flight.

    And yet the possibility of a misunderstanding leading to nuclear retaliation remains all too real. For example, in 1983, Stanislav Petrov literally saved the world by overruling, based on his own judgment, a “high reliability” report from the Soviet Union’s Oko satellite surveillance network. He was later proven correct; the system had mistakenly interpreted sunlight reflections off high-altitude clouds as rocket flares indicating an American attack. Had he followed his training and allowed a Soviet retaliation to proceed, his superiors would have realized within minutes that they had made a horrific mistake in response to a technical glitch, not an American first strike.

    A Trident I submarine-launched ballistic missile was test fired from the submarine USS Mariano G. Vallejo, which was decommissioned in 1995. U.S. Navy

    So why, 40 years later, do we still lack a means of averting the unthinkable? In his book Command and Control, Eric Schlosser quoted an early commander in chief of the Strategic Air Command (SAC), General Thomas S. Power, who explained why there is still no way to revoke a nuclear order. Power said that the very existence of a recall or self-destruct mechanism “would create a fail-disable potential for knowledgeable agents to ‘dud’” the weapon. Schlosser wrote that “missiles being flight-tested usually had a command-destruct mechanism—explosives attached to the airframe that could be set off by remote control, destroying the missile if it flew off course. SAC refused to add that capability to operational missiles, out of concern that the Soviets might find a way to detonate them all in midflight.”

    In 1990, Sherman Frankel pointed out in Science and Global Security that “there already exists an agreement between the United States and the Soviet Union, usually referred to as the 1971 Accidents Agreement, that specifies what is to be done in the event of an accidental or unauthorized launch of a nuclear weapon.” The relevant section says that “in the event of an accident, the Party whose nuclear weapon is involved will immediately make every effort to take necessary measures to render harmless or destroy such weapon without its causing damage.” That’s a nice thought, but “in the ensuing decades, no capability to remotely divert or destroy a nuclear-armed missile...has been deployed by the U.S. government,” Frankel wrote. This is still true today.

    The inability to reverse a nuclear decision has persisted because two generations of officials and policymakers have grossly underestimated our ability to prevent adversaries from attacking the hardware and software of nuclear-equipped missiles before or after they are launched.

    The systems that deliver these warheads to their targets fall into three major categories, collectively known as the nuclear triad. It consists of submarine-launched ballistic missiles (SLBMs), ground-launched intercontinental ballistic missiles (ICBMs), and bombs and cruise missiles carried by strategic bombers. About half of the United States’ active arsenal is carried on the Navy’s 14 nuclear-powered ballistic-missile submarines, which are armed with Trident II missiles and are on constant patrol in the Atlantic and Pacific oceans. The ground-launched missiles are called Minuteman III, a 50-year-old system that the U.S. Air Force describes as the “cornerstone of the free world.” Approximately 400 ICBMs are siloed in ready-to-launch configurations across Montana, North Dakota, and Wyoming. Recently, under a vast program known as Sentinel, the U.S. Department of Defense embarked on a plan to replace the Minuteman IIIs at an estimated cost of US $140 billion.

    Each SLBM and ICBM can be equipped with multiple independently targetable reentry vehicles, or MIRVs. These are aerodynamic shells, each containing a nuclear warhead, that can steer themselves with great accuracy to targets established in advance of their launch. Trident II can carry as many as 12 MIRVs, although to stay within treaty constraints, the U.S. Navy limits the number to about four. Today the United States has about 1,770 warheads deployed in the sea, in the ground, or on strategic bombers.

    While civilian rockets and some military systems carry bidirectional communications for telemetry and guidance, strategic weapons are deliberately and completely isolated. But because our technological ability to secure a radio channel has improved immeasurably, a secure monodirectional link that would allow the U.S. president to abort a mission in case of accident or reconciliation is possible today.

    U.S. Air Force technicians work on a Minuteman III’s multiple independently targetable reentry vehicle system. The reentry vehicles are the black cones. U.S. Air Force

    ICBMs launched from the continental United States would take about 30 minutes to reach Russia; SLBMs would reach targets there in about half that time. During the 5-minute boost phase that lifts the rocket above the atmosphere, controllers could contact the airframe through ground-, sea-, or space-based (satellite) communication channels. After the engines shut down, the missile continues on a 20- to 25-minute (or shorter, for SLBMs) parabolic arc, governed entirely by Newtonian mechanics. During that time, both terrestrial and satellite communications are still possible. However, as the reentry vehicle containing the warhead enters the atmosphere, a plasma sheath forms around the vehicle. That plasma blocks reception of radio waves, so during the reentry and descent phases, which combined last about a minute, receipt of abort instructions would be possible only after the plasma sheath subsides. What that means in practical terms is that there would be a communications window of only a few seconds before detonation, and probably only with space-borne transmitters.

    There are several alternative approaches to the design and implementation of this safety mechanism. Satellite-navigation beacons such as GPS, for example, transmit L-band signals whose messages terrestrial and near-Earth receivers decode at about 50 bits per second, which is more than enough for this purpose. Satellite-communication systems, as another example, compensate for weather, terrain, and urban canyons with specialized K-band beamforming antennas and adaptive noise-resistant modulation techniques, like spread spectrum, with data rates measured in megabits per second (Mb/s).

    For either kind of signal, the received-carrier strength would be on the order of −100 dBm (that is, 100 decibels below 1 milliwatt); anything above that level, as it presumably would be at or near the missile’s apogee, would improve reliability without compromising security. The upshot is that the technology needed to implement this protection scheme—even for an abort command issued in the last few seconds of the missile’s trajectory—is available now. Today we understand how to reliably receive extremely low-powered satellite signals, reject interference and noise, and encode messages, using such techniques as symmetric cryptography, so that they are effectively indecipherable for this application.
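
    To put those figures in perspective, here is a back-of-the-envelope sketch, in Python, of how long a short abort message would take to deliver over each kind of link. The message size (a 256-bit command plus a 256-bit authentication tag) is an assumption made purely for illustration; the data rates and the seconds-long reentry window come from the description above.

    # Back-of-the-envelope timing for a hypothetical abort message.
    # The 512-bit size is an illustrative assumption, not a figure from this
    # article: say, a 256-bit command plus a 256-bit authentication tag.
    MESSAGE_BITS = 256 + 256

    channels = {
        "navigation-style L-band link (~50 b/s)": 50,
        "satcom-style K-band link (~1 Mb/s)": 1_000_000,
    }
    windows = {
        "midcourse coast (~20 minutes)": 20 * 60,          # seconds
        "final pre-detonation window (a few seconds)": 3,  # seconds
    }

    for channel, bits_per_second in channels.items():
        seconds = MESSAGE_BITS / bits_per_second
        print(f"{channel}: {seconds:.4f} s to deliver {MESSAGE_BITS} bits")
        for window, limit in windows.items():
            verdict = "fits" if seconds <= limit else "does NOT fit"
            print(f"  -> {verdict} within the {window}")

    Even the slow navigation-style channel delivers such a message comfortably during the midcourse coast; only the final seconds-long window would demand the faster satcom-style link.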

    The signals, codes, and disablement protocols can be dynamically programmed immediately prior to launch. Even if an adversary were able to see the digital design, they would not know which key to use or how to implement it. Given all this, we believe that the ability to disarm a launched warhead should be included as the Pentagon extends the controversial Sentinel modernization program.
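
    The article doesn’t spell out a protocol, but the minimal Python sketch below illustrates the kind of symmetric-cryptography scheme alluded to above: a one-time secret key is generated and loaded into the receiver immediately before launch, and a disable command is accepted only if it carries a valid authentication tag computed with that key. The function names, message layout, and key size are illustrative assumptions, and real-world hardening (replay protection, key handling, anti-jam signaling) is omitted.

    import hmac, hashlib, secrets

    def provision_launch_key():
        """Generate a one-time 256-bit key immediately before launch."""
        return secrets.token_bytes(32)

    def sign_command(key, command):
        """Ground side: append an HMAC-SHA-256 tag to the command."""
        tag = hmac.new(key, command, hashlib.sha256).digest()
        return command + tag

    def verify_command(key, message):
        """Receiver side: accept the command only if the tag verifies."""
        command, tag = message[:-32], message[-32:]
        expected = hmac.new(key, command, hashlib.sha256).digest()
        return command if hmac.compare_digest(tag, expected) else None

    key = provision_launch_key()                     # loaded before launch
    msg = sign_command(key, b"DISARM-PAYLOAD")       # sent over the one-way link
    assert verify_command(key, msg) == b"DISARM-PAYLOAD"
    assert verify_command(key, b"X" * len(msg)) is None   # forgeries are rejected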

    What exactly would happen to the missile if a deactivation message were sent? That depends on where the missile was in its trajectory. The message could instruct the rocket to self-destruct on ascent, redirect it into outer space, or disarm the payload before reentry or during descent.

    Of course, all of these scenarios presume that the microelectronics platform underpinning the missile and weapon is secure and has not been tampered with. According to the Government Accountability Office, “the primary domestic source of microelectronics for nuclear weapons components is the Microsystems Engineering, Sciences, and Applications (MESA) Complex at Sandia National Laboratories in New Mexico.” Thanks to Sandia and other laboratories, there are significant physical barriers to microelectronic tampering. These could be enhanced with recent design advances that promote semiconductor supply-chain security.

    Toward that end, Joe Costello, the founder and former CEO of the semiconductor software giant Cadence Design Systems and a Kaufman Award winner, told us that there are many security measures and layers of device protection that simply did not exist as recently as a decade ago. He said, “We have the opportunity, and the duty, to protect our national security infrastructure in ways that were inconceivable when nuclear fail-safe policy was being made. We know what to do, from design to manufacturing. But we’re stuck with century-old thinking and decades-old technology. This is a transcendent risk to our future.”

    Kissinger concluded his classic treatise by stating that “Our dilemma has been defined as the alternative of Armageddon or defeat without war. We can overcome the paralysis induced by such a prospect only by creating other alternatives both in our diplomacy and our military policy.” Indeed, the recall or deactivation of nuclear weapons post-launch, but before detonation, is imperative to the national security of the United States and the preservation of human life on the planet.

  • IEEE’s Let’s Make Light Competition Returns to Tackle Illumination Gap
    by Willie D. Jones on 28. September 2024. at 18:00



    In economically advantaged countries, it’s hard to imagine a time when electric lighting wasn’t available in nearly every home, business, and public facility. But according to the World Economic Forum, the sun remains the primary light source for more than 1 billion people worldwide.

    Known as light poverty, the lack of access to reliable, adequate artificial light is experienced by many of the world’s poorest people. They rely on unsafe, inefficient lighting sources such as candles and kerosene lamps to perform tasks such as studying, cooking, working, and doing household chores after dusk.

    Overcoming the stark contrast in living conditions is the focus of IEEE Smart Lighting’s Let’s Make Light competition.

    Open to anyone 18 or older, the contest seeks innovative lighting technologies that can be affordable, accessible, and sustainable for people now living in extreme poverty.

    The entry that best responds to the challenge—developing a lighting system that is reliable and grid-independent and can be locally manufactured and repaired—will be awarded a US $3,000 prize. The second prize is $2,000, and the third-place finisher receives $1,000.

    The deadline for submissions is 1 November.

    The contest’s origin

    The Let’s Make Light competition was born out of a presentation on global lighting issues, including light poverty, given to IEEE Life Fellow John Nelson, then chair of IEEE Smart Village, and IEEE Fellow Georges Zissis, former chair of the IEEE Future Directions Committee.

    Wanting to know more about light poverty, Nelson forwarded the presentation to Toby Cumberbatch, who has extensive experience in developing practical solutions for communities facing the issue. Cumberbatch, an IEEE senior member, is a professor emeritus of electrical engineering at the Cooper Union, in New York City. For years, he taught his first-year engineering students how to create technology to help underserved communities.

    “A winning design has to be usable by people who don’t even know what an on-off switch is.” —Toby Cumberbatch

    Cumberbatch’s candid response was that the ideas presented didn’t adequately address the needs of impoverished end users he and his students had been trying to help.

    That led Zissis to create the Let’s Make Light competition in hopes that it would ignite a spark in the larger IEEE community to develop technologies that would truly serve those who need it most. He appointed Cumberbatch as co-chair of the competition committee.

    Understanding the wealth gap

    Last year’s entries highlighted a significant gap in understanding the factors behind light poverty, Cumberbatch says. The factors include limited electrical grid access and the inability to afford all but the most rudimentary products. Cumberbatch says he and his students have even encountered communities with nonmonetary economies. Past entries have failed to address the core challenge of providing practical and user-friendly lighting solutions.

    Reflecting on some recent submissions, Cumberbatch noted a fundamental disconnect. “The entries included charging stations for electric vehicles and proposals to use lasers to light streets,” he says. “A winning design has to be usable by people who don’t even know what an on-off switch is.”

    To ensure this year’s contestants better address the problem, IEEE Future Directions released a video illustrating the realities of poverty and the essential qualities that a successful lighting solution must possess, such as being safe, clean, accessible, and affordable.

    “With the right resources,” the video’s narrator says, “people living in these remote communities will create new and better ways to work and live their lives.”

    For more details, visit the Let’s Make Light competition’s website.

  • Video Friday: ICRA Turns 40
    by Evan Ackerman on 27. September 2024. at 16:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    IROS 2024: 14–18 October 2024, ABU DHABI, UAE
    ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
    Cybathlon 2024: 25–27 October 2024, ZURICH
    Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

    Enjoy today’s videos!

    The interaction between humans and machines is gaining increasing importance due to the advancing degree of automation. This video showcases the development of robotic systems capable of recognizing and responding to human wishes.

    By Jana Jost, Sebastian Hoose, Nils Gramse, Benedikt Pschera, and Jan Emmerich from Fraunhofer IML

    [ Fraunhofer IML ]

    Humans are capable of continuously manipulating a wide variety of deformable objects into complex shapes, owing largely to our ability to reason about material properties as well as our ability to reason in the presence of geometric occlusion in the object’s state. To study the robotic systems and algorithms capable of deforming volumetric objects, we introduce a novel robotics task of continuously deforming clay on a pottery wheel, and we present a baseline approach for tackling such a task by learning from demonstration.

    By Adam Hung, Uksang Yoo, Jonathan Francis, Jean Oh, and Jeffrey Ichnowski from CMU Robotics Institute

    [ Carnegie Mellon University Robotics Institute ]

    Suction-based robotic grippers are common in industrial applications due to their simplicity and robustness, but [they] struggle with geometric complexity. Grippers that can handle varied surfaces as easily as traditional suction grippers would be more effective. Here we show how a fractal structure allows suction-based grippers to increase conformability and expand approach angle range.

    By Patrick O’Brien, Jakub F. Kowalewski, Chad C. Kessens, and Jeffrey Ian Lipton from Northeastern University Transformative Robotics Lab

    [ Northeastern University ]

    We introduce a newly developed robotic musician designed to play an acoustic guitar in a rich and expressive manner. Unlike previous robotic guitarists, our Expressive Robotic Guitarist (ERG) is designed to play a commercial acoustic guitar while controlling a wide dynamic range, millisecond-level note generation, and a variety of playing techniques such as strumming, picking, overtones, and hammer-ons.

    By Ning Yang, Amit Rogel, and Gil Weinberg from Georgia Tech

    [ Georgia Tech ]

    The iCub project was initiated in 2004 by Giorgio Metta, Giulio Sandini, and David Vernon to create a robotic platform for embodied cognition research. The main goals of the project were to design a humanoid robot, named iCub, to create a community by leveraging open-source licensing, and to implement several basic elements of artificial cognition and developmental robotics. More than 50 iCubs have been built and used worldwide for various research projects.

    [ Istituto Italiano di Tecnologia ]

    In our video, we present SCALER-B, a versatile multimodal climbing robot: a quadruped capable of standing up, bipedal locomotion, bipedal climbing, and pull-ups with its two-finger grippers.

    By Yusuke Tanaka, Alexander Schperberg, Alvin Zhu, and Dennis Hong from UCLA

    [ Robotics Mechanical Laboratory at UCLA ]

    This video explores Waseda University’s innovative journey in developing wind instrument-playing robots, from automated performance to interactive musical engagement. Through demonstrations of technical advancements and collaborative performances, the video illustrates how Waseda University is pushing the boundaries of robotics, blending technology and artistry to create interactive robotic musicians.

    By Jia-Yeu Lin and Atsuo Takanishi from Waseda University

    [ Waseda University ]

    This video presents a brief history of robot painting projects with the intention of educating viewers about the specific, core robotics challenges that people developing robot painters face. We focus on four robotics challenges: controls, the simulation-to-reality gap, generative intelligence, and human-robot interaction. We show how various projects tackle these challenges with quotes from experts in the field.

    By Peter Schaldenbrand, Gerry Chen, Vihaan Misra, Lorie Chen, Ken Goldberg, and Jean Oh from CMU

    [ Carnegie Mellon University ]

    The wheeled humanoid neoDavid is one of the most complex humanoid robots worldwide. All finger joints can be controlled individually, giving the system exceptional dexterity. neoDavid’s variable stiffness actuators (VSAs) enable very high performance in tasks involving fast collisions, highly energetic vibrations, or explosive motions, such as hammering, using power tools (a drill-hammer, for example), or throwing a ball.

    [ DLR Institute of Robotics and Mechatronics ]

    This video introduces LG Electronics’ journey to commercialize robot navigation technology in areas such as homes, public spaces, and factories, and discusses the technical challenges that remain. With its vision of a “Zero Labor Home,” LG’s next smart-home agent robot aims to bring the next innovation to our lives through advances in spatial AI, that is, the combination of robot navigation and AI technology.

    By Hyoung-Rock Kim, DongKi Noh and Seung-Min Baek from LG

    [ LG ]

    HILARE stands for Heuristiques Intégrées aux Logiciels et aux Automatismes dans un Robot Evolutif. The HILARE project started in late 1977 at LAAS (at the time, the Laboratoire d’Automatique et d’Analyse des Systèmes) under the leadership of Georges Giralt. The video features the HILARE robot and provides explanatory commentary.

    By Aurelie Clodic, Raja Chatila, Marc Vaisset, Matthieu Herrb, Stephy Le Foll, Jerome Lamy, and Simon Lacroix from LAAS/CNRS (Note that the video narration is in French with English subtitles.)

    [ LAAS/CNRS ]

    Humanoid legged locomotion is versatile, but typically used for reaching nearby targets. Employing a personal transporter (PT) designed for humans, such as a Segway, offers an alternative for humanoids navigating the real world, enabling them to switch from walking to wheeled locomotion for covering larger distances, similar to humans. In this work, we develop control strategies that allow humanoids to operate PTs while maintaining balance.

    By Vidyasagar Rajendran, William Thibault, Francisco Javier Andrade Chavez, and Katja Mombaur from University of Waterloo

    [ University of Waterloo ]

    Motion planning, and in particular motion planning in tight settings, is a key problem in robotics and manufacturing. One infamous example of a difficult, tight motion planning problem is the Alpha Puzzle. We present a first real-world demonstration of an Alpha Puzzle solution with a Universal Robots UR5e, using a solution path generated from our previous work.

    By Dror Livnat, Yuval Lavi, Michael M. Bilevich, Tomer Buber, and Dan Halperin from Tel Aviv University

    [ Tel Aviv University ]

    Interaction between humans and their environment has been a key factor in the evolution and the expansion of intelligent species. Here we present methods to design and build an artificial environment through interactive robotic surfaces.

    By Fabio Zuliani, Neil Chennoufi, Alihan Bakir, Francesco Bruno, and Jamie Paik from EPFL

    [ EPFL Reconfigurable Robotics Lab ]

    At the intersection of swarm robotics and architecture, we created the Swarm Garden, a novel responsive system to be deployed on façades. The Swarm Garden is an adaptive shading system made of a swarm of robotic modules that respond to humans and the environment while creating beautiful spaces. In this video, we showcase 35 robotic modules that we designed and built for The Swarm Garden.

    By Merihan Alhafnawi, Lucia Stein-Montalvo, Jad Bendarkawi, Yenet Tafesse, Vicky Chow, Sigrid Adriaenssens, and Radhika Nagpal from Princeton University

    [ Princeton University ]

    My team at the University of Southern Denmark has been pioneering the field of self-recharging drones since 2017. These drones are equipped with a robust perception and navigation system, enabling them to identify powerlines and approach them for landing. A unique feature of our drones is their self-recharging capability. They accomplish this by landing on powerlines and utilizing a passively actuated gripping mechanism to secure themselves to the powerline cable.

    By Emad Ebeid from the University of Southern Denmark

    [ University of Southern Denmark (SDU) ]

    This paper explores the design and implementation of Furnituroids, shape-changing mobile furniture robots that embrace ambiguity to offer multiple and dynamic affordances for both individual and social behaviors.

    By Yasuto Nakanishi from Keio University

    [ Keio University ]

  • Remembering Illustrator Harry Campbell
    by Mark Montgomery on 27. September 2024. at 15:25



    Harry Campbell, a renowned illustrator and longtime IEEE Spectrum contributor, recently passed away after a valiant battle with cancer. Harry’s innovative style and unique approach toward technology topics were a perfect fit for Spectrum, and his contributions spanned two decades and five redesigns.

    Harry also created compelling illustrations for The New York Times, Scientific American, and Time, and he was recognized for his excellent work by the Society of Illustrators and the Society of Publication Designers.

    A common thread running through Harry’s work for Spectrum was his exploded-view perspective that drew from engineering drafting vernacular. He applied this technique to all of his illustrations for us, whether the topic was advanced technologies or the disquieting effects of technology on society. Harry preferred to develop his own concepts, graciously telling me at times “No offense, but…” when I offered up an idea for a story. The end result was always a unique and beautiful illustration appreciated by our readers and the Spectrum staff. His images reminded us that technology isn’t just an abstraction—it is also deeply human.

    Below is a sample of Harry’s work for IEEE Spectrum over the years.


    An illustration of a salt shaker with chips coming out.

    From “The Trouble With Multicore,” IEEE Spectrum, July 2019.


    An illustration of a person riding a phone like a surfboard.

    From “Engineering in the Twilight of Moore’s Law,” IEEE Spectrum, March 2018.


    An illustration of computer chips designed as flowers.

    From “The Chain Reaction That Propels Civilization,” IEEE Spectrum, May 2022.


    An illustration of a hand dropping colored blocks into a shape.

    From “Changing the Transistor Channel,” IEEE Spectrum, July 2013.


    An illustration of a coin with a "B" on it being passed through a slot.

    From “Bitcoin: The Cryptoanarchists’ Answer to Cash,” IEEE Spectrum, June 2012.


    An illustration of a phone with an old speaker coming out of the screen.

    From “The Screen Is the Speaker,” IEEE Spectrum, March 2024.


    An illustration of a cube with electronic elements inside being held by a pair of hands.

    From “Antifragile Systems,” IEEE Spectrum, March 2013.


    An illustration of an exploded view of a laptop screen.

    From IEEE Spectrum, March 2024.


    An illustration of a skull made up of bright loopy lines and numbers.

    From “The Creepy New Digital Afterlife Industry,” IEEE Spectrum, November 2023.

  • Ansys SimAI Software Predicts Fully Transient Vehicle Crash Outcomes
    by Ansys on 27. September 2024. at 12:31



    The Ansys SimAI™ cloud-enabled generative artificial intelligence (AI) platform combines the predictive accuracy of Ansys simulation with the speed of generative AI. Because of the software’s versatile underlying neural networks, it can extend to many types of simulation, including structural applications.
    This white paper shows how the SimAI cloud-based software applies to highly nonlinear, transient structural simulations, such as automobile crashes, and includes:

    • Vehicle kinematics and deformation
    • Forces acting upon the vehicle
    • How it interacts with its environment
    • How understanding the changing and rapid sequence of events helps predict outcomes

    These simulations can reduce the potential for occupant injuries and the severity of vehicle damage, and they help engineers understand the crash’s overall dynamics. Ultimately, this leads to safer automotive design.

    Download this free whitepaper now!

  • IEEE Medal of Honor Prize Increased to $2 Million
    by IEEE on 26. September 2024. at 20:00



    For more than a century, IEEE has awarded its Medal of Honor to recognize the extraordinary work of individuals whose technical achievements have had world-changing impact. To better demonstrate how these technology, engineering, and science innovators have changed our society globally, IEEE announced that starting next year, the IEEE Medal of Honor monetary prize will be increased to US $2 million. The new amount places the award among the largest such monetary prizes worldwide and is a substantial jump from the previous prize of $50,000.

    In addition, for the first time, the IEEE Medal of Honor laureate will be announced at a dedicated press conference, to be held in February in New York City. The organization’s highest award, as well as additional high-profile awards, will be presented to recipients at next year’s IEEE Honors Ceremony, which will for the first time be held in Tokyo, in April.

    The words “IEEE Medal of Honor,” with an 8-point star. IEEE

    “By significantly increasing the IEEE Medal of Honor monetary prize to $2 million, we are elevating our recognition of extraordinary individuals and the work they have done to benefit humanity to its rightful place as one of the world’s most prestigious technology-focused prizes and awards,” said 2024 IEEE President and CEO Thomas M. Coughlin.

    The IEEE Medal of Honor is bestowed for remarkable, society-changing achievements such as the creation of the Internet; the development of life-saving medical device technologies including the CAT scan, MRI, ultrasound, and pacemaker; and the development of transistors, semiconductors, and other innovations at the heart of modern electronics and computing.

    “IEEE Medal of Honor laureates dare to envision the new and revolutionary, and make possible what was previously considered impossible,” said K. J. Ray Liu, chair of the Ad Hoc Committee on Raising the Prestige of IEEE Awards and 2022 IEEE President and CEO. “Their seismic accomplishments and positive impact on our world inspire today’s technologists, who stand on their shoulders to continue advancing technology to make the world a better place.”

    “By significantly increasing the IEEE Medal of Honor monetary prize to $2 million, we are elevating our recognition of extraordinary individuals and the work they have done to benefit humanity to its rightful place as one of the world’s most prestigious technology-focused prizes and awards.” —2024 IEEE President and CEO Thomas M. Coughlin

    The IEEE Medal of Honor may be awarded to an individual or team of up to three who have made exceptional contributions or had extraordinary careers in technology, engineering, and science. The criteria for the award’s consideration include the significance and originality of the achievement and its impact on society and the profession, as well as relevant publications and patents tied to the achievement.

    Past recipients include technology pioneers and IEEE Life Fellows Robert E. Kahn, Vinton G. “Vint” Cerf, Asad M. Madni, and Mildred Dresselhaus.

    As IEEE continues to honor transformative achievements in technology, engineering, and science, it reinforces its commitment to recognizing innovation that shapes our world. The increased Medal of Honor prize also reflects the unwavering mission of IEEE, a public charity, of advancing technology for humanity.

    This book covers the past 100 years of the IEEE Medal of Honor.

    Register for the press conference live stream to learn who the 2025 IEEE Medal of Honor recipient will be.

    Read the full news release here.

  • Build a No-Fuss Particle Detector
    by Tim Kuhlbusch on 26. September 2024. at 13:00



    There’s nothing like particle physics to make you aware that we exist in an endless three-dimensional pinball game. All around us, subatomic particles arc, collide, and barrel along with merry abandon. Some originate within our own bodies, others come from the far ends of the cosmos. But detecting this invisible tumult requires equipment, which can be costly. I wanted to create a way to detect at least some of the pinballs for less than US $15.

    My main reason was to have a new teaching tool. I’m doing a Ph.D. in the Physics Institute III B at RWTH Aachen University, and I realized such a detector would help satisfy my teaching obligations while tapping into my interests in physics, electronics, and software design.

    Fortunately, I didn’t have to start from scratch. Oliver Keller at CERN’s S’Cool Lab has created a DIY particle detector that relies on inexpensive silicon photodiodes to detect alpha and beta particles (helium nuclei and free electrons whizzing through the air, respectively) and estimate their energy. Normally, photodiodes are used to respond to light, such as the signals used in fiber-optic communications. But a charged particle striking the photodiode will also produce a pulse of current, with higher-energy particles generating bigger pulses. In practice, given typical conditions and the sensitivity of the photodiodes, this primarily means detecting beta particles.
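
    As a rough illustration of that principle (this is not code from Keller’s detector or from the project described below), the following Python sketch scans a digitized waveform for pulses that rise above a noise threshold and records each pulse’s peak height, which stands in for the particle’s energy. The threshold and the synthetic data are made up.

    import numpy as np

    def find_pulses(samples, threshold):
        """Return the peak height of every excursion above the threshold."""
        heights, in_pulse, peak = [], False, 0.0
        for value in samples:
            if value > threshold:
                in_pulse, peak = True, max(peak, value)
            elif in_pulse:                      # pulse just ended; record it
                heights.append(peak)
                in_pulse, peak = False, 0.0
        return heights

    rng = np.random.default_rng(0)
    waveform = rng.normal(0.0, 1.0, 10_000)     # baseline noise
    waveform[2_000:2_005] += 40                 # a fake high-energy pulse
    waveform[7_000:7_003] += 15                 # a fake lower-energy pulse
    print(find_pulses(waveform, threshold=5.0)) # two heights, larger one first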

    In Keller’s design, these pulses are amplified, converted to voltages, and transmitted via a cable from an audio jack on the detector to the microphone input of a laptop or smartphone. The data is then digitized and recorded.

    A colleague of mine had built the CERN device, but I realized there was room for improvement. Passing the analog pulse signal through the length of an audio cable left the detector prone to noise from various sources. In addition, the design requires its own power source, in the form of a 9-volt battery. Apart from the hassle of having a separate battery, this also means that if you miswire the device, you’ll send an unacceptable voltage into an expensive smartphone!

    Reducing Amplification Noise

    I decided I would solve these problems by bringing the digitization to the photodiodes. The closer I could get it, the less noise I’d have to contend with. Noise-resistant digitized data could then be sent via a USB connection, which could also supply power to the detector.

    The BetaBoard uses three types of printed circuit board: The cover [top] and a body board [middle] have no circuit traces and are used to create a light-tight and electromagnetically shielded enclosure; the bottom board hosts a photodiode detector array and an RP2040 microcontroller. James Provost

    Of course, to digitize the signal from the photodiodes, I would need some onboard processing power. I settled on the RP2040 microcontroller. Although it does have some known problems with its analog-to-digital converter, you can work around them, and the chip has more than enough compute power as well as a built-in USB controller.

    In my first design of my so-called BetaBoard, I created a single printed circuit board populated with the RP2040, an array of photodiodes, and a set of low-noise amplifier integrated circuits. I wrapped the board in aluminum tape to prevent light from triggering the photodiodes. The results proved the concept, but while I’d eliminated the noise from the audio cable, I discovered I’d introduced a new source of noise: the USB power supply.

    Higher-frequency noise—over 1 kilohertz—from the USB connection comes from data and polling signals flowing over the interface. Lower-frequency noise originates in the AC power supply for the host computer—50 hertz here in Europe. I filtered out the high-frequency noise by inserting a low-pass RC filter before the amplifiers’ supply voltage pins and liberally using capacitors in the rest of the circuitry. Filtering out the 50-Hz noise in hardware is tricky, so my solution was to just integrate a digital high-pass filter into the software I wrote for the RP2040. (Hardware and software files are available from my Github repository.)
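
    The article doesn’t give the filter’s design, but a first-order digital high-pass filter is a common way to strip mains hum in software. Below is a minimal Python sketch of the idea; the sample rate and cutoff frequency are assumptions, not values taken from the BetaBoard firmware.

    import math
    import numpy as np

    FS = 10_000.0    # sample rate in hertz (assumed)
    FC = 200.0       # cutoff in hertz, well above 50 Hz but below the pulses (assumed)

    dt = 1.0 / FS
    rc = 1.0 / (2.0 * math.pi * FC)
    alpha = rc / (rc + dt)

    def highpass(samples):
        """First-order IIR high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1])."""
        out, y_prev, x_prev = [], 0.0, 0.0
        for x in samples:
            y = alpha * (y_prev + x - x_prev)
            out.append(y)
            y_prev, x_prev = y, x
        return out

    t = np.arange(0.0, 0.1, dt)
    signal = 100.0 * np.sin(2.0 * math.pi * 50.0 * t)   # 50-Hz hum
    signal[300] += 500.0                                # a fast pulse riding on it
    filtered = highpass(signal)
    # The slow hum is attenuated while the fast pulse passes through largely intact.
    print(round(max(abs(v) for v in filtered[:299]), 1), round(filtered[300], 1))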

    The software also provides a serial interface to the outside world: A human or a program can send commands via the USB cable and get data back. I wrote a Python script to record data and generate visualizations.
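
    The actual command set and data format live in that repository; purely as a generic illustration of this kind of serial workflow, the sketch below uses the pyserial and matplotlib libraries to request a run of readings and plot them as a histogram. The port name, baud rate, commands, and one-value-per-line format are placeholders, not the BetaBoard’s real protocol.

    import serial                    # pyserial
    import matplotlib.pyplot as plt

    PORT = "/dev/ttyACM0"            # placeholder device path for the USB serial port

    pulse_heights = []
    with serial.Serial(PORT, 115200, timeout=1) as dev:
        dev.write(b"START\n")                        # hypothetical "begin streaming" command
        for _ in range(1000):                        # record up to 1,000 events
            line = dev.readline().decode().strip()
            if line:
                pulse_heights.append(float(line))    # assumed: one pulse height per line
        dev.write(b"STOP\n")                         # hypothetical "stop streaming" command

    # A histogram of pulse heights approximates the recorded energy spectrum.
    plt.hist(pulse_heights, bins=50)
    plt.xlabel("Pulse height (ADC counts)")
    plt.ylabel("Counts")
    plt.show()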

    Another improvement I made to my initial design was to eliminate the need to wrap the board in aluminum tape (or place it in a container, as in Keller’s original version).

    To do that, I designed two other types of PCB with the same external dimensions as the original board, but without any circuitry. The first type has two large cutouts: an open area over the photodiode array and amplifiers, and another area over the RP2040 and its supporting circuitry. The photodiode cutout is surrounded by a broad metal fill on the back and front of the PCB, with the fills connected by vias. By stacking two of this type of PCB on the circuit board containing the components, I created an enclosure that provides shielding against electromagnetic interference.

    A photodiode has a junction between positively and negatively doped regions, with a neutral depletion layer forming in between. Incoming light or charged particles [red line] create charge carriers in the depletion region. This produces a spike in current between the doped regions. The height of the spike is proportional to the energy of the particle. James Provost

    The second type of PCB acts as a cover for the stack, with a smaller cutout over the photodiode array, over which I placed some black tape—enough to block light but still allow beta particles to reach the photodiodes.

    The result is a robust detector, albeit not the most sensitive in the world. I estimate that where a research-grade detector would register 100 counts per second from a given beta emitter, I’m getting about 10. But you can do meaningful measurements with it. My next step is to give it the ability to detect alpha particles as well as beta particles, as Keller’s version can do. I could do this now by modifying a $10 photodiode, but I’m experimenting with ways to use the cheaper photodiodes used in the rest of the design. I’m also working on the documentation so that it can be used in classroom settings that don’t have the luxury of having the detector designer present!

  • Detachable Robotic Hand Crawls Around on Finger-Legs
    by Evan Ackerman on 26. September 2024. at 12:00



    When we think of grasping robots, we think of manipulators of some sort on the ends of arms of some sort. Because of course we do—that’s how (most of us) are built, and that’s the mindset with which we have consequently optimized the world around us. But one of the great things about robots is that they don’t have to be constrained by our constraints, and at ICRA@40 in Rotterdam this week, we saw a novel new Thing: a robotic hand that can detach from its arm and then crawl around to grasp objects that would be otherwise out of reach, designed by roboticists from EPFL in Switzerland.

    Fundamentally, robot hands and crawling robots share a lot of similarities, including a body along with some wiggly bits that stick out and do stuff. But most robotic hands are designed to grasp rather than crawl, and as far as I’m aware, no robotic hands have been designed to do both of those things at the same time. Since both capabilities are important, you don’t necessarily want to stick with a traditional grasping-focused hand design. The researchers employed a genetic algorithm and simulation to test a bunch of different configurations in order to optimize for the ability to hold things and to move.
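
    Their paper describes the actual design encoding and objectives; purely to illustrate the shape of such a search, here is a toy genetic algorithm in Python that evolves a vector of hand-design parameters against a made-up fitness function with a “grasp” term and a “crawl” term. Every number and both scoring terms are invented; the real optimization evaluates candidate designs in physics simulation.

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 8, 40, 100   # all sizes are arbitrary

    def fitness(genome):
        """Made-up score rewarding both abilities; stand-in for simulation results."""
        grasp = -sum((g - 0.7) ** 2 for g in genome)   # pretend grasp quality
        crawl = -sum((g - 0.3) ** 2 for g in genome)   # pretend crawling ability
        return grasp + crawl

    def mutate(genome, rate=0.1):
        return [min(1.0, max(0.0, g + random.gauss(0.0, rate))) for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 4]          # keep the best quarter
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print("best design parameters:", [round(g, 2) for g in best])

    In this toy version, every parameter settles toward a compromise between the two targets, which mirrors the grasp-versus-crawl trade-off the real search has to balance.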

    You’ll notice that the fingers bend backwards as well as forwards, which effectively doubles the ways in which the hand (or, “Handcrawler”) can grasp objects. And it’s a little bit hard to tell from the video, but the Handcrawler attaches to the wrist using magnets for alignment along with a screw that extends to lock the hand into place.

    “Although you see it in scary movies, I think we’re the first to introduce this idea to robotics.” —Xiao Gao, EPFL

    The whole system is controlled manually in the video, but lead author Xiao Gao tells us that they already have an autonomous version (with external localization) working in the lab. In fact, they’ve managed to run an entire grasping sequence autonomously, with the Handcrawler detaching from the arm, crawling to a location the arm can’t reach, picking up an object, and then returning and reattaching itself to the arm again.

    Beyond Manual Dexterity: Designing a Multi-fingered Robotic Hand for Grasping and Crawling, by Xiao Gao, Kunpeng Yao, Kai Junge, Josie Hughes, and Aude Billard from EPFL and MIT, was presented at ICRA@40 this week in Rotterdam.

  • Forums, Competitions, Challenges: Inspiring Creativity in Robotics
    by Khalifa University on 25. September 2024. at 13:16



    This is a sponsored article brought to you by Khalifa University of Science and Technology.

    A total of eight intense competitions to inspire creativity and innovation along with 13 forums dedicated to diverse segments of robotics and artificial intelligence will be part of the 36th edition of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) in Abu Dhabi.

    Logo for IROS 2024 robotics conference, featuring a line drawing of electrical devices and the words IROS 24 and Abu Dhabi.

    These competitions at the Middle East and North Africa (MENA) region’s first-ever global conference and exhibition, from 14-18 October 2024 at the Abu Dhabi National Exhibition Center (ADNEC), will highlight some of the key aspects of robotics. These include the physical or athletic intelligence of robots, remote robot navigation, robot manipulation, underwater robotics, perception and sensing, as well as challenges for wildlife preservation.

    This edition of IROS is one of the largest of its kind globally, with active participation across all levels: 5,740 authors, 16 keynote speakers, 46 workshops, 11 tutorials, 28 exhibitors, and 12 startups. The forums at IROS will explore the rapidly evolving role of robotics in many industry sectors as well as in policy-making and regulatory areas. Several leading corporate majors and industry professionals from across the globe are gathering for IROS 2024, which is themed “Robotics for Sustainable Development.”

    “The intense robotics competitions will inspire creativity, while the products on display as well as keynotes will pave the way for more community-relevant solutions.” —Jorge Dias, IROS 2024 General Chair

    Dr. Jorge Dias, IROS 2024 General Chair, said: “Such a large gathering of scientists, researchers, industry leaders and government stakeholders in Abu Dhabi for IROS 2024 also demonstrates the role of UAE in pioneering new technologies and in providing an international platform for knowledge exchange and sharing of expertise. The intense robotics competitions will inspire creativity, while the products on display as well as keynotes will pave the way for more community-relevant solutions.”

    The competitions are:

    In addition to these competitions, the Falcon Monitoring Challenge (FMC) will focus on advancing the field of wildlife tracking and conservation through the development of sophisticated, noninvasive monitoring systems.

    A photo of several people and a man on a laptop, with a drone in the foreground. Khalifa University

    IROS 2024 will also include three keynote talks on ‘Robotic Competitions’ that will be moderated by Professor Lakmal Seneviratne, Director, Center for Autonomous Robotic Systems (KU-CARS), Khalifa University. The keynotes will be delivered by Professor Pedro Lima, Institute for Systems and Robotics, Instituto Superior Técnico, University of Lisbon, Portugal; Dr. Timothy Chung, General Manager, Autonomy and Robotics, Microsoft, US; and Dr. Ubbo Visser, President of the RoboCup Federation, Director of Graduate Studies, and Associate Professor of Computer Science, University of Miami, US.

    The forums at IROS 2024 will include:

    Other forums include:

    One of the largest and most important robotics research conferences in the world, IROS 2024 provides a platform for the international robotics community to exchange knowledge and ideas about the latest advances in intelligent robots and smart machines. A total of 3,344 paper submissions, representing 60 countries, have been received from researchers and scientists across the world. China tops the list with more than 1,000 papers, followed by the US with 777, Germany with 302, Japan with 253, and the UK and South Korea with 173 each. The UAE remains top in the Arab region with 68 papers.

    One of the largest and most important robotics research conferences in the world, IROS 2024 provides a platform for the international robotics community to exchange knowledge and ideas.

    For eight consecutive years since 2017, Abu Dhabi has remained first on the world’s safest cities list, according to online database Numbeo, which assessed 329 global cities for the 2024 listing. This reflects the emirate’s ongoing efforts to ensure a good quality of life for citizens and residents. With a multicultural community, Abu Dhabi is home to people from more than 200 nationalities, and draws a large number of tourists to some of the top art galleries in the city such as Louvre Abu Dhabi and the Guggenheim Abu Dhabi, as well as other destinations such as Ferrari World Abu Dhabi and Warner Bros. World™ Abu Dhabi.

    Because of its listing as one of the safest cities, Abu Dhabi continues to host several international conferences and exhibitions. Abu Dhabi is set to host the UNCTAD World Investment Forum, the 13th World Trade Organization (WTO) Ministerial Conference (MC13), the 12th World Environment Education Congress in 2024, and the IUCN World Conservation Congress in 2025.

    IROS 2024 is sponsored by IEEE Robotics and Automation Society, Abu Dhabi Convention and Exhibition Bureau, the Robotics Society of Japan (RSJ), the Society of Instrument and Control Engineers (SICE), the New Technology Foundation, and the IEEE Industrial Electronics Society (IES).

    More information at https://iros2024-abudhabi.org/

  • IEEE’s Disaster Relief Program Adds to Its Mobile Response Fleet
    by Chris McManes on 24. September 2024. at 18:00



    The IEEE MOVE (Mobile Outreach using Volunteer Engagement) program was launched in 2016 to provide U.S. communities with power and communications capabilities in areas affected by widespread outages due to natural disasters. IEEE MOVE volunteers often collaborate with the American Red Cross.

    During the past eight years, the initiative has expanded from one truck based in North Carolina to two, with the second located in Texas. In July IEEE MOVE added a third vehicle, MOVE-3, a van based in San Diego.

    IEEE MOVE introduced the new vehicle on 14 August during a ceremony in San Diego. IEEE leaders demonstrated the vehicle’s modular technology and shared how the components can be transported by plane or helicopter if necessary.

    Making MOVE-3 modular

    The two other MOVE vehicles are equipped with satellite Internet service, 5G/LTE connectivity, and IP phone service. The trucks can charge up to 100 cellphone batteries simultaneously.

    All systems are self-contained, with power generation capability.

    “Volunteering is intellectually stimulating. It’s a good opportunity to use your technical knowledge, skills, and abilities.” —Tim Troske

    “MOVE-3 has the same technologies but in a modular format so they can be transported easily to remote locations. Unlike the other, larger vehicles, MOVE-3 is a smaller van, which can arrive at disaster sites more quickly,” says IEEE Senior Member Tim Troske, operations lead for the new vehicle. “MOVE-3 has a solar power station that is strong enough to charge two lithium-ion battery packs.”

    The vehicle’s flexibility allows the equipment to be deployed not only across California—which is susceptible to wildfires, landslides, and earthquakes—but also to Alaska, Hawaii, and other parts of the Western United States. Similar modular equipment is used by IEEE MOVE programs in Puerto Rico and India.

    The new MOVE-3 vehicle was introduced at a ceremony in San Diego. From left: Kathy Hayashi (Region 6 director), Tim Troske (MOVE West operations lead), Loretta Arellano (MOVE USA program director), Kathleen Kramer (IEEE president-elect), Tim Lee (IEEE-USA president-elect), Sean Mahoney (American Red Cross Southern California Region CEO), and Bob Birch (American Red Cross local DST manager). IEEE

    Become a volunteer

    When the vehicles are not deployed for disaster relief, volunteers take them to schools and science fairs to educate students and community members about ways technology can help people during natural disasters.

    IEEE MOVE is looking for more volunteers, says IEEE Senior Member Loretta Arellano, MOVE program director, who oversees its U.S. operations.

    “Volunteering is intellectually stimulating,” says Troske, who experienced his first emergency deployment in August 2022 after flash floods devastated eastern Kentucky. “It’s a good opportunity to use your technical knowledge, skills, and abilities. You’re at the point of your life where you’ve got all this built-up knowledge and skills. It’s nice to be able to still use them and give back to your community.”

    For more information on IEEE MOVE, visit the program’s website. To volunteer, fill out the program’s survey form.

    IEEE MOVE is sponsored by IEEE-USA and receives funding from donations to the IEEE Foundation.

  • What It Takes To Let People Play With the Past
    by Stephen Cass on 23. September 2024. at 14:00



    The Media Archaeology Lab is one of the largest public collections in the world of obsolete, yet functional, technology. Located on the University of Colorado Boulder campus, the MAL is where you can watch a magic lantern show, play Star Castle on a Vectrex games console, or check out the weather on an Atari 800 via Fujinet. IEEE Spectrum spoke to managing director Libi Rose Striegl about the MAL’s mission and her role in keeping all that obsolete tech functional, so that people of today can experience the media of the past.

    Libi Rose


    Libi Rose Striegl is the managing director for the Media Archaeology Lab at the University of Colorado Boulder.

    How is the MAL different from other collections of historical and vintage technology?

    Libi Rose: Our major difference is that we treat ourselves as a lab and an experimental space for hands-on use, as opposed to a museum-type collection. We’re very much focused on the humanistic side of computer use. We’re interested in unexpected juxtapositions of technologies and ways that we can get people of all ages and all backgrounds to use these things, in either the expected ways or in unexpected ways.

    What’s your role at the lab?

    Rose: I do all the day-to-day admin work, managing our volunteer group, working with professors on campus to do course integration. Doing off-site events, doing repair work myself or coordinating it. [Recording a new addition] myself or coordinating it. Coordinating donations. Social-media accounts. Kind of a whole crew of people’s worth of work in one job! My office is also the repair space.

    “We’re very much focused on the humanistic side of computer use.”

    What’s the hardest part about keeping old systems running?

    Rose: We don’t have a huge amount of trouble with old computer systems other than not having time. It’s other things that are hard to keep running. Our older things, our mechanical things, the information is gone. The people who did that work in the past have passed away. And so we’re kind of re-creating the wheel when we want to do something like repair a mechanical calculator, or figure out how to make a phonograph that stopped working start working again. For newer stuff, the hardest part of a lot of it is that the hardware itself exists, but maybe server-side infrastructure is [gone]. So older cellphones are very hard to work with, because while we can turn them on, we can’t do much else with them unless you start getting into building your own analog cell network, which we’ve talked about. Missing infrastructure is why we end up doing a lot of things. We run our little analog TV station in-house.

    An analog TV station?

    Rose: Yes, otherwise you can’t really see what broadcast TV would have looked like on those old analog televisions!

    How do visitors respond?

    Rose: It sort of depends on age and familiarity with things. Young kids are often brought in by their parents to be introduced to stuff. And my favorite reactions are from 7- and 8-year-olds who are like, “Oh, my God. I’m so sorry for you old people who had to do this.” College-age students have either their own nostalgia or sort of residual nostalgia from their parents or grandparents. They’re really interested in interacting with something that they saw on television or that their parents told them about. Older folks tend to jump right onto the nostalgia train. We get a lot of good conversation around that and where technology goes when it dies, what that all means.

    This article appears in the October 2024 print issue as “5 Questions for Libi Rose.”

  • Finally, A Flying Car(t)
    by Evan Ackerman on 21. September 2024. at 13:00



    Where’s your flying car? I’m sorry to say that I have no idea. But here’s something that is somewhat similar, in that it flies, transports things, and has “car” in the name: it’s a flying cart, called the Palletrone (pallet+drone), designed for human-robot interaction-based aerial cargo transportation.


    The way this thing works is fairly straightforward. The Palletrone will try to keep its roll and pitch at zero, to make sure that there’s a flat and stable platform for your preciouses, even if you don’t load those preciouses onto the drone evenly. Once loaded up, the drone relies on you to tell it where to go and what to do, using its IMU to respond to the slightest touch and translating those forces into control over the Palletrone’s horizontal, vertical, and yaw trajectories. This is particularly tricky to do, because the system has to be able to differentiate between the force exerted by cargo, and the force exerted by a human, since if the IMU senses a force moving the drone downward, it could be either. But professor Seung Jae Lee tells us that they developed “a simple but effective method to distinguish between them.”
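
    The paper’s controller is more sophisticated than this, but as a rough Python sketch of the force-to-motion mapping described above, an admittance law can turn the push a person applies into a velocity command while the roll and pitch setpoints stay at zero. The virtual mass, damping, time step, and cargo-weight estimate are illustrative assumptions, not values from the Palletrone paper.

    import numpy as np

    DT = 0.01           # control period in seconds (assumed)
    MASS = 2.0          # virtual mass in kilograms (assumed)
    DAMPING = 4.0       # virtual damping in newton-seconds per meter (assumed)

    velocity_cmd = np.zeros(3)                      # commanded x, y, z velocity
    cargo_weight = np.array([0.0, 0.0, -29.0])      # estimated steady load (~3 kg), in newtons

    def admittance_step(measured_force):
        """One step of M*dv/dt + D*v = F_human, with the cargo's weight subtracted."""
        global velocity_cmd
        human_force = measured_force - cargo_weight          # crude cargo/human separation
        accel = (human_force - DAMPING * velocity_cmd) / MASS
        velocity_cmd = velocity_cmd + accel * DT
        return velocity_cmd      # handed to the thrust-vectoring attitude/position loop

    # A gentle 1-newton push along x settles to a small forward velocity command.
    for _ in range(500):
        v = admittance_step(np.array([1.0, 0.0, -29.0]))
    print(np.round(v, 3))        # roughly [0.25, 0, 0] meters per second

    Separating the steady cargo load from the transient human push is exactly the discrimination problem the researchers say they had to solve, reduced here to a single subtraction.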

    Since the drone has to do all of this sensing and movement without pitching or rolling (since that would dump its cargo directly onto the floor) it’s equipped with internal propeller arms that can be rotated to vector thrust in any direction. We were curious about how having a bunch of unpredictable stuff sitting right above those rotors might affect the performance of the drone. But Seung Jae Lee says that the drone’s porous side structures allow for sufficient airflow and that even when the entire top of the drone is covered, thrust is only decreased by about 5 percent.

    The current incarnation of the Palletrone is not particularly smart, and you need to remain in control of it, although if you let it go it will do its best to remain stationary (until it runs out of batteries). The researchers describe the experience of using this thing as “akin to maneuvering a shopping cart,” although I would guess that it’s somewhat noisier. In the video, the Palletrone is loaded down with just under 3 kilograms of cargo, which is respectable enough for testing. The drone is obviously not powerful enough to haul your typical grocery bag up the stairs to your apartment. But, it’s a couple of steps in the right direction, at least.

    We also asked Seung Jae Lee about how he envisions the Palletrone being used, besides as just a logistics platform for either commercial or industrial use. “By attaching a camera to the platform, it could serve as a flying tripod or even act as a dolly, allowing for flexible camera movements and angles,” he says. “This would be particularly useful in environments where specialized filming equipment is difficult to procure.”

    And for those of you about to comment something along the lines of, “this can’t possibly have enough battery life to be real-world useful,” they’re already working to solve that, with a docking system that allows one Palletrone to change the battery of another in-flight:

    One Palletrone swaps out the battery of a second Palletrone. Seoul Tech

    “The Palletrone Cart: Human-Robot Interaction-Based Aerial Cargo Transportation,” by Geonwoo Park, Hyungeun Park, Wooyong Park, Dongjae Lee, Murim Kim, and Seung Jae Lee from Seoul National University of Science and Technology in Korea, is published in IEEE Robotics and Automation Letters.

  • Video Friday: Zipline Delivers
    by Evan Ackerman on 20. September 2024. at 15:30



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
    IROS 2024: 14–18 October 2024, ABU DHABI, UAE
    ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
    Cybathlon 2024: 25–27 October 2024, ZURICH

    Enjoy today’s videos!

    Zipline has (finally) posted some real live footage of its new Platform 2 drone, and while it’s just as weird looking as before, it seems to actually work really well.

    [ Zipline ]

    I appreciate Disney Research’s insistence on always eventually asking, “okay, but can we get this to work on a real robot in the real world?”

    [ Paper from ETH Zurich and Disney Research [PDF] ]

    In this video, we showcase our humanoid robot, Nadia, being remotely controlled for boxing training using a simple VR motion capture setup. A remote user takes charge of Nadia’s movements, demonstrating the power of our advanced teleoperation system. Watch as Nadia performs precise boxing moves, highlighting the potential for humanoid robots in dynamic, real-world tasks.

    [ IHMC ]

    Guide dogs are expensive to train and maintain—if available at all. Because of these limiting factors, relatively few blind people use them. Computer science assistant professor Donghyun Kim and Ph.D. candidate Hochul Hwang are hoping to change that with the help of UMass database analyst Gail Gunn and her guide dog, Brawny.

    [ University of Massachusetts, Amherst ]

    Thanks Julia!

    The current paradigm for motion planning generates solutions from scratch for every new problem, which consumes significant amounts of time and computational resources. Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, then distills it into a reactive generalist policy. We then combine this with lightweight optimization to obtain a safe path for real world deployment.

    [ Neural MP ]

    A nice mix of NAO and AI for embodied teaching.

    [ Aldebaran ]

    When retail and logistics giant Otto Group set out to strengthen its operational efficiency and safety, it turned to robotics and automation. The Otto Group has become the first company in Europe to deploy the mobile case handling robot Stretch, which unloads floor-loaded trailers and containers.

    [ Boston Dynamics ]

    From groceries to last-minute treats, Wing is here to make sure deliveries arrive quickly and safely. Our latest aircraft design features a larger, more standardized box and can carry a higher payload, changes that came directly from customer and partner feedback.

    [ Wing ]

    It’s the jacket that gets me.

    [ Devanthro ]

    In this video, we introduce Rotograb, a robotic hand that merges the dexterity of human hands with the strength and efficiency of industrial grippers. Rotograb features a new rotating thumb mechanism, allowing for precision in-hand manipulation and power grasps while being adaptable. The robotic hand was developed by students during “Real World Robotics”, a master course by the Soft Robotics Lab at ETH Zurich.

    [ ETH Zurich ]

    A small scene where Rémi, our distinguished professor, is teaching chess to the person remotely operating Reachy! The grippers allow for easy and precise handling of chess pieces, even the small ones! The robot shown in this video is the Beta version of Reachy 2, our new robot coming very soon!

    [ Pollen ]

    Enhancing the adaptability and versatility of unmanned micro aerial vehicles (MAVs) is crucial for expanding their application range. In this article, we present a bimodal reconfigurable robot capable of operating in both regular quadcopter flight mode and a unique revolving flight mode, which allows independent control of the vehicle’s position and roll-pitch attitude.

    [ City University Hong Kong ]

    The Parallel Continuum Manipulator (PACOMA) is an advanced robotic system designed to replace traditional robotic arms in space missions, such as exploration, in-orbit servicing, and docking. Its design emphasizes robustness against misalignments and impacts, high precision and payload capacity, and sufficient mechanical damping for stable, controlled movements.

    [ DFKI Robotics Innovation Center ]

    Even the FPV pros from Team BlackSheep do, very occasionally, crash.

    [ Team BlackSheep ]

    This is a one-hour uninterrupted video of a robot cleaning bathrooms in real time. I’m not sure if it’s practical, but I am sure that it’s impressive, honestly.

    [ Somatic ]

  • Startup Says It Can Make a 100x Faster CPU
    by Dina Genkina on 20. September 2024. at 14:00



    In an era of fast-evolving AI accelerators, general-purpose CPUs don’t get a lot of love. “If you look at the CPU generation by generation, you see incremental improvements,” says Timo Valtonen, CEO and co-founder of Finland-based Flow Computing.

    Valtonen’s goal is to put CPUs back in their rightful, ‘central’ role. In order to do that, he and his team are proposing a new paradigm. Instead of trying to speed up computation by putting 16 identical CPU cores into, say, a laptop, a manufacturer could put 4 standard CPU cores and 64 of Flow Computing’s so-called parallel processing unit (PPU) cores into the same footprint, and achieve up to 100 times better performance. Valtonen and his collaborators laid out their case at the IEEE Hot Chips conference in August.

    The PPU provides a speed-up in cases where the computing task is parallelizable, but a traditional CPU isn’t well equipped to take advantage of that parallelism, yet offloading to something like a GPU would be too costly.

    “Typically, we say, ‘okay, parallelization is only worthwhile if we have a large workload,’ because otherwise the overhead kills a lot of our gains,” says Jörg Keller, professor and chair of parallelism and VLSI at FernUniversität in Hagen, Germany, who is not affiliated with Flow Computing. “And this now changes towards smaller workloads, which means that there are more places in the code where you can apply this parallelization.”

    Computing tasks can roughly be broken up into two categories: sequential tasks, where each step depends on the outcome of a previous step, and parallel tasks, which can be done independently. Flow Computing CTO and co-founder Martti Forsell says a single architecture cannot be optimized for both types of tasks. So, the idea is to have separate units that are optimized for each type of task.

    “When we have a sequential workload as part of the code, then the CPU part will execute it. And when it comes to parallel parts, then the CPU will assign that part to PPU. Then we have the best of both worlds,” Forsell says.
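To make that division of labor concrete, here is a toy sketch (purely illustrative) of the pattern Forsell describes: the sequential, state-dependent part of a job stays on the CPU, while the independent per-element work is handed off to a pool of workers standing in for the PPU. Nothing here reflects Flow Computing’s actual hardware or compiler interface.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of the CPU/PPU division of labor described above.
# The worker pool is just a stand-in for Flow Computing's PPU cores;
# it does not reflect their actual hardware or software interface.

def sequential_prepare(raw):
    """Sequential part: each step depends on the previous one, so it stays on the CPU."""
    acc = 0
    prepared = []
    for x in raw:
        acc = (acc + x) % 97          # running state forces sequential execution
        prepared.append(x + acc)
    return prepared

def parallel_kernel(x):
    """Parallel part: independent per element, so it can be offloaded."""
    return x * x + 1

def run_pipeline(raw, ppu_workers=64):
    prepared = sequential_prepare(raw)                        # "CPU" section
    with ThreadPoolExecutor(max_workers=ppu_workers) as ppu:  # "PPU" section
        results = list(ppu.map(parallel_kernel, prepared))
    return sum(results)                                       # back to the CPU

print(run_pipeline(range(1000)))
```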

    According to Forsell, there are four main requirements for a computer architecture that’s optimized for parallelism: tolerating memory latency, which means finding ways to not just sit idle while the next piece of data is being loaded from memory; sufficient bandwidth for communication between so-called threads, chains of processor instructions that are running in parallel; efficient synchronization, which means making sure the parallel parts of the code execute in the correct order; and low-level parallelism, or the ability to use the multiple functional units that actually perform mathematical and logical operations simultaneously. For Flow Computing’s new approach, “we have redesigned, or started designing an architecture from scratch, from the beginning, for parallel computation,” Forsell says.

    Any CPU can be potentially upgraded

    To hide the latency of memory access, the PPU implements multi-threading: when each thread calls to memory, another thread can start running while the first thread waits for a response. To optimize bandwidth, the PPU is equipped with a flexible communication network, such that any functional unit can talk to any other one as needed, also allowing for low-level parallelism. To deal with synchronization delays, it utilizes a proprietary algorithm called wave synchronization that is claimed to be up to 10,000 times more efficient than traditional synchronization protocols.
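As a back-of-envelope illustration of the latency-hiding argument (generic multithreading reasoning, not Flow Computing’s figures): if each thread does some useful work and then stalls on memory, roughly one plus the ratio of memory latency to compute time is the number of threads needed to keep a functional unit busy.

```python
import math

# Back-of-envelope model of latency hiding by multithreading (generic reasoning,
# not Flow Computing's figures): if every thread does C cycles of useful work and
# then waits L cycles on memory, a functional unit stays busy as long as roughly
# 1 + L / C threads are available to switch between.

def threads_needed(memory_latency_cycles: int, compute_cycles: int) -> int:
    return 1 + math.ceil(memory_latency_cycles / compute_cycles)

# Example: 300-cycle memory latency, 10 cycles of work per access.
print(threads_needed(300, 10))   # -> 31 threads to keep one unit saturated
```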

    To demonstrate the power of the PPU, Forsell and his collaborators built a proof-of-concept FPGA implementation of their design. The team says that the FPGA performed identically to their simulator, demonstrating that the PPU is functioning as expected. The team performed several comparison studies between their PPU design and existing CPUs. “Up to 100x [improvement] was reached in our preliminary performance comparisons assuming that there would be a silicon implementation of a Flow PPU running at the same speed as one of the compared commercial processors and using our microarchitecture,” Forsell says.

    Now, the team is working on a compiler for their PPU, as well as looking for partners in the CPU production space. They are hoping that a large CPU manufacturer will be interested in their product, so that they could work on a co-design. Their PPU can be implemented with any instruction set architecture, so any CPU can be potentially upgraded.

    “Now is really the time for this technology to go to market,” says Keller. “Because now we have the necessity of energy efficient computing in mobile devices, and at the same time, we have the need for high computational performance.”

  • IEEE-USA’s New Guide Helps Companies Navigate AI Risks
    by Jeanna Matthews on 19. September 2024. at 18:00



    Organizations that develop or deploy artificial intelligence systems know that the use of AI entails a diverse array of risks including legal and regulatory consequences, potential reputational damage, and ethical issues such as bias and lack of transparency. They also know that with good governance, they can mitigate the risks and ensure that AI systems are developed and used responsibly. The objectives include ensuring that the systems are fair, transparent, accountable, and beneficial to society.

    Even organizations that are striving for responsible AI struggle to evaluate whether they are meeting their goals. That’s why the IEEE-USA AI Policy Committee published “A Flexible Maturity Model for AI Governance Based on the NIST AI Risk Management Framework,” which helps organizations assess and track their progress. The maturity model is based on guidance laid out in the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (RMF) and other NIST documents.

    Building on NIST’s work

    NIST’s RMF, a well-respected document on AI governance, describes best practices for AI risk management. But the framework does not provide specific guidance on how organizations might evolve toward the best practices it outlines, nor does it suggest how organizations can evaluate the extent to which they’re following the guidelines. Organizations therefore can struggle with questions about how to implement the framework. What’s more, external stakeholders including investors and consumers can find it challenging to use the document to assess the practices of an AI provider.

    The new IEEE-USA maturity model complements the RMF, enabling organizations to determine their stage along their responsible AI governance journey, track their progress, and create a road map for improvement. Maturity models are tools for measuring an organization’s degree of engagement or compliance with a technical standard and its ability to continuously improve in a particular discipline. Organizations have used the models since the 1980s to help them assess and develop complex capabilities.

    The framework’s activities are built around the RMF’s four pillars, which enable dialogue, understanding, and activities to manage AI risks and responsibility in developing trustworthy AI systems. The pillars are:

    • Map: The context is recognized, and risks relating to the context are identified.
    • Measure: Identified risks are assessed, analyzed, or tracked.
    • Manage: Risks are prioritized and acted upon based on a projected impact.
    • Govern: A culture of risk management is cultivated and present.

    A flexible questionnaire

    The foundation of the IEEE-USA maturity model is a flexible questionnaire based on the RMF. The questionnaire has a list of statements, each of which covers one or more of the recommended RMF activities. For example, one statement is: “We evaluate and document bias and fairness issues caused by our AI systems.” The statements focus on concrete, verifiable actions that companies can perform while avoiding general and abstract statements such as “Our AI systems are fair.”

    The statements are organized into topics that align with the RMF’s pillars. Topics, in turn, are organized into the stages of the AI development life cycle, as described in the RMF: planning and design, data collection and model building, and deployment. An evaluator who’s assessing an AI system at a particular stage can easily examine only the relevant topics.

    Scoring guidelines

    The maturity model includes these scoring guidelines, which reflect the ideals set out in the RMF:

    • Robustness, extending from ad-hoc to systematic implementation of the activities.
    • Coverage, ranging from engaging in none of the activities to engaging in all of them.
    • Input diversity, ranging from having activities informed by inputs from a single team to diverse input from internal and external stakeholders.

    Evaluators can choose to assess individual statements or larger topics, thus controlling the level of granularity of the assessment. In addition, the evaluators are meant to provide documentary evidence to explain their assigned scores. The evidence can include internal company documents such as procedure manuals, as well as annual reports, news articles, and other external material.

    After scoring individual statements or topics, evaluators aggregate the results to get an overall score. The maturity model allows for flexibility, depending on the evaluator’s interests. For example, scores can be aggregated by the NIST pillars, producing scores for the “map,” “measure,” “manage,” and “govern” functions.

    When used internally, the maturity model can help organizations determine where they stand on responsible AI and can identify steps to improve their governance.

    The aggregation can expose systematic weaknesses in an organization’s approach to AI responsibility. If a company’s score is high for “govern” activities but low for the other pillars, for example, it might be creating sound policies that aren’t being implemented.
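    As a rough sketch of that kind of aggregation, the snippet below averages statement scores by pillar. The statements, pillar tags, and 1-to-5 scores are invented for illustration; they are not the actual IEEE-USA questionnaire or scoring rubric.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch of aggregating questionnaire scores by RMF pillar.
# The statements, pillar tags, and 1-5 scores below are illustrative only,
# not the actual IEEE-USA questionnaire content or scoring rubric.

assessments = [
    {"statement": "We evaluate and document bias and fairness issues caused by our AI systems.",
     "pillar": "measure", "score": 3},
    {"statement": "Risks identified during design are tracked through deployment.",
     "pillar": "map", "score": 2},
    {"statement": "A named owner is accountable for each identified AI risk.",
     "pillar": "govern", "score": 4},
    {"statement": "Mitigations are prioritized by projected impact.",
     "pillar": "manage", "score": 2},
]

def aggregate_by_pillar(items):
    by_pillar = defaultdict(list)
    for item in items:
        by_pillar[item["pillar"]].append(item["score"])
    return {pillar: mean(scores) for pillar, scores in by_pillar.items()}

print(aggregate_by_pillar(assessments))
# {'measure': 3, 'map': 2, 'govern': 4, 'manage': 2}: a high "govern" score with
# low scores elsewhere suggests policies that are not being implemented.
```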

    Another option for scoring is to aggregate the numbers by some of the dimensions of AI responsibility highlighted in the RMF: performance, fairness, privacy, ecology, transparency, security, explainability, safety, and third-party (intellectual property and copyright). This aggregation method can help determine if organizations are ignoring certain issues. Some organizations, for example, might boast about their AI responsibility based on their activity in a handful of risk areas while ignoring other categories.

    A road toward better decision-making

    When used internally, the maturity model can help organizations determine where they stand on responsible AI and can identify steps to improve their governance. The model enables companies to set goals and track their progress through repeated evaluations. Investors, buyers, consumers, and other external stakeholders can employ the model to inform decisions about the company and its products.

    When used by internal or external stakeholders, the new IEEE-USA maturity model can complement the NIST AI RMF and help track an organization’s progress along the path of responsible governance.

  • Cat's Eye Camera Can See Through Camouflage
    by Kohava Mendelsohn on 19. September 2024. at 14:30



    Did that rock move, or is it a squirrel crossing the road? Tracking objects that look a lot like their surroundings is a big problem for many autonomous vision systems. AI algorithms can solve this camouflage problem, but they take time and computing power. A new camera designed by researchers in South Korea provides a faster solution. The camera takes inspiration from the eyes of a cat, using two modifications that let it distinguish objects from their background, even at night.

    “In the future … a variety of intelligent robots will require the development of vision systems that are best suited for their specific visual tasks,” says Young Min Song, a professor of electrical engineering and computer science at Gwangju Institute of Science and Technology and one of the camera’s designers. Song’s recent research has been focused on using the “perfectly adapted” eyes of animals to enhance camera hardware, allowing for specialized cameras for different jobs. For example, fish eyes have wider fields of view as a consequence of their curved retinas. Cats may be common and easy to overlook, he says, but their eyes actually offer a lot of inspiration.

    This particular camera copied two adaptations from cats’ eyes: their vertical pupils and a reflective structure behind their retinas. Combined, these allowed the camera to be 10 percent more accurate at distinguishing camouflaged objects from their backgrounds and 52 percent more efficient at absorbing incoming light.

    Using a vertical pupil to narrow focus

    While conventional cameras can clearly see the foreground and background of an image, the slitted pupils of a cat focus directly on a target, preventing it from blending in with its surroundings. Kim et al./Science Advances

    In conventional camera systems, when there is adequate light, the aperture—the camera’s version of a pupil—is small and circular. This structure allows for a large depth of field (the distance between the closest and farthest objects in focus), clearly seeing both the foreground and the background. By contrast, cat eyes narrow to a vertical pupil during the day. This shifts the focus to a target, distinguishing it more clearly from the background.

    The researchers 3D printed a vertical slit to use as an aperture for their camera. They tested the vertical slit using seven computer vision algorithms designed to track moving objects. The vertical slit increased contrast between a target object and its background, even if they were visually similar. It beat the conventional camera on five of the seven tests. In the two tests where it performed worse than the conventional camera, the accuracies of the two cameras were within 10 percent of each other.

    Using a reflector to gather additional light

    Cats can see more clearly at night than conventional cameras due to reflectors in their eyes that bring extra light to their retinas. Kim et al./Science Advances

    Cat eyes have an in-built reflector, called a tapetum lucidum, which sits behind the retina. It reflects light that passes through the retina back at it, so it can process both the incoming light and reflected light, giving felines superior night vision. You can see this biological adaptation yourself by looking at a cat’s eyes at night: they will glow.

    The researchers created an artificial version of this biological structure by placing a silver reflector under each photodiode in the camera. Photodiodes without a reflector generated current when more than 1.39 watts per square meter of light fell on them, while photodiodes with a reflector activated with 0.007 W/m2 of light. That means the photodiode could generate an image with about 1/200th the light.

    Each photodiode was placed above a reflector and joined by metal electrodes to create a curved image sensor. Kim et al./Science Advances

    To decrease visual aberrations (imperfections in the way the lens of the camera focuses light), Song and his team opted to create a curved image sensor, like the back of the human eye. In such a setup, a standard image sensor chip won’t work, because it’s rigid and flat. Instead, such a design typically relies on many individual photodiodes arranged on a curved substrate. A common problem with such curved sensors is that they require ultrathin silicon photodiodes, which inherently absorb less light than a standard imager’s pixels. But reflectors behind each photodiode in the artificial cat’s eye compensated for this, enabling the researchers to create a curved imager without sacrificing light absorption.

    Together, vertical slits and reflectors led to a camera that could see more clearly in the dark and isn’t fooled by camouflage. “Applying these two characteristics to autonomous vehicles or intelligent robots could naturally improve their ability to see objects more clearly at night and to identify specific targets more accurately,” says Song. He foresees this camera being used for self-driving cars or drones in complex urban environments.

    Song’s lab is continuing to work on using biological solutions to solve artificial vision problems. Currently, they are developing devices that mimic how brains process images, hoping to one day combine them with their biologically-inspired cameras. The goal, says Song, is to “mimic the neural systems of nature.”

    Song and his colleague’s work was published this week in the journal Science Advances.

  • Barrier Breaker Shapes Aerospace Engineering's Future
    by Willie D. Jones on 18. September 2024. at 12:00



    Wesley L. Harris’s life is a testament to the power of mentorship and determination. Harris, born in 1941 in Richmond, Virginia, grew up during the tumultuous years of the Civil Rights Movement and faced an environment fraught with challenges. His parents, both of whom only had a third-grade education, walked to Richmond from rural Virginia counties when the Great Depression left the region’s farming communities destitute. They found work as laborers in the city’s tobacco factories but pushed their son to pursue higher education so he could live a better life.

    Today, Harris is a professor of aeronautics and astronautics at MIT and heads the school’s Hypersonic Research Laboratory. More importantly, he is committed to fostering the next generation of engineers, particularly students of color.

    “I’ve been keeping my head down, working with students of color—especially at the Ph.D. level—to produce more scholars,” Harris says. “I do feel good about that.”

    From physics to aerospace engineering

    Harris’s journey into the world of science began under the guidance of his physics teacher at the all-Black Armstrong High School, in Richmond. The instructor taught Harris how to build a cloud chamber to investigate the collision of alpha particles with water droplets. The chamber made it possible to visualize the passage of ionizing radiation emitted by radium 226, which Harris sourced from a wristwatch that used the substance to make the watch hands glow in the dark.

    The project won first prize at Virginia’s statewide Black high school science fair, and he took the bold step of signing up for a separate science fair held for the state’s White students. Harris’s project received the third-place prize in physics at that event.

    Those awards and his teacher’s unwavering belief in Harris’s potential pushed him to aim higher. He says that he wanted nothing more than to become a physicist like her. Ironically, it was also her influence that led him to shift his career path from physics to aeronautical engineering.

    When discussing which college he should attend, she spoke to him as though he were a soldier getting his marching orders. “Wesley, you will go to the University of Virginia [in Charlottesville],” she proclaimed.

    Harris applied, knowing full well that the school did not allow Black students in the 1960s to pursue degrees in mathematics, physics, chemistry, English, economics, or political science.

    The only available point of entry for him was the university’s School of Engineering. He chose aerospace as his focus—the only engineering discipline that interested him. Harris became one of only seven Black students on a campus with 4,000 undergrads and the first Black student to join the prestigious Jefferson Society literary and debate club. He graduated in 1964 with a bachelor’s degree in aerospace engineering. He went on to earn his master’s and doctoral degrees in aerospace engineering from Princeton in 1966 and 1968, respectively.

    Harris’s Ph.D. thesis advisor at Princeton reinforced the values of mentorship and leadership instilled by his high school teacher, urging Harris to focus not only on his research but on how he could uplift others.

    Harris began his teaching career by breaking down barriers at the University of Virginia in 1968. He was the first Black person in the school’s history to be offered a tenured faculty position. He was also the university’s first Black engineering professor. In 1972, he joined MIT as a professor of aeronautics and astronautics.

    Harris’s dedication to supporting underrepresented minority groups at MIT began early in his tenure. In 1975, he founded the Office of Minority Education, where he pioneered innovative teaching methods such as videotaping and replaying lectures, which helped countless students succeed. “Some of those old videotapes may still be around,” he says, laughing.

    “I’ve been keeping my head down, working with students of color—especially at the Ph.D. level—to produce more scholars. I do feel good about that.”

    Over the years, he has periodically stepped away from MIT to take on other roles, including serving as program manager in the Fluid and Thermal Physics Office and as manager of computational methods at NASA’s headquarters in Washington, D.C., from 1979 to 1980. He returned to NASA in 1993 and served as Associate Administrator for Aeronautics, overseeing personnel, programs, and facilities until 1995.

    He also served as Chief Administrative Officer and Vice President at the University of Tennessee Space Institute in Knoxville from 1990 to 1993 and as Dean of Engineering at the University of Connecticut, in Storrs, from 1985 to 1990.

    He was selected for membership in an oversight group convened by the U.S. House of Representatives Science Subcommittee on Research and Technology to monitor the funding activities of the National Science Foundation. He has also been a member and chair of the U.S. Army Science Board.

    Solving problems with aircraft

    Harris is a respected aeronautical innovator. Near the end of the Vietnam War, the U.S. Army approached MIT to help it solve a problem. Helicopters were being shot down by the enemy, who had learned to distinguish attack helicopters from those used for performing reconnaissance or transporting personnel and cargo by the noise they made. The Army needed a solution that would reduce the helicopters’ acoustic signatures without compromising performance. Harris and his aeronautics team at MIT delivered that technology. In January 1978, they presented a lab report detailing their findings to the U.S. Department of Defense. “Experimental and Theoretical Studies on Model Helicopter Rotor Noise” was subsequently published in The Journal of Sound and Vibration. A year later, Harris and his colleagues at the Fluid Dynamic Research Laboratory wrote another lab report on the topic, “Parametric Studies of Model Helicopter Blade Slap and Rotational Noise.”

    Harris has also heightened scientists’ understanding of the climate-altering effects of shock waves propagating upward from aircraft flying at supersonic speeds. He discovered that these high-speed airflows trigger chemical reactions among the carbon, oxides, nitrides, and sulfides in the atmosphere.

    For these and other contributions to aerospace engineering, Harris, a member of the American Institute of Aeronautics and Astronautics, was elected in 1995 to the National Academy of Engineering. In 2022, he was named the academy’s vice president.

    A model of educational leadership

    Despite his technical achievements, Harris says his greatest fulfillment comes from mentoring students. He takes immense pride in the four students who recently earned doctorates in hypersonics under his guidance, especially a Black woman who graduated this year.

    Harris’s commitment to nurturing young talent extends beyond his graduate students. For more than two decades, he has served as a housemaster at MIT’s New House residence hall, where he helps first-year undergraduate students successfully transition to campus life.

    “You must provide an environment that fosters the total development of the student, not just mastery of physics, chemistry, math, and economics,” Harris says.

    He takes great satisfaction in watching his students grow and succeed, knowing that he helped prepare them to make a positive impact on the world.

    Reflecting on his career, Harris acknowledges the profound impact of the mentors who guided him. Their lessons continue to influence his work and his unwavering commitment to mentoring the next generation.

    “I’ve always wanted to be like my high school teacher—a physicist who not only had deep knowledge of the scientific fundamentals but also compassion and love for Black folks,” he says.

    Through his work, Harris has not only advanced the field of aerospace engineering but has also paved the way for future generations to soar.

  • ICRA@40 Conference Celebrates 40 Years of IEEE Robotics
    by Evan Ackerman on 18. September 2024. at 11:30



    Four decades after the first IEEE International Conference on Robotics and Automation (ICRA) in Atlanta, robotics is bigger than ever. Next week in Rotterdam is the IEEE ICRA@40 conference, “a celebration of 40 years of pioneering research and technological advancements in robotics and automation.” There’s an ICRA every year, of course. Arguably the largest robotics research conference in the world, the 2024 edition was held in Yokohama, Japan back in May.

    ICRA@40 is not just a second ICRA conference in 2024. Next week’s conference is a single track that promises “a journey through the evolution of robotics and automation,” through four days of short keynotes from prominent roboticists from across the entire field. You can see for yourself: the speaker list is nuts. There are also debates and panels tackling big ideas, like: “What progress has been made in different areas of robotics and automation over the past decades, and what key challenges remain?” Personally, I’d say “lots” and “most of them,” but that’s probably why I’m not going to be up on stage.

    There will also be interactive research presentations, live demos, an expo, and more—the conference schedule is online now, and the abstracts are online as well. I’ll be there to cover it all, but if you can make it in person, it’ll be worth it.


    Forty years ago is a long time, but it’s not that long, so just for fun, I had a look at the proceedings of ICRA 1984, which are available on IEEE Xplore if you’re curious. Here’s an excerpt of the foreword from the organizers, which included folks from International Business Machines and Bell Labs:

    The proceedings of the first IEEE Computer Society International Conference on Robotics contains papers covering practically all aspects of robotics. The response to our call for papers has been overwhelming, and the number of papers submitted by authors outside the United States indicates the strong international interest in robotics.
    The Conference program includes papers on: computer vision; touch and other local sensing; manipulator kinematics, dynamics, control and simulation; robot programming languages, operating systems, representation, planning, man-machine interfaces; multiple and mobile robot systems.
    The technical level of the Conference is high with papers being presented by leading researchers in robotics. We believe that this conference, the first of a series to be sponsored by the IEEE, will provide a forum for the dissemination of fundamental research results in this fast developing field.

    Technically, this was “ICR,” not “ICRA,” and it was put on by the IEEE Computer Society’s Technical Committee on Robotics, since there was no IEEE Robotics and Automation Society at that time; RAS didn’t get off the ground until 1987.

    1984 ICR(A) had two tracks, and featured about 75 papers presented over three days. Looking through the proceedings, you’ll find lots of familiar names: Harry Asada, Ruzena Bajcsy, Ken Salisbury, Paolo Dario, Matt Mason, Toshio Fukuda, Ron Fearing, and Marc Raibert. Many of these folks will be at ICRA@40, so if you see them, make sure and thank them for helping to start it all, because 40 years of robotics is definitely something to celebrate.

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 09. February 2022. at 15:31



    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
    —Andrew Ng, CEO & Founder, Landing AI

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
    —Andrew Ng

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
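    A minimal sketch of that flagging step might look like the following. This is not Landing AI’s tooling, just an illustration of surfacing images whose annotators disagree so they can be relabeled consistently; the file names, annotators, and labels are invented.

```python
from collections import Counter, defaultdict

# Minimal sketch of the "flag inconsistent labels" idea described above.
# This is not Landing AI's tooling; it just surfaces images whose annotators
# disagree so they can be relabeled consistently.

labels = [
    ("img_001.png", "alice", "scratch"),
    ("img_001.png", "bob",   "scratch"),
    ("img_002.png", "alice", "pit_mark"),
    ("img_002.png", "bob",   "scratch"),   # disagreement, so flag for review
    ("img_003.png", "alice", "dent"),
]

def flag_inconsistent(annotations):
    by_image = defaultdict(list)
    for image, annotator, label in annotations:
        by_image[image].append(label)
    flagged = {}
    for image, votes in by_image.items():
        counts = Counter(votes)
        if len(counts) > 1:                # more than one distinct label
            flagged[image] = counts.most_common()
    return flagged

print(flag_inconsistent(labels))   # {'img_002.png': [('pit_mark', 1), ('scratch', 1)]}
```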

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.


    What about using synthetic data, is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

    “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
    —Andrew Ng

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
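    As an illustration of the error analysis that steers such targeted fixes, the sketch below computes per-class accuracy and picks the weakest class as the candidate for synthetic generation or extra collection. The class names and numbers are hypothetical.

```python
from collections import defaultdict

# Sketch of the per-class error analysis that decides where to spend synthetic
# data generation effort. The labels and counts below are hypothetical
# placeholders, not a real inspection pipeline.

predictions = [
    ("scratch", "scratch"), ("scratch", "scratch"), ("dent", "dent"),
    ("pit_mark", "scratch"), ("pit_mark", "dent"), ("pit_mark", "pit_mark"),
]

def per_class_accuracy(pairs):
    correct, total = defaultdict(int), defaultdict(int)
    for true_label, predicted in pairs:
        total[true_label] += 1
        correct[true_label] += int(true_label == predicted)
    return {cls: correct[cls] / total[cls] for cls in total}

accuracy = per_class_accuracy(predictions)
weakest = min(accuracy, key=accuracy.get)
print(accuracy, "-> generate synthetic examples for:", weakest)
# {'scratch': 1.0, 'dent': 1.0, 'pit_mark': 0.33...} -> pit_mark
```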


    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 08. February 2022. at 14:00



    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Heather Gorr. MathWorks

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
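    A toy version of that surrogate workflow might look like the sketch below: sample the expensive model sparsely, fit a cheap surrogate, and then run the dense parameter sweep on the surrogate. The “physics model” here is a made-up stand-in, not a real device model.

```python
import numpy as np

# Toy surrogate-model workflow: sample an "expensive" physics model a few times,
# fit a cheap surrogate, then run a dense parameter sweep on the surrogate.
# The physics function here is a made-up stand-in, not a real device model.

def expensive_physics_model(x):
    """Stand-in for a slow, physics-based simulation of some figure of merit."""
    return np.sin(3 * x) * np.exp(-x) + 0.1 * x

# 1. Sparse, costly sampling of the real model.
x_train = np.linspace(0.0, 2.0, 15)
y_train = expensive_physics_model(x_train)

# 2. Fit a cheap surrogate (here, a simple polynomial).
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=6)

# 3. A dense sweep on the surrogate costs almost nothing.
x_sweep = np.linspace(0.0, 2.0, 10_000)
best_x = x_sweep[np.argmax(surrogate(x_sweep))]
print(f"surrogate's best operating point: x = {best_x:.3f}")
print(f"check against the real model:     {expensive_physics_model(best_x):.3f}")
```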

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.
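    As a small, hypothetical example of the chores Gorr mentions, the sketch below resamples two sensor streams (with invented names and rates) onto a common time base and checks the dominant frequency of their synchronized difference.

```python
import numpy as np

# Small sketch of two routine chores mentioned above: resampling sensor streams
# onto a common clock and peeking at the frequency domain. The sensor rates and
# signal content are invented for illustration.

def resample(t_src, values, t_target):
    """Linear-interpolation resampling onto a shared time base."""
    return np.interp(t_target, t_src, values)

fs_a, fs_b, fs_common = 500.0, 180.0, 200.0           # Hz
t_a = np.arange(0, 2, 1 / fs_a)
t_b = np.arange(0, 2, 1 / fs_b)
sensor_a = np.sin(2 * np.pi * 50 * t_a)               # 50 Hz component
sensor_b = np.sin(2 * np.pi * 50 * t_b + 0.3) + 0.1 * np.random.randn(t_b.size)

t = np.arange(0, 2, 1 / fs_common)                     # shared time base
a, b = resample(t_a, sensor_a, t), resample(t_b, sensor_b, t)

# Dominant frequency of the synchronized difference signal.
spectrum = np.abs(np.fft.rfft(a - b))
freqs = np.fft.rfftfreq(t.size, d=1 / fs_common)
print(f"dominant frequency: {freqs[np.argmax(spectrum[1:]) + 1]:.1f} Hz")
```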

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 07. February 2022. at 16:12



    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

    IBM has adopted a superconducting qubit road map that calls for a 1,121-qubit processor by 2023, leading to the expectation that a 1,000-qubit processor with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to find a better path toward scalability.

    Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

    “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

    Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

    [Image: a golden dilution refrigerator hanging vertically] Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Nathan Fiske/MIT

    In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects, which makes them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

    As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.

    In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

    On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

    “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

    This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.
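
    To see roughly how much smaller the footprint can get, a parallel-plate estimate helps. The numbers below are illustrative assumptions, not figures from the MIT work: a target capacitance of about 100 femtofarads (typical for transmon qubits), an out-of-plane relative permittivity of roughly 3 for hBN, and a dielectric about 30 nanometers thick.

    ```python
    # Back-of-envelope footprint comparison under the assumptions stated above.
    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    EPS_R_HBN = 3.0           # assumed relative permittivity of hBN (out of plane)
    C_TARGET = 100e-15        # assumed target capacitance, 100 fF
    T_HBN = 30e-9             # assumed hBN dielectric thickness, 30 nm

    # Parallel-plate formula C = eps0 * eps_r * A / d, solved for the plate area A.
    area_parallel = C_TARGET * T_HBN / (EPS0 * EPS_R_HBN)    # m^2
    side_um = (area_parallel ** 0.5) * 1e6                   # plate edge, micrometers

    # The coplanar design described above uses plates roughly 100 x 100 micrometers.
    area_coplanar = (100e-6) ** 2                            # m^2

    print(f"parallel-plate hBN capacitor: ~{side_um:.0f} micrometers on a side")
    print(f"area reduction vs. coplanar plates: ~{area_coplanar / area_parallel:.0f}x")
    ```

    Under these assumed numbers, the same capacitance fits on a plate roughly 10 micrometers on a side, about two orders of magnitude less area than the coplanar layout, which is consistent with the roughly hundredfold increase in qubit density reported above.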

    “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

    Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.