IEEE News

IEEE Spectrum

  • Video Friday: UC Berkeley's Little Humanoid
    by Evan Ackerman on 2 August 2024 at 16:00



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
    IROS 2024: 14–18 October 2024, ABU DHABI, UAE
    ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
    Cybathlon 2024: 25–27 October 2024, ZURICH

    Enjoy today’s videos!

    We introduce Berkeley Humanoid, a reliable and low-cost mid-scale humanoid research platform for learning-based control. Our lightweight, in-house-built robot is designed specifically for learning algorithms with low simulation complexity, anthropomorphic motion, and high reliability against falls. Capable of omnidirectional locomotion and withstanding large perturbations with a compact setup, our system aims for scalable, sim-to-real deployment of learning-based humanoid systems.

    [ Berkeley Humanoid ]

    This article presents Ray, a new type of audio-animatronic robot head. All the mechanical structure of the robot is built in one step by 3-D printing... This simple, lightweight structure and the separate tendon-based actuation system underneath allow for smooth, fast motions of the robot. We also develop an audio-driven motion generation module that automatically synthesizes natural and rhythmic motions of the head and mouth based on the given audio.

    [ Paper ]

    CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.

    [ MIT News ]

    Okay, sign me up for this.

    [ Deep Robotics ]

    NEURA Robotics is among the first joining the early access NVIDIA Humanoid Robot Developer Program.

    This could be great, but there’s an awful lot of jump cuts in that video.

    [ Neura ] via [ NVIDIA ]

    I like that Unitree’s tagline in the video description here is “let’s have fun together.”

    Is that “please don’t do dumb stuff with our robots” at the end of the video new...?

    [ Unitree ]

    NVIDIA CEO Jensen Huang presented a major breakthrough on Project GR00T with WIRED’s Lauren Goode at SIGGRAPH 2024. In a two-minute demonstration video, NVIDIA explained a systematic approach they discovered to scale up robot data, addressing one of the most challenging issues in robotics.

    [ NVIDIA ]

    In this research, we investigated the innovative use of a manipulator as a tail in quadruped robots to augment their physical capabilities. Previous studies have primarily focused on enhancing various abilities by attaching robotic tails that function solely as tails on quadruped robots. While these tails improve the performance of the robots, they come with several disadvantages, such as increased overall weight and higher costs. To mitigate these limitations, we propose the use of a 6-DoF manipulator as a tail, allowing it to serve both as a tail and as a manipulator.

    [ Paper ]

    In this end-to-end demo, we showcase how MenteeBot transforms the shopping experience for individuals, particularly those using wheelchairs. Through discussions with a global retailer, MenteeBot has been designed to act as the ultimate shopping companion, offering a seamless, natural experience.

    [ Menteebot ]

    Nature Fresh Farms, based in Leamington, Ontario is one of North America’s largest greenhouse farms growing high-quality organics, berries, peppers, tomatoes, and cucumbers. In 2022, Nature Fresh partnered with Four Growers, a FANUC Authorized System Integrator, to develop a robotic system equipped with AI to harvest tomatoes in the greenhouse environment.

    [ FANUC ]

    Contrary to what you may have been led to believe by several previous Video Fridays, WVUIRL’s open source rover is quite functional, most of the time.

    [ WVUIRL ]

    Honeybee Robotics, a Blue Origin company, is developing Lunar Utility Navigation with Advanced Remote Sensing and Autonomous Beaming for Energy Redistribution, also known as LUNARSABER. In July 2024, Honeybee Robotics captured LUNARSABER’s capabilities during a demonstration of a scaled prototype.

    [ Honeybee Robotics ]

    Bunker Mini is a compact tracked mobile robot specifically designed to tackle demanding off-road terrains.

    [ AgileX ]

    In this video we present results of our lab from the latest field deployments conducted in the scope of the Digiforest EU project, in Stein am Rhein, Switzerland. Digiforest brings together various partners working on aerial and legged robots, autonomous harvesters, and forestry decision-makers. The goal of the project is to enable autonomous robot navigation, exploration, and mapping, both below and above the canopy, to create a data pipeline that can support and enhance foresters’ decision-making systems.

    [ ARL ]

  • The President-Elect Candidates’ Plans to Further IEEE’s Mission
    by Joanna Goodrich on 1 August 2024 at 18:00



    The annual IEEE election process begins this month, so be sure to check your mailbox for your ballot. To help you choose the 2025 IEEE president-elect, The Institute is publishing the official biographies and position statements of the three candidates, as approved by the IEEE Board of Directors. The candidates are IEEE Fellows Mary Ellen Randall, John Verboncoeur, and S.K. Ramesh.

    In June, IEEE President Tom Coughlin moderated the Meet the 2025 IEEE President-Elect Candidates Forum, where the candidates were asked pressing questions from IEEE members.

    IEEE Fellow Mary Ellen Randall

    A smiling woman standing in front of a blue background. Mary Ellen Randall

    Nominated by the IEEE Board of Directors

    Randall founded Ascot Technologies in 2000 in Cary, N.C. Ascot develops enterprise applications using mobile data delivery technologies. She serves as the award-winning company’s CEO.

    Before launching Ascot, she worked for IBM, where she held several technical and managerial positions in hardware and software development, digital video chips, and test design automation. She routinely managed international projects.

    Randall has served as IEEE treasurer, director of IEEE Region 3, chair of IEEE Women in Engineering, and vice president of IEEE Member and Geographic Activities.

    In 2016 she created the IEEE MOVE (Mobile Outreach VEhicle) program to assist with disaster relief efforts and for science, technology, engineering, and math educational purposes.

    The IEEE-Eta Kappa Nu honor society member has received several honors including the 2020 IEEE Haraden Pratt Award, which recognizes outstanding volunteer service to IEEE.

    She was named a top businesswoman in North Carolina’s Research Triangle Park area, and she made the 2003 Business Leader Impact 100 list.

    Candidate Statement

    Aristotle said, “the whole is greater than the sum of its parts.” Certainly, when looking at IEEE, this metaphysics phrase comes to my mind. In IEEE we have engineers and technical professionals developing, standardizing and utilizing technology from diverse perspectives. IEEE members around the world:

    • perform and share research, product development activities, and standard development
    • network and engage with each other and their communities
    • educate current and future technology professionals
    • measure performance and quality
    • formulate ethics choices
    • and many more – these are just a few examples!

    We perform these actions across a wide spectrum of in-depth subjects. It is our diversity, yet oneness, that makes me confident we have a positive future ahead. How do we execute on Aristotle’s vision? First, we need to unite on mission goals which span our areas of interest. This way we can bring multiple disciplines and perspectives together to accomplish those big goals. Our strategy will guide our actions in this regard.

    Second, we need to streamline our financing of new innovations and systematize the introduction of these programs.

    Third, we need to execute and support our best ideas on a continuing basis.

    As President, I pledge to:

    Institute innovative products and services to ensure our mutually successful future;

    Engage stakeholders (members, partners and communities) to unite on a comprehensive vision;

    Expand technology advancement and adoption throughout the world;

    Execute with excellence, ethics, and financial responsibility.

    Finally, I promise to lead by example with enthusiasm and integrity and I humbly ask for your vote.

    IEEE Fellow John Verboncoeur

    A photo of a man in a grey suit and multicolored tie. Steven Miller

    Nominated by the IEEE Board of Directors

    Verboncoeur is senior associate dean for research and graduate studies in Michigan State University’s (MSU) engineering college, in East Lansing.

    In 2001 he founded the computational engineering science program at the University of California, Berkeley, chairing it until 2010.

    In 2015 he cofounded the MSU computational mathematics, science, and engineering department.

    His area of interest is plasma physics, with over 500 publications and over 6,800 citations.

    He is on the boards of Physics of Plasmas, the American Center for Mobility, and the U.S. Department of Energy Fusion Energy Science Advisory Committee.

    Verboncoeur has led startups developing digital exercise and health systems and the consumer credit report. He also had a role in developing the U.S. Postal Service’s mail-forwarding system.

    His IEEE experience includes serving as 2023 vice president of Technical Activities, 2020 acting vice president of the Publication Services and Products Board, 2019–2020 Division IV director, and 2015–2016 president of the Nuclear and Plasma Sciences Society.

    He received a Ph.D. in 1992 in nuclear engineering from UC Berkeley.

    Candidate Statement

    Ensure IEEE remains THE premier professional technical organization, deliver value via new participants, products and programs, including events, publications, and innovative personalized products and services, to enable our community to change the world. Key strategic programs include:

    Climate Change Technologies (CCT): Existential to humanity, addressing mitigation and adaptation must include technology R&D, local relevance for practitioners, university and K-12 students, the general public, media and policymakers and local and global standards.

    Smart Agrofood Systems (SmartAg): Smart technologies applied to the food supply chain from soil to consumer to compost.

    Artificial Intelligence (AI): Implications from technology to business to ethics. A key methodology for providing personalized IEEE products and services within our existing portfolio, and engaging new audiences such as technology decision makers in academia, government and technology finance by extracting value from our vast data to identify emerging trends.

    Organizational growth opportunities include scaling and coordinating our public policy strategy worldwide, building on our credibility to inform and educate. Global communications capability is critical to coordinate and amplify our impact. Lastly, we need to enhance our ability to execute IEEE-wide programs and initiatives, from investment in transformative tools and products to mission-based education, outreach and engagement. This can be accomplished by judicious use of resources generated by business activities through creation of a strategic program to invest in our future with the goal of advancing technology for humanity.

    With a passion for the nexus of technology with finance and public policy, I hope to earn your support.

    IEEE Fellow S.K. Ramesh

    A photo of a smiling man in a dark suit and a red tie. S.K. Ramesh

    Nominated by the IEEE Board of Directors

    Ramesh is a professor of electrical and computer engineering at California State University Northridge’s college of engineering and computer science, where he served as dean from 2006 to 2017.

    An IEEE volunteer for 42 years, he has served on the IEEE Board of Directors, the Publication Services and Products Board, Awards Board, and the Fellows Committee. Leadership positions he has held include vice president of IEEE Educational Activities, president of the IEEE-Eta Kappa Nu honor society, and chair of the IEEE Hearing Board.

    As the 2016–2017 vice president of IEEE Educational Activities, he championed several successful programs including the IEEE Learning Network and the IEEE TryEngineering Summer Institute.

    Ramesh served as the 2022–2023 president of ABET, the global accrediting organization for academic programs in applied science, computing, engineering, and technology.

    He received his bachelor’s degree in electronics and communication engineering from the University of Madras in India. He earned his master’s degree in EE and Ph.D. in molecular science from Southern Illinois University, in Carbondale.

    Candidate Statement

    We live in an era of rapid technological development where change is constant. My leadership experiences of four decades across IEEE and ABET have taught me some timeless values in this rapidly changing world: To be Inclusive, Collaborative, Accountable, Resilient and Ethical. Connection and community make a difference. IEEE’s mission is especially important, as the pace of change accelerates with advances in AI, Robotics and Biotechnology. I offer leadership that inspires others to believe and enable that belief to become reality. “I CARE”!

    My top priority is to serve our members and empower our technical communities worldwide to create and advance technologies to solve our greatest challenges.

    If elected, I will focus on three strategic areas:

    Member Engagement:

    • Broaden participation of Students, Young Professionals (YPs), and Women in Engineering (WIE).
    • Expand access to affordable continuing education programs through the IEEE Learning Network (ILN).

    Volunteer Engagement:

    • Nurture and support IEEE’s volunteer leaders to transform IEEE globally through a volunteer academy program that strengthens collaboration, inclusion, and recognition.
    • Incentivize volunteers to improve cross-regional collaboration, engagement and communications between Chapters and Sections.

    Industry Engagement:

    • Transform hybrid/virtual conferences, and open access publications, to make them more relevant to engineers and technologists in industry.
    • Focus on innovation, standards, and sustainable development that address skills needed for jobs of the future.

    Our members are the “heart and soul” of IEEE. Let’s work together as one IEEE to attract, retain, and serve our diverse global members. Thank you for your participation and support.

  • The Saga of AD-X2, the Battery Additive That Roiled the NBS
    by Allison Marsh on 1 August 2024 at 14:00



    Senate hearings, a post office ban, the resignation of the director of the National Bureau of Standards, and his reinstatement after more than 400 scientists threatened to resign. Who knew a little box of salt could stir up such drama?

    What was AD-X2?

    It all started in 1947 when a bulldozer operator with a 6th grade education, Jess M. Ritchie, teamed up with UC Berkeley chemistry professor Merle Randall to promote AD-X2, an additive to extend the life of lead-acid batteries. The problem of these rechargeable batteries’ dwindling capacity was well known. If AD-X2 worked as advertised, millions of car owners would save money.

    Black and white photo of a man in a suit holding an object in his hands and talking. Jess M. Ritchie demonstrates his AD-X2 battery additive before the Senate Select Committee on Small Business. National Institute of Standards and Technology Digital Collections

    A basic lead-acid battery has two electrodes, one of lead and the other of lead dioxide, immersed in dilute sulfuric acid. When power is drawn from the battery, the chemical reaction splits the acid molecules, and lead sulfate is deposited on the electrodes. When the battery is charged, the chemical process reverses, returning the electrodes to their original state—almost. Each time the cell is discharged, the lead sulfate “hardens” and less of it can dissolve in the sulfuric acid. Over time, it flakes off, and the battery loses capacity until it’s dead.
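
    In equation form, the chemistry just described is the standard overall lead-acid cell reaction, which runs left to right on discharge and right to left on charge:

    $$\mathrm{Pb + PbO_2 + 2\,H_2SO_4 \;\rightleftharpoons\; 2\,PbSO_4 + 2\,H_2O}$$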

    By the 1930s, so many companies had come up with battery additives that the U.S. National Bureau of Standards stepped in. Its lab tests revealed that most were variations of salt mixtures, such as sodium and magnesium sulfates. Although the additives might help the battery charge faster, they didn’t extend battery life. In May 1931, NBS (now the National Institute of Standards and Technology, or NIST) summarized its findings in Letter Circular No. 302: “No case has been found in which this fundamental reaction is materially altered by the use of these battery compounds and solutions.”

    Of course, innovation never stops. Entrepreneurs kept bringing new battery additives to market, and the NBS kept testing them and finding them ineffective.

    Do battery additives work?

    After World War II, the National Better Business Bureau decided to update its own publication on battery additives, “Battery Compounds and Solutions.” The publication included a March 1949 letter from NBS director Edward Condon, reiterating the NBS position on additives. Prior to heading NBS, Condon, a physicist, had been associate director of research at Westinghouse Electric in Pittsburgh and a consultant to the National Defense Research Committee. He helped set up MIT’s Radiation Laboratory, and he was also briefly part of the Manhattan Project. Needless to say, Condon was familiar with standard practices for research and testing.

    Meanwhile, Ritchie claimed that AD-X2’s secret formula set it apart from the hundreds of other additives on the market. He convinced his senator, William Knowland, a Republican from Oakland, Calif., to write to NBS and request that AD-X2 be tested. NBS declined, not out of any prejudice or ill will, but because it tested products only at the request of other government agencies. The bureau also had a longstanding policy of not naming the brands it tested and not allowing its findings to be used in advertisements.

    Photo of a product box with directions printed on it. AD-X2 consisted mainly of Epsom salt and Glauber’s salt. National Institute of Standards and Technology Digital Collections

    Ritchie cried foul, claiming that NBS was keeping new businesses from entering the marketplace. Merle Randall launched an aggressive correspondence with Condon and George W. Vinal, chief of NBS’s electrochemistry section, extolling AD-X2 and the testimonials of many users. In its responses, NBS patiently pointed out the difference between anecdotal evidence and rigorous lab testing.

    Enter the Federal Trade Commission. The FTC had received a complaint from the National Better Business Bureau, which suspected that Pioneers, Inc.—Randall and Ritchie’s distribution company—was making false advertising claims. On 22 March 1950, the FTC formally asked NBS to test AD-X2.

    By then, NBS had already extensively tested the additive. A chemical analysis revealed that it was 46.6 percent magnesium sulfate (Epsom salt) and 49.2 percent sodium sulfate (Glauber’s salt, a horse laxative), with the remainder being water of hydration (water chemically bound within the salt crystals). That is, AD-X2 was similar in composition to every other additive on the market. But, because of its policy of not disclosing which brands it tests, NBS didn’t immediately announce what it had learned.

    The David and Goliath of battery additives

    NBS then did something unusual: It agreed to ignore its own policy and let the National Better Business Bureau include the results of its AD-X2 tests in a public statement, which was published in August 1950. The NBBB allowed Pioneers to include a dissenting comment: “These tests were not run in accordance with our specification and therefore did not indicate the value to be derived from our product.”

    Far from being cowed by the NBBB’s statement, Ritchie was energized, and his story was taken up by the mainstream media. Newsweek’s coverage pitted an up-from-your-bootstraps David against an overreaching governmental Goliath. Trade publications, such as Western Construction News and Batteryman, also published flattering stories about Pioneers. AD-X2 sales soared.

    Then, in January 1951, NBS released its updated pamphlet on battery additives, Circular 504. Once again, tests by the NBS found no difference in performance between batteries treated with additives and the untreated control group. The Government Printing Office sold the circular for 15 cents, and it was one of NBS’s most popular publications. AD-X2 sales plummeted.

    Ritchie needed a new arena in which to challenge NBS. He turned to politics. He called on all of his distributors to write to their senators. Between July and December 1951, 28 U.S. senators and one U.S. representative wrote to NBS on behalf of Pioneers.

    Condon was losing his ability to effectively represent the Bureau. Although the Senate had confirmed Condon’s nomination as director without opposition in 1945, he was under investigation by the House Committee on Un-American Activities for several years. FBI Director J. Edgar Hoover suspected Condon to be a Soviet spy. (To be fair, Hoover suspected the same of many people.) Condon was repeatedly cleared and had the public backing of many prominent scientists.

    But Condon felt the investigations were becoming too much of a distraction, and so he resigned on 10 August 1951. Allen V. Astin became acting director, and then permanent director the following year. And he inherited the AD-X2 mess.

    Astin had been with NBS since 1930. Originally working in the electronics division, he developed radio telemetry techniques, and he designed instruments to study dielectric materials and measurements. During World War II, he shifted to military R&D, most notably development of the proximity fuse, which detonates an explosive device as it approaches a target. I don’t think that work prepared him for the political bombs that Ritchie and his supporters kept lobbing at him.

    Mr. Ritchie almost goes to Washington

    On 6 September 1951, another government agency entered the fray. C.C. Garner, chief inspector of the U.S. Post Office Department, wrote to Astin requesting yet another test of AD-X2. NBS dutifully submitted a report that the additive had “no beneficial effects on the performance of lead acid batteries.” The post office then charged Pioneers with mail fraud, and Ritchie was ordered to appear at a hearing in Washington, D.C., on 6 April 1952. More tests were ordered, and the hearing was delayed for months.

    Back in March 1950, Ritchie had lost his biggest champion when Merle Randall died. In preparation for the hearing, Ritchie hired another scientist: Keith J. Laidler, an assistant professor of chemistry at the Catholic University of America. Laidler wrote a critique of Circular 504, questioning NBS’s objectivity and testing protocols.

    Ritchie also got Harold Weber, a professor of chemical engineering at MIT, to agree to test AD-X2 and to work as an unpaid consultant to the Senate Select Committee on Small Business.

    Life was about to get more complicated for Astin and NBS.

    Why did the NBS Director resign?

    Trying to put an end to the Pioneers affair, Astin agreed in the spring of 1952 that NBS would conduct a public test of AD-X2 according to terms set by Ritchie. Once again, the bureau concluded that the battery additive had no beneficial effect.

    However, NBS deviated slightly from the agreed-upon parameters for the test. Although the bureau had a good scientific reason for the minor change, Ritchie had a predictably overblown reaction—NBS cheated!

    Then, on 18 December 1952, the Senate Select Committee on Small Business—for which Ritchie’s ally Harold Weber was consulting—issued a press release summarizing the results from the MIT tests: AD-X2 worked! The results “demonstrate beyond a reasonable doubt that this material is in fact valuable, and give complete support to the claims of the manufacturer.” NBS was “simply psychologically incapable of giving Battery AD-X2 a fair trial.”

    Black and white photo of a man standing next to a row of lead-acid batteries. The National Bureau of Standards’ regular tests of battery additives found that the products did not work as claimed. National Institute of Standards and Technology Digital Collections

    But the press release distorted the MIT results. The MIT tests had focused on diluted solutions and slow charging rates, not the normal use conditions for automobiles, and even then AD-X2’s impact was marginal. Once NBS scientists got their hands on the report, they identified the flaws in the testing.

    How did the AD-X2 controversy end?

    The post office finally got around to holding its mail fraud hearing in the fall of 1952. Ritchie failed to attend in person and didn’t realize his reports would not be read into the record without him, which meant the hearing was decidedly one-sided in favor of NBS. On 27 February 1953, the Post Office Department issued a mail fraud alert. All of Pioneers’ mail would be stopped and returned to sender stamped “fraudulent.” If this charge stuck, Ritchie’s business would crumble.

    But something else happened during the fall of 1952: Dwight D. Eisenhower, running on a pro-business platform, was elected U.S. president in a landslide.

    Ritchie found a sympathetic ear in Eisenhower’s newly appointed Secretary of Commerce Sinclair Weeks, who acted decisively. The mail fraud alert had been issued on a Friday. Over the weekend, Weeks had a letter hand-delivered to Postmaster General Arthur Summerfield, another Eisenhower appointee. By Monday, the fraud alert had been suspended.

    What’s more, Weeks found that Astin was “not sufficiently objective” and lacked a “business point of view,” and so he asked for Astin’s resignation on 24 March 1953. Astin complied. Perhaps Weeks thought this would be a mundane dismissal, just one of the thousands of political appointments that change hands with every new administration. That was not the case.

    More than 400 NBS scientists—over 10 percent of the bureau’s technical staff—threatened to resign in protest. The American Association for the Advancement of Science also backed Astin and NBS. In an editorial published in Science, the AAAS called the battery additive controversy itself “minor.” “The important issue is the fact that the independence of the scientist in his findings has been challenged, that a gross injustice has been done, and that scientific work in the government has been placed in jeopardy,” the editorial stated.

    Two black and white portrait photos of men in suits. National Bureau of Standards director Edward Condon [left] resigned in 1951 because investigations into his political beliefs were impeding his ability to represent the bureau. Incoming director Allen V. Astin [right] inherited the AD-X2 controversy, which eventually led to Astin’s dismissal and then his reinstatement after a large-scale protest by NBS researchers and others. National Institute of Standards and Technology Digital Collections

    Clearly, AD-X2’s effectiveness was no longer the central issue. The controversy was a stand-in for a larger debate concerning the role of government in supporting small business, the use of science in making policy decisions, and the independence of researchers. Over the previous few years, highly respected scientists, including Edward Condon and J. Robert Oppenheimer, had been repeatedly investigated for their political beliefs. The request for Astin’s resignation was yet another government incursion into scientific freedom.

    Weeks, realizing his mistake, temporarily reinstated Astin on 17 April 1953, the day the resignation was supposed to take effect. He also asked the National Academy of Sciences to test AD-X2 in both the lab and the field. By the time the academy’s report came out in October 1953, Weeks had permanently reinstated Astin. The report, unsurprisingly, concluded that NBS was correct: AD-X2 had no merit. Science had won.

    NIST makes a movie

    On 9 December 2023, NIST released the 20-minute docudrama The AD-X2 Controversy. The film won the Best True Story Narrative and Best of Festival at the 2023 NewsFest Film Festival. I recommend taking the time to watch it.

    The AD-X2 Controversy www.youtube.com

    Many of the actors are NIST staff and scientists, and they really get into their roles. Much of the dialogue comes verbatim from primary sources, including congressional hearings and contemporary newspaper accounts.

    Despite being an in-house production, NIST’s film has a Hollywood connection. The film features brief interviews with actors John and Sean Astin (of Lord of The Rings and Stranger Things fame)—NBS director Astin’s son and grandson.

    The AD-X2 controversy is just as relevant today as it was 70 years ago. Scientific research, business interests, and politics remain deeply entangled. If the public is to have faith in science, it must have faith in the integrity of scientists and the scientific method. I have no objection to science being challenged—that’s how science moves forward—but we have to make sure that neither profit nor politics is tipping the scales.

    Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

    An abridged version of this article appears in the August 2024 print issue as “The AD-X2 Affair.”

    References


    I first heard about AD-X2 after my IEEE Spectrum editor sent me a notice about NIST’s short docudrama The AD-X2 Controversy, which you can, and should, stream online. NIST held a colloquium on 31 July 2018 with John Astin and his brother Alexander (Sandy), where they recalled what it was like to be college students when their father’s reputation was on the line. The agency has also compiled a wonderful list of resources, including many of the primary source government documents.

    The AD-X2 controversy played out in the popular media, and I read dozens of articles following the almost daily twists and turns in the case in the New York Times, Washington Post, and Science.

    I found Elio Passaglia’s A Unique Institution: The National Bureau of Standards 1950-1969 to be particularly helpful. The AD-X2 controversy is covered in detail in Chapter 2: Testing Can Be Troublesome.

    A number of graduate theses have been written about AD-X2. One I consulted was Samuel Lawrence’s 1958 thesis “The Battery AD-X2 Controversy: A Study of Federal Regulation of Deceptive Business Practices.” Lawrence also published the 1962 book The Battery Additive Controversy.


  • Will This Flying Camera Finally Take Off?
    by Tekla S. Perry on 31 July 2024 at 12:00



    Ten years. Two countries. Multiple redesigns. Some US $80 million invested. And, finally, Zero Zero Robotics has a product it says is ready for consumers, not just robotics hobbyists—the HoverAir X1. The company has sold several hundred thousand flying cameras since the HoverAir X1 started shipping last year. It hasn’t gotten the millions of units into consumer hands—or flying above them—that its founders would like to see, but it’s a start.

    “It’s been like a 10-year-long Ph.D. project,” says Zero Zero founder and CEO Meng Qiu Wang. “The thesis topic hasn’t changed. In 2014 I looked at my cell phone and thought that if I could throw away the parts I don’t need—like the screen—and add some sensors, I could build a tiny robot.”

    I first spoke to Wang in early 2016, when Zero Zero came out of stealth with its version of a flying camera—at $600. Wang had been working on the project for two years. He started the project in Silicon Valley, where he and cofounder Tony Zhang were finishing up Ph.D.s in computer science at Stanford University. Then the two decamped for China, where development costs are far less.

    Flying cameras were a hot topic at the time; startup Lily Robotics demonstrated a $500 flying camera in mid-2015 (and was later charged with fraud for faking its demo video), and in March of 2016 drone-maker DJI introduced a drone with autonomous flying and tracking capabilities that turned it into much the same type of flying camera that Wang envisioned, albeit at the high price of $1400.

    Wang aimed to make his flying camera cheaper and easier to use than these competitors by relying on image processing for navigation—no altimeter, no GPS. In this approach, which has changed little since the first design, one camera looks at the ground and algorithms follow the camera’s motion to navigate. Another camera looks out ahead, using facial and body recognition to track a single subject.
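
    As a rough, hypothetical sketch of the general idea (not Zero Zero’s actual code), frame-to-frame motion from a downward-facing camera can be estimated with sparse optical flow, for example with OpenCV:

    ```python
    # Hypothetical sketch of ground-camera visual odometry: estimate the drone's
    # horizontal image-plane shift from frame-to-frame optical flow.
    import cv2
    import numpy as np

    def estimate_shift(prev_gray, curr_gray):
        """Return the median pixel shift (dx, dy) between two grayscale frames."""
        # Pick strong corner features on the ground texture in the previous frame.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
        if pts is None:
            return 0.0, 0.0
        # Track those features into the current frame with pyramidal Lucas-Kanade flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        good_old = pts[status.flatten() == 1].reshape(-1, 2)
        good_new = new_pts[status.flatten() == 1].reshape(-1, 2)
        if len(good_old) == 0:
            return 0.0, 0.0
        # The median flow vector is a robust estimate of how the ground moved in the
        # image; given altitude and camera intrinsics, that maps to the drone's motion.
        dx, dy = np.median(good_new - good_old, axis=0)
        return float(dx), float(dy)
    ```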

    The current version, at $349, does what Wang had envisioned, which is, he told me, “to turn the camera into a cameraman.” But, he points out, the hardware and software, and particularly the user interface, changed a lot. The size and weight have been cut in half; it’s just 125 grams. This version uses a different and more powerful chipset, and the controls are on board; while you can select modes from a smart phone app, you don’t have to.

    I can verify that it is cute (about the size of a paperback book), lightweight, and extremely easy to use. I’ve never flown a standard drone without help or crashing but had no problem sending the HoverAir up to follow me down the street and then land on my hand.

    It isn’t perfect. It can’t fly over water—the movement of the water confuses the algorithms that judge speed through video images of the ground. And it only tracks people; though many would like it to track their pets, Wang says animals behave erratically, diving into bushes or other places the camera can’t follow. Since the autonomous navigation algorithms rely on the person being filmed to avoid objects and simply follow that path, such dives tend to cause the drone to crash.

    Since we last spoke eight years ago, Wang has been through the highs and lows of the startup rollercoaster, turning to contract engineering for a while to keep his company alive. He’s become philosophical about much of the experience.

    Here’s what he had to say.

    We last spoke in 2016. Tell me how you’ve changed.

    Meng Qiu Wang: When I got out of Stanford in 2014 and started the company with Tony [Zhang], I was eager and hungry and hasty and I thought I was ready. But retrospectively, I wasn’t ready to start a company. I was chasing fame and money, and excitement.

    Now I’m 42, I have a daughter—everything seems more meaningful now. I’m not a Buddhist, but I have a lot of Zen in my philosophy now.

    I was trying so hard to flip the page to see the next chapter of my life, but now I realize, there is no next chapter, flipping the page itself is life.

    You were moving really fast in 2016 and 2017. What happened during that time?

    Wang: After coming out of stealth, we ramped up from 60 to 140 people planning to take this product into mass production. We got a crazy amount of media attention—covered by 2,200 media outlets. We went to CES, and it seemed like we collected every trophy there was there.

    And then Apple came to us, inviting us to retail at all the Apple stores. This was a big deal; I think we were the first third party robotic product to do live demos in Apple stores. We produced about 50,000 units, bringing in about $15 million in revenue in six months.

    Then a giant company made us a generous offer and we took it. But it didn’t work out. It was certainly a lesson learned for us. I can’t say more about that, but at this point if I walk down the street and I see a box of pizza, I would not try to open it; there really is no free lunch.

    A black caged drone with fans and a black box in the middle. This early version of the Hover flying camera generated a lot of initial excitement, but never fully took off. Zero Zero Robotics

    How did you survive after that deal fell apart?

    Wang: We went from 150 to about 50 people and turned to contract engineering. We worked with toy drone companies, with some industrial product companies. We built computer vision systems for larger drones. We did almost four years of contract work.

    But you kept working on flying cameras and launched a Kickstarter campaign in 2018. What happened to that product?

    Wang: It didn’t go well. The technology wasn’t really there. We filled some orders and refunded ones that we couldn’t fill because we couldn’t get the remote controller to work.

    We really didn’t have enough resources to create a new product for a new product category, a flying camera, to educate the market.

    So we decided to build a more conventional drone—our V-Coptr, a V-shaped bi-copter with only two propellers—to compete against DJI. We didn’t know how hard it would be. We worked on it for four years. Key engineers left out of total dismay, they lost faith, they lost hope.

    We came so close to going bankrupt so many times—at least six times in 10 years I thought I wasn’t going to be able to make payroll for the next month, but each time I got super lucky with something random happening. I never missed paying one dime—not because of my abilities, just because of luck.

    We still have a relatively healthy chunk of the team, though. And this summer my first ever software engineer is coming back. The people are the biggest wealth that we’ve collected over the years. The people who are still with us are not here for money or for success. We just realized along the way that we enjoy working with each other on impossible problems.

    When we talked in 2016, you envisioned the flying camera as the first in a long line of personal robotics products. Is that still your goal?

    Wang: In terms of short-term strategy, we are focusing 100 percent on the flying camera. I think about other things, but I’m not going to say I have an AI hardware company, though we do use AI. After 10 years I’ve given up on talking about that.

    Do you still think there’s a big market for a flying camera?

    Wang: I think flying cameras have the potential to become the second home robot [the first being the robotic vacuum] that can enter tens of millions of homes.

  • Gladys West: The Hidden Figure Behind GPS
    by Willie D. Jones on 30 July 2024 at 18:00



    Schoolchildren around the world are told that they have the potential to be great, often with the cheery phrase: “The sky’s the limit!”

    Gladys West took those words literally.

    While working for four decades as a mathematician and computer programmer at the U.S. Naval Proving Ground (now the Naval Surface Warfare Center) in Dahlgren, Va., she prepared the way for a satellite constellation in the sky that became an indispensable part of modern life: the Global Positioning System, or GPS.

    The second Black woman to ever work at the proving ground, West led a group of analysts who used satellite sensor data to calculate the shape of the Earth and the orbital routes around it. Her meticulous calculations and programming work established the flight paths now used by GPS satellites, setting the stage for navigation and positioning systems on which the world has come to rely.

    For decades, West’s contributions went unacknowledged. But she has begun receiving overdue recognition. In 2018 she was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame. In 2021 the International Academy of Digital Arts and Sciences presented her its Webby Lifetime Achievement Award, while the U.K. Royal Academy of Engineering gave her the Prince Philip Medal, the organization’s highest individual honor.

    West was presented the 2024 IEEE President’s Award for “mathematical modeling and development of satellite geodesy models that played a pivotal role in the development of the Global Positioning System.” The award is sponsored by IEEE.

    How the “hidden figure” overcame barriers

    West’s path to becoming a technology professional and an IEEE honoree was an unlikely one. Born in 1930 in Sutherland, Va., she grew up working on her family’s farm. To supplement the family’s income, her mother worked at a tobacco factory and her father was employed by a railroad company.

    Physical toil in the hot sun from daybreak until sundown with paltry financial returns, West says, made her determined to do something other than farming.

    Every day when she ventured into the fields to sow or harvest crops with her family, her thoughts were on the little red schoolhouse beyond the edge of the farm. She recalls gladly making the nearly 5-kilometer trek from her house, through the woods and over streams, to reach the one-room school.

    She knew that postsecondary education was her ticket out of farm life, so throughout her school years she made sure she was a standout student and a model of focus and perseverance.

    Her parents couldn’t afford to pay for her college education, but as valedictorian of her high school class, she earned a full-tuition scholarship from the state of Virginia. Money she earned as a babysitter paid for her room and board.

    West decided to pursue a degree in mathematics at Virginia State College (now Virginia State University), a historically Black school in Petersburg.

    At the time, the field was dominated by men. She earned a bachelor’s degree in the subject in 1952 and became a schoolteacher in Waverly, Va. After two years in the classroom, she returned to Virginia State to pursue a master’s degree in mathematics, which she earned in 1955.

    Black and white image of a woman sitting at a desk writing on a pad of paper. Gladys West at her desk, meticulously crunching numbers manually in the era before computers took over such tasks. Gladys West

    Setting the groundwork for GPS

    West began her career at the Naval Proving Ground in early 1956. She was hired as a mathematician, joining a cadre of workers who used linear algebra, calculus, and other methods to manually solve complex problems such as differential equations. Their mathematical wizardry was used to handle trajectory analysis for ships and aircraft as well as other applications.

    She was one of four Black employees at the facility, she says, adding that her determination to prove the capability of Black professionals drove her to excel.

    As computers were introduced into the Navy’s operations in the 1960s, West became proficient in Fortran IV. The programming language enabled her to use the IBM 7030—the world’s fastest supercomputer at the time—to process data at an unprecedented rate.

    Because of her expertise in mathematics and computer science, she was appointed director of projects that extracted valuable insights from satellite data gathered during NASA missions. West and her colleagues used the data to create ever more accurate models of the geoid—the shape of the Earth—factoring in gravitational fields and the planet’s rotation.

    One such mission was Seasat, which lasted from June to October 1978. Seasat was launched into orbit to test oceanographic sensors and gain a better understanding of Earth’s seas using the first space-based synthetic aperture radar (SAR) system, which enabled the first remote sensing of the Earth’s oceans.

    SAR can acquire high-resolution images at night and can penetrate through clouds and rain. Seasat captured many valuable 2D and 3D images before a malfunction ended the mission.

    Enough data was collected from Seasat for West’s team to refine existing geodetic models to better account for gravity and magnetic forces. The models were important for precisely mapping the Earth’s topography, determining the orbital routes that would later be used by GPS satellites, as well as documenting the spatial relationships that now let GPS determine exactly where a receiver is.

    In 1986 she published the “Data Processing System Specifications for the GEOSAT Satellite Radar Altimeter” technical report. It contained new calculations that could make her geodetic models more accurate. The calculations were made possible by data from the radio altimeter on the GEOSAT, a Navy satellite that went into orbit in March 1985.

    West’s career at Dahlgren lasted 42 years. By the time she retired in 1998, all 24 satellites in the GPS constellation had been launched to help the world keep time and handle navigation. But her role was largely unknown.

    A model of perseverance

    Neither an early bout of imposter syndrome nor the racial tensions that were an everyday element of her work life during the height of the Civil Rights Movement were able to knock her off course, West says.

    In the early 1970s, she decided that her career advancement was not proceeding as smoothly as she thought it should, so she decided to go to graduate school part time for another degree. She considered pursuing a doctorate in mathematics but realized, “I already had all the technical credentials I would ever need for my work for the Navy.” Instead, to solidify her skills as a manager, she earned a master’s degree in 1973 in public administration from the University of Oklahoma in Norman.

    After retiring from the Navy, she earned a doctorate in public administration in 2000 from Virginia Tech. Although she was recovering from a stroke at the time that affected her physical abilities, she still had the same drive to pursue an education that had once kept her focused on a little red schoolhouse.

    A formidable legacy

    West’s contributions have had a lasting impact on the fields of mathematics, geodesy, and computer science. Her pioneering efforts in a predominantly male and racially segregated environment set a precedent for future generations of female and minority scientists.

    West says her life and career are testaments to the power of perseverance, skill, and dedication—or “stick-to-it-iveness,” to use her parlance. Her story continues to inspire people who strive to push boundaries. She has shown that the sky is indeed not the limit but just the beginning.

  • The Doyen of the Valley Bids Adieu
    by Harry Goldstein on 30 July 2024 at 13:41



    When Senior Editor Tekla S. Perry started in this magazine’s New York office in 1979, she was issued the standard tools of the trade: notebooks, purple-colored pencils for making edits and corrections on page proofs, a push-button telephone wired into a WATS line for unlimited long distance calling, and an IBM Selectric typewriter, “the latest and greatest technology, from my perspective,” she recalled recently.

    And she put that typewriter through its paces. “In this period she was doing deep and outstanding reporting on major Silicon Valley startups, outposts, and institutions, most notably Xerox PARC,” says Editorial Director for Content Development Glenn Zorpette, who began his career at IEEE Spectrum five years later. “She did some of this reporting and writing with Paul Wallich, another staffer in the 1980s. Together they produced stories that hold up to this day as invaluable records of a pivotal moment in Silicon Valley history.”

    Indeed, the October 1985 feature story about Xerox PARC, which she cowrote with Wallich, ranks as Perry’s favorite article.

    “While now it’s widely known that PARC invented history-making technology and blew its commercialization—there have been entire books written about that—Paul Wallich and I were the first to really dig into what had happened at PARC,” she says. “A few of the key researchers had left and were open to talking, and some people who were still there had hit the point of being frustrated enough to tell their stories. So we interviewed a huge number of them, virtually all in person and at length. Think about who we met! Alan Kay, Larry Tesler, Alvy Ray Smith, Bob Metcalfe, John Warnock and Chuck Geschke, Richard Shoup, Bert Sutherland, Charles Simonyi, Lynn Conway, and many others.”

    “I know without a doubt that my path and those of my younger women colleagues have been smoothed enormously by the very fact that Tekla came before us and showed us the way.” –Jean Kumagai

    After more than seven years of reporting trips to Silicon Valley, Perry relocated there permanently as Spectrum’s first “field editor.”

    Over the course of more than four decades, Perry became known for her profiles of Valley visionaries and IEEE Medal of Honor recipients, most recently Vint Cerf and Bob Kahn. She established working relationships—and, in some cases, friendships—with some of the most important people in Northern California tech, including Kay and Smith, Steve Wozniak (Apple), Al Alcorn and Nolan Bushnell (Atari), Andy Grove (Intel), Judy Estrin (Bridge, Cisco, Packet Design), and John Hennessy (chairperson of Alphabet and former president of Stanford).

    Just as her interview subjects were regarded as pioneers in their fields, Perry herself ranks as a pioneer for women tech journalists. As the first woman editor hired at Spectrum and one of a precious few women journalists reporting on technology at the time, she blazed a trail that others have followed, including several current Spectrum staff members.

    “Tekla had already been at Spectrum for 20 years when I joined the staff,” Executive Editor Jean Kumagai told me. “I know without a doubt that my path and those of my younger women colleagues have been smoothed enormously by the very fact that Tekla came before us and showed us the way.”

    Perry is retiring this month after 45 years of service to IEEE and its members. We’re sad to see her go and I know many readers are, too—from personal experience. I met an IEEE Life Member for breakfast a few weeks ago. I asked him, as an avid Spectrum reader since 1964, what he liked most about it. He began talking about Perry’s stories, and how she inspired him through the years. The connections forged between reader and writer are rare in this age of blurbage and spew, but the way Perry connected readers to their peers was, well, peerless. Just like Perry herself.

    This article appears in the August 2024 print issue.

  • A Robot Dentist Might Be a Good Idea, Actually
    by Evan Ackerman on 30 July 2024 at 12:00



    I’ll be honest: when I first got this pitch for an autonomous robot dentist, I was like: “Okay, I’m going to talk to these folks and then write an article, because there’s no possible way for this thing to be anything but horrific.” Then they sent me some video that was, in fact, horrific, in the way that only watching a high speed drill remove most of a tooth can be.

    But fundamentally this has very little to do with robotics, because getting your teeth drilled just sucks no matter what. So the real question we should be asking is this: How can we make a dental procedure as quick and safe as possible, to minimize that inherent horrific-ness? And the answer, surprisingly, may be this robot from a startup called Perceptive.

    Perceptive is today announcing two new technologies that I very much hope will make future dental experiences better for everyone. While it’s easy to focus on the robot here (because, well, it’s a robot), the reason the robot can do what it does (which we’ll get to in a minute) is because of a new imaging system. The handheld imager, which is designed to operate inside of your mouth, uses optical coherence tomography (OCT) to generate a 3D image of the inside of your teeth, and even all the way down below the gum line and into the bone. This is vastly better than the 2D or 3D x-rays that dentists typically use, both in resolution and positional accuracy.

    A hand in a blue medical glove holds a black wand-like device with a circuit board visible. Perceptive’s handheld optical coherence tomography imager scans for tooth decay. Perceptive

    X-rays, it turns out, are actually really bad at detecting cavities; Perceptive CEO Chris Ciriello tells us that their accuracy at pinpointing the location and extent of tooth decay is on the order of 30 percent. In practice, this isn’t as much of a problem as it seems like it should be, because the dentist will just start drilling into your tooth and keep going until they find everything. But obviously this won’t work for a robot, where you need all of the data beforehand. That’s where the OCT comes in. You can think of OCT as similar to an ultrasound, in that it uses reflected energy to build up an image, but OCT uses light instead of sound for much higher resolution.

    A short video shows outlines of teeth in progressively less detail, but highlights some portions in blood red. Perceptive’s imager can create detailed 3D maps of the insides of teeth. Perceptive

    The reason OCT has not been used for teeth before is because with conventional OCT, the exposure time required to get a detailed image is several seconds, and if you move during the exposure, the image will blur. Perceptive is instead using a structure from motion approach (which will be familiar to many robotics folks), where they’re relying on a much shorter exposure time resulting in far fewer data points, but then moving the scanner and collecting more data to gradually build up a complete 3D image. According to Ciriello, this approach can localize pathology within about 20 micrometers with over 90 percent accuracy, and it’s easy for a dentist to do since they just have to move the tool around your tooth in different orientations until the scan completes.
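
    As a rough illustration of that accumulation step (names and data layout are made up here, not taken from Perceptive), assume each short exposure yields a small 3D point set plus an estimated scanner pose; fusing the exposures into a single tooth-frame point cloud is then a series of rigid transforms:

    ```python
    # Hypothetical sketch: fuse many short, sparse OCT exposures into one point cloud
    # by transforming each exposure's points with the scanner pose estimated for it.
    import numpy as np

    def fuse_exposures(exposures):
        """exposures: list of (points, pose) pairs, where points is an (N, 3) array
        in that exposure's scanner frame and pose is a (4, 4) homogeneous transform
        from the scanner frame to a common tooth frame."""
        fused = []
        for points, pose in exposures:
            homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
            in_tooth_frame = (pose @ homogeneous.T).T[:, :3]              # (N, 3)
            fused.append(in_tooth_frame)
        # Each short exposure contributes only a few points; their union builds up
        # the detailed 3D model that one long, motion-blurred exposure could not.
        return np.vstack(fused)
    ```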

    Again, this is not just about collecting data so that a robot can get to work on your tooth. It’s about better imaging technology that helps your dentist identify and treat issues you might be having. “We think this is a fundamental step change,” Ciriello says. “We’re giving dentists the tools to find problems better.”

    A silvery robotic arm with a small drill at the end. The robot is mechanically coupled to your mouth for movement compensation. Perceptive

    Ciriello was a practicing dentist in a small mountain town in British Columbia, Canada. People in such communities can have a difficult time getting access to care. “There aren’t too many dentists who want to work in rural communities,” he says. “Sometimes it can take months to get treatment, and if you’re in pain, that’s really not good. I realized that what I had to do was build a piece of technology that could increase the productivity of dentists.”

    Perceptive’s robot is designed to take a dental procedure that typically requires several hours and multiple visits, and complete it in minutes in a single visit. The entry point for the robot is crown installation, where the top part of a tooth is replaced with an artificial cap (the crown). This is an incredibly common procedure, and it usually happens in two phases. First, the dentist will remove the top of the tooth with a drill. Next, they take a mold of the tooth so that a crown can be custom fit to it. Then they put a temporary crown on and send you home while they mail the mold off to get your crown made. A couple weeks later, the permanent crown arrives, you go back to the dentist, and they remove the temporary one and cement the permanent one on.

    With Perceptive’s system, it instead goes like this: on a previous visit where the dentist has identified that you need a crown in the first place, you’d have gotten a scan of your tooth with the OCT imager. Based on that data, the robot will have planned a drilling path, and then the crown could be made before you even arrive for the drilling to start, which is only possible because the precise geometry is known in advance. You arrive for the procedure, the robot does the actual drilling in maybe five minutes or so, and the perfectly fitting permanent crown is cemented into place and you’re done.

    A silvery robotic arm with a small drill at the end. The arm is mounted on a metal cart with a display screen. The robot is still in the prototype phase but could be available within a few years. Perceptive

    Obviously, safety is a huge concern here, because you’ve got a robot arm with a high-speed drill literally working inside of your skull. Perceptive is well aware of this.

    The most important thing to understand about the Perceptive robot is that it’s physically attached to you as it works. You put something called a bite block in your mouth and bite down on it, which both keeps your mouth open and keeps your jaw from getting tired. The robot’s end effector is physically attached to that block through a series of actuated linkages, such that any motions of your head are instantaneously replicated by the end of the drill, even if the drill is moving. Essentially, your skull is serving as the robot’s base, and your tooth and the drill are in the same reference frame. Purely mechanical coupling means there’s no vision system or encoders or software required: it’s a direct physical connection so that motion compensation is instantaneous. As a patient, you’re free to relax and move your head somewhat during the procedure, because it makes no difference to the robot.

    Human dentists do have some strategies for not stabbing you with a drill if you move during a procedure, like resting their fingers on your teeth and supporting the drill on them. But this robot should be safer and more accurate than that method, because the rigid connection keeps the error to a few tens of micrometers, even on a moving patient. It’ll move a little bit slower than a dentist would, but because it’s only drilling exactly where it needs to, it can complete the procedure faster overall, says Ciriello.

    There’s also a physical counterbalance system within the arm, a nice touch that makes the arm effectively weightless. (It’s somewhat similar to the PR2 arm, for you OG robotics folks.) And the final safety measure is the dentist-in-the-loop via a foot pedal that must remain pressed or the robot will stop moving and turn off the drill.

    Ciriello claims that not only is the robot able to work faster, it also will produce better results. Most restorations like fillings or crowns last about five years, because the dentist either removed too much material from the tooth and weakened it, or removed too little material and didn’t completely solve the underlying problem. Perceptive’s robot is able to be far more exact. Ciriello says that the robot can cut geometry that’s “not humanly possible,” fitting restorations on to teeth with the precision of custom-machined parts, which is pretty much exactly what they are.

    Perceptive has successfully used its robot on real human patients, as shown in this sped-up footage. In reality, the robot moves slightly slower than a human dentist. Perceptive

    While it’s easy to focus on the technical advantages of Perceptive’s system, dentist Ed Zuckerberg (who’s an investor in Perceptive) points out that it’s not just about speed or accuracy, it’s also about making patients feel better. “Patients think about the precision of the robot, versus the human nature of their dentist,” Zuckerberg says. It gives them confidence to see that their dentist is using technology in their work, especially in ways that can address common phobias. “If it can enhance the patient experience or make the experience more comfortable for phobic patients, that automatically checks the box for me.”

    There is currently one other dental robot on the market. Called Yomi, it offers assistive autonomy for one very specific procedure: placing dental implants. Yomi does not drill on its own; instead, it guides the dentist to make sure they drill to the correct depth and angle.

    While Perceptive has successfully tested its first-generation system on humans, the system is not yet ready for commercialization. The next step will likely be what’s called a pivotal clinical trial with the FDA, and if that goes well, Ciriello estimates that the robot could be available to the public in “several years.” Perceptive has raised US $30 million in funding so far, and here’s hoping that’s enough to get it across the finish line.

  • Your Gateway to a Vibrant Career in the Expanding Semiconductor Industry
    by Douglas McCormick on 30. July 2024. at 10:00



    This sponsored article is brought to you by Purdue University.

    The CHIPS America Act was a response to a worsening shortfall in engineers equipped to meet the growing demand for advanced electronic devices. That need persists. In its 2023 policy report, Chipping Away: Assessing and Addressing the Labor Market Gap Facing the U.S. Semiconductor Industry, the Semiconductor Industry Association forecast a demand for 69,000 microelectronic and semiconductor engineers between 2023 and 2030—including 28,900 new positions created by industry expansion and 40,100 openings to replace engineers who retire or leave the field.

    This number does not include another 34,500 computer scientists (13,200 new jobs, 21,300 replacements), nor does it count jobs in other industries that require advanced or custom-designed semiconductors for controls, automation, communication, product design, and the emerging systems-of-systems technology ecosystem.

    Purdue University is taking charge, leading semiconductor technology and workforce development in the U.S. As early as Spring 2022, Purdue University became the first top engineering school to offer an online Master’s Degree in Microelectronics and Semiconductors.

    U.S. News & World Report has ranked the university’s graduate engineering program among America’s 10 best every year since 2012 (and among the top 4 since 2022)

    “The degree was developed as part of Purdue’s overall semiconductor degrees program,” says Purdue Prof. Vijay Raghunathan, one of the architects of the semiconductor program. “It was what I would describe as the nation’s most ambitious semiconductor workforce development effort.”

    Prof. Vijay Raghunathan, one of the architects of the online Master’s Degree in Microelectronics and Semiconductors at Purdue. Purdue University

    Purdue built and announced its bold high-technology online program while the U.S. Congress was still debating the $53 billion “Creating Helpful Incentives to Produce Semiconductors for America Act” (CHIPS America Act), which would be passed in July 2022 and signed into law in August.

    Today, the online Master’s in Microelectronics and Semiconductors is well underway. Students learn to use leading-edge equipment and software and prepare to meet the challenges they will face in a rejuvenated, and critical, U.S. semiconductor industry.

    Is the drive for semiconductor education succeeding?

    “I think we have conclusively established that the answer is a resounding ‘Yes,’” says Raghunathan. Like understanding big data, or being able to program, “the ability to understand how semiconductors and semiconductor-based systems work, even at a rudimentary level, is something that everybody should know. Virtually any product you design or make is going to have chips inside it. You need to understand how they work, what the significance is, and what the risks are.”

    Earning a Master’s in Microelectronics and Semiconductors

    Students pursuing the Master’s Degree in Microelectronics and Semiconductors will take courses in circuit design, devices and engineering, systems design, and supply chain management offered by several schools in the university, such as Purdue’s Mitch Daniels School of Business, the Purdue Polytechnic Institute, the Elmore Family School of Electrical and Computer Engineering, and the School of Materials Engineering, among others.

    Professionals can also take one-credit-hour courses, which are intended to help students build “breadth at the edges,” a notion that grew out of feedback from employers: Tomorrow’s engineering leaders will need broad knowledge to connect with other specialties in the increasingly interdisciplinary world of artificial intelligence, robotics, and the Internet of Things.

    “This was something that we embarked on as an experiment 5 or 6 years ago,” says Raghunathan of the one-credit courses. “I think, in hindsight, that it’s turned out spectacularly.”

    A researcher adjusts imaging equipment in a lab in Birck Nanotechnology Center, home to Purdue’s advanced research and development on semiconductors and other technology at the atomic scale. Rebecca Robiños/Purdue University

    The Semiconductor Engineering Education Leader

    Purdue, which opened its first classes in 1874, is today an acknowledged leader in engineering education. U.S. News & World Report has ranked the university’s graduate engineering program among America’s 10 best every year since 2012 (and among the top 4 since 2022). And Purdue’s online graduate engineering program has ranked in the country’s top three since the publication started evaluating online grad programs in 2020. (Purdue has offered distance Master’s degrees since the 1980s. Back then, of course, course lectures were videotaped and mailed to students. With the growth of the web, “distance” became “online,” and the program has swelled.)

    Thus, Microelectronics and Semiconductors Master’s Degree candidates can study online or on-campus. Both tracks take the same courses from the same instructors and earn the same degree. There are no footnotes, asterisks, or parentheses on the diploma to denote online or in-person study.

    “If you look at our program, it will become clear why Purdue is increasingly considered America’s leading semiconductors university” —Prof. Vijay Raghunathan, Purdue University

    Students take classes at their own pace, using an integrated suite of proven online-learning applications for attending lectures, submitting homework, taking tests, and communicating with faculty and one another. Texts may be purchased or downloaded from the school library. And there is frequent use of modeling and analytical tools like Matlab. In addition, Purdue is home to the national design-computing resource nanoHUB.org (with hundreds of modeling, simulation, teaching, and software-development tools) and its offspring, chipshub.org (specializing in tools for chip design and fabrication).

    From R&D to Workforce and Economic Development

    “If you look at our program, it will become clear why Purdue is increasingly considered America’s leading semiconductors university, because this is such a strategic priority for the entire university, from our President all the way down,” Prof. Raghunathan sums up. “We have a task force that reports directly to the President, a task force focused only on semiconductors and microelectronics. On all aspects—R&D, the innovation pipeline, workforce development, economic development to bring companies to the state. We’re all in as far as chips are concerned.”

  • Build a Radar Cat Detector
    by Stephen Cass on 29. July 2024. at 14:00



    You have a closed box. There may be a live cat inside, but you won’t know until you open the box. For most people, this situation is a theoretical conundrum that probes the foundations of quantum mechanics. For me, however, it’s a pressing practical problem, not least because physics completely skates over the vital issue of how annoyed the cat will be when the box is opened. But fortunately, engineering comes to the rescue, in the form of a new US $50 maker-friendly pulsed coherent radar sensor from SparkFun.

    Perhaps I should back up a little bit. Working from home during the pandemic, my wife and I discovered a colony of feral cats living in the backyards of our block in New York City. We reversed the colony’s growth by doing trap-neuter-return (TNR) on as many of its members as we could, and we purchased three Feralvilla outdoor shelters to see our furry neighbors through the harsh New York winters. These roughly cube-shaped insulated shelters allow the cats to enter via an opening in a raised floor. A removable lid on top allows us to replace straw bedding every few months. It’s impossible to see inside the shelter without removing the lid, meaning you run the risk of surprising a clawed predator that, just moments before, had been enjoying a quiet snooze.

    The enclosure for the radar [left column] is made of basswood (adding cat ears on top is optional). A microcontroller [top row, middle column] processes the results from the radar module [top row, right column] and illuminates the LEDs [right column, second from top] accordingly. A battery and on/off switch [bottom row, left to right] make up the power supply. James Provost

    Feral cats respond to humans differently than socialized pet cats do. They see us as threats rather than bumbling servants. Even after years of daily feeding, most of the cats in our block’s colony will not let us approach closer than a meter or two, let alone suffer being touched. They have claws that have never seen a clipper. And they don’t like being surprised or feeling hemmed in. So I wanted a way to find out if a shelter was occupied before I popped open its lid for maintenance. And that’s where radar comes in.

    SparkFun’s pulsed coherent radar module is based on Acconeer’s low-cost A121 sensor. Smaller than a fingernail, the sensor operates at 60 gigahertz, which means its signal can penetrate many common materials. As the signal passes through a material, some of it is reflected back to the sensor, allowing you to determine distances to multiple surfaces with millimeter-level precision. The radar can be put into a “presence detector” mode—intended to flag whether or not a human is present—in which it looks for changes in the distance of reflections to identify motion.
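
    The presence-detection principle is easy to sketch in code. The Python snippet below is only an illustration of the idea, not the Acconeer or SparkFun firmware: it gates reflections to a fixed range and flags “presence” when the nearest in-range reflection shifts between sweeps by more than a noise threshold. The thresholds and data are made up.

    import numpy as np

    def presence_detected(sweeps, max_range_m=0.5, motion_threshold_m=0.002):
        """Illustrative motion-based presence check.

        sweeps: list of 1-D arrays, each holding the reflection distances (meters)
        reported by one radar sweep. Reflections beyond max_range_m are ignored,
        mimicking a range gate set to the depth of the shelter (about 50 cm).
        Presence is declared if the nearest in-range reflection moves by more than
        motion_threshold_m between sweeps -- for example, a sleeping cat's breathing.
        """
        nearest = []
        for distances in sweeps:
            in_range = distances[distances <= max_range_m]
            if in_range.size:
                nearest.append(in_range.min())
        if len(nearest) < 2:
            return False
        return np.max(np.abs(np.diff(nearest))) > motion_threshold_m

    # Toy sweeps: a reflection at about 30 cm that shifts by a few millimeters.
    sweeps = [np.array([0.30 + 0.003 * np.sin(i), 0.80]) for i in range(20)]
    print(presence_detected(sweeps))  # True: small periodic motion within range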

    As soon as I saw the announcement for SparkFun’s module, the wheels began turning. If the radar could detect a human, why not a feline? Sure, I could have solved my is-there-a-cat-in-the-box problem with less sophisticated technology, by, say, putting a pressure sensor inside the shelter. But that would have required a permanent setup complete with weatherproofing, power, and some way of getting data out. Plus I’d have to perform three installations, one for each shelter. For information I needed only once every few months, that seemed a bit much. So I ordered the radar module, along with a $30 IoT RedBoard microcontroller. The RedBoard operates at the same 3.3 volts as the radar and can configure the module and parse its output.

    If the radar could detect a human, why not a feline?

    Connecting the radar to the RedBoard was a breeze, as they both have Qwiic 4-wire interfaces, which provide power along with an I2C serial connection to peripherals. SparkFun’s Arduino libraries and example code let me quickly test the idea’s feasibility: I connected the microcontroller to a host computer via USB and viewed the results from the radar in a serial monitor. Experiments with our indoor cats (two defections from the colony) showed that the motion of their breathing was enough to trigger the presence detector, even when they were sound asleep. Further testing showed the radar could penetrate the wooden walls of the shelters and the insulated lining.
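
    For that kind of quick feasibility test, the host side can be as simple as reading the microcontroller’s serial output. Below is a hypothetical Python sketch using pyserial; the port name, baud rate, and the exact text the Arduino sketch prints (assumed here to contain the word “Presence”) depend entirely on your setup.

    import serial  # pyserial

    PORT = "/dev/ttyUSB0"   # hypothetical; often COM3 or similar on Windows
    BAUD = 115200           # must match the rate set in the Arduino sketch

    with serial.Serial(PORT, BAUD, timeout=2) as link:
        for _ in range(100):  # watch the detector output for a little while
            line = link.readline().decode(errors="ignore").strip()
            if not line:
                continue
            # Assumes the Arduino sketch prints a line containing "Presence"
            # whenever the radar's presence detector triggers.
            status = "CAT" if "Presence" in line else "no cat"
            print(f"{status}: {line}")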

    The next step was to make the thing portable. I added a small $11 lithium battery and spliced an on/off switch into its power lead. I hooked up two gumdrop LEDs to the RedBoard’s input/output pins and modified SparkFun’s sample scripts to illuminate the LEDs based on the output of the presence detector: a green LED for “no cat” and red for “cat.” I built an enclosure out of basswood, mounted the circuit boards and battery, and cut a hole in the back as a window for the radar module. (Side note: Along with tending feral cats, another thing I tried during the pandemic was 3D-printing plastic enclosures for projects. But I discovered that cutting, drilling, and gluing wood was faster, sturdier, and much more forgiving when making one-offs or prototypes.)

    The radar sensor sends out 60-gigahertz pulses through the walls and lining of the shelter. As the radar penetrates the layers, some radiation is reflected back to the sensor, which it detects to determine distances. Some materials will reflect the pulse more strongly than others, depending on their electrical permittivity. James Provost

    I also modified the scripts to adjust the range over which the presence detector scans. When I hold the detector against the wall of a shelter, it looks only at reflections coming from the space inside that wall and the opposite side, a distance of about 50 centimeters. As all the cats in the colony are adults, they take up enough of a shelter’s volume to intersect any such radar beam, as long as I don’t place the detector near a corner.

    I performed in-shelter tests of the portable detector with one of our indoor cats, bribed with treats to sit in the open box for several seconds at a time. The detector did successfully spot him whenever he was inside, although it is prone to false positives. I will be trying to reduce these errors by adjusting the plethora of available configuration settings for the radar. But in the meantime, false positives are much more desirable than false negatives: A “no cat” light means it’s definitely safe to open the shelter lid, and my nerves (and the cats’) are the better for it.

  • How LG and Samsung Are Making TV Screens Disappear
    by Alfred Poor on 29. July 2024. at 13:00



    A transparent television might seem like magic, but both LG and Samsung demonstrated such displays this past January in Las Vegas at CES 2024. And those large transparent TVs, which attracted countless spectators peeking through video images dancing on their screens, were showstoppers.

    Although they are indeed impressive, transparent TVs are not likely to appear—or disappear—in your living room any time soon. Samsung and LG have taken two very different approaches to achieve a similar end—LG is betting on OLED displays, while Samsung is pursuing microLED screens—and neither technology is quite ready for prime time. Understanding the hurdles that still need to be overcome, though, requires a deeper dive into each of these display technologies.

    How does LG’s see-through OLED work?

    OLED stands for organic light-emitting diode, and that pretty much describes how it works. OLED materials are carbon-based compounds that emit light when energized with an electrical current. Different compounds produce different colors, which can be combined to create full-color images.

    To construct a display from these materials, manufacturers deposit them as thin films on some sort of substrate. The most common approach arranges red-, green-, and blue-emitting (RGB) materials in patterns to create a dense array of full-color pixels. A display with what is known as 4K resolution contains a matrix of 3,840 by 2,160 pixels—8.3 million pixels in all, formed from nearly 25 million red, green, and blue subpixels.
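
    For readers who like to check the arithmetic, the subpixel count works out as

    $$3840 \times 2160 = 8{,}294{,}400 \approx 8.3 \text{ million pixels}, \qquad 3 \times 8{,}294{,}400 = 24{,}883{,}200 \approx 25 \text{ million subpixels}.$$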


    The timing and amount of electrical current sent to each subpixel determines how much light it emits. So by controlling these currents properly, you can create the desired image on the screen. To accomplish this, each subpixel must be electrically connected to two or more transistors, which act as switches. Traditional wires wouldn’t do for this, though: They’d block the light. You need to use transparent (or largely transparent) conductive traces.

    LG’s demonstration of transparent OLED displays at CES 2024 seemed almost magical. Ethan Miller/Getty Images

    A display has thousands of such traces arranged in a series of rows and columns to provide the necessary electrical connections to each subpixel. The transistor switches are also fabricated on the same substrate. That all adds up to a lot of materials that must be part of each display. And those materials must be carefully chosen for the OLED display to appear transparent.

    The conductive traces are the easy part. The display industry has long used indium tin oxide as a thin-film conductor. A typical layer of this material is only 135 nanometers thick but allows about 80 percent of the light impinging on it to pass through.

    The transistors are more of a problem, because the materials used to fabricate them are inherently opaque. The solution is to make the transistors as small as you can, so that they block the least amount of light. The amorphous silicon layer used for transistors in most LCD displays is inexpensive, but its low electron mobility means that transistors composed of this material can only be made so small. This silicon layer can be annealed with lasers to create low-temperature polysilicon, a crystallized form of silicon, which improves electron mobility, reducing the size of each transistor. But this process works only for small sheets of glass substrate.

    Faced with this challenge, designers of transparent OLED displays have turned to indium gallium zinc oxide (IGZO). This material has high enough electron mobility to allow for smaller transistors than is possible with amorphous silicon, meaning that IGZO transistors block less light.

    These tactics help solve the transparency problem, but OLEDs have some other challenges. For one, exposure to oxygen or water vapor destroys the light-emissive materials. So these displays need an encapsulating layer, something to cover their surfaces and edges. Because this layer creates a visible gap when two panels are placed edge to edge, you can’t tile a set of smaller displays to create a larger one. If you want a big OLED display, you need to fabricate a single large panel.

    The result of even the best engineering here is a “transparent” display that still blocks some light. You won’t mistake LG’s transparent TV for window glass: People and objects behind the screen appear noticeably darker than when viewed directly. According to one informed observer, the LG prototype appears to have 45 percent transparency.

    How does Samsung’s magical MicroLED work?

    For its transparent displays, Samsung is using inorganic LEDs. These devices, which are very efficient at converting electricity into light, are commonplace today: in household lightbulbs, in automobile headlights and taillights, and in electronic gear, where they often show that the unit is turned on.

    In LED displays, each pixel contains three LEDs, one red, one green, and one blue. This works great for the giant digital displays used in highway billboards or in sports-stadium jumbotrons, whose images are meant to be viewed from a good distance. But up close, these LED pixel arrays are noticeable.

    TV displays, on the other hand, are meant to be viewed from modest distances and thus require far smaller LEDs than the chips used in, say, power-indicator lights. Two years ago, these “microLED” displays used chips that were just 30 by 50 micrometers. (A typical sheet of paper is 100 micrometers thick.) Today, such displays use chips less than half that size: 12 by 27 micrometers.

    While transparent displays are stunning, they might not be practical for home use as televisions. Expect to see them adopted first as signage in retail settings. AUO

    These tiny LED chips block very little light, making the display more transparent. The Taiwanese display maker AUO recently demonstrated a microLED display with more than 60 percent transparency.

    Oxygen and moisture don’t affect microLEDs, so they don’t need to be encapsulated. This makes it possible to tile smaller panels to create a seamless larger display. And the silicon coating on such small panels can be annealed to create polysilicon, which performs better than IGZO, so the transistors can be even smaller and block less light.

    But the microLED approach has its own problems. Indeed, the technology is still in its infancy: the displays cost a great deal to manufacture and require some contortions to get uniform brightness and color across the entire screen.

    For example, individual OLED materials emit a well-defined color, but that’s not the case for LEDs. Minute variations in the physical characteristics of an LED chip can alter the wavelength of light it emits by a measurable—and noticeable—amount. Manufacturers have typically addressed this challenge by using a binning process: They test thousands of chips and then group them into bins of similar wavelengths, discarding those that don’t fit the desired ranges. This explains in part why those large digital LED screens are so expensive: Many LEDs created for their construction must be discarded.

    But binning doesn’t really work when dealing with microLEDs. The tiny chips are difficult to test and are so expensive that costs would be astronomical if too many had to be rejected.

    Though you can see through today’s transparent displays, they do block a noticeable amount of light, making the background darker than when viewed directly. Tekla S. Perry

    Instead, manufacturers test microLED displays for uniformity after they’re assembled, then calibrate them to adjust the current applied to each subpixel so that color and brightness are uniform across the display. This calibration process, which involves scanning an image on the panel and then reprogramming the control circuitry, can sometimes require thousands of iterations.
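
    The calibration loop the manufacturers describe can be pictured with a toy model. The Python sketch below is purely illustrative, not any vendor’s calibration software: it repeatedly “images” a small panel whose subpixels have random gain errors and nudges each drive current toward a uniform brightness target until the worst error is small.

    import numpy as np

    rng = np.random.default_rng(1)

    # Tiny toy "panel": a real 4K panel has roughly 25 million subpixels.
    gains = 1.0 + 0.05 * rng.standard_normal((108, 192 * 3))  # light out per unit current
    currents = np.ones_like(gains)   # start every subpixel at the same drive current
    target = 1.0                     # desired brightness for every subpixel

    for passes in range(1, 5001):    # real panels can reportedly need thousands of passes
        measured = gains * currents  # "scan an image of the panel"
        worst = np.max(np.abs(measured - target))
        if worst < 0.001:            # within 0.1 percent of uniform: good enough
            break
        # Nudge each subpixel's current toward the target (damped, to mimic measurement limits).
        currents *= 1.0 + 0.5 * (target / measured - 1.0)

    print(f"converged after {passes} calibration passes (worst error {worst:.4f})")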

    Then there’s the problem of assembling the panels. Remember those 25 million microLED chips that make up a 4K display? Each must be positioned precisely, and each must be connected to the correct electrical contacts.

    The LED chips are initially fabricated on sapphire wafers, each of which contains chips of only one color. These chips must be transferred from the wafer to a carrier to hold them temporarily before applying them to the panel backplane. The Taiwanese microLED company PlayNitride has developed a process for creating large tiles with chips spaced less than 2 micrometers apart. Its process for positioning these tiny chips has better than 99.9 percent yields. But even at a 99.9 percent yield, you can expect about 25,000 defective subpixels in a 4K display. They might be positioned incorrectly so that no electrical contact is made, or the wrong color chip is placed in the pattern, or a subpixel chip might be defective. While correcting these defects is sometimes possible, doing so just adds to the already high cost.
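
    The arithmetic behind that estimate, taking roughly 25 million subpixels per 4K panel and a 0.1 percent failure rate:

    $$25{,}000{,}000 \times (1 - 0.999) = 25{,}000 \text{ defective subpixels}.$$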

    Samsung’s microLED technology allows the image to extend right up to the edge of the glass panel, making it possible to create larger displays by tiling smaller panels together. Brendan Smialowski/AFP/Getty Images

    Could MicroLEDs still be the future of flat-panel displays? “Every display analyst I know believes that microLEDs should be the ‘next big thing’ because of their brightness, efficiency, color, viewing angles, response times, and lifetime,” says Bob Raikes, editor of the 8K Monitor newsletter. “However, the practical hurdles of bringing them to market remain huge. That Apple, which has the deepest pockets of all, has abandoned microLEDs, at least for now, and after billions of dollars in investment, suggests that mass production for consumer markets is still a long way off.”

    At this juncture, even though microLED technology offers some clear advantages, OLED is more cost-effective and holds the early lead for practical applications of transparent displays.

    But what is a transparent display good for?

    Samsung and LG aren’t the only companies to have demonstrated transparent panels recently.

    AUO’s 60-inch transparent display, made of tiled panels, won the People’s Choice Award for Best MicroLED-Based Technology at the Society for Information Display’s Display Week, held in May in San Jose, Calif. And the Chinese company BOE Technology Group demonstrated a 49-inch transparent OLED display at CES 2024.

    These transparent displays all have one feature in common: They will be insanely expensive. Only LG’s transparent OLED display has been announced as a commercial product. It’s without a price or a ship date at this point, but it’s not hard to guess how costly it will be, given that nontransparent versions are expensive enough. For example, LG prices its top-end 77-inch OLED TV at US $4,500.

    Displays using both microLED technology [above] and OLED technology have some components in each pixel that block light coming from the background. These include the red, green, and blue emissive materials along with the transistors required to switch them on and off. Smaller components mean that you can have a larger transmissive space that will provide greater transparency. Illustration: Mark Montgomery; Source: Samsung

    Thanks to seamless tiling, transparent microLED displays can be larger than their OLED counterparts. But their production costs are larger as well. Much larger. And that is reflected in prices. For example, Samsung’s nontransparent 114-inch microLED TV sells for $150,000. We can reasonably expect transparent models to cost even more.

    Seeing these prices, you really have to ask: What are the practical applications of transparent displays?

    Don’t expect these displays to show up in many living rooms as televisions. And high price is not the only reason. After all, who wants to see their bookshelves showing through in the background while they’re watching Dune? That’s why the transparent OLED TV LG demonstrated at CES 2024 included a “contrast layer”—basically, a black cloth—that unrolls and covers the back of the display on demand.

    Transparent displays could have a place on the desktop—not so you can see through them, but so that a camera can sit behind the display, capturing your image while you’re looking directly at the screen. This would help you maintain eye contact during a Zoom call. One company—Veeo—demonstrated a prototype of such a product at CES 2024, and it plans to release a 30-inch model for about $3,000 and a 55-inch model for about $8,500 later this year. Veeo’s products use LG’s transparent OLED technology.

    Transparent screens are already showing up as signage and other public-information displays. LG has installed transparent 55-inch OLED panels in the windows of Seoul’s new high-speed underground rail cars, which are part of a system known as the Great Train eXpress. Riders can browse maps and other information on these displays, which can be made clear when needed for passengers to see what’s outside.

    LG transparent panels have also been featured in an E35e excavator prototype by Doosan Bobcat. This touchscreen display can act as the operator’s front or side window, showing important machine data or displaying real-time images from cameras mounted on the vehicle. Such transparent displays can serve a similar function as the head-up displays in some aircraft windshields.

    And so, while the large transparent displays are striking, you’ll be more likely to see them initially as displays for machinery operators, public entertainment, retail signage, and even car windshields. The early adopters might cover the costs of developing mass-production processes, which in turn could drive prices down. But even if costs eventually reach reasonable levels, whether the average consumer really wants a transparent TV in their home is something that remains to be seen—unlike the device itself, whose whole point is not to be.

  • How India Is Starting a Chip Industry From Scratch
    by Samuel K. Moore on 28. July 2024. at 15:01



    In March, India announced a major investment to establish a semiconductor-manufacturing industry. With US $15 billion in investments from companies, state governments, and the central government, India now has plans for several chip-packaging plants and the country’s first modern chip fab as part of a larger effort to grow its electronics industry.

    But turning India into a chipmaking powerhouse will also require a substantial investment in R&D. And so the Indian government turned to IEEE Fellow and retired Georgia Tech professor Rao Tummala, a pioneer of some of the chip-packaging technologies that have become critical to modern computers. Tummala spoke with IEEE Spectrum during the IEEE Electronic Components and Technology Conference in Denver, Colo., in May.

    Rao Tummala


    Rao Tummala is a pioneer of semiconductor packaging and a longtime research leader at Georgia Tech.

    What are you helping the government of India to develop?

    Rao Tummala: I’m helping to develop the R&D side of India’s semiconductor efforts. We picked 12 strategic research areas. If you explore research in those areas, you can make almost any electronic system. For each of those 12 areas, there’ll be one primary center of excellence. And that’ll be typically at an IIT (Indian Institute of Technology) campus. Then there’ll be satellite centers attached to those throughout India. So when we’re done with it, in about five years, I expect to see probably almost all the institutions involved.

    Why did you decide to spend your retirement doing this?

    Tummala: It’s my giving back. India gave me the best education possible at the right time.

    I’ve been going to India and wanting to help for 20 years. But I wasn’t successful until the current government decided they’re going to make manufacturing and semiconductors important for the country. They asked themselves: What would be the need for semiconductors, in 10 years, 20 years, 30 years? And they quickly concluded that if you have 1.4 billion people, each consuming, say, $5,000 worth of electronics each year, it requires billions and billions of dollars’ worth of semiconductors.

    “It’s my giving back. India gave me the best education possible at the right time.” —Rao Tummala, advisor to the government of India

    What advantages does India have in the global semiconductor space?

    Tummala: India has the best educational system in the world for the masses. It produces the very best students in science and engineering at the undergrad level and lots of them. India is already a success in design and software. All the major U.S. tech companies have facilities in India. And they go to India for two reasons. It has a lot of people with a lot of knowledge in the design and software areas, and those people are cheaper [to employ].

    What are India’s weaknesses, and is the government response adequate to overcoming them?

    Tummala: India is clearly behind in semiconductor manufacturing. It’s behind in knowledge and behind in infrastructure. Government doesn’t solve these problems. All that the government does is set the policies and give the money. This has given companies incentives to come to India, and therefore the semiconductor industry is beginning to flourish.

    Will India ever have leading-edge chip fabs?

    Tummala: Absolutely. Not only will it have leading-edge fabs, but in about 20 years, it will have the most comprehensive system-level approach of any country, including the United States. In about 10 years, the size of the electronics industry in India will probably have grown about 10 times.

    This article appears in the August 2024 print issue as “5 Questions for Rao Tummala.”

  • Try IEEE’s New Virtual Testbed for 5G and 6G Tech
    by Kathy Pretz on 26. July 2024. at 18:00



    Telecom engineers and researchers face several challenges when it comes to testing their 5G and 6G prototypes. One is finding a testbed where they can run experiments with their new hardware and software.

    The experimentation platforms, which emulate real-world conditions, can be pricey. Some have a time limit. Others may be used only by specific companies or for testing certain technologies.

    The new IEEE 5G/6G Innovation Testbed has eliminated many of those barriers. Built by IEEE, the platform is for those who want to try out their 5G enhancements, run trials of future 6G functions, or test updates for converged networks. Users may test and retest as many times as they want at no additional cost.

    Telecom operators can use the new virtual testbed, as can application developers, researchers, educators, and vendors from any industry.

    “The IEEE 5G/6G Innovation Testbed creates an environment where industry can break new ground and work together to develop the next generation of technology innovations,” says Anwer Al-Dulaimi, cochair of the IEEE 5G/6G Innovation Testbed working group. Al-Dulaimi, an IEEE senior member, is a senior strategy manager of connectivity and Industry 4.0 for Veltris, in Toronto.

    The testbed was launched this year with support from AT&T, Exfo, Eurecom, Veltris, VMware, and Tech Mahindra.

    The subscription-based testbed is available only to organizations. Customers receive their own private, secure session of the testing platform in the cloud along with the ability to add new users.

    A variety of architectures and experiments

    The platform eliminates the need for customers to travel to a location and connect to physical hardware, Al-Dulaimi says. That’s because its digital hub is based in the cloud, allowing companies, research facilities, and organizations to access it. The testbed allows customers to upload their own software components for testing.

    “IEEE 5G/6G Innovation Testbed provides a unique platform for the service providers, and various vertical industries—including defense, homeland security, agriculture, and automotive—to experiment various use cases that can take advantage of advanced 5G technologies like ultra low latency, machine-to-machine type communications and massive broadband to help solve their pain points,” says IEEE Fellow Ashutosh Dutta, who is a cochair of the working group. Dutta works as chief 5G strategist at the Johns Hopkins University Applied Physics Laboratory, in Laurel, Md. He also heads the university’s Doctor of Engineering program.

    “The IEEE 5G/6G Innovation Testbed creates an environment where industry can break new ground and work together to develop the next generation of technology innovations.”

    The collaborative, secure, cloud-based platform also can emulate a 5G end-to-end network within the framework of the 3rd Generation Partnership Project (3GPP), which defines cellular communications standards.

    “Companies can use the platform for testing, but they can also use the environment as a virtual hands-on showcase of new products, services, and network functions,” Dutta says.

    In addition to the cloud-based end-to-end environment, the testbed supports other architectures including multiaccess edge computing for reduced latency, physical layer testing via 5G access points and phones installed at IEEE, and Open RAN (radio access network) environments where wireless radio functionality is disaggregated to allow for better flexibility in mixing hardware and software components.

    A variety of experiments can be conducted, Al-Dulaimi says, including:

    • Voice and video call emulation.
    • Authentication and encryption impact evaluation across different 5G platforms.
    • Network slicing.
    • Denial-of-service attacks and interoperability and overload incidents.
    • Verifying the functionality, compatibility, and interoperability of products.
    • Assessing conformity of networks, components, and products.

    The testbed group plans to release a new graphical user interface soon, as well as a test orchestration tool that contains hundreds of plug-and-play test cases to help customers quickly determine if their prototypes are working as intended across a variety of standards and scenarios. In addition to basic “sanity testing,” it includes tools to measure a proposed product’s real-time performance.

    The proofs of concept—lessons learned from experiments—will help advance existing standards and create new ones, Dutta says, and they will expedite the deployment of 5G and 6G technologies.

    The IEEE 5G/6G testbed is an asset that can be used by academics, researchers, and R&D labs, he says, to help “close the gap between theory and practice. Students across the world can take advantage of this testbed to get hands-on experience as part of their course curriculum.”

    Partnership with major telecom companies

    The IEEE 5G/6G Innovation Testbed recently joined the Acceleration of Compatibility and Commercialization for Open RAN Deployments project. A public-private consortium, ACCORD includes AT&T, Verizon, Virginia Tech and the University of Texas at Dallas. The group is funded by the U.S. Department of Commerce’s National Telecommunications and Information Administration, whose programs and policymaking efforts focus on expanding broadband Internet access and adoption throughout the country.

    “The 3GPP-compliant end-to-end 5G network is built with a suite of open-source modules, allowing companies to customize the network architecture and tailor their testbed environment according to their needs,” Al-Dulaimi says.

    The testbed was made possible with a grant from the IEEE New Initiatives Committee, which funds potential IEEE services, products, and other creations that could significantly benefit members, the public, customers, or the technical community.

    To get a free trial of the testbed, complete this form.

    Watch this short demonstration of how the IEEE 5G/6G Innovation Testbed works. [ YouTube ]

  • Video Friday: Robot Baby With a Jet Pack
    by Evan Ackerman on 26. July 2024. at 16:59



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
    IROS 2024: 14–18 October 2024, ABU DHABI, UAE
    ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
    Cybathlon 2024: 25–27 October 2024, ZURICH

    Enjoy today’s videos!

    If the Italian Institute of Technology’s iRonCub3 looks this cool while learning to fly, just imagine how cool it will look when it actually takes off!

    Hovering is in the works, but this is a really hard problem, which you can read more about in Daniele Pucci’s post on LinkedIn.

    [ LinkedIn ]

    Stanford Engineering and the Toyota Research Institute achieve the world’s first autonomous tandem drift. Leveraging the latest AI technology, Stanford Engineering and TRI are working to make driving safer for all. By automating a driving style used in motorsports called drifting—in which a driver deliberately spins the rear wheels to break traction—the teams have unlocked new possibilities for future safety systems.

    [ TRI ]

    Researchers at the Istituto Italiano di Tecnologia (Italian Institute of Technology) have demonstrated that under specific conditions, humans can treat robots as coauthors of the results of their actions. The condition that enables this phenomenon is a robot that behaves in a social, humanlike manner. Engaging in eye contact and participating in a common emotional experience, such as watching a movie, are key.

    [ Science Robotics ]

    If Aibo is not quite catlike enough for you, here you go.

    [ Maicat ] via [ RobotStart ]

    I’ve never been more excited for a sim-to-real gap to be bridged.

    [ USC Viterbi ]

    I’m sorry, but this looks exactly like a quadrotor sitting on a test stand.

    The 12-pound Quad-Biplane combines four rotors and two wings without any control surfaces. The aircraft takes off like a conventional quadcopter and transitions to a more-efficient horizontal cruise flight, similar to that of a biplane. This combines the simplicity of a quadrotor design, providing vertical flight capability, with the cruise efficiency of a fixed-wing aircraft. The rotors are responsible for aircraft control both in vertical and forward cruise flight regimes.

    [ AVFL ]

    Tensegrity robots are so weird, and I so want them to be useful.

    [ Suzumori Endo Lab ]

    Top-performing robots need all the help they can get.

    [ Team B-Human ]

    And now: a beetle nearly hit by an autonomous robot.

    [ WVUIRL ]

    Humans possess a remarkable ability to react to unpredictable perturbations through immediate mechanical responses, which harness the visco-elastic properties of muscles to maintain balance. Inspired by this behavior, we propose a novel design of a robotic leg utilizing fiber-jammed structures as passive compliant mechanisms to achieve variable joint stiffness and damping.

    [ Paper ]

    I don’t know what this piece of furniture is, but your cats will love it.

    [ ABB ]

    This video shows a dexterous avatar humanoid robot with VR teleoperation, hand tracking, and speech recognition to achieve highly dexterous mobile manipulation. Extend Robotics is developing a dexterous remote-operation interface to enable data collection for embodied AI and humanoid robots.

    [ Extend Robotics ]

    I never really thought about this, but wind turbine blades are hollow inside and need to be inspected sometimes, which is really one of those jobs where you’d much rather have a robot do it.

    [ Flyability ]

    Here’s a full, uncut drone-delivery mission, including a package pickup from our AutoLoader—a simple, nonpowered mechanical device that allows retail partners to utilize drone delivery with existing curbside-pickup workflows.

    [ Wing ]

    Daniel Simu and his acrobatic robot competed in “America’s Got Talent,” and even though his robot did a very robot thing by breaking itself immediately beforehand, the performance went really well.

    [ Acrobot ]

    A tour of the Creative Robotics Mini Exhibition at the Creative Computing Institute, University of the Arts London.

    [ UAL ]

    Thanks, Hooman!

    Zoox CEO Aicha Evans and cofounder and chief technology officer Jesse Levinson hosted a LinkedIn Live last week to reflect on the past decade of building Zoox and their predictions for the next 10 years of the autonomous-vehicle industry.

    [ Zoox ]

  • The Engineer Who Pins Down the Particles at the LHC
    by Edd Gent on 26. July 2024. at 13:00



    The Large Hadron Collider has transformed our understanding of physics since it began operating in 2008, enabling researchers to investigate the fundamental building blocks of the universe. Some 100 meters below the border between France and Switzerland, particles accelerate along the LHC’s 27-kilometer circumference, nearly reaching the speed of light before smashing together.

    The LHC is often described as the biggest machine ever built. And while the physicists who carry out experiments at the facility tend to garner most of the attention, it takes hundreds of engineers and technicians to keep the LHC running. One such engineer is Irene Degl’Innocenti, who works in digital electronics at the European Organization for Nuclear Research (CERN), which operates the LHC. As a member of CERN’s beam instrumentation group, Degl’Innocenti creates custom electronics that measure the position of the particle beams as they travel.

    Irene Degl’Innocenti


    Employer:

    CERN

    Occupation:

    Digital electronics engineer

    Education:

    Bachelor’s and master’s degrees in electrical engineering; Ph.D. in electrical, electronics, and communications engineering, University of Pisa, in Italy

    “It’s a huge machine that does very challenging things, so the amount of expertise needed is vast,” Degl’Innocenti says.

    The electronics she works on make up only a tiny part of the overall operation, something Degl’Innocenti is keenly aware of when she descends into the LHC’s cavernous tunnels to install or test her equipment. But she gets great satisfaction from working on such an important endeavor.

    “You’re part of something that is very huge,” she says. “You feel part of this big community trying to understand what is actually going on in the universe, and that is very fascinating.”

    Opportunities to Work in High-energy Physics

    Growing up in Italy, Degl’Innocenti wanted to be a novelist. Throughout high school she leaned toward the humanities, but she had a natural affinity for math, thanks in part to her mother, who is a science teacher.

    “I’m a very analytical person, and that has always been part of my mind-set, but I just didn’t find math charming when I was little,” Degl’Innocenti says. “It took a while to realize the opportunities it could open up.”

    She started exploring electronics around age 17 because it seemed like the most direct way to translate her logical, mathematical way of thinking into a career. In 2011, she enrolled in the University of Pisa, in Italy, earning a bachelor’s degree in electrical engineering in 2014 and staying on to earn a master’s degree in the same subject.

    At the time, Degl’Innocenti had no idea there were opportunities for engineers to work in high-energy physics. But she learned that a fellow student had done a summer internship at Fermilab, the particle physics and accelerator laboratory in Batavia, Ill. So she applied for and won an internship there in 2015. Since Fermilab and CERN closely collaborate, she was able to help design a data-processing board for LHC’s Compact Muon Solenoid experiment.

    Next she looked for an internship closer to home and discovered CERN’s technical student program, which allows students to work on a project over the course of a year. Working in the beam-instrumentation group, Degl’Innocenti designed a digital-acquisition system that became the basis for her master’s thesis.

    Measuring the Position of Particle Beams

    After receiving her master’s in 2017, Degl’Innocenti went on to pursue a Ph.D., also at the University of Pisa. She conducted her research at CERN’s beam-position section, which builds equipment to measure the position of particle beams within CERN’s accelerator complex. The LHC has roughly 1,000 monitors spaced around the accelerator ring. Each monitor typically consists of two pairs of sensors positioned on opposite sides of the accelerator pipe, and it is possible to measure the beam’s horizontal and vertical positions by comparing the strength of the signal at each sensor.

    The underlying concept is simple, Degl’Innocenti says, but these measurements must be precise. Bunches of particles pass through the monitors every 25 nanoseconds, and their position must be tracked to within 50 micrometers.
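
    The standard way to turn those paired signals into a position is a difference-over-sum estimate. The Python sketch below is a textbook illustration rather than CERN’s actual processing chain, and the scale factor is a made-up placeholder.

    def beam_position(v_a: float, v_b: float, k: float = 1.0) -> float:
        """Difference-over-sum estimate of beam offset from two opposing pickups.

        v_a, v_b: signal amplitudes from the two sensors of one plane.
        k: geometry-dependent scale factor (placeholder value here).
        Returns the estimated offset along that plane; 0.0 means centered.
        """
        return 0.5 * k * (v_a - v_b) / (v_a + v_b)

    # A beam slightly closer to sensor A yields a small positive offset.
    print(beam_position(1.02, 0.98, k=0.04))  # ~0.0004 (arbitrary units)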

    “We start developing a system years in advance, and then it has to work for a couple of decades.”

    Most of the signal processing is normally done in analog, but during her Ph.D., she focused on shifting as much of this work as possible to the digital domain because analog circuits are finicky, she says. They need to be precisely calibrated, and their accuracy tends to drift over time or when temperatures fluctuate.

    “It’s complex to maintain,” she says. “It becomes particularly tricky when you have 1,000 monitors, and they are located in an accelerator 100 meters underground.”

    Information is lost when analog is converted to digital, however, so Degl’Innocenti analyzed the performance of the latest analog-to-digital converters (ADCs) and investigated their effect on position measurements.
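
    To see why converter resolution matters, a quick, self-contained Python experiment (again only illustrative, with made-up signal levels and noise) quantizes the two pickup signals with an N-bit ADC and checks how the difference-over-sum position estimate degrades.

    import numpy as np

    def quantize(x, bits, full_scale=2.0):
        """Uniform quantizer spanning [0, full_scale], like an idealized N-bit ADC."""
        levels = 2 ** bits
        step = full_scale / levels
        return np.clip(np.round(x / step), 0, levels - 1) * step

    def position(v_a, v_b):
        return (v_a - v_b) / (v_a + v_b)   # normalized difference-over-sum

    rng = np.random.default_rng(2)
    true_offset = 0.001                    # a beam barely off center
    v_a = (1 + true_offset) + 1e-4 * rng.standard_normal(10_000)
    v_b = (1 - true_offset) + 1e-4 * rng.standard_normal(10_000)

    for bits in (8, 12, 16):
        err = position(quantize(v_a, bits), quantize(v_b, bits)) - true_offset
        print(f"{bits}-bit ADC: rms position error {np.sqrt(np.mean(err**2)):.1e}")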

    Designing Beam-Monitor Electronics

    After completing her Ph.D. in electrical, electronics, and communications engineering in 2021, Degl’Innocenti joined CERN as a senior postdoctoral fellow. Two years later, she became a full-time employee there, applying the results of her research to developing new hardware. She’s currently designing a new beam-position monitor for the High-Luminosity upgrade to the LHC, expected to be completed in 2028. This new system will likely use a system-on-chip to house most of the electronics, including several ADCs and a field-programmable gate array (FPGA) that Degl’Innocenti will program to run a new digital signal-processing algorithm.

    She’s part of a team of just 15 who handle design, implementation, and ongoing maintenance of CERN’s beam-position monitors. So she works closely with the engineers who design sensors and software for those instruments and the physicists who operate the accelerator and set the instruments’ requirements.

    “We start developing a system years in advance, and then it has to work for a couple of decades,” Degl’Innocenti says.

    Opportunities in High-Energy Physics

    High-energy physics has a variety of interesting opportunities for engineers, Degl’Innocenti says, including high-precision electronics, vacuum systems, and cryogenics.

    “The machines are very large and very complex, but we are looking at very small things,” she says. “There are a lot of big numbers involved both at the large scale and also when it comes to precision on the small scale.”

    FPGA design skills are in high demand at all kinds of research facilities, and embedded systems are also becoming more important, Degl’Innocenti says. The key is keeping an open mind about where to apply your engineering knowledge, she says. She never thought there would be opportunities for people with her skill set at CERN.

    “Always check what technologies are being used,” she advises. “Don’t limit yourself by assuming that working somewhere would not be possible.”

    This article appears in the August 2024 print issue as “Irene Degl’Innocenti.”

  • Why a Technical Master’s Degree Can Accelerate Your Engineering Career
    by Douglas McCormick on 25. July 2024. at 13:00



    This sponsored article is brought to you by Purdue University.

    Companies large and small are seeking engineers with up-to-date, subject-specific knowledge in disciplines like computer engineering, automation, artificial intelligence, and circuit design. Mid-level engineers need to advance their skillsets to apply and integrate these technologies and be competitive.


    As applications for new technologies continue to grow, demand for knowledgeable electrical and computer engineers is also on the rise. According to the Bureau of Labor Statistics, the job outlook for electrical and electronics engineers—as well as computer hardware engineers—is set to grow 5 percent through 2032. Electrical and computer engineers work in almost every industry. They design systems, work on power transmission and power supplies, run computers and communication systems, innovate chips for embedded systems, and much more.

    To take advantage of this job growth and get more return-on-investment, engineers are advancing their knowledge by going back to school. The 2023 IEEE-USA Salary and Benefits Survey Report shows that engineers with focused master’s degrees (e.g., electrical and computer engineering, electrical engineering, or computer engineering) earned median salaries almost US $27,000 per year higher than their colleagues with bachelors’ degrees alone.


    Purdue’s online MSECE program has been ranked in the top 3 of U.S. News and World Report’s Best Online Electrical Engineering Master’s Programs for five years running


    Universities like Purdue University work with companies and professionals to provide upskilling opportunities via distance and online education. Purdue has offered a distance Master of Science in Electrical and Computer Engineering (MSECE) since the 1980s. In its early years, the program’s course lectures were videotaped and mailed to students. Now, “distance” has transformed into “online,” and the program has grown with the web, expanding its size and scope. Today, the online MSECE has awarded master’s degrees to 190+ online students since the Fall 2021 semester.




    “Purdue has a long-standing reputation of engineering excellence and Purdue engineers work worldwide in every company, including General Motors, Northrop Grumman, Raytheon, Texas Instruments, Apple, and Sandia National Laboratories among scores of others,” said Lynn Hegewald, the senior program manager for Purdue’s online MSECE. “Employers everywhere are very aware of Purdue graduates’ capabilities and the quality of the education they bring to the job.”


    Today, the online MSECE program continues to select from among the world’s best professionals and gives them an affordable, award-winning education. The program has been ranked in the top 3 of U.S. News and World Report’s Best Online Electrical Engineering Master’s Programs for five years running (2020, 2021, 2022, 2023, and 2024).


    The online MSECE develops high-quality research and technical skills, high-level analytical thinking and problem-solving, and the fresh ideas that fuel innovation—all highly sought after, according to one of the few studies to systematically inventory what engineering employers want (findings corroborated by occupational guidance resources such as O*NET and the Bureau of Labor Statistics).

    Remote students get the same education as on-campus students and become part of the same alumni network.

    “Our online MSECE program offers the same exceptional quality as our on-campus offerings to students around the country and the globe,” says Prof. Milind Kulkarni, Michael and Katherine Birck Head of the Elmore Family School of Electrical and Computer Engineering. “Online students take the same classes, with the same professors, as on-campus students; they work on the same assignments and even collaborate on group projects.


    “Our online MSECE program offers the same exceptional quality as our on-campus offerings to students around the country and the globe” —Prof. Milind Kulkarni, Purdue University


    “We’re very proud,” he adds, “that we’re able to make a ‘full-strength’ Purdue ECE degree available to so many people, whether they’re working full-time across the country, live abroad, or serve in the military. And the results bear this out: graduates of our program land jobs at top global companies, move on to new roles and responsibilities at their current organizations, or even continue to pursue graduate education at top PhD programs.”




    Variety and Quality in Purdue’s MSECE

    As they study for their MSECE degrees, online students can select from among a hundred graduate-level courses in their primary areas of interest, including innovative one-credit-hour courses that extend the students’ knowledge. New courses and new areas of interest are always in the pipeline.

    Purdue MSECE Area of Interest and Course Options


    • Automatic Control
    • Communications, Networking, Signal and Image Processing
    • Computer Engineering
    • Fields and Optics
    • Microelectronics and Nanotechnology
    • Power and Energy Systems
    • VLSI and Circuit Design
    • Semiconductors
    • Data Mining
    • Quantum Computing
    • IoT
    • Big Data


    Heather Woods, a process engineer at Texas Instruments, was one of the first students to enroll and chose the microelectronics and nanotechnology focus area. She offers this advice: “Take advantage of the one-credit-hour classes! They let you finish your degree faster while not taking six credit hours every semester.”


    Completing an online MSECE from Purdue University also teaches students professional skills that employers value, such as motivation, efficient time management, high-level analysis and problem-solving, and the ability to learn quickly and write effectively.

    “Having an MSECE shows I have the dedication and knowledge to be able to solve problems in engineering,” said program alumnus Benjamin Francis, now an engineering manager at AkzoNobel. “As I continue in my career, this gives me an advantage over other engineers both in terms of professional advancement opportunity and a technical base to pull information from to face new challenges.”


    Finding Tuition Assistance

    Working engineers contemplating graduate school should contact their human resources departments and find out what their tuition-assistance options are. Does your company offer tuition assistance? What courses of study do they cover? Do they cap reimbursements by course, semester, etc.? Does your employer pay tuition directly, or will you pay out-of-pocket and apply for reimbursement?

    Prospective U.S. students who are veterans or children of veterans should also check with the U.S. Department of Veterans Affairs to see if they qualify for tuition or other assistance.


    The MSECE Advantage

    In sum, the online Master’s degree in Electrical and Computer Engineering from Purdue University does an extraordinary job giving students the tools they need to succeed in school and then in the workplace: developing the technical knowledge, the confidence, and the often-overlooked professional skills that will help them excel in their careers.


  • The Rise of Groupware
    by Ernie Smith on 24. July 2024. at 15:00



    A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

    These days, computer users take collaboration software for granted. Google Docs, Microsoft Teams, Slack, Salesforce, and so on, are such a big part of many people’s daily lives that they hardly notice them. But they are the outgrowth of years of hard work done before the Internet became a thing, when there was a thorny problem: How could people collaborate effectively when everyone’s using a stand-alone personal computer?

    The answer was groupware, an early term for collaboration software designed to work across multiple computers attached to a network. At first, those computers were located in the same office, but the range of operation slowly expanded from there, forming the highly collaborative networked world of today. This post will trace some of that history, from the early ideas formed at Stanford Research Institute by the team of famed computer pioneer Douglas Engelbart, to a smaller company, Lotus, that hit the market with its groupware program, Notes, at the right time, to Microsoft’s ill-fated attempt to enter the groupware market, including never-before-seen footage of Bill Gates on Broadway.

    A black and white photo of an old IBM PC on a desk next to computer manuals. In the early days of the computing era, when IBM’s PC reigned supreme, collaboration was difficult. Ross Anthony Willis/Fairfax Media/Getty Images

    How the PC made us forget about collaboration for a while

    Imagine that it’s the early-to-mid-1980s and that you run a large company. You’ve invested a lot of money into personal computers, which your employees are now using—IBM PCs, Apple Macintoshes, clones, and the like. There’s just one problem: You have a bunch of computers, but they don’t talk to one another.

    If you’re in a small office and need to share a file, it’s no big deal: You can just hand a floppy disk off to someone on the other side of the room. But what if you’re part of an enterprise company and the person you need to collaborate with is on the other side of the country? Passing your colleague a disk doesn’t work.

    The new personal-computing technologies clearly needed to do more to foster collaboration. They needed to be able to take input from a large group of people inside an office, to allow files to be shared and distributed, and to let multiple users tweak and mash information with everyone being able to sign off on the final version.

    The hardware that would enable such collaboration software, or “groupware” as it tended to be called early on, varied by era. In the 1960s and ’70s, it was usually a mainframe-to-terminal setup, rather than something using PCs. Later, in the 1980s, it was either a token ring or Ethernet network, which were competing local-networking technologies. But regardless of the hardware used for networking, the software for collaboration needed to be developed.

    Black and white photo of a man talking from behind a desk. Stanford Research Institute engineer Douglas Engelbart is sometimes called “the father of groupware.” Getty Images

    Some of the basic ideas behind groupware were first forged at the Stanford Research Institute by a Douglas Engelbart–led team, in the 1960s, working on what they called an oN-Line System (NLS). An early version of NLS was presented in 1968 during what became known as the “Mother of All Demos.” It was essentially a coming-out party for many computing innovations that would eventually become commonplace. If you have 90 minutes and want to see something 20-plus years ahead of its time, watch this video.

    In the years that followed, on top of well-known innovations like the mouse, Engelbart’s team developed tools that anticipated groupware, including an “information center,” an early precursor of the server in a client-server architecture, and the tracking of edits made to text files by different people, an early precursor of version control.

    By the late 1980s, at a point when the PC had begun to dominate the workplace, Engelbart was less impressed with what had been gained than with what had been lost in the process. He wrote (with Harvey Lehtman) in Byte magazine in 1988:

    The emergence of the personal computer as a major presence in the 1970s and 1980s led to tremendous increases in personal productivity and creativity. It also caused setbacks in the development of tools aimed at increasing organizational effectiveness—tools developed on the older time-sharing systems.
    To some extent, the personal computer was a reaction to the overloaded and frustrating time-sharing systems of the day. In emphasizing the power of the individual, the personal computer revolution turned its back on those tools that led to the empowering of both co-located and distributed work groups collaborating simultaneously and over time on common knowledge work.
    The introduction of local- and wide-area networks into the personal computer environment and the development of mail systems are leading toward some of the directions explored on the earlier systems. However, some of the experiences of those earlier pioneering systems should be considered anew in evolving newer collaborative environments.


    Groupware comes of age

    Groupware finally started to catch on in the late 1980s, with tech companies putting considerable resources into developing collaboration software—perhaps taken in by the idea of “orchestrating work teams,” as an Infoworld piece characterized the challenge in 1988. The San Francisco Examiner reported, for example, that General Motors had invested in the technology, and was beginning to require its suppliers to accept purchase orders electronically.

    Focusing on collaboration software was a great way for independent software companies to stand out, this being an area that large companies—Microsoft in particular—had basically ignored. Today, Microsoft is the 800-pound gorilla of collaboration software, thanks to its combination of Teams and Office 365. But it took the tech giant a very long while to get there: Microsoft started taking the market seriously only around 1992.

    One company in particular was well-positioned to take advantage of the opening that existed in the 1980s. That was the Lotus Development Corporation, a Cambridge, Mass.–based software company that made its name with its Lotus 1-2-3 spreadsheet program for IBM PCs.

    Lotus did not invent groupware or coin the word—on top of Engelbart’s formative work at Stanford, the term had been around for years before Lotus Notes came on the scene. But it was the company that brought collaboration software to everyone’s attention.

    On the left, a black and white photo of a man in a field talking. On the right, a box with disks. Ray Ozzie [left] was primarily responsible for the development of Lotus Notes, the first popular groupware solution. Left: Ann E. Yow-Dyson/Getty Images; Right: James Keyser/Getty Images

    The person most associated with the development of Notes was Ray Ozzie, who was recruited to Lotus after spending time working on VisiCalc, an early spreadsheet program. Ozzie essentially built out what became Notes while working at Iris Associates, a direct offshoot of Lotus that Ozzie founded to develop the Notes application. After some years of development in stealth mode, the product was released in 1989.

    Ozzie explained his inspiration for Notes to Jessica Livingston, who described this history in her book, Founders At Work:

    In Notes, it was (and this is hard to imagine because it was a different time) the concept that we’d all be using computers on our desktops, and therefore we might want to use them as communication tools. This was a time when PCs were just emerging as spreadsheet tools and word processing replacements, still available only on a subset of desks, and definitely no networks. It was ’82 when I wrote the specs for it. It had been based on a system called PLATO [Programmed Logic for Automatic Teaching Operations] that I’d been exposed to at college, which was a large-scale interactive system that people did learning and interactive gaming on, and things like that. It gave us a little bit of a peek at the future—what it would be like if we all had access to interactive systems and technology.

    Building an application based on PLATO turned out to be the right idea at the right time, and it gave Lotus an edge in the market. Notes included email, a calendaring and scheduling tool, an address book, a shared database, and programming capabilities, all in a single front-end application.

    Lotus Notes on Computer Chronicles Fall 1989

    As an all-in-one platform built for scale, Notes gained a strong reputation as an early example of what today would be called a business-transformation tool, one that managed many elements of collaboration. It was complicated from an IT standpoint and required a significant investment to maintain. In a way, what Notes did that was perhaps most groundbreaking was that it helped turn PCs into something that large companies could readily use.

    As Fortune noted in 1994, Lotus had a massive lead in the groupware space, in part because the software worked essentially the same anywhere in a company’s network. We take that for granted now, but back then it was considered magical:

    Like Lotus 1-2-3, Notes is easy to customize. A sales organization, for instance, might use it to set up an electronic bulletin board that lets people pool information about prospective clients. If some of the info is confidential, it can be restricted so not everyone can call it up.
    Notes makes such homegrown applications and the data they contain accessible throughout an organization. The electronic bulletin board you consult in Singapore is identical to the one your counterparts see in Sioux City, Iowa. The key to this universality is a procedure called replication, by which Notes copies information from computer to computer throughout the network. You might say Ozzie figured out how to make the machines telepathic—each knows what the others are thinking.

    This article reported that around 4,000 major companies had purchased Notes, including Chase Manhattan, Compaq Computer, Delta Air Lines, Fluor, General Motors, Harley-Davidson, Hewlett-Packard, IBM, Johnson & Johnson, J.P. Morgan, Nynex, Sybase, and 3M. While it wasn’t dominant in the way Windows was, its momentum was hard to ignore.

    A 1996 commercial for Notes highlighted its use by FedEx. Other commercials would use the stand-up comedian Denis Leary or be highly conceptual. Rarely, if ever, would these television advertisements show the software.

    In the mid-1990s, it was common for magazines to publish stories about how Notes reshaped businesses large and small. A 1996 Inc. piece, for example, described how a natural-foods company successfully produced a new product in just eight months, a feat the company directly credited to Notes.

    “It’s become our general manager,” Groveland Trading Co. president Steve McDonnell recalled.

    Notes wasn’t cheap (InfoWorld listed the price circa 1990 as US $62,000), and it was complicated to manage. But the results it enabled were hard to dismiss. IBM noticed and ended up buying Lotus in 1995, almost entirely to get ahold of Notes. Even earlier, Microsoft had realized that office collaboration was a big deal, and it wanted in.


    Microsoft jumps on the groupware bandwagon

    White old book on yellow background titled Microsoft Workgroup Add-on for Windows. Microsoft’s first foray into collaboration software was its 1992 release of Windows for Workgroups. Despite great efforts to promote the release, the software was not a commercial success. Daltrois/Flickr

    Microsoft had high hopes for Windows for Workgroups, the networking-focused variant of its popular Windows 3.1 operating environment. To create buzz for it, the company pulled out all the stops. Seriously.

    In the fall of 1992, Microsoft paid something like $2 million to put on a Broadway production with Bill Gates literally center stage, at New York City’s Gershwin Theater, one of the largest on Broadway. It was a wild show, and yet, somehow, there is no video of this event currently posted online—until now. The only person I know of who has a video recording of this extravaganza is, fittingly enough, Ray Ozzie, the groupware guru and Notes inventor. Ozzie later served as a top executive at Microsoft, famously replacing Bill Gates as Chief Software Architect in the mid-2000s, and he has shared this video with us for this post:


    The 1992 one-day event was not a hit. Watch to see why. (Courtesy of Ray Ozzie and the Microsoft Corporation)

    00:00 Opening number
    02:23 “My VGA can hardly wait for your CPU to reciprocate”
    05:17 Bill Gates enters the stage
    27:55 “Get ready, get set” musical number
    31:50 Bit with Mike Appe, Microsoft VP of sales
    58:30 Bill Gates does jumping jacks


    A 1992 Washington Post article describes the performance, which involved dozens of actors, some of whom were dressed like the Blues Brothers. At one point, Gates did jumping jacks. Gates himself later said, “That was so bad, I thought [then Microsoft CEO] Ballmer was going to retch.” For those who don’t have an extra hour to spend, here is a summary:

    To get a taste of the show, watch this news segment from channel 4. Courtesy of Microsoft Corporate Archives

    Despite all the effort to generate fanfare, Windows for Workgroups was not a hit. While Windows 3.1 was dominant, Microsoft had built a program that didn’t seem to capture the burgeoning interest in collaborative work in any real way. Among other things, it didn’t initially support TCP/IP, the networking protocol that was winning the market and would enable the rise of the Internet.

    In its original version, Windows for Workgroups carried such a negative reputation in Microsoft’s own headquarters that the company nicknamed it Windows for Warehouses, referring to the company’s largely unsold inventory, according to Microsoft’s own expert on company lore, Raymond Chen.

    Unsuccessful as it was, the fact that it existed in the first place hinted at Microsoft’s general acknowledgement that perhaps this networking thing was going to catch on with its users.

    Launched in late 1992, a few months after Windows 3.1 itself, the product was Microsoft’s first attempt at integrated networking in a Windows package. The software enabled file-sharing across servers, printer sharing, and email—table stakes in the modern day but at the time a big deal.

    This video presents a very accurate view of what it was like to use Windows in 1994.

    Unfortunately, it was a big deal that came a few years late. Microsoft itself was so lukewarm on the product that it replaced it with Windows for Workgroups 3.11 just a year later, an update whose marquee feature wasn’t improved network support but faster disk access. Confusingly, the company had just released Windows NT by this point, an operating system that better matched the needs of enterprise customers.

    The workgroup terminology Microsoft introduced with Windows for Workgroups stuck around, though, and it is still used in Windows to this day.

    In 2024, group-oriented software feels like the default paradigm, with single-user apps being the anomaly. Over time, groupware became so pervasive that people no longer think of it as groupware, though there are plenty of big, hefty, groupware-like tools out there, like Salesforce. Now, it’s just software. But no one should forget the long history of collaboration software or its ongoing value. It’s what got most of us through the pandemic, even if we never used the word “groupware” to describe it.

  • Quantum Leap: Sydney’s Leading Role in the Next Tech Wave
    by BESydney on 24. July 2024. at 00:00



    This is a sponsored article brought to you by BESydney.

    Australia plays a crucial role in global scientific endeavours, with a significant contribution recognized and valued worldwide. Despite comprising only 0.3 percent of the world’s population, it has contributed over 4 percent of the world’s published research.

    Renowned for collaboration, Australian scientists work across disciplines and with international counterparts to achieve impactful outcomes. Notably excelling in medical sciences, engineering, and biological sciences, Australia also has globally recognized expertise in astronomy, physics and computer science.

    As the country’s innovation hub, Sydney is leveraging its robust scientific infrastructure, world-class universities, and vibrant ecosystem to make its mark on the burgeoning quantum industry.

    The city’s commitment to quantum research and development is evidenced by its groundbreaking advancements and substantial government support, positioning it at the forefront of the quantum revolution.

    Sydney’s blend of academic excellence, industry collaboration and strategic government initiatives is creating a fertile ground for cutting-edge quantum advancements.

    Sydney’s quantum ecosystem

    Sydney’s quantum industry is bolstered by the Sydney Quantum Academy (SQA), a collaboration between four top-tier universities: University of NSW Sydney (UNSW Sydney), the University of Sydney (USYD), University of Technology Sydney (UTS), and Macquarie University. SQA integrates over 100 experts, fostering a dynamic quantum research and development environment.

    With strong government backing, Sydney is poised for significant growth in quantum technology, with a projected industry value of A$2.2 billion and 8,700 jobs by 2030. The SQA’s mission is to cultivate a quantum-literate workforce, support industry partnerships, and accelerate the development of quantum technology.

    Professor Hugh Durrant-Whyte, NSW Chief Scientist and Engineer, emphasizes Sydney’s unique position: “We’ve invested in quantum for 20 years, and we have some of the best people at the Quantum Academy in Sydney. This investment and talent pool make Sydney an ideal place for pioneering quantum research and attracting global talent.”

    Key institutions and innovations

    UNSW’s Centre of Excellence for Quantum Computation and Communication Technology is at the heart of Sydney’s quantum advancements. Led by Scientia Professor Michelle Simmons AO, the founder and CEO of Silicon Quantum Computing, the centre is pioneering efforts to develop the world’s first practical quantum computer. Its team is at the vanguard of precision atomic electronics, pioneering the fabrication of devices in silicon that are pivotal for both conventional and quantum computing applications; it has created the narrowest conducting wires and the smallest precision transistors.

    “We can now not only put atoms in place but can connect complete circuitry with atomic precision.” —Michelle Simmons, Silicon Quantum Computing

    Simmons was named 2018 Australian of the Year and won the 2023 Prime Minister’s Prize for Science for her work in creating the new field of atomic electronics. She is an Australian Research Council Laureate Fellow and a Fellow of the Royal Society of London, the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the UK Institute of Physics, the Australian Academy of Technology and Engineering, and the Australian Academy of Science.

    In response to her 2023 accolade, Simmons said: “Twenty years ago, the ability to manipulate individual atoms and put them where we want in a device architecture was unimaginable. We can now not only put atoms in place but can connect complete circuitry with atomic precision—a capability that was developed entirely in Australia.”

    Standing in a modern research lab with glass walls and wooden lab benches, a man grasps a cylindrical object attached to a robot arm’s gripper while a woman operates a control touch-interface tablet. The Design Futures Lab at UNSW in Sydney, Australia, is a hands-on teaching and research lab that aims to inspire exploration, innovation, and research into fabrication, emerging technologies, and design theories. UNSW

    Government and industry support

    In April 2024, the Australian Centre for Quantum Growth program, part of the National Quantum Strategy, provided a substantial four-year grant to support the quantum industry’s expansion in Australia. Managed by the University of Sydney, the initiative aims to establish a central hub that fosters industry growth, collaboration, and research coordination.

    This centre will serve as a primary resource for the quantum sector, enhancing Australia’s global competitiveness by promoting industry-led solutions and advancing technology adoption both domestically and internationally. Additionally, the centre will emphasise ethical practices and security in the development and application of quantum technologies.

    Additionally, Sydney hosts several leading quantum startups, such as Silicon Quantum Computing, Quantum Brilliance, Diraq and Q-CTRL, which focus on improving the performance and stability of quantum systems.

    Educational excellence

    Sydney’s universities are globally recognized for their contributions to quantum research. They nurture future quantum leaders, and their academic prowess attracts top talent and fosters a culture of innovation and collaboration.


    UNSW Sydney is ranked among the world’s top 20 universities and boasts the largest concentration of academics working in AI and quantum technologies in Australia.

    Toby Walsh, a Laureate Fellow and Scientia Professor of Artificial Intelligence in the Department of Computer Science and Engineering at UNSW Sydney, explains the significance of this academic strength: “Our students and researchers are at the cutting edge of quantum science. The collaborative efforts within Sydney’s academic institutions are creating a powerhouse of innovation that is driving the global quantum agenda.”

    Sydney’s strategic investments and collaborative efforts in quantum technology have propelled the city to the forefront of this transformative field. With its unique and vibrant ecosystem, a blend of world-leading institutions, globally respected talent and strong government and industry support, Sydney is well-positioned to lead the global quantum revolution for the benefit of all. For more information on Sydney’s science and engineering industries visit besydney.com.au.

  • Elephant Robotics’ Mercury Humanoid Robot Empowers Embodied AI Research
    by Elephant Robotics on 23. July 2024. at 22:00



    This is a sponsored article brought to you by Elephant Robotics.

    Elephant Robotics has spent years on research and development in pursuit of its mission of bringing robots to millions of homes and its vision of “Enjoy Robots World.” From the collaborative industrial P-series and C-series robots, on the drawing board since the company’s founding in 2016, to the lightweight desktop 6-DOF collaborative robot myCobot 280 in 2020, to the dual-arm semi-humanoid robot myBuddy launched in 2022, Elephant Robotics has been releasing three to five robots per year. This year’s full-body humanoid, the Mercury series, promises to reshape the landscape of non-human workers, bringing intelligent robots like Mercury into research and education and even everyday home environments.

    A Commitment to Practical Robotics

    Elephant Robotics proudly introduces the Mercury Series, a suite of humanoid robots that not only push the boundaries of innovation but also embody a deep commitment to practical applications. Designed with the future of robotics in mind, the Mercury Series is poised to become the go-to choice for researchers and industry professionals seeking reliable, scalable, and robust solutions.


    Elephant Robotics

    The Genesis of Mercury Series: Bridging Vision With Practicality

    From the outset, the Mercury Series has been envisioned as more than just a collection of advanced prototypes. It is a testament to Elephant Robotics’ dedication to creating humanoid robots that are not only groundbreaking in their capabilities but also practical for mass production and consistent, reliable use in real-world applications.

    Mercury X1: Wheeled Humanoid Robot

    The Mercury X1 is a versatile wheeled humanoid robot that combines advanced functionalities with mobility. Equipped with dual NVIDIA Jetson controllers, lidar, ultrasonic sensors, and an 8-hour battery life, the X1 is perfect for a wide range of applications, from exploratory studies to commercial tasks requiring mobility and adaptability.

    Mercury B1: Dual-Arm Semi-Humanoid Robot

    The Mercury B1 is a semi-humanoid robot tailored for sophisticated research. It features 17 degrees of freedom, dual robotic arms, a 9-inch touchscreen, an NVIDIA Xavier control chip, and an integrated 3D camera. The B1 excels in machine vision and VR-assisted teleoperation, and its AI voice interaction and LLM integration mark significant advancements in human-robot communication.

    These two advanced models exemplify Elephant Robotics’ commitment to practical robotics. The wheeled humanoid robot Mercury X1 integrates advanced technology with a state-of-the-art mobile platform, ensuring not only versatility but also the feasibility of large-scale production and deployment.

    Embracing the Power of Reliable Embodied AI

    The Mercury Series is engineered as the ideal hardware platform for embodied AI research, providing robust support for sophisticated AI algorithms and real-world applications. Elephant Robotics demonstrates its commitment to innovation through the Mercury series’ compatibility with NVIDIA Isaac Sim, a state-of-the-art simulation platform that facilitates sim2real learning, bridging the gap between virtual environments and physical robot interaction.

    The Mercury Series is perfectly suited for the study and experimentation of mainstream large language models in embodied AI. Its advanced capabilities allow seamless integration with the latest AI research. This provides a reliable and scalable platform for exploring the frontiers of machine learning and robotics.

    Furthermore, the Mercury Series is complemented by the myArm C650, a teleoperation robotic arm that enables rapid acquisition of physical data. This feature supports secondary learning and adaptation, allowing for immediate feedback and iterative improvements in real-time. These features, combined with the Mercury Series’ reliability and practicality, make it the preferred hardware platform for researchers and institutions looking to advance the field of embodied AI.

    The Mercury Series is supported by a rich software ecosystem, compatible with major programming languages, and integrates seamlessly with industry-standard simulation software. This comprehensive development environment is enhanced by a range of auxiliary hardware, all designed with mass production practicality in mind.

    A set of images showing a robot in a variety of situations. Elephant Robotics

    Drive to Innovate: Mass Production and Global Benchmarks

    The “Power Spring” harmonic drive modules, a hallmark of Elephant Robotics’ commitment to innovation for mass production, have been meticulously engineered to offer an unparalleled torque-to-weight ratio. These components are a testament to the company’s foresight in addressing the practicalities of large-scale manufacturing. The incorporation of carbon fiber in the design of these modules not only optimizes agility and power but also ensures that the robots are well-prepared for the rigors of the production line and real-world applications. The Mercury Series, with its spirit of innovation, is making a significant global impact, setting a new benchmark for what practical robotics can achieve.

    Elephant Robotics is consistently delivering mass-produced robots to a range of renowned institutions and industry leaders, thereby redefining the industry standards for reliability and scalability. The company’s dedication to providing more than mere prototypes is evident in the active role its robots play in various sectors, transforming industries that are in search of dependable and efficient robotic solutions.

    Conclusion: The Mercury Series—A Beacon for the Future of Practical Robotics

    The Mercury Series represents more than a product; it is a beacon for the future of practical robotics. Elephant Robotics’ dedication to affordability, accessibility, and technological advancement ensures that the Mercury Series is not just a research tool but a platform for real-world impact.

    Mercury Usecases | Explore the Capabilities of the Wheeled Humanoid Robot and Discover Its Precision youtu.be

    Elephant Robotics: https://www.elephantrobotics.com/en/

    Mercury Robot Series: https://www.elephantrobotics.com/en/mercury-humanoid-robot/

  • How Olympics Officials Try to Catch “Motor Doping”
    by Peter Fairley on 23. July 2024. at 12:01



    A French cycling official confronts a rider suspected of doping and ends up jumping onto the hood of a van making a high-speed getaway. This isn’t a tragicomedy starring Gérard Depardieu, sending up the sport’s well-earned reputation for cheating. This scenario played out in May at the Routes de l’Oise cycling competition near Paris, and the van was believed to contain evidence of a distinctly 21st-century cheat: a hidden electric motor.

    Cyclists call it “motor doping.” At the Paris Olympics opening on Friday, officials will be deploying electromagnetic scanners and X-ray imaging to combat it, as cyclists race for gold in and around the French capital. The officials’ prey can be quite small: Cycling experts say just 20 or 30 watts of extra power is enough to tilt the field and clinch a race.

    Motor doping has been confirmed only once in professional cycling, way back in 2016. And the sport’s governing body, the Union Cycliste Internationale (UCI), has since introduced increasingly sophisticated motor-detection methods. But illicit motors remain a scourge at high-profile amateur events like the Routes de l’Oise. Some top professionals, past and present, continue to raise an alarm.

    “It’s 10 years now that we’re speaking about this…. If you want to settle this issue you have to invest.” —Jean-Christophe Péraud, former Union Cycliste Internationale official

    Riders and experts reached by IEEE Spectrum say it’s unlikely that technological doping still exists at the professional level. “I’m confident it’s not happening any more. I think as soon as we began to speak about it, it stopped. Because at a high level it’s too dangerous for a team and an athlete,” says Jean-Christophe Péraud, an Olympic silver medalist who was UCI’s first Manager of Equipment and the Fight against Technological Fraud.

    But trust is limited. Cycling is still recovering from the scandals surrounding U.S. Olympian Lance Armstrong, whose extensive use of transfusions and drugs to boost blood-oxygen levels fueled allegations of collusion by UCI officials and threats to boot cycling out of the Olympics.

    Many—including Péraud—say more vigilance is needed. The solution may be next-generation detection tech: onboard scanners that provide continuous assurance that human muscle alone is powering the sport’s dramatic sprints and climbs.

    How Officials Have Hunted for Motor Doping in Cycling

    Rumors of hidden motors first swirled into the mainstream in 2010 after a Swiss cyclist clinched several European events with stunning accelerations. At the time the UCI lacked means of detecting concealed motors, and its technical director promised to “speed up” work on a “quick and efficient way” to do so.

    The UCI began with infrared cameras, but they are useless for pre- and post-race checks when a hidden motor is cold. Not until 2015, amidst further motor doping rumors and allegations of UCI inaction, did the organization begin beta testing a better tool: an iPad-based “magnetometric tablet” scanner.

    According to the UCI, an adapter plugged into one of these tablet scanners creates an ambient magnetic field. Then, a magnetometer and custom software register disruptions to the field that may indicate the presence of metal or magnets in and around a bike’s carbon-fiber frame.
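
    The UCI has not published the software behind these scanners, but the underlying idea is straightforward anomaly detection: compare the magnetic profile measured along a suspect frame with that of a frame known to be clean, and flag any localized deviation large enough to suggest hidden magnets or a motor. The sketch below is purely illustrative, with hypothetical readings and a hypothetical threshold; it is not the UCI’s actual method.

        # Illustrative sketch only -- not the UCI's software. Assumes magnetometer
        # readings (in microtesla) taken at evenly spaced positions along a frame,
        # plus a baseline profile from a frame known to be motor-free. The
        # 2-microtesla threshold is a made-up value for demonstration.

        def flag_anomalies(scan, baseline, threshold_ut=2.0):
            """Return positions where the scan deviates from the baseline by more
            than threshold_ut microtesla, suggesting hidden metal or magnets."""
            return [i for i, (s, b) in enumerate(zip(scan, baseline))
                    if abs(s - b) > threshold_ut]

        # Hypothetical data: a hidden seat-tube motor would show up as a localized
        # bump in the field relative to the clean reference frame.
        baseline = [50.1, 50.0, 50.2, 50.1, 50.0, 50.1, 50.2, 50.0]
        scan     = [50.2, 50.1, 50.3, 57.8, 58.1, 50.2, 50.1, 50.0]

        print(flag_anomalies(scan, baseline))  # [3, 4] -> inspect that section of tube

    In practice such scans yield false positives, since ordinary metal components also disturb the field, which is why a flagged bike is dismantled or X-rayed rather than sanctioned on the scan alone.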

    UCI’s tablets delivered in their debut appearance, at the 2016 Cyclocross World Championships held that year in Belgium. Scans of bikes at the rugged event—a blend of road and mountain biking—flagged a bike bearing the name of local favorite Femke Van den Driessche. Closer inspection revealed a motor and battery lodged within the hollow frame element that angles down from a bike’s saddle to its pedals, and wires connecting the seat tube’s hidden hardware to a push-button switch under the handlebars.

    person in biking gear pushing bike up a hill on muddy terrain. In 2016, a concealed motor was found in a bike bearing Belgian cyclist Femke Van Den Driessche’s name at the world cyclo-cross championships. (Van Den Driessche is shown here with a different bike.) AFP/Getty Images

    Van den Driessche, banned from competition for six years, withdrew from racing while maintaining her innocence. (Giovambattista Lera, the amateur cyclist implicated earlier this year in France, also denies using electric assistance in competition.)

    The motor in Van den Driessche’s bike engaged with the bike’s crankshaft and added 200 W of power. The equipment’s Austrian manufacturer, Vivax Drive, is now defunct. But anyone with cash to spare can experience 200 W of extra push via a racer equipped by Monaco-based HPS-Bike, such as the HPS-equipped Lotus Type 136 racing bike from U.K. sports car producer Lotus Group, which starts at £15,199 (US $19,715).

    HPS founder & CEO Harry Gibbings says the company seeks to empower weekend riders who don’t want to struggle up steep hills or who need an extra boost here and there to keep up with the pack. Gibbings says the technology is not available for retrofits, and is thus off limits to would-be cheats. Still, the HPS Watt Assist system shows the outer bounds of what’s possible in discreet high-performance electric assist.

    The 30-millimeter-diameter, 300-gram motor is manufactured by Swiss motor maker Maxon Group, and Gibbings says it uses essentially the same power-dense brushless design that’s propelling NASA’s Perseverance rover on Mars. HPS builds the motor into a bike’s downtube, the frame element angling up from a bike’s crank toward its handlebars.

    Notwithstanding persistent media speculation about electric motors built into rear hubs or solid wheels, Gibbings says only a motor placed in a frame’s tubes can add power without jeopardizing the look, feel, and performance of a racing bike.

    UCI’s New Techniques to Spot Cheating in Cycling

    Professional cycling got its most sophisticated detection systems in 2018, after criticism of UCI motor-doping policies helped fuel a change of leadership. Incoming President David Lappartient appointed Péraud to push detection to new levels, and five months later UCI announced its first X-ray equipment at a press conference in Geneva.

    Unlike the tablet scanners, which yield many false positives and require dismantling of suspect bikes, X-ray imaging is definitive. The detector is built into a shielded container and driven to events.

    UCI told the cycling press that its X-ray cabinet would “remove any suspicion regarding race results.” And it says it maintains a high level of testing, with close to 1,000 motor-doping checks at last year’s Tour de France.

    UCI declined to speak with IEEE Spectrum about its motor-detection program, including plans for the Paris Olympics. But it appears to have stepped up vigilance. Lappartient recently acknowledged that UCI’s controls are “not 100 percent secure” and announced a reward for whistleblowers who deliver evidence of motor fraud. In May, UCI once again appointed a motor-doping czar—a first since Péraud departed amidst budget cuts in 2020. Among other duties, former U.S. Department of Homeland Security criminal investigator Nicholas Raudenski is tasked with “development of new methods to detect technological fraud.”

    Unlike the tablet scanners, X-ray imaging is definitive.

    Péraud is convinced that only real-time monitoring of bikes throughout major races can prove that motor fraud is in the past, since big races provide ample opportunities to sneak in an additional bike and thus evade UCI’s current tools.

    UCI has already laid the groundwork for such live monitoring, partnering with France’s Alternative Energies and Atomic Energy Commission (Commissariat à l’énergie atomique et aux énergies alternatives, or CEA) to capitalize on the national lab’s deep magnetometry expertise. UCI disclosed some details at its 2018 Geneva press conference, where a CEA official presented its concept: an embedded, high-resolution magnetometer to detect a hidden motor’s electromagnetic signature and wirelessly alert officials via receivers on race support vehicles.

    As of June 2018, CEA researchers in Grenoble had identified an appropriate magnetometer and were evaluating the electromagnetic noise that could challenge the system—“from rotating wheels and pedals to passing motorcycles and cars.”

    Mounting detectors on every bike would not be cheap, but Péraud says he is convinced that cycling needs it: “It’s 10 years now that we’re speaking about this…. If you want to settle this issue you have to invest.”

  • iRobot’s Autowash Dock Is (Almost) Automated Floor Care
    by Evan Ackerman on 23. July 2024. at 11:00



    The dream of robotic floor care has always been for it to be hands-off and mind-off. That is, for a robot to live in your house that will keep your floors clean without you having to really do anything or even think about it. When it comes to robot vacuuming, that’s been more or less solved thanks to self-emptying robots that transfer debris into docking stations, which iRobot pioneered with the Roomba i7+ in 2018. By 2022, iRobot’s Combo j7+ added an intelligent mopping pad to the mix, which definitely made for cleaner floors but was also a step backwards in the sense that you had to remember to toss the pad into your washing machine and fill the robot’s clean water reservoir every time. The Combo j9+ stuffed a clean water reservoir into the dock itself, which could top off the robot with water by itself for a month.

    With the new Roomba Combo 10 Max, announced today, iRobot has cut out (some of) that annoying process thanks to a massive new docking station that self-empties vacuum debris, empties dirty mop water, refills clean mop water, and then washes and dries the mopping pad, completely autonomously.


    iRobot

    The Roomba part of this is a mildly upgraded j7+, and most of what’s new on the hardware side here is in the “multifunction AutoWash Dock.” This new dock is a beast: It empties the robot of all of the dirt and debris picked up by the vacuum, refills the Roomba’s clean water tank from a reservoir, and then starts up a wet scrubby system down under the bottom of the dock. The Roomba deploys its dirty mopping pad onto that system, and then drives back and forth while the scrubby system cleans the pad. All the dirty water from this process gets sucked back up into a dedicated reservoir inside the dock, and the pad gets blow-dried while the scrubby system runs a self-cleaning cycle.

    A round black vacuuming robot sits inside of a large black docking station that is partially transparent to show clean and dirty water tanks inside. The dock removes debris from the vacuum, refills it with clean water, and then uses water to wash the mopping pad. iRobot

    This means that as a user, you’ve only got to worry about three things: dumping out the dirty water tank every week (if you use the robot for mopping most days), filling the clean water tank every week, and then changing out the debris every two months. That is not a lot of hands-on time for having consistently clean floors.

    The other thing to keep in mind about all of these robots is that they do need relatively frequent human care if you want them to be happy and successful. That means flipping them over and getting into their guts to clean out the bearings and all that stuff. iRobot makes this very easy to do, and it’s a necessary part of robot ownership, so the dream of having a robot that you can actually forget completely is probably not achievable.

    The price of this convenience is a real chonker of a dock. The dock is basically furniture, and to the company’s credit, iRobot designed it so that the top surface is usable as a shelf; access to the guts of the dock is from the front, not the top. This is fine, but it’s also kind of crazy just how much these docks have expanded, especially once you factor in the front ramp that the robot drives up, which sticks out even farther.

    A round black robot on a wooden floor approaches a dirty carpet and uses a metal arm to lift a wet mopping pad onto its back. The Roomba will detect carpet and lift its mopping pad up to prevent drips. iRobot

    We asked iRobot director of project management Warren Fernandez about whether docks are just going to keep on getting bigger forever until we’re all just living in giant robot docks, to which he said: “Are you going to continue to see some large capable multifunction docks out there in the market? Yeah, I absolutely think you will—but when does big become too big?” Fernandez says that there are likely opportunities to reduce dock size going forward through packaging efficiencies or dual-purpose components, but that there’s another option, too: Distributed docks. “If a robot has dry capabilities and wet capabilities, do those have to coexist inside the same chassis? What if they were separate?” says Fernandez.

    We should mention that iRobot is not the first in the robotic floor-care space to have a self-cleaning mop, and it’s also not the first to think about distributed docks, although as Fernandez explains, this is a more common approach in Asia, where you can also take advantage of home plumbing integration. “It’s a major trend in China, and starting to pop up a little bit in Europe, but not really in North America yet. How amazing could it be if you had a dock that, in a very easy manner, was able to tap right into plumbing lines for water supply and sewage disposal?”

    According to Fernandez, this tends to be much easier to do in China, both because the labor cost for plumbing work is far lower than in the United States and Europe, and also because it’s fairly common for apartments in China to have accessible floor drains. “We don’t really yet see it in a major way at a global level,” Fernandez tells us. “But that doesn’t mean it’s not coming.”

    A round black robot on a wooden floor approaches a dirty carpet and uses a metal arm to lift a wet mopping pad onto its back. The robot autonomously switches mopping mode on and off for different floor surfaces. iRobot

    We should also mention that the Roomba Combo 10 Max includes some software updates:

    • The front-facing camera and specialized bin sensors can identify dirtier areas eight times as effectively as before.
    • The Roomba can identify specific rooms and prioritize the order they’re cleaned in, depending on how dirty they get.
    • A new cleaning behavior called “Smart Scrub” adds a back-and-forth scrubbing motion for floors that need extra oomph.

    And here’s what I feel like the new software should do, but doesn’t:

    • Use the front-facing camera and bin sensors to identify dirtier areas and then autonomously develop a schedule to more frequently clean those areas.
    • Activate Smart Scrub when the camera and bin sensors recognize an especially dirty floor.

    I say “should do” because the robot appears to be collecting the data that it needs to do these things, but it doesn’t do them yet. New features (especially new features that involve autonomy) take time to develop and deploy, but imagine a robot that makes much more nuanced decisions about where and when to clean based on the very detailed real-time data and environmental understanding that iRobot has already implemented.
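
    To make that concrete, here is one hypothetical way a dirt-aware schedule could work. iRobot has described no such algorithm; the room names, dirt scores, and cleaning intervals below are invented purely for illustration.

        # Hypothetical sketch of a dirt-aware cleaning scheduler. Not an iRobot
        # feature or API; dirt scores (0..1) stand in for whatever the camera and
        # bin sensors report, and the intervals are arbitrary.

        from collections import defaultdict

        class DirtAwareScheduler:
            def __init__(self, base_interval_days=3, min_interval_days=1):
                self.base = base_interval_days
                self.min = min_interval_days
                self.history = defaultdict(list)  # room -> recent dirt scores

            def record_clean(self, room, dirt_score):
                """Log how dirty a room was on the most recent cleaning run."""
                self.history[room] = (self.history[room] + [dirt_score])[-5:]

            def next_interval(self, room):
                """Dirtier rooms get shorter intervals between cleanings."""
                scores = self.history[room]
                if not scores:
                    return self.base
                avg_dirt = sum(scores) / len(scores)
                return max(self.min, round(self.base * (1.0 - avg_dirt)))

        sched = DirtAwareScheduler()
        sched.record_clean("kitchen", 0.9)        # consistently dirty
        sched.record_clean("guest room", 0.1)     # rarely dirty
        print(sched.next_interval("kitchen"))     # 1 -> clean every day
        print(sched.next_interval("guest room"))  # 3 -> keep the default cadence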

    I also appreciate that even as iRobot is emphasizing autonomy and leveraging data to start making more decisions for the user, the company is also making sure that the user has as much control as possible through the app. For example, you can set the robot to mop your floor without vacuuming first, even though if you do that, all you’re going to end up with is a much dirtier mop. Doesn’t make a heck of a lot of sense, but if that’s what you want, iRobot has empowered you to do it.

    A round black vacuuming robot sits inside of a large black docking station that is opened to show clean and dirty water tanks inside. The dock opens from the front for access to the clean- and dirty-water storage and the dirt bag. iRobot

    The Roomba Combo 10 Max will be launching in August for US $1,400. That’s expensive, but it’s also how iRobot does things: A new Roomba with new tech always gets flagship status and a premium price. Sooner or later the technology will trickle down to models that the rest of us can afford, too.

  • Biocompatible Mic Could Lead to Better Cochlear Implants
    by Rachel Berkowitz on 22. July 2024. at 12:00



    Cochlear implants—the neural prosthetic cousins of standard hearing aids—can be a tremendous boon for people with profound hearing loss. But many would-be users are turned off by the device’s cumbersome external hardware, which must be worn to process signals passing through the implant. So researchers have been working to make a cochlear implant that sits entirely inside the ear, to restore speech and sound perception without the lifestyle restrictions imposed by current devices.

    A new biocompatible microphone offers a bridge to such fully internal cochlear implants. About the size of a grain of rice, the microphone is made from a flexible piezoelectric material that directly measures the sound-induced motion of the eardrum. The tiny microphone’s sensitivity matches that of today’s best external hearing aids.

    Cochlear implants create a novel pathway for sounds to reach the brain. An external microphone and processor, worn behind the ear or on the scalp, collect and translate incoming sounds into electrical signals, which get transmitted to an electrode that’s surgically implanted in the cochlea, deep within the inner ear. There, the electrical signals directly stimulate the auditory nerve, sending information to the brain to interpret as sound.

    But, says Hideko Heidi Nakajima, an associate professor of otolaryngology at Harvard Medical School and Massachusetts Eye and Ear, “people don’t like the external hardware.” They can’t wear it while sleeping, or while swimming or doing many other forms of exercise, and so many potential candidates forgo the device altogether. What’s more, incoming sound goes directly into the microphone and bypasses the outer ear, which would otherwise perform the key functions of amplifying sound and filtering noise. “Now the big idea is instead to get everything—processor, battery, microphone—inside the ear,” says Nakajima. But even in clinical trials of fully internal designs, the microphone’s sensitivity—or lack thereof—has remained a roadblock.

    Nakajima, along with colleagues from MIT, Harvard, and Columbia University, fabricated a cantilever microphone that senses the motion of a bone attached behind the eardrum called the umbo. Sound entering the ear canal causes the umbo to vibrate unidirectionally, with a displacement 10 times as great as that of other nearby bones. The tip of the “UmboMic” touches the umbo, and the umbo’s movements flex the material and produce an electrical charge through the piezoelectric effect. These electrical signals can then be processed and transmitted to the auditory nerve. “We’re using what nature gave us, which is the outer ear,” says Nakajima.

    Why a cochlear implant needs low-noise, low-power electronics

    Making a biocompatible microphone that can detect the eardrum’s minuscule movements isn’t easy, however. Jeff Lang, a professor of electrical engineering at MIT who jointly led the work, points out that only certain materials are tolerated by the human body. Another challenge is shielding the device from internal electronics to reduce noise. And then there’s long-term reliability. “We’d like an implant to last for decades,” says Lang.

    An image showing cavernish hole with a small metal piece touching a small pink spot. In tests of the implantable microphone prototype, a laser beam measures the umbo’s motion, which gets transferred to the sensor tip. JEFF LANG & HEIDI NAKAJIMA

    The researchers settled on a triangular design for the 3-by-3-millimeter sensor made from two layers of polyvinylidene fluoride (PVDF), a biocompatible piezoelectric polymer, sandwiched between layers of flexible, electrode-patterned polymer. When the cantilever tip bends, one PVDF layer produces a positive charge and the other produces a negative charge—taking the difference between the two cancels much of the noise. The triangular shape provides the most uniform stress distribution within the bending cantilever, maximizing the displacement it can undergo before it breaks. “The sensor can detect sounds below a quiet whisper,” says Lang.
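
    The trick here is classic differential sensing: bending drives the two layers with opposite polarity, while electromagnetic interference appears on both with the same polarity, so subtracting one output from the other reinforces the signal and cancels the common-mode noise. The toy numerical sketch below, which uses entirely synthetic signals, illustrates the principle.

        # Toy illustration of differential sensing with synthetic data -- not the
        # UmboMic's actual signal chain. A 1 kHz tone stands in for umbo motion;
        # the same random interference is added to both "layers".

        import math, random

        fs = 100_000                      # assumed sample rate, Hz
        tone = 1_000                      # 1 kHz test tone, Hz
        t = [n / fs for n in range(200)]

        signal = [math.sin(2 * math.pi * tone * ti) for ti in t]
        noise = [0.5 * random.gauss(0, 1) for _ in t]      # common-mode interference

        layer_a = [+s + n for s, n in zip(signal, noise)]  # positive-polarity layer
        layer_b = [-s + n for s, n in zip(signal, noise)]  # negative-polarity layer

        recovered = [(a - b) / 2 for a, b in zip(layer_a, layer_b)]

        worst = max(abs(r - s) for r, s in zip(recovered, signal))
        print(f"max deviation from the clean tone: {worst:.1e}")  # ~0: noise cancelled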

    Emma Wawrzynek, a graduate student at MIT, says that working with PVDF is tricky because it loses its piezoelectric properties at high temperatures, and most fabrication techniques involve heating the sample. “That’s a challenge especially for encapsulation,” which involves encasing the device in a protective layer so it can remain safely in the body, she says. The group had success by gradually depositing titanium and gold onto the PVDF while using a heat sink to cool it. That approach created a shielding layer that protects the charge-sensing electrodes from electromagnetic interference.

    The other tool for improving a microphone’s performance is, of course, amplifying the signal. “On the electronics side, a low-noise amp is not necessarily a huge challenge to build if you’re willing to spend extra power,” says Lang. But, according to MIT graduate student John Zhang, cochlear implant manufacturers try to limit power for the entire device to 5 milliwatts, and just 1 mW for the microphone. “The trade-off between noise and power is hard to hit,” Zhang says. He and fellow student Aaron Yeiser developed a custom low-noise, low-power charge amplifier that outperformed commercially available options.

    “Our goal was to perform better than or at least equal the performance of high-end capacitive external microphones,” says Nakajima. For leading external hearing-aid microphones, that means sensitivity down to a sound pressure level of 30 decibels—the equivalent of a whisper. In tests of the UmboMic on human cadavers, the researchers implanted the microphone and amplifier near the umbo, input sound through the ear canal, and measured what got sensed. Their device reached 30 decibels over the frequency range from 100 hertz to 6 kilohertz, which is the standard for cochlear implants and hearing aids and covers the frequencies of human speech. “But adding the outer ear’s filtering effects means we’re doing better [than traditional hearing aids], down to 10 dB, especially in speech frequencies,” says Nakajima.
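
    For readers unfamiliar with the scale used here, sound pressure level is 20 log10(p/p0) with reference pressure p0 = 20 micropascals, so the 30-decibel whisper target corresponds to roughly 0.6 millipascals of pressure. A quick check of that arithmetic in Python:

        import math

        P0 = 20e-6                     # reference pressure for dB SPL, in pascals

        def spl_to_pressure(spl_db):
            """Convert a sound pressure level in dB SPL to pascals."""
            return P0 * 10 ** (spl_db / 20)

        def pressure_to_spl(pressure_pa):
            """Convert a pressure in pascals to dB SPL."""
            return 20 * math.log10(pressure_pa / P0)

        print(spl_to_pressure(30))     # ~6.3e-4 Pa: the whisper-level target
        print(spl_to_pressure(10))     # ~6.3e-5 Pa: the level reached with the outer ear's help
        print(pressure_to_spl(20e-6))  # 0 dB SPL, the nominal threshold of hearing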

    Plenty of testing lies ahead, at the bench and on sheep before an eventual human trial. But if their UmboMic passes muster, the team hopes that it will help more than 1 million people worldwide go about their lives with a new sense of sound.

    The work was published on 27 June in the Journal of Micromechanics and Microengineering.

  • AI Missteps Could Unravel Global Peace and Security
    by Raja Chatila on 21. July 2024. at 13:00



    This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum, The Institute, or IEEE.

    Many in the civilian artificial intelligence community don’t seem to realize that today’s AI innovations could have serious consequences for international peace and security. Yet AI practitioners—whether researchers, engineers, product developers, or industry managers—can play critical roles in mitigating risks through the decisions they make throughout the life cycle of AI technologies.

    There are a host of ways in which civilian advances in AI could threaten peace and security. Some are direct, such as the use of AI-powered chatbots to create disinformation for political-influence operations. Large language models also can be used to create code for cyberattacks and to facilitate the development and production of biological weapons.

    Other ways are more indirect. AI companies’ decisions about whether to make their software open-source, and under which conditions, for example, have geopolitical implications. Such decisions determine how states or nonstate actors access critical technology, which they might use to develop military AI applications, potentially including autonomous weapons systems.

    AI companies and researchers must become more aware of the challenges, and of their capacity to do something about them.

    Change needs to start with AI practitioners’ education and career development. Technically, there are many options in the responsible-innovation toolbox that AI researchers could use to identify and mitigate the risks their work presents. They must be given opportunities to learn about such options, including IEEE 7010: Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being, IEEE 7007-2021: Ontological Standard for Ethically Driven Robotics and Automation Systems, and the National Institute of Standards and Technology’s AI Risk Management Framework.

    If education programs provide foundational knowledge about the societal impact of technology and the way technology governance works, AI practitioners will be better empowered to innovate responsibly and be meaningful designers and implementers of regulations.

    What Needs to Change in AI Education

    Responsible AI requires a spectrum of capabilities that are typically not covered in AI education. AI should no longer be treated as a pure STEM discipline but rather as a transdisciplinary one that requires technical knowledge, yes, but also insights from the social sciences and humanities. There should be mandatory courses on the societal impact of technology and responsible innovation, as well as specific training on AI ethics and governance.

    Those subjects should be part of the core curriculum at both the undergraduate and graduate levels at all universities that offer AI degrees.

    Changing the AI education curriculum is no small task. In some countries, modifications to university curricula require approval at the ministry level. Proposed changes can be met with internal resistance due to cultural, bureaucratic, or financial reasons. Meanwhile, the existing instructors’ expertise in the new topics might be limited.

    An increasing number of universities now offer the topics as electives, however, including Harvard, New York University, Sorbonne University, Umeå University, and the University of Helsinki.

    There’s no need for a one-size-fits-all teaching model, but there’s certainly a need for funding to hire dedicated staff members and train them.

    Adding Responsible AI to Lifelong Learning

    The AI community must develop continuing education courses on the societal impact of AI research so that practitioners can keep learning about such topics throughout their career.

    AI is bound to evolve in unexpected ways. Identifying and mitigating its risks will require ongoing discussions involving not only researchers and developers but also people who might directly or indirectly be impacted by its use. A well-rounded continuing education program would draw insights from all stakeholders.

    Some universities and private companies already have ethical review boards and policy teams that assess the impact of AI tools. Although the teams’ mandate usually does not include training, their duties could be expanded to make courses available to everyone within the organization. Training on responsible AI research shouldn’t be a matter of individual interest; it should be encouraged.

    Organizations such as IEEE and the Association for Computing Machinery could play important roles in establishing continuing education courses because they’re well placed to pool information and facilitate dialogue, which could result in the establishment of ethical norms.

    Engaging With the Wider World

    We also need AI practitioners to share knowledge and ignite discussions about potential risks beyond the bounds of the AI research community.

    Fortunately, there are already numerous groups on social media that actively debate AI risks including the misuse of civilian technology by state and nonstate actors. There are also niche organizations focused on responsible AI that look at the geopolitical and security implications of AI research and innovation. They include the AI Now Institute, the Centre for the Governance of AI, Data and Society, the Distributed AI Research Institute, the Montreal AI Ethics Institute, and the Partnership on AI.

    Those communities, however, are currently too small and not sufficiently diverse, as their most prominent members typically share similar backgrounds. Their lack of diversity could lead the groups to ignore risks that affect underrepresented populations.

    What’s more, AI practitioners might need help and tutelage in how to engage with people outside the AI research community—especially with policymakers. Articulating problems or recommendations in ways that nontechnical individuals can understand is a necessary skill.

    We must find ways to grow the existing communities, make them more diverse and inclusive, and make them better at engaging with the rest of society. Large professional organizations such as IEEE and ACM could help, perhaps by creating dedicated working groups of experts or setting up tracks at AI conferences.

    Universities and the private sector also can help by creating or expanding positions and departments focused on AI’s societal impact and AI governance. Umeå University recently created an AI Policy Lab to address the issues. Companies including Anthropic, Google, Meta, and OpenAI have established divisions or units dedicated to such topics.

    There are growing movements around the world to regulate AI. Recent developments include the creation of the U.N. High-Level Advisory Body on Artificial Intelligence and the Global Commission on Responsible Artificial Intelligence in the Military Domain. The G7 leaders issued a statement on the Hiroshima AI process, and the British government hosted the first AI Safety Summit last year.

    The central question before regulators is whether AI researchers and companies can be trusted to develop the technology responsibly.

    In our view, one of the most effective and sustainable ways to ensure that AI developers take responsibility for the risks is to invest in education. Practitioners of today and tomorrow must have the basic knowledge and means to address the risks stemming from their work if they are to be effective designers and implementers of future AI regulations.

    Authors’ note: Authors are listed by level of contributions. The authors were brought together by an initiative of the U.N. Office for Disarmament Affairs and the Stockholm International Peace Research Institute launched with the support of a European Union initiative on Responsible Innovation in AI for International Peace and Security.

  • Next-Gen Brain Implant Uses a Graphene Chip
    by Dexter Johnson on 20. July 2024. at 13:00



    A Barcelona-based startup called Inbrain Neuroelectronics has produced a novel brain implant made of graphene and is gearing up for its first in-human test this summer.

    The technology is a type of brain-computer interface. BCIs have garnered interest because they record signals from the brain and transmit them to a computer for analysis. They have been used for medical diagnostics, as communication devices for people who can’t speak, and to control external equipment, including robotic limbs. But Inbrain intends to transform its BCI technology into a therapeutic tool for patients with neurological issues such as Parkinson’s disease.

    Because Inbrain’s chip is made of graphene, the neural interface has some interesting properties, including the ability to both record from and stimulate the brain. That bidirectionality comes from addressing a key problem with the metallic chips typically used in BCI technology: Faradaic reactions. Faradaic reactions are a particular type of electrochemical process that occurs between a metal electrode and an electrolyte solution. As it so happens, neural tissue is largely composed of aqueous electrolytes. Over time, these Faradaic reactions reduce the effectiveness of the metallic chips.

    That’s why Inbrain replaced the metals typically used in such chips with graphene, a material with great electrical conductivity. “Metals have Faraday reactions that actually make all the electrons interact with each other, degrading their effectiveness...for transmitting signals back to the brain,” said Carolina Aguilar, CEO and cofounder of Inbrain.

    Because graphene is essentially carbon and not a metal, Aguilar says the chip can inject 200 times as much charge without creating a Faradaic reaction. As a result, the material is stable over the millions of pulses of stimulation required of a therapeutic tool. While Inbrain is not yet testing the chip for brain stimulation, the company expects to reach that goal in due time.

    The graphene-based chip is produced on a wafer using traditional semiconductor technology, according to Aguilar. At clean-room facilities, Inbrain fabricates a 10-micrometer-thick chip. The chip consists of what Aguilar terms “graphene dots” (not to be confused with graphene quantum dots) that range in size from 25 to 300 micrometers. “This micrometer scale allows us to get that unique resolution on the decoding of the signals from the brain, and also provides us with the micrometric stimulation or modulation of the brain,” added Aguilar.

    Testing the Graphene-Based BCI

    The first test of the platform in a human patient will soon be performed at the University of Manchester, in England, where it will serve as an interface during the resection of a brain tumor. When resecting a tumor, surgeons must ensure that they don’t damage areas like the brain’s language centers so the patient isn’t impaired after the surgery. “The chip is positioned during the tumor resection so that it can read, at a very high resolution, the signals that tell the surgeon where there is a tumor and where there is not a tumor,” says Aguilar. That should enable the surgeons to extract the tumor with micrometric precision while preserving functional areas like speech and cognition.

    Aguilar added, “We have taken this approach for our first human test because it is a very reliable and quick path to prove the safety of graphene, but also demonstrate the potential of what it can do in comparison to metal technology that is used today.”

    Aguilar stresses that the Inbrain team has already tested the graphene-based chip’s biocompatibility. “We have been working for the last three years in biocompatibility through various safety studies in large animals,” said Aguilar. “So now we can have these green lights to prove an additional level of safety with humans.”

    While this test of the chip at Manchester is aimed at aiding in brain tumor surgery, the same technology could eventually be used to help Parkinson’s patients. Toward this aim, Inbrain’s system was granted Breakthrough Device Designation last September from the U.S. Food & Drug Administration as an adjunctive therapy for treating Parkinson’s disease. “For Parkinson’s treatment, we have been working on different preclinical studies that have shown reasonable proof of superiority versus current commercial technology in the [reduction] of Parkinson’s disease symptoms,” said Aguilar.

    For treating Parkinson’s, Inbrain’s chip connects with the nigrostriatal pathway in the brain, which is critical for movement. The chip will first decode the intention message from the brain that triggers a step or the lifting of the arm—something that a typical BCI can do. But Inbrain’s chip, with its micrometric precision, can also decode pathological biomarkers related to Parkinson’s symptoms, such as tremors, rigidity, and freezing of gait.
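
    Spectrum’s reporting doesn’t detail Inbrain’s decoding algorithms, but a standard way to quantify such biomarkers from neural recordings is to track power in particular frequency bands (beta-band activity, roughly 13 to 30 hertz, is commonly linked to rigidity and slowed movement in Parkinson’s). The sketch below shows a generic band-power estimate with SciPy; the sampling rate, band edges, and synthetic signal are assumptions, not Inbrain’s pipeline.

        import numpy as np
        from scipy.signal import welch

        def band_power(x, fs, band):
            """Estimate power of signal x within a frequency band from Welch's PSD."""
            freqs, psd = welch(x, fs=fs, nperseg=int(fs))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return np.sum(psd[mask]) * (freqs[1] - freqs[0])   # rectangle-rule integral

        # Synthetic stand-in for a neural channel: broadband noise plus a 20 Hz "beta" rhythm.
        fs = 1_000                                   # assumed sampling rate, in hertz
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(1)
        channel = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 20 * t)

        beta = band_power(channel, fs, (13, 30))     # hypothetical rigidity/tremor biomarker
        total = band_power(channel, fs, (1, 100))
        print(f"beta-band fraction of power: {beta / total:.2f}")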

    By measuring these biomarkers with great precision, Inbrain’s technology can determine how well a patient’s current drug regimen is working. This first iteration of the Inbrain chip doesn’t treat the symptoms of Parkinson’s directly; instead, it makes it possible to better target and reduce the amount of drugs used in treatment.

    “Parkinson’s patients take huge amounts of drugs that have to be changed over time just to keep up with the growing resistance patients develop to the power of the drug,” said Aguilar. “We can reduce it at least 50 percent and hopefully in the future more as our devices become precise.”

  • Video Friday: Robot Crash-Perches, Hugs Tree
    by Evan Ackerman on 19. July 2024. at 19:45



    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

    ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
    IROS 2024: 14–18 October 2024, ABU DHABI, UAE
    ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
    Cybathlon 2024: 25–27 October 2024, ZURICH

    Enjoy today’s videos!

    Perching with winged Unmanned Aerial Vehicles has often been solved by means of complex control or intricate appendages. Here, we present a method that relies on passive wing morphing for crash-landing on trees and other types of vertical poles. Inspired by the adaptability of animals’ and bats’ limbs in gripping and holding onto trees, we design dual-purpose wings that enable both aerial gliding and perching on poles.

    [ Nature Communications Engineering ]

    Pretty impressive to have low enough latency in controlling your robot’s hardware that it can play ping pong, although it makes it impossible to tell whether the robot or the human is the one that’s actually bad at the game.

    [ IHMC ]

    How to be a good robot when boarding an elevator.

    [ NAVER ]

    Have you ever wondered how insects are able to go so far beyond their home and still find their way? The answer to this question is not only relevant to biology but also to making the AI for tiny, autonomous robots. We felt inspired by biological findings on how ants visually recognize their environment and combine it with counting their steps in order to get safely back home.

    [ Science Robotics ]

    Team RoMeLa Practice with ARTEMIS humanoid robots, featuring Tsinghua Hephaestus (Booster Alpha). Fully autonomous humanoid robot soccer match with the official goal of beating the human WorldCup Champions by the year 2050.

    [ RoMeLa ]

    Triangle is the most stable shape, right?

    [ WVU IRL ]

    We propose RialTo, a new system for robustifying real-world imitation learning policies via reinforcement learning in “digital twin” simulation environments constructed on the fly from small amounts of real-world data.

    [ MIT CSAIL ]

    There is absolutely no reason to watch this entire video, but Moley Robotics is still working on that robotic kitchen of theirs.

    I will once again point out that the hardest part of cooking (for me, anyway) is the prep and the cleanup, and this robot still needs you to do all that.

    [ Moley ]

    B-Human has so far won 10 titles at the RoboCup SPL tournament. Can we make it 11 this year? Our RoboCup starts off with a banger game against HTWK Robots from Leipzig!

    [ Team B-Human ]

    AMBIDEX is a dual-armed robot with an innovative mechanism developed for safe coexistence with humans. Based on an innovative cable structure, it is designed to be both strong and stable.

    [ NAVER ]

    As NASA’s Perseverance rover prepares to ascend to the rim of Jezero Crater, its team is investigating a rock unlike any that they’ve seen so far on Mars. Deputy project scientist Katie Stack Morgan explains why this rock, found in an ancient channel that funneled water into the crater, could be among the oldest that Perseverance has investigated—or the youngest.

    [ NASA ]

    We present a novel approach for enhancing human-robot collaboration using physical interactions for real-time error correction of large language model (LLM) parameterized commands.

    [ Figueroa Robotics Lab ]

    Husky Observer was recently used to autonomously inspect solar panels at a large solar panel farm. As part of its mission, the robot navigated rows of solar panels, stopping to inspect areas with its integrated thermal camera. Images were taken by the robot and enhanced to detect potential “hot spots” in the panels.

    [ Clearpath Robotics ]

    Most of the time, robotic workcells contain just one robot, so it’s cool to see a pair of them collaborating on tasks.

    [ Leverage Robotics ]

    Thanks, Roman!

    Meet Hydrus, the autonomous underwater drone revolutionising underwater data collection by eliminating the barriers to its entry. Hydrus ensures that even users with limited resources can execute precise and regular subsea missions to meet their data requirements.

    [ Advanced Navigation ]

    Those adorable Disney robots have finally made their way into a paper.

    [ RSS 2024 ]

  • IEEE Learning Network Celebrates Five Years
    by Angelique Parashis on 18. July 2024. at 18:00



    Since its launch in 2019, the IEEE Learning Network (ILN) has been instrumental in advancing professional development through its diverse array of courses and programs. From specialized technical training to broader skill development, ILN online courses cater to professionals at every stage of their career and equip them with tools they need to succeed in today’s rapidly evolving landscape.

    ILN is also achieving its original goal of becoming a one-stop shop for education from across IEEE. More than 40 organizational units of IEEE now list over 1,400 educational opportunities in ILN that provide practical knowledge covering artificial intelligence, cybersecurity, renewable energy, career development, and many more topics.

    About 322,000 learners from more than 190 countries have completed ILN courses, with 83 percent saying in a satisfaction survey that they would recommend the program to their peers.

    “The ILN is the go-to location for high-quality e-learning content to stay abreast with the latest topics in engineering and technology.” —Jason K. Hui

    Many courses also allow users to earn digital certificates and badges bearing continuing-education units (CEUs) and professional development hours (PDHs). More than 65,000 digital certificates have been issued.

    Testimonials from the community

    “The introduction of ILN and the single platform of educational products by IEEE Educational Activities a few years ago was a hugely welcomed initiative for many in the industry and academia,” says Babak Beheshti, dean of the College of Engineering and Computing Sciences at New York Institute of Technology. “ILN provides a one-stop shop for the technical educational product search. My university engaged in a pilot to use several e-learning modules available on the ILN in several undergraduate and graduate engineering courses. The outcome was so positive that we purchased it.”

    “The ILN’s centralized and comprehensive catalog has enabled me to stay updated on the latest computer hardware and software technologies,” says IEEE Fellow Sorel Reisman, professor emeritus of information systems at California State University, Fullerton. “The availability of digital certificates upon course completion and the ability to earn CEUs and PDHs is particularly valuable to technology practitioners, and reinforces IEEE’s commitment to ongoing personal and professional development for both members and nonmembers of our international community of engineers and computer scientists.”

    “For me, the ILN is the go-to location for high-quality e-learning content to stay abreast with the latest topics in engineering and technology,” says Jason K. Hui, senior manager of engineering at Textron Systems in Wilmington, Mass.

    Discount available now

    In celebration of its five-year anniversary, ILN is offering US $5 off select courses during the month of July with the discount code ILN5.

    You can follow ILN on Facebook and LinkedIn to engage with others, share insights, and expand your professional network.

    To stay updated on courses, events, and more, sign up for ILN’s free weekly newsletter.

  • Robot Dog Cleans Up Beaches With Foot-Mounted Vacuums
    by Evan Ackerman on 18. July 2024. at 14:00



    Cigarette butts are the second most common undisposed-of litter on Earth—of the six trillion-ish cigarettes inhaled every year, it’s estimated that over 4 trillion of the butts are just tossed onto the ground, each one leaching over 700 different toxic chemicals into the environment. Let’s not focus on the fact that all those toxic chemicals are also going into people’s lungs, and instead talk about the ecosystem damage that they can do and also just the general grossness of having bits of sucked-on trash everywhere. Ew.

    Preventing those cigarette butts from winding up on the ground in the first place would be the best option, but it would require a pretty big shift in human behavior. Operating under the assumption that humans changing their behavior is a nonstarter, roboticists from the Dynamic Legged Systems unit at the Italian Institute of Technology (IIT), in Genoa, have instead designed a novel platform for cigarette-butt cleanup in the form of a quadrupedal robot with vacuums attached to its feet.

    IIT

    There are, of course, far more efficient ways of at least partially automating the cleanup of litter with machines. The challenge is that most of that automation relies on mobility systems with wheels, which won’t work on the many beautiful beaches (and many beautiful flights of stairs) of Genoa. In places like these, it still falls to humans to do the hard work, which is less than ideal.

    This robot, developed in Claudio Semini’s lab at IIT, is called VERO (Vacuum-cleaner Equipped RObot). It’s based around an AlienGo from Unitree, with a commercial vacuum mounted on its back. Hoses go from the vacuum down the leg to each foot, with a custom 3D-printed nozzle that puts as much suction near the ground as possible without tripping the robot up. While the vacuum is novel, the real contribution here is how the robot autonomously locates things on the ground and then plans how to interact with those things using its feet.

    First, an operator designates an area for VERO to clean, after which the robot operates by itself. After calculating an exploration path to explore the entire area, the robot uses its onboard cameras and a neural network to detect cigarette butts. This is trickier than it sounds, because there may be a lot of cigarette butts on the ground, and they all probably look pretty much the same, so the system has to filter out all of the potential duplicates. The next step is to plan its next steps: VERO has to put the vacuum side of one of its feet right next to each cigarette butt while calculating a safe, stable pose for the rest of its body. Since this whole process can take place on sand or stairs or other uneven surfaces, VERO has to prioritize not falling over before it decides how to do the collection. The final collecting maneuver is fine-tuned using an extra Intel RealSense depth camera mounted on the robot’s chin.
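
    As a rough mental model of that loop (not IIT’s actual code), the pipeline can be sketched as: plan a coverage path, detect and deduplicate cigarette butts, then choose a foot and a stable whole-body pose for each pickup. Every robot-facing function name below is a hypothetical placeholder.

        from dataclasses import dataclass

        @dataclass
        class Detection:
            x: float          # world-frame position of a candidate cigarette butt, in meters
            y: float

        def deduplicate(detections, radius=0.03):
            """Drop detections within `radius` meters of one already kept (likely duplicates)."""
            kept = []
            for d in detections:
                if all((d.x - k.x) ** 2 + (d.y - k.y) ** 2 > radius ** 2 for k in kept):
                    kept.append(d)
            return kept

        def clean_area(area_polygon, robot):
            """Hypothetical sketch of the VERO-style cleanup loop described above."""
            for waypoint in robot.plan_coverage_path(area_polygon):   # cover the whole area
                robot.walk_to(waypoint)
                butts = deduplicate(robot.detect_cigarette_butts())   # onboard camera + neural net
                for butt in butts:
                    foot = robot.select_foot(butt)                    # nearest vacuum nozzle
                    pose = robot.solve_stable_pose(foot, butt)        # stay balanced on sand or stairs
                    if pose is not None:
                        robot.execute(pose)                           # place the nozzle next to the butt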

    A collage of six photos of a quadruped robot navigating different environments. VERO has been tested successfully in six different scenarios that challenge both its locomotion and detection capabilities. IIT

    Initial testing with the robot in a variety of different environments showed that it could successfully collect just under 90 percent of cigarette butts, which I bet is better than I could do, and I’m also much more likely to get fed up with the whole process. The robot is not very quick at the task, but unlike me it will never get fed up as long as it’s got energy in its battery, so speed is somewhat less important.

    As far as the authors of this paper are aware (and I assume they’ve done their research), this is “the first time that the legs of a legged robot are concurrently utilized for locomotion and for a different task.” This is distinct from other robots that can (for example) open doors with their feet, because those robots stop using the feet as feet for a while and instead use them as manipulators.

    So, this is about a lot more than cigarette butts, and the researchers suggest a variety of other potential use cases, including spraying weeds in crop fields, inspecting cracks in infrastructure, and placing nails and rivets during construction.

    Some use cases include potentially doing multiple things at the same time, like planting different kinds of seeds, using different surface sensors, or driving both nails and rivets. And since quadrupeds have four feet, they could potentially host four completely different tools, and the software that the researchers developed for VERO can be slightly modified to put whatever foot you want on whatever spot you need.

    VERO: A Vacuum‐Cleaner‐Equipped Quadruped Robot for Efficient Litter Removal, by Lorenzo Amatucci, Giulio Turrisi, Angelo Bratta, Victor Barasuol, and Claudio Semini from IIT, was published in the Journal of Field Robotics.

  • A New Specialized Train Is Ready to Haul Nuclear Waste
    by Shannon Cuthrell on 17. July 2024. at 12:00



    Nuclear power plants are on track to generate more than 140,000 tonnes of spent nuclear fuel (SNF) before 2060. Every year, 2,000 tonnes of radioactive heavy metal join the growing inventory of fuel removed from nuclear power reactors—both operating and decommissioned. In the coming decades, the U.S. Department of Energy (DOE) will need to transport that material to future storage facilities.

    The amount of existing spent fuel—90,000 tonnes—is already outgrowing currently available storage options. Storage casks and cooling pools have reached maximum capacity at many of the 75 plants that host SNF on-site. This material was never intended to stay at these sites long-term. But, with permanent storage efforts gridlocked since the early 2000s, a once-temporary fix became the status quo, leaving the DOE paying US $10.6 billion to cover utilities’ storage costs.

    However, regulators and lawmakers are finally moving the needle. Congress recently directed the DOE to seek an interim consolidated storage site to hold SNF until a permanent solution becomes available—likely a geologic repository located between 300 and 1,000 meters underground. Still, this future repository is at least a decade away.

    In the meantime, the DOE is tackling a secondary challenge: Modernizing existing railcars to accommodate the eventual scale-up of SNF shipments. The result is Atlas, a multi-car system designed to move about 217 tonnes of SNF and high-level radioactive waste to future storage and disposal destinations.

    After a decade and $33 million of development, the Association of American Railroads (AAR) recently cleared the 12-axle system to operate on all major freight railroads in the United States. Atlas’s main railcar bears an SNF container held in place by a 7-tonne cradle and two 10-tonne end stops. Two buffer railcars provide safe spacing between the main railcar and the two locomotives powering the train, as well as a rail escort vehicle (REV) caboose that carries armed security staff for surveillance. The U.S. Navy co-developed the escort vehicle, to replace its own aging REV fleet, which is used to escort naval SNF and classified ship components by rail. Atlas employs both cell and satellite communications and a mesh radio link to stay in touch with the cabs.

    Five workers in vests and hard hats stand in front of a flat rail car. Behind them looms a larger horizontal container. Engineers finished final testing on Atlas in September 2023 with a 2,700-kilometer trip from Colorado to Idaho. U.S. Department of Energy

    Historically, both trucks and trains have transferred thousands of shipments of irradiated nuclear fuel between DOE research sites, utility-owned reactors, and New Mexico’s Waste Isolation Pilot Plant, the nation’s only deep geologic repository for weapons-generated waste. While trucks’ legal weight limit is 36 tonnes, rail can efficiently handle high-capacity SNF casks and contaminated soil from cleanup sites in one shipment. Atlas’s advanced real-time monitoring system builds on these capabilities.

    Atlas comes as nuclear remains a key contributor of clean energy in the U.S., surpassing wind and solar to generate 18.6 percent of the nation’s electricity last year—enough for over 70 million homes. Despite their high capacity, nuclear reactors produce a relatively low volume of waste: the annual SNF output would fill less than half the volume of an Olympic-size swimming pool.

    After fuel is spent inside a reactor, plant operators immerse fuel assemblies in 40-foot concrete pools lined with steel to isolate radiation. Once it’s cooled for at least five years, SNF moves to steel canisters shielded by an outer layer of concrete, steel, or both. These dry casks can stay on-site for 40 years.

    A map of the United States showing spent nuclear fuel storage locations. Spent nuclear fuel is stored across the United States, with much of it thousands of kilometers away from existing and future storage sites. U.S. Government Accountability Office

    In the 1980s, the Nuclear Waste Policy Act mandated the DOE to start permanently disposing of SNF in an underground repository at Nevada’s Yucca Mountain. However, social and political opposition ultimately quashed the hotly contested project. The regulatory complexity of a permanent storage solution remains a critical barrier in SNF management, particularly amid uncertainty about the safety of long-term dry storage. As the DOE is in the early stages of siting a federal interim facility, SNF will likely remain at plants until the late 2030s.

    The DOE says Atlas’s development spanned 10 years due to the complexity of AAR’s S-2043, the strictest standard for freight railcars transporting SNF and high-level radioactive waste in North America. Atlas has a suite of sensors tracking 11 performance parameters required by S-2043, such as bearing conditions, speed, rocking, and braking. The integrated security and safety monitoring system features mechanisms to prevent derailments from equipment failure or degradation.

    The DOE initially envisioned Atlas as an eight-axle railcar. During the conceptual design phase, computer modeling indicated the train’s performance might not meet all S-2043 requirements. Around this time, the Nuclear Regulatory Commission (NRC) certified a new 190-tonne cask, which is too heavy for axle loadings on a smaller railcar. These circumstances inspired a 12-axle redesign.

    An illustration of the 2 locomotives, SNF railcars and container, and escort railcar. The Atlas railcars are separated from the locomotives and escort railcar by empty buffer cars to maintain a safe distance from the spent nuclear fuel. U.S. Department of Energy

    The railcars and locomotives completed a roughly 2,700-kilometer (1,680-mile) demonstration to ensure on-track compatibility and safety. Traveling smoothly from Colorado to Idaho, the test simulated the heaviest NRC-certified cask with steel dummy weights totaling almost 220 tonnes (480,000 pounds), accompanied by a REV, buffer cars, and Union Pacific Railroad locomotives.

    Heavier SNF containers demand the dozen axles that Atlas provides, but eight axles can move relatively lighter packages of at least 72 tonnes more efficiently. After Atlas transitioned to a 12-axle railcar, the DOE initiated an eight-axle project for smaller payloads. The AAR approved the design in 2021, and prototype fabrication began this year.

    Fortis, the eight-axle railcar, uses the same payload attachment system, monitoring system, REV, and buffer vehicles and is expected to be completed in the late 2020s. “Both railcars will provide the DOE with flexibility to use the right rail equipment for the job,” a DOE spokesperson told IEEE Spectrum.

  • Andrew Ng: Unbiggen AI
    by Eliza Strickland on 09. February 2022. at 15:31



    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


    Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

    The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

    Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

    When you say you want a foundation model for computer vision, what do you mean by that?

    Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

    What needs to happen for someone to build a foundation model for video?

    Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

    Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

    It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

    Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

    “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
    —Andrew Ng, CEO & Founder, Landing AI

    I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

    I expect they’re both convinced now.

    Ng: I think so, yes.

    Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

    How do you define data-centric AI, and why do you consider it a movement?

    Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

    When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

    The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

    You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

    Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

    When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

    Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

    “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
    —Andrew Ng

    For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
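
    A minimal version of the kind of tool Ng is describing, in Python: given several annotators’ labels per image, flag the images where the annotators disagree so those get relabeled first. This is an illustrative sketch, not LandingLens code, and the class names are invented.

        from collections import Counter

        def flag_inconsistent(labels_per_image, min_agreement=1.0):
            """Return (image_id, agreement, labels) for images below the agreement threshold."""
            flagged = []
            for image_id, labels in labels_per_image.items():
                top_count = Counter(labels).most_common(1)[0][1]
                agreement = top_count / len(labels)
                if agreement < min_agreement:
                    flagged.append((image_id, agreement, labels))
            return flagged

        # Toy example: three annotators label a handful of defect images.
        labels = {
            "img_001": ["scratch", "scratch", "scratch"],
            "img_002": ["scratch", "dent", "scratch"],
            "img_003": ["pit_mark", "discoloration", "dent"],
        }
        for image_id, agreement, lbls in flag_inconsistent(labels):
            print(f"{image_id}: agreement={agreement:.2f}, labels={lbls}")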

    Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

    Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

    One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

    When you talk about engineering the data, what do you mean exactly?

    Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

    For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.

    What about using synthetic data, is that often a good solution?

    Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

    Do you mean that synthetic data would allow you to try the model on more data sets?

    Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
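
    In practice, “generate more data just for the pit-mark category” means letting error analysis pick the weak class and then conditioning the synthetic-data (or augmentation) step on it. The sketch below is a generic illustration of that targeting logic; the class names, counts, and the synthesize callback are all invented, and a real pipeline would plug in an actual generator.

        import random

        def error_rates_by_class(eval_results):
            """eval_results maps class name -> {'errors': n, 'total': m} from error analysis."""
            return {c: r["errors"] / r["total"] for c, r in eval_results.items()}

        def generate_targeted_samples(images_by_class, weak_class, synthesize, n_new=500):
            """Create extra training samples only for the class the model struggles with."""
            seeds = images_by_class[weak_class]
            return [synthesize(random.choice(seeds)) for _ in range(n_new)]

        # Hypothetical evaluation summary for a smartphone-casing inspection model.
        results = {
            "scratch": {"errors": 4, "total": 400},
            "dent": {"errors": 6, "total": 380},
            "pit_mark": {"errors": 57, "total": 150},
        }
        rates = error_rates_by_class(results)
        weakest = max(rates, key=rates.get)      # -> "pit_mark" in this toy example
        print("generate synthetic data for:", weakest)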

    “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
    —Andrew Ng

    Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.

    To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

    Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

    One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

    How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

    Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
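
    One common way to “flag when there’s a significant data-drift issue,” as Ng puts it, is to compare the distribution of features from recent production images against a reference window from training time, for example with a per-feature two-sample Kolmogorov–Smirnov test. The sketch below is a generic illustration rather than Landing AI’s tooling; the feature set and threshold are assumptions.

        import numpy as np
        from scipy.stats import ks_2samp

        def drifted_features(reference, current, p_threshold=0.01):
            """Return indices of feature columns whose distribution shifted significantly."""
            flagged = []
            for col in range(reference.shape[1]):
                _, p_value = ks_2samp(reference[:, col], current[:, col])
                if p_value < p_threshold:
                    flagged.append(col)
            return flagged

        rng = np.random.default_rng(0)
        train_features = rng.normal(0.0, 1.0, size=(5_000, 3))   # e.g. brightness, contrast, sharpness
        prod_features = rng.normal(0.0, 1.0, size=(5_000, 3))
        prod_features[:, 0] += 0.8                               # simulate a lighting change in the factory

        print("drifted feature columns:", drifted_features(train_features, prod_features))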

    In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

    So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

    Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

    Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

    Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.

    This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”

  • How AI Will Change Chip Design
    by Rina Diane Caballar on 08. February 2022. at 14:00



    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

    Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

    But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

    How is AI currently being used to design the next generation of chips?

    Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

    Portrait of a woman with blonde-red hair smiling at the camera. Heather Gorr MathWorks

    Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

    What are the benefits of using AI for chip design?

    Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
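
    The workflow Gorr describes maps onto standard regression tooling: run the expensive physics-based simulation at a modest number of design points, fit a cheap surrogate to those results, then do the parameter sweeps and Monte Carlo runs on the surrogate. The sketch below uses scikit-learn’s Gaussian process regressor and a toy stand-in for the simulator; it illustrates the pattern, not MathWorks’ implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_simulation(x):
            """Toy stand-in for a slow physics-based model of some chip figure of merit."""
            return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        # 1. Run the real simulation at a small number of design points.
        rng = np.random.default_rng(0)
        X_train = rng.uniform(-1, 1, size=(40, 2))
        y_train = expensive_simulation(X_train)

        # 2. Fit a cheap surrogate model to those results.
        surrogate = GaussianProcessRegressor().fit(X_train, y_train)

        # 3. Sweep or Monte Carlo over the surrogate instead of the simulator.
        X_sweep = rng.uniform(-1, 1, size=(100_000, 2))
        y_pred = surrogate.predict(X_sweep)
        print("predicted best design point:", X_sweep[np.argmax(y_pred)])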

    So it’s like having a digital twin in a sense?

    Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

    So, it’s going to be more efficient and, as you said, cheaper?

    Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

    We’ve talked about the benefits. How about the drawbacks?

    Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

    Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

    One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

    How can engineers use AI to better prepare and extract insights from hardware or sensor data?

    Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
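
    As a concrete example of the synchronization and resampling step Gorr mentions, two sensors sampled at different rates can be interpolated onto a shared time base before any frequency-domain comparison. A minimal NumPy/SciPy sketch, with assumed rates and a synthetic 50-hertz component:

        import numpy as np
        from scipy.signal import welch

        # Two sensors sampled at different, unsynchronized rates.
        fs_a, fs_b = 1_000, 640                      # assumed sample rates, in hertz
        t_a = np.arange(0, 2, 1 / fs_a)
        t_b = np.arange(0, 2, 1 / fs_b)
        rng = np.random.default_rng(0)
        sensor_a = np.sin(2 * np.pi * 50 * t_a)
        sensor_b = np.sin(2 * np.pi * 50 * t_b + 0.3) + 0.1 * rng.standard_normal(t_b.size)

        # Resample both onto a shared time base by interpolation.
        fs_common = 500
        t_common = np.arange(0, 2, 1 / fs_common)
        a_sync = np.interp(t_common, t_a, sensor_a)
        b_sync = np.interp(t_common, t_b, sensor_b)

        # Frequency-domain analysis is now apples to apples.
        freqs, psd_a = welch(a_sync, fs=fs_common)
        _, psd_b = welch(b_sync, fs=fs_common)
        print("dominant frequency, sensor A:", freqs[np.argmax(psd_a)], "Hz")
        print("dominant frequency, sensor B:", freqs[np.argmax(psd_b)], "Hz")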

    One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

    What should engineers and designers consider when using AI for chip design?

    Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

    How do you think AI will affect chip designers’ jobs?

    Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

    How do you envision the future of AI and chip design?

    Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.

  • Atomically Thin Materials Significantly Shrink Qubits
    by Dexter Johnson on 07. February 2022. at 16:12



    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

    IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

    Now researchers at MIT have been able to reduce the size of the qubits, and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

    “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

    The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

    Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

    Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Photo: Nathan Fiske/MIT]

    In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects and are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

    As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
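
    To see why a thin, low-loss dielectric changes the footprint so much, here is a back-of-the-envelope parallel-plate estimate. The target capacitance (~100 femtofarads), hBN permittivity (~3), and dielectric thickness (~5 nanometers) are rough assumptions of mine, not figures reported for the MIT device:

        # Rough parallel-plate estimate: C = eps0 * eps_r * A / d
        # (all numerical values below are illustrative assumptions).
        eps0 = 8.854e-12      # vacuum permittivity, F/m
        eps_r = 3.0           # approximate out-of-plane permittivity of hBN
        d = 5e-9              # dielectric thickness, m (a few nm of stacked hBN)
        C_target = 100e-15    # assumed target shunt capacitance, F

        area = C_target * d / (eps0 * eps_r)   # required plate area, m^2
        side_um = area ** 0.5 * 1e6            # edge of a square plate, in µm
        print(f"~{side_um:.1f} µm on a side, versus ~100 µm for a coplanar plate")

    Under these assumptions the plate comes out at roughly 4 micrometers on a side, which is consistent in spirit with the density gains described here, though the actual device dimensions may differ.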

    In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

    “We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

    On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

    While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

    “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

    This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

    “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

    Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.