
Speakers – Spring 2026

IER Seminar Series Speakers

Upcoming Seminars

Peter Whitney – Hardware and Design Advances Towards Autonomous Robotic Manipulation

📅 February 25, 2026 | 11:45 AM – 1:15 PM ET

📍 Location: Churchill Hall 103

📋 RSVP Here

Abstract:
Autonomous robotic manipulation is rapidly advancing through new machine learning methods, new sources of data, and scaling efforts that mirror approaches from training large language models. Hardware engineering and mechanical design play important supporting and leading roles in these efforts, such as the design of adaptive grippers, the engineering of teleoperation systems, and the development of new methods for collecting large-scale human manipulation demonstrations, for example with handheld UMI devices. I will discuss some UMI-type data collection and policy training work I contributed to while on sabbatical at the RAI Institute, and also highlight the efforts of Team Northeastern in the international ANA Avatar XPRIZE competition, where our team placed 3rd, winning a $1,000,000 prize.

Biosketch:

Peter Whitney is an Associate Professor in the Department of Mechanical and Industrial Engineering at Northeastern University and a core member of the Institute for Experiential Robotics. His research focuses on the design of human-safe robots, medical robotics, soft robotics, and microrobotics. He earned his Ph.D. in Engineering Sciences from Harvard University. Dr. Whitney’s lab develops lightweight, compliant robotic arms with remote-direct-drive actuation for contact-rich manipulation and teleoperation. His team placed third globally in the ANA Avatar XPRIZE competition, the highest-ranked U.S.-based team.
M. Ani Hsieh

📅 March 11, 2026 | 11:45 AM – 1:15 PM ET

📍 Location: Churchill Hall 103

Abstract:

To be determined.

Biosketch:

M. Ani Hsieh is an Associate Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania and Deputy Director of the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory. Her research lies at the intersection of robotics, multi-agent systems, and dynamical systems theory, with a focus on designing algorithms for estimation, control, and planning for multi-robot systems with applications in environmental monitoring and collective behaviors. She received her B.S. in Engineering and B.A. in Economics from Swarthmore College and her Ph.D. in Mechanical Engineering from the University of Pennsylvania. Prior to Penn, she was an Associate Professor at Drexel University.
Max Shepherd

📅 March 25, 2026 | 11:45 AM – 1:15 PM ET

📍 Location: Churchill Hall 103

Abstract:

To be determined.

Biosketch:

Max Shepherd is an Assistant Professor with a joint appointment in Mechanical and Industrial Engineering and Physical Therapy, Movement, and Rehabilitation Sciences at Northeastern University. His research seeks to improve the individualized design and control of robotic prosthetics and exoskeletons for people with mobility impairments, spanning gait biomechanics, machine learning, mechatronic design, and human motor control. He earned his Ph.D. in Biomedical Engineering from Northwestern University and was a postdoctoral researcher at Georgia Tech. He has previously worked at X (formerly Google X) and co-authored research published in Nature on task-agnostic exoskeleton control.
Frank Dellaert

📅 April 08, 2026 | 11:45 AM – 1:15 PM ET

📍 Location: Churchill Hall 103

Abstract:

To be determined.

Biosketch:

Frank Dellaert is a Professor in the School of Interactive Computing at the Georgia Institute of Technology. His research focuses on robotics and computer vision, with particular interests in Bayesian inference, simultaneous localization and mapping (SLAM), and factor graph-based optimization. He is the creator of GTSAM, a widely used library for smoothing and mapping in robotics. He co-developed the Monte Carlo localization algorithm, now a standard tool in mobile robotics. He earned his Ph.D. in Computer Science from Carnegie Mellon University and has held industry roles as Chief Scientist at Skydio and Technical Project Lead at Facebook Reality Labs.

Past Seminars

Derya Aksaray – Resilient Autonomy Using Formal Methods

📅 February 11, 2026 | 11:45 AM – 1:15 PM ET

📍 Location: Churchill Hall 103

Abstract:
Autonomous robots operating in open-world environments must handle situations that cannot be fully anticipated a priori since new obstacles, tasks, or environmental changes can emerge during execution. While robustness focuses on maintaining performance under bounded disturbances, resilience demands a different capability: the ability to react through reasoning about failure, replanning, and adaptation when the robot encounters situations that invalidate its current plan.

In this talk, I will present some of our recent work on resilient autonomy using formal methods, where high-level robot missions are specified using temporal logics and plans are synthesized with correctness guarantees. When changes in the environment trigger replanning (or make the original mission infeasible), we introduce a principled methodology for minimally relaxing the mission, which allows robots to adapt their spatial objectives, temporal constraints, or logical task structure while preserving formal guarantees. By leveraging techniques from formal methods, this work enables autonomous robots to replan and adapt their missions while maintaining performance guarantees in open-world environments.

Biosketch:

Derya Aksaray is an Assistant Professor in the Department of Electrical and Computer Engineering at Northeastern University and a core member of the Institute for Experiential Robotics. Previously, she was an Assistant Professor in the Department of Aerospace Engineering and Mechanics at the University of Minnesota and held post-doctoral researcher positions at the Massachusetts Institute of Technology and Boston University.

She received her Ph.D. degree in Aerospace Engineering from the Georgia Institute of Technology. Her research interests lie in robotics, formal methods, control theory, and machine learning.
Nicholas Barbara – Neural Networks in the Loop: Learning Controllers with Stability and Robustness Guarantees

📅 February 03, 2026 | 11:00 AM – 12:15 PM ET

📍 Location: EXP-610

Abstract:
Deep reinforcement learning (RL) is a powerful tool for robotic control design. It relies on simple, gradient-based optimisation schemes, and typically parametrises controllers using black-box deep neural networks (NNs). However, black-box approaches such as deep RL suffer from a fundamental lack of closed-loop stability and robustness. In this talk, we tackle the problem of learning NN controllers with built-in stability and robustness guarantees. Our approach leverages recent developments in robust NNs, networks that automatically satisfy internal stability and robustness properties. We present two novel methods: (1) parametrising controllers with robust (Lipschitz-bounded) NNs to improve their empirical robustness in open loop; and (2) parametrising controllers by combining robust Recurrent Equilibrium Networks (RENs) with a nonlinear version of the classical Youla-Kučera parametrisation to achieve closed-loop stability (contraction) and robustness (Lipschitz) guarantees. Our resulting “Youla-REN” controller class automatically satisfies these closed-loop guarantees, making it compatible with standard gradient-based optimisation pipelines such as deep RL. We summarise our recent theoretical results, provide illustrative numerical examples, and pose directions for future work applying robust NN controllers to real-world robotic systems.

Biosketch:

Nicholas Barbara is a postdoctoral research associate in the Centre for Complex Systems at the University of Sydney. He received the B. Eng. (Hons 1, University Medal) and B. Sci degrees (2021), and the Ph.D. degree (2025) from the University of Sydney, completing his Ph.D. as a member of the Australian Centre for Robotics. His research involves developing new ML tools to control, predict, and model complex dynamical systems. Nicholas has a broad range of research interests including deep learning for robot control, robust machine learning, time-series analysis, and spacecraft GNC.
Conor Walsh – Soft Wearable Robotics for Augmenting and Restoring Human Performance

📅 January 28, 2026 | 11:45 AM – 1:15 PM ET

📍 Location: Snell Engineering 108

Abstract:
Wearable robotic systems have the potential to enhance, assist, and restore human mobility across a wide range of applications, from industrial work and rehabilitation to everyday life. This talk will present recent advances in the design and deployment of lightweight, soft, and human-centered wearable robots developed in the Harvard Biodesign Lab. Dr. Walsh will discuss key challenges in sensing, actuation, control, and human–robot interaction, as well as insights gained from translating laboratory prototypes into real-world systems. The talk will highlight how interdisciplinary approaches that integrate engineering, biomechanics, and user-centered design can enable wearable robots that effectively work with, rather than against, the human body.

Biosketch:

Conor Walsh is the Paul A. Maeder Professor of Engineering and Applied Sciences at Harvard University and a founding core faculty member of the Wyss Institute for Biologically Inspired Engineering. He directs the Harvard Biodesign Lab, where his research focuses on the development of wearable robotic and biomechatronic systems that augment, assist, and restore human movement. His work spans soft robotics, human–robot interaction, biomechanics, and translational research aimed at real-world impact. He is a co-founder of multiple startup companies translating wearable robotic technologies into clinical and commercial use.
IER Seminar Series – Previous Speakers

Speakers Fall 2025


Hangbo Zhao – Model-Based 3D Shape Reconstruction of Soft Robots Enabled by Soft Strain Sensors

Proprioception—the robot’s sense of internal state—depends on knowledge of body configuration in space. Three-dimensional (3D) shape sensing provides this configuration for proprioception in soft robots. Soft strain sensors, which can be integrated directly into soft robot bodies, enable 3D shape sensing without external vision or altering intrinsic properties. Dr. Zhao will present the development of stretchable, low-hysteresis strain sensors and a model-based framework that reconstructs soft robot shapes during deformation. The talk highlights applications in soft grippers, bioinspired robotic arms, and soft actuators with distributed sensing.

Biosketch: Hangbo Zhao is an Assistant Professor and the Philip and Cayley MacDonald Endowed Early Career Chair in the Department of Aerospace and Mechanical Engineering at the University of Southern California. His research focuses on soft electronics, bioinspired robotics, and intelligent materials for biomedical and robotic applications. Before joining USC, he was a postdoctoral researcher at Northwestern University and earned his Ph.D. in Mechanical Engineering from MIT. His work bridges materials science and robotics to create soft, intelligent systems with advanced proprioceptive capabilities.


Karthik Ramani – Spatial Computing: When Physical-AI Comes Alive

The convergence of sensors, spatial interfaces, and large visual-language AI models is transforming how we perceive, understand, and act in the physical world. Unlike traditional computing paradigms, embodied systems share our viewpoint and real-time context—enabling seamless spatial interaction. In this talk, Dr. Ramani presents three themes from his research on the future of Physical AI—where human experience, spatial computing, and intelligent systems converge to augment physical understanding and action. First, he discusses authoring environments that empower non-programmers to easily create immersive extended-reality applications using agentAR, a system enabling subject-matter experts to author spatial learning experiences via voice and gesture. Second, he highlights designfromX, a platform integrating vision and language models to transform verbal prompts and sketches into 3D designs, allowing humans and AI to co-create 3D models. Third, he presents applications of embodied Physical AI in task performance and skill augmentation, including avaTTar, an extended-reality table-tennis-playing coach, and a humanoid table-tennis robot that illustrates the future of embodied AI. Together, these systems point to a future where Physical AI enhances how we design, train, and learn—expanding human potential across engineering, production, sports, and beyond.

Biosketch: Karthik Ramani is the Donald W. Feddersen Distinguished Professor of Mechanical Engineering at Purdue University, with additional appointments in Electrical and Computer Engineering and the College of Education. He leads the Convergence Design Lab, where his research brings AI into the physical world by blending human-centered AI with spatial intelligence to create immersive, real-time solutions for design, manufacturing, sports training, surgery, and hands-on learning. His work spans augmented spatial interactions, symbiotic human-AI collaboration, computational design thinking and prototyping, and scalable upskilling platforms for production. He has published in CVPR, ECCV, ICCV, CHI, UIST, NIPS, and ICLR, and founded VizSeek and ZeroUI. He holds degrees from IIT Madras, Ohio State, and Stanford.


Ian Abraham – Optimality and Robustness in Robotic Exploration

Effective exploration is a vital component in robotic applications such as ocean and space exploration, environmental monitoring, and search-and-rescue tasks. This talk presents a novel formulation of exploration that permits optimality criteria and performance guarantees for robotic exploration tasks. The framework treats exploration as a coverage problem on continuous spaces using ergodic theory and derives control methods that satisfy key notions of optimality and robustness such as asymptotic coverage, set-invariance, time-optimality, and reachability. The approach will be demonstrated across several robotic systems and future directions for robot learning will be discussed.

Biosketch: Ian Abraham is an Assistant Professor in Mechanical Engineering with a courtesy appointment in Computer Science at Yale University. His research focuses on real-time optimal control and data-efficient robotic learning for autonomous systems. Before joining Yale, he was a postdoctoral researcher at the Carnegie Mellon Robotics Institute, and he earned his Ph.D. in Mechanical Engineering from Northwestern University. His contributions to robust model-based control and exploration have been recognized with the NSF CAREER Award (2023).


Kris Dorsey – Soft system seeking sensor soulmate

Physically soft actuators have applications to human-machine interfaces, wearable devices for human health, robotics, and bioinspired locomotion. Due to their continuum-like structures, a key challenge is creating sensors for self- and external sensing to facilitate control and reconfigurability. Kris will present her recent work in origami robots and wearable devices, and discuss the outlook for such sensors in wearable healthcare applications, soft robotics, and beyond.

Biosketch: Kris Dorsey is an Associate Professor in Electrical and Computer Engineering and Physical Therapy, Movement, and Rehabilitation Sciences, and a core faculty member at the Institute for Experiential Robotics at Northeastern University. Her research focuses on reconfigurable and active soft sensors for wearable medical and robotic applications. Dorsey’s work has been recognized by the NSF CAREER Award and the Emerging Leader ABIE Award in honor of Denice Denton.


Odest Chadwicke Jenkins – Defining the Discipline of Robotics for Excellence and Equity through Humanoid Robotics

What is the best major for a student to become a roboticist? At the University of Michigan, Professor Jenkins led efforts to answer this question through the design of an undergraduate Robotics Major that unites theory, design, and societal impact. His talk discusses the curricular innovation that defines robotics as a discipline centered on excellence and equity, the development of humanoid mobile manipulation robots such as Agility Robotics’ Digit, and Michigan’s Distributed Teaching Collaboratives model that connects Historically Black Colleges and Universities with R1 research institutions to broaden pathways in robotics and AI.

Biosketch: Odest Chadwicke Jenkins is a Professor of Robotics and Electrical Engineering and Computer Science at the University of Michigan. His research explores robot learning from demonstration and human-robot interaction, particularly in dexterous mobile manipulation and perception. He serves as Vice President for Educational Activities for the IEEE Robotics and Automation Society and Program Chair for AAAI-26, and was the founding Editor-in-Chief of the ACM Transactions on Human-Robot Interaction. Professor Jenkins is a Fellow of the AAAI and AAAS and recipient of the ACM/CMD-IT Richard A. Tapia Achievement Award for Scientific Scholarship and Diversifying Computing.


Chris Atkeson – Robot Learning in Reality

This talk explores how robots can learn complex dynamic tasks efficiently in the real world without relying on extensive simulation or precise models. Dr. Atkeson will present several case studies—including swing-up control, food preparation, and exoskeleton assistance—that highlight robot learning from a small number of real-world trials. The talk discusses methods for achieving high data efficiency and adaptability when training robots directly on physical systems.

Biosketch: Chris Atkeson is a Professor in the Robotics Institute and the Human-Computer Interaction Institute at Carnegie Mellon University. His research focuses on developing robots capable of learning new motor skills with human-like adaptability. He has contributed extensively to humanoid robotics, assistive exoskeletons, and machine learning for control. Dr. Atkeson’s group maintains an active online presence sharing research and demonstrations on YouTube.


Jeffrey Lipton – Robots with a Twist

Robot bodies derive their abilities from the materials they are made of. Soft robotics aims to rethink this foundation by introducing flexibility and compliance into robotic design. In this talk, Dr. Lipton presents how metamaterial robotics enables robots to grasp, walk, and interact safely with people while maintaining precise actuation and control. The discussion will cover the complete design pipeline—from mathematical modeling and material development to functional robots—with applications in manufacturing and human-robot collaboration.

Biosketch: Jeffrey Lipton is an Assistant Professor in the Department of Mechanical and Industrial Engineering at Northeastern University. His research explores the intersection of robotics, materials, and digital fabrication, focusing on 3D printing and metamaterial design. Previously, he served on the faculty at the University of Washington and completed postdoctoral research at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Dr. Lipton’s work has influenced the development of 3D-printed foods, adaptive manufacturing systems, and new approaches to robotic form and function.

Speakers Spring 2025


Stefanie Tellex – Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. Existing approaches use action-based representations that do not capture the goal-based meaning of a language expression and do not generalize to partially observed environments. The aim of my research program is to create autonomous robots that can understand complex goal-based commands and execute those commands in partially observed, dynamic environments. I will describe demonstrations of object-search in a POMDP setting with information about object locations provided by language, and mapping between English and Linear Temporal Logic, enabling a robot to understand complex natural language commands in city-scale environments. These advances represent steps toward robots that interpret complex natural language commands in partially observed environments using a decision-theoretic framework.

Biosketch: Stefanie Tellex is an Associate Professor of Computer Science at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS, and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum’s AI’s 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, a NASA Early Career Award in 2016, a 2016 Sloan Research Fellowship, and an NSF CAREER Award in 2017. Her work has been featured in National Public Radio, BBC, MIT Technology Review, Wired, and the New Yorker. She was named one of Wired UK’s Women Who Changed Science in 2015 and listed in MIT Technology Review’s Ten Breakthrough Technologies in 2016.


Tobias Fischer – This seminar presents novel deep-learning approaches that reduce label dependency for automated analysis of imagery collected by robotic underwater and surface vehicles. With the increasing use of robotics to study coral reefs and seagrass meadows, vast amounts of imagery are generated. Traditionally, analyzing this data has been challenging, time-consuming, and expensive due to heavy reliance on marine experts. Our research addresses this challenge through several key contributions: (i) Development of efficient data collection and annotation methodologies, (ii) Implementation of seagrass segmentation techniques using only image-level labels, (iii) Using large language models as a supervisory signal in marine domain applications, and (iv) creation of human-in-the-loop labelling regimes that minimize expert intervention. I will also discuss our current research with the Reef Restoration and Adaptation Program and the Australian Institute of Marine Science, focusing on the automated deployment of coral re-seeding devices to optimal seafloor locations for survival. Finally, I will present our work on long-term monitoring of dynamic underwater environments through image-based relocalisation techniques, demonstrating how these methods support sustained ecological observation.

Biosketch: Dr Tobias Fischer is a Senior Lecturer (US: Associate Professor) and ARC DECRA Fellow at the Queensland University of Technology. He conducts research in robot localisation and underwater perception, blending neuroscience and robotics to push the boundaries of intelligent systems operating under resource constraints. Dr Fischer has secured over $3M in competitive funding, and currently is a Chief Investigator of the $1.3M Reef Restoration and Adaptation Program and the $700k Queensland Quantum Technologies Talent Building Program, with previous grants from Intel Labs and Amazon. He obtained a PhD from Imperial College and has published 50 papers in prestigious venues including PAMI, TRO, CVPR, ECCV, ICCV, IJCAI, ICRA, and IROS. Dr Fischer received the UK Best PhD in Robotics Award and several best paper and best poster awards. He has been serving as an Area Chair / Associate Editor for leading conferences (ICRA/IROS/RSS) and journals (RAL). As a co-chair of the IEEE-RAS Women in Engineering Committee, he actively promotes gender diversity in STEM. His PhD students have gone on to successful careers as Assistant Professor, Research Scientists, and Research Fellows. Website: https://www.tobiasfischer.info


Nima Fazeli – Dexterous tool manipulation is a dance between tool motion, deformation, and force transmission choreographed by the robot’s end-effector. Take for example the use of a spatula. How should the robot reason jointly over the tool’s geometry and forces imparted to the environment through vision and touch? In this talk, I will present our recent progress on touch-centric approaches to dexterous tool manipulation: multimodal compliant tool representations via neural implicit representations and our recent progress on tactile control with high-resolution and highly deformable tactile sensors. Our methods seek to address two fundamental challenges in object manipulation. First, the frictional interactions between these objects and their environment is governed by complex non-linear mechanics, making it challenging to model and control their behavior. Second, perception of these objects is challenging due to both self-occlusions and occlusions that occur at the contact location (e.g., when wiping a table with a sponge, the contact is occluded). We will demonstrate how implicit functions can seamlessly integrate with robotic sensing modalities to produce high-quality tool deformation and contact patches and how high-resolution tactile controllers can enable robust tool-use behavior despite the complex dynamics induced by the sensor mechanical substrate. We’ll conclude the talk by discussing future directions for dexterous tool-use.

Biosketch: Nima Fazeli is an Assistant Professor of Robotics at the University of Michigan (2020-Present) and affiliate faculty of Computer Science and Engineering (CSE) in EECS and Mechanical Engineering at UM. Nima is also the director of the Manipulation and Machine Intelligence (MMint) Lab. Nima’s primary research interest is enabling intelligent and dexterous robotic manipulation with emphasis on the tight integration of mechanics, perception, controls, learning, and planning. Nima received his PhD from MIT (2019) and completed his postdoctoral training (2020) working with Prof. Alberto Rodriguez. He received his MSc from the University of Maryland at College Park (2014), where he spent most of his time developing models of the human (and, on occasion, swine) arterial tree for cardiovascular disease, diabetes, and cancer diagnoses. His research has been supported by the NSF CAREER award, the National Robotics Initiative, the Advanced Manufacturing program, and the Rohsenow Fellowship, and has been featured in outlets such as The New York Times, CBS, CNN, and the BBC.


Michael Everett – This talk will cover some of our group’s recent work on large-scale mapping, motion planning in high-risk environments, and proving that learned controllers on expensive robots might not be as bad of an idea as some people think. Here are the papers that I’ll focus on: arXiv 2410.02961, arXiv 2412.09777, arXiv 2403.03314.

Biosketch: I am currently an Assistant Professor at Northeastern University, with a joint appointment in the Department of Electrical & Computer Engineering and the Khoury College of Computer Sciences. I direct the Autonomy & Intelligence Laboratory at Northeastern University. Previously, I was a Visiting Faculty Researcher with Google’s People + AI Research (PAIR) team, developing novel techniques for explainable and trustworthy AI. Before that, I was a Research Scientist and Postdoctoral Associate at the MIT Department of Aeronautics and Astronautics. I received the PhD (2020), SM (2017), and SB (2015) degrees from MIT in Mechanical Engineering.


Kevin Chen – Flapping-wing flight at the insect scale is incredibly challenging. Insect muscles not only power flight but also absorb in-flight collisional impact, making these tiny flyers simultaneously agile and robust. In contrast, existing aerial robots have not demonstrated these properties. Rigid robots are fragile against collisions, while soft-driven systems suffer limited speed, precision, and controllability. In this talk, I will describe our effort in developing a new class of bio-inspired micro-flyers, ones that are powered by high bandwidth soft actuators and equipped with rigid appendages. We constructed the first heavier-than-air aerial robot powered by soft artificial muscles, which can demonstrate a 1000-second hovering flight. In addition, our robot can recover from in-flight collisions and perform somersaults within 0.10 seconds. This work demonstrates for the first time that soft aerial robots can achieve agile and robust flight capabilities absent in rigid-powered micro-aerial vehicles, thus showing the potential of a new class of hybrid soft-rigid robots. I will also discuss our recent progress in incorporating onboard sensors, electronics, and batteries.

Biosketch: Kevin Chen is an associate professor in the Department of Electrical Engineering and Computer Science at MIT. He received his PhD in Engineering Sciences from Harvard University in 2017 and his bachelor’s degree in Applied and Engineering Physics from Cornell University in 2012. His research interests include high bandwidth soft actuators, micro robotics, and aerial robotics. He has published in top journals including Nature, Science Robotics, Advanced Materials, PNAS, Nature Communications, IEEE TRO, and the Journal of Fluid Mechanics. He is a recipient of the Steven Vogel Young Investigator Award, the NSF CAREER Award, the Office of Naval Research Young Investigator Award, multiple best paper awards (TRO 21, RAL 20, IROS 15), and the Ruth and Joel Spira Teaching Excellence Award.


Hanumant Singh – In recent years, researchers in our lab have spent time in the Arctic, Antarctic, at the bottom of the ocean and driving on the streets of Boston. We have also been at the forefront of perception tasks related to SLAM, autonomous driving and flying in cluttered environments. In this talk I highlight some of our field work, how it meshes with algorithmic advances related to 3D structure from motion both from a geometric and machine learning standpoint, and some thoughts on the challenging problems that need to be addressed in the years to come.

Biosketch: Hanumant Singh is a professor with joint appointments in the ECE and MIE departments at Northeastern University. He received his Ph.D. from the MIT-WHOI Joint Program in 1995, after which he worked on the staff at the Woods Hole Oceanographic Institution until 2016, when he joined Northeastern. His group has designed and built the Seabed AUV, as well as the Jetyak Autonomous Surface Vehicle, dozens of which are in use for scientific and academic research across the globe. He also has strong interests in the development and use of small Unmanned Aerial Systems (UAS) and autonomous cars. He has participated in 65 expeditions in all of the world’s oceans in support of Marine Geology, Marine Biology, Deep Water Archaeology, Chemical Oceanography, Polar Studies, and Coral Reef Ecology. His work has been featured in National Geographic Magazine, the BBC, the New York Times, Wired Magazine, Discover Magazine, and other news and television outlets around the world. At Northeastern, he is co-director of the interdisciplinary MS Robotics program and Director of the Northeastern Institute for Experiential Robotics. In collaboration with his students, his awards include the ICRA Best Student Paper Award, the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, and Best Paper Awards at the Oceans Conference and at AGU. He is a Fellow of the IEEE and has received the IEEE Oceanic Engineering Society Lifetime Achievement Award for his contributions to the design and use of Autonomous Marine Systems.

Speakers 2024

Sheila Russo

Sheila Russo – Sheila Russo is an Assistant Professor in the Department of Mechanical Engineering and the Division of Materials Science and Engineering at Boston University (BU). She received her Ph.D. degree at the BioRobotics Institute, Sant’Anna School of Advanced Studies, Italy. She completed her postdoctoral training at the Harvard John A. Paulson School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering. She is the founder and director of the Material Robotics Laboratory at BU. Her research interests include medical and surgical robotics, soft robotics, origami-inspired mechanisms, sensing and actuation, and meso- and micro-scale manufacturing techniques. In 2020 she received the NIH Trailblazer Award for New and Early Stage Investigators.

Robert Howe

Robert D. Howe – Robert D. Howe is the Abbott and James Lawrence Professor of Engineering at the Harvard Paulson School of Engineering and Applied Sciences and Founding Co-Director of the Harvard MS/MBA Degree Program. Dr. Howe founded the Harvard BioRobotics Laboratory in 1990, which investigates the roles of sensing, mechanical design, and motor control in both humans and robots. His research interests focus on manipulation, the sense of touch, and human-machine interfaces. Biomedical applications of this work include robotic and image-guided surgery. Dr. Howe earned a bachelor's degree in physics from Reed College, then worked as a design engineer in the electronics industry in Silicon Valley. He received a doctoral degree in mechanical engineering from Stanford University in 1990 and then joined the faculty at Harvard. Dr. Howe is a Fellow of the IEEE and the AIMBE and has received Best Paper Awards at mechanical engineering, robotics, and surgery conferences. (Lab Website).

Lerrel Pinto

Lerrel Pinto – Lerrel Pinto is an Assistant Professor of Computer Science at NYU. His research focuses on machine learning for robots. He received his Ph.D. from CMU, after which he did a postdoc at UC Berkeley. His research on robot learning received Best Paper Awards at ICRA 2016 and RSS 2023 and was a finalist at IROS 2019 and CoRL 2022. Lerrel has received the Packard Fellowship and was named a TR35 Innovator Under 35 for 2023. Several of his works have been featured in popular media such as The Wall Street Journal, TechCrunch, MIT Tech Review, Wired, and BuzzFeed, among others. His recent work can be found at (www.lerrelpinto.com).

Gregory Stein

Gregory J. Stein – Greg is an Assistant Professor of Computer Science at George Mason University, where he runs the Robotic Anticipatory Intelligence & Learning (RAIL) Group and is the director of the GMU Autonomous Robotics Lab. His research, at the intersection of robotics, planning, and machine learning, is centered around developing representations for planning and learning that allow robots to better understand the impact of their actions, so that they may plan quickly, intelligently, and reliably in a dynamic and uncertain world. Before joining Mason, he received his PhD in 2020 from MIT’s Department of Electrical Engineering and Computer Science and previously graduated summa cum laude from Cornell University with a B.S. in Applied and Engineering Physics. His work was a finalist for Best Paper at the 2018 Conference on Robot Learning, at which he was additionally awarded Best Oral Presentation.

Dr. Nare Karapetyan

Dr. Nare Karapetyan – Dr. Nare Karapetyan is a Tenure-Track Assistant Scientist at the Woods Hole Oceanographic Institution (WHOI). Her research focuses on planning and exploration problems with heterogeneous multi-agent systems, with applications in the aquatic domain. She aims to develop more efficient, task-oriented exploration techniques for environmental monitoring and survey operations. She often draws inspiration from human expertise in performing specific tasks and strives to incorporate similar reasoning into the algorithms she develops for surface and underwater robots. Dr. Karapetyan was a postdoctoral associate at the Maryland Robotics Center at the University of Maryland (UMD). She received her Ph.D. in Computer Science from the University of South Carolina (UofSC), where she worked at the Autonomous Field Robotics Laboratory (AFRL). She was named a 2022 Breakthrough Graduate Scholar by UofSC and was selected as a 2023 RSS Pioneer. Since 2022, she has served as an Associate Editor (AE) on the RA-L, IROS, and ICRA editorial boards.

Robert Katzschmann

Robert Katzschmann – Robert Katzschmann is an Assistant Professor of Robotics at ETH Zurich, where he leads the Soft Robotics Lab. He is associated with the Center for Robotics (RobotX), the ETH AI Center, and the Center for Learning Systems, a collaboration between ETH and the Max Planck Institute (MPI). His research primarily focuses on developing musculoskeletal robots that effectively combine soft, rigid, and living materials to perform complex tasks in real-world scenarios. Before starting his tenure at ETH Zurich, he served as the CTO of Dexai Robotics and as a Senior Applied Scientist at Amazon Robotics in the USA. He earned his Ph.D. in Mechanical Engineering from the Massachusetts Institute of Technology (MIT) in 2018 and his Diplom from the Karlsruhe Institute of Technology, Germany, in 2013. His work has been published in leading journals such as Nature, Nature Communications, Science Advances, and Science Robotics, as well as at prominent robotics conferences including ICRA, IROS, CoRL, ICLR, ICML, and RoboSoft. In addition to his research, he contributes as an editor for the International Journal of Robotics Research (IJRR) and has organized several workshops for the RoboSoft conference. He also serves as an associate editor for ICRA, IROS, RoboSoft, and RSS, and he is an editorial board member of npj Robotics. His research has been featured in premier news outlets such as the New York Times, the Wall Street Journal, and the BBC.

Speakers 2023

Dean Molinaro

Dean Molinaro – Dean Molinaro is an applied scientist at the AI Institute where he works on the control of robots during dynamic behaviors. He received his PhD in Robotics from the Georgia Institute of Technology, advised by Aaron Young, focusing on generalized control of lower-limb exoskeletons. His mission is to blend robotics and AI to develop robotic systems capable of augmenting our way of life.

Frederike Dumbgen

Frederike Dümbgen – Frederike Dümbgen is currently a postdoctoral researcher at the Robotics Institute of the University of Toronto, working with Prof. Tim Barfoot. She received her Ph.D. in Computer Science in 2021 from the Laboratory of AudioVisual Communications (LCAV) at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, advised by Prof. Martin Vetterli and Dr. Adam Scholefield. Before that, she obtained her B.Sc. and M.Sc. in Mechanical Engineering from EPFL in 2013 and 2016, respectively, with a minor in Computational Science and Engineering and a Master's thesis at the Autonomous Systems Lab of ETH Zürich. Her research has ranged from novel localization methods, in particular acoustic, radio-frequency, and ultra-wideband localization, to, most recently, global optimization for robotics.

Maani Ghaffari

Maani Ghaffari – Maani Ghaffari received his Ph.D. from the Centre for Autonomous Systems (CAS), University of Technology Sydney, NSW, Australia, in 2017. He is currently an Assistant Professor in the Department of Naval Architecture and Marine Engineering and the Department of Robotics at the University of Michigan, Ann Arbor, MI, USA, where he directs the Computational Autonomy and Robotics Laboratory (CURLY). His work on sparse, globally optimal kinodynamic motion planning on Lie groups was a Best Paper Award finalist at the 2023 Robotics: Science and Systems conference. He is a recipient of a 2021 Amazon Research Award. His research interests lie in the theory and applications of robotics and autonomous systems.

Vikash Kumar

Vikash Kumar – Vikash Kumar is an Adjunct Professor at CMU. His research focuses on understanding the fundamentals of embodied (physiological as well as robotic) movements. He finished his Ph.D. at the University of Washington with Prof. Sergey Levine and Prof. Emo Todorov, and his M.S. and B.S. at the Indian Institute of Technology (IIT), Kharagpur. He has also spent time as a Senior Research Scientist at FAIR (Meta AI) and as a Research Scientist at Google Brain and OpenAI. His research leverages data-driven techniques to develop efficient and generalizable paradigms for embodied intelligence. Applications of his work have led to human-level dexterity on anthropomorphic robotic hands, physiological digital twins, low-cost scalable systems capable of contact-rich behaviors, and skilled multi-task, multi-skill robotic agents. His recent focus is on building foundation models for physiological as well as robotic embodied intelligence, primarily using off-domain data. He is the lead creator of MyoSuite and RoboHive and a founding member of the MuJoCo physics engine, now widely used in robotics and machine learning. His work has been recognized with a Best Master's Thesis Award, the Best Manipulation Paper Award at ICRA 2016, a Best Workshop Paper Award at ICRA 2022, and a CIFAR AI Chair in 2020 (declined), and has been widely covered in media outlets such as the New York Times, Reuters, ACM, Wired, MIT Technology Review, and IEEE Spectrum. (Website).
