I have worked on some cool robotics projects during my years at the University of Pennsylvania, and I continue to do so in my spare time.
We recently published our work on the Embeddability of Modular Robot Designs, written jointly with Prof. Sanjeev Khanna of the Computer Science Dept., and Tarik Tosun and Prof. Mark Yim of the Mechanical Engineering Dept., at the IEEE International Conference on Robotics and Automation (ICRA). This is a flagship conference in the field and is considered a premier international forum for robotics researchers to present cutting-edge advances.
In our paper, succinctly summarized in the video on the left, we address the problem of automatically deciding whether a given modular robot design can simulate the functionality of a seemingly different one. We first introduce a novel graph representation for modular robots and formalize the notion of embedding through topological and kinematic conditions. We then develop an efficient algorithm that decides embeddability when the two designs have tree topologies. The algorithm performs two passes over the trees, combining dynamic programming with maximum cardinality matching, and is provided as an open-source tool to the research community. You can find links to the code and paper here.
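To give a flavor of the tree case, here is a minimal, purely topological sketch of that idea: deciding whether a rooted "pattern" tree embeds at the root of a "host" tree by dynamic programming over subtrees, with a maximum bipartite matching (Kuhn's augmenting-path method) at each node to assign pattern children to distinct host children. This illustrative toy ignores the kinematic conditions and module semantics from the paper, and all names are hypothetical, not the actual tool's API.

```python
def embeds(A, a_root, B, b_root):
    """Return True if rooted tree A embeds at the root of rooted tree B.

    Trees are dicts mapping a node to the list of its children;
    leaves may simply be absent from the dict.
    """
    def can_embed(a, b):
        ca, cb = A.get(a, []), B.get(b, [])
        if len(ca) > len(cb):
            return False
        # Bipartite graph: pattern child i may map to host child j
        # whenever its subtree embeds there (recursive DP step).
        adj = [[j for j, y in enumerate(cb) if can_embed(x, y)] for x in ca]
        # Kuhn's augmenting-path maximum matching over that graph.
        match = [-1] * len(cb)   # match[j] = pattern child assigned to host child j

        def try_match(i, seen):
            for j in adj[i]:
                if j not in seen:
                    seen.add(j)
                    if match[j] == -1 or try_match(match[j], seen):
                        match[j] = i
                        return True
            return False

        # Every pattern child must be matched to a distinct host child.
        return all(try_match(i, set()) for i in range(len(ca)))

    return can_embed(a_root, b_root)
```

For example, a root with two leaf children embeds in a root with children `[3, 4]`, but a root with three children does not.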
I have also developed a suite for autonomous navigation of quadrotors in synthetic 3D environments as part of a graduate-level course on advanced robotics. Not terribly novel, but a lot of fun: your quadrotor is equipped with a pair of stereo cameras and you need to find your way through various mazes.
Among other things, I learned about and implemented SURF to establish feature correspondences between stereo images; RANSAC to find the points that best explained the translation and rotation of the cameras; A* over a discretized space to find a collision-free path through the maze; and sensor fusion, combining noisy velocity and acceleration measurements with a vision-based global localization scheme, to estimate position and yaw. The resulting estimate was then fed to a controller derived by linearizing the motion and motor models about the near-hover state and... voilà!
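To illustrate the RANSAC step in miniature, here is a hedged sketch for estimating a 2D rigid transform (rotation R, translation t) from noisy point correspondences, in the spirit of rejecting bad feature matches between frames. It is not the course code; the function names, iteration count, and inlier threshold are all illustrative, and the inner least-squares fit uses the standard SVD-based (Kabsch) method.

```python
import random

import numpy as np


def rigid_transform(src, dst):
    """Least-squares rigid fit dst ≈ R @ src + t via the Kabsch/SVD method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs


def ransac_rigid(src, dst, iters=200, thresh=0.05):
    """RANSAC loop: fit on minimal samples, keep the largest inlier set."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = random.sample(range(len(src)), 2)   # minimal set for 2D rigid
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final refit using all inliers of the best hypothesis.
    return rigid_transform(src[best_inliers], dst[best_inliers])
```

The same skeleton generalizes to the 3D stereo-odometry case by sampling three correspondences per hypothesis instead of two.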