To operate effectively in our world, robots must master the skills of dynamic interaction. Autonomous cars must safely negotiate their trajectories with other vehicles and pedestrians as they drive to their destinations. UAVs must avoid collisions with other aircraft, as well as with dynamic obstacles on the ground. Disaster response robots must coordinate to explore and map new disaster sites. In this talk I will describe recent work in my lab using distributed optimization to obtain algorithms for robots to cooperate, and game-theoretic methods to obtain algorithms for robots to compete. I will present an algorithm for fleets of autonomous cars to cooperatively track a large number of vehicles and pedestrians in a city, an algorithm for multiple robots to manipulate an object to a goal while avoiding collisions, and a distributed multi-robot SLAM algorithm, all derived from the same underlying distributed optimization framework. I will also discuss algorithms based on the theory of dynamic games, in which each actor has its own objective and constraints. I will describe examples in autonomous drone racing, car racing, and autonomous driving that use game-theoretic principles to solve for Nash equilibrium trajectories in real time, in a receding-horizon fashion. Throughout the talk, I will show results from hardware experiments with ground robots, autonomous cars, and quadrotor UAVs collaborating and competing in the scenarios above.