Faculty Q&A: Lotzi Boloni
January 14, 2020
Professor Lotzi Boloni joined the UCF Department of Computer Science in 2017. Since then, he has worked on a number of projects in robotics and artificial intelligence, cognitive architectures, distributed and grid computing, and wireless networking. Below he discusses his research and what his team plans to work on in the future.
Q: Lotzi, I have seen several videos on your lab’s channel of robots cleaning boxes with towels, picking up objects and straightening pliers (https://www.youtube.com/watch?v=AqQFzoVsJfA). I was not aware that your team was working in robotics.
A: That is right, our work used to concern agents as software, or perhaps virtual, entities. However, the deep learning revolution opened new possibilities to extend our work into the physical world. This is very exciting for me and my students.
Q: What possibilities are these exactly?
A: We are doing end-to-end learning. Traditionally, a robotics system would require multiple layers of specialized engineering: high-level decision making, trajectory planning, low-level control and so on. We are replacing all of these with a single neural network: video input and a natural language command go in, direct robot control comes out.
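To make the "single network" idea concrete, here is a minimal, hypothetical sketch (not the lab's actual architecture): pre-extracted image features and an embedded language command are concatenated and mapped by one network directly to joint-velocity commands. All names, dimensions, and the toy vocabulary are illustrative assumptions.

```python
import numpy as np

# Hypothetical end-to-end policy sketch: one network replaces the
# perception -> planning -> control pipeline. Video features and a
# language command go in; joint velocities come out.
rng = np.random.default_rng(0)

def embed_command(words, dim=8):
    """Toy bag-of-words command embedding (illustrative only)."""
    vocab = {"wipe": 0, "the": 1, "box": 2, "pick": 3, "up": 4, "towel": 5}
    vec = np.zeros(dim)
    for w in words:
        vec[vocab[w] % dim] += 1.0
    return vec

class EndToEndPolicy:
    """Single network: image features + command -> robot control."""
    def __init__(self, img_dim=64, cmd_dim=8, hidden=32, n_joints=7):
        self.W1 = rng.standard_normal((img_dim + cmd_dim, hidden)) * 0.1
        self.W2 = rng.standard_normal((hidden, n_joints)) * 0.1

    def act(self, image_features, command_vec):
        x = np.concatenate([image_features, command_vec])
        h = np.tanh(x @ self.W1)      # shared hidden representation
        return np.tanh(h @ self.W2)   # bounded joint-velocity command

policy = EndToEndPolicy()
frame_features = rng.standard_normal(64)   # stand-in for a CNN's output
command = embed_command(["wipe", "the", "box"])
controls = policy.act(frame_features, command)
print(controls.shape)  # (7,): one velocity command per joint
```

In a real system the image features would come from a learned visual encoder and the whole stack would be trained jointly; the sketch only shows how one function can span the entire pipeline.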
Q: I heard that deep learning requires tens of millions of learning iterations.
A: Projects such as AlphaZero from DeepMind use deep reinforcement learning, and indeed require tens of millions of iterations. This is why systems coming out of DeepMind and OpenAI usually involve games or simulations. That was clearly not an option for us (and it is not an option for humans either!). The robotic systems you see in the videos use deep imitation learning, perhaps with homeopathic quantities of reinforcement learning for final adjustment. We typically require one or two hours of demonstrations to learn a task, and we are working to reduce even that.
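As a rough illustration of the imitation-learning idea, its simplest variant, behavioral cloning, reduces to supervised regression from observed states to demonstrated actions. The sketch below is an assumption, not the lab's training code: the data is synthetic, and a linear policy stands in for a deep network.

```python
import numpy as np

# Behavioral-cloning sketch: imitation learning as supervised regression
# from observed states to demonstrated actions. Data here is synthetic;
# in reality the pairs would come from an hour or two of demonstrations.
rng = np.random.default_rng(1)

states = rng.standard_normal((500, 10))   # scene + robot state features
true_W = rng.standard_normal((10, 4))     # the demonstrator's "policy"
actions = states @ true_W                 # demonstrated actions

# Fit the imitator by least squares: argmin_W ||states @ W - actions||^2
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy generalizes to states it has never seen.
new_state = rng.standard_normal(10)
error = np.max(np.abs(new_state @ W_hat - new_state @ true_W))
print(error < 1e-8)  # True: the demonstrator is recovered exactly
```

The contrast with reinforcement learning is in the data requirement: a few hundred demonstrated pairs suffice here, whereas trial-and-error exploration would need vastly more interactions.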
Q: I liked the video of the robot that appears to be frustrated when you yank away the towel it was manipulating (https://www.youtube.com/watch?v=armz9CfjYRg).
A: It does look like that, doesn’t it? And it goes after the towel and picks it up again. Too many robotic systems are designed to operate in conditions of industrial cleanliness, or at best in benign laboratory setups. If we are to deploy robots into the homes of disabled people, there will be all kinds of disturbances and accidents. The robots need to be more robust in their behaviors, able to recover from physical disturbances or even from adversarial events.
Q: What is your team working on now?
A: We are trying to make imitation learning more general. Our robot learns in about two hours to wipe an orange box with a white towel.
This is not that different from what a human child would need.
However, once a human child learns this, she would also be able to wipe a blue plate with a red silk cloth. The robot would have to start learning from scratch. We are working on meta-learning (so-called “learning-to-learn”) techniques that prepare the robot to learn whole families of tasks from a single demonstration, or from a human command alone.
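The learning-to-learn idea can be sketched with a toy Reptile-style meta-learner (one of several meta-learning algorithms; the lab's actual method is not specified here): pretrain an initialization across a family of related tasks so that a new task from the same family can be picked up from very little data.

```python
import numpy as np

# Toy Reptile-style meta-learning sketch. Each "task" is fitting
# y = a * x for a task-specific slope a; the family shares structure,
# so a meta-learned initialization adapts to a new slope quickly.
rng = np.random.default_rng(2)

def inner_update(w, a, steps, lr=0.1):
    """Plain SGD on one task's squared error, from initialization w."""
    for _ in range(steps):
        x = rng.standard_normal(20)               # a mini-batch of inputs
        grad = np.mean(2.0 * (w * x - a * x) * x)  # d/dw of the loss
        w = w - lr * grad
    return w

# Meta-training: nudge the shared initialization toward each task's
# partially trained weights (the Reptile outer update).
w_meta = 0.0
for _ in range(200):
    a = rng.uniform(1.0, 3.0)              # sample a task from the family
    w_task = inner_update(w_meta, a, steps=5)
    w_meta += 0.5 * (w_task - w_meta)

# Adaptation: a brand-new task from the family is learned in a
# handful of gradient steps, standing in for "a single demonstration".
a_new = 2.5
w_adapted = inner_update(w_meta, a_new, steps=15)
print(abs(w_adapted - a_new))
```

The analogue of "wipe a blue plate with a red silk cloth" is `a_new`: a task the robot never saw, but close enough to the trained family that adaptation is cheap.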