How to build an intelligent machine has fascinated researchers for more than half a century. To tackle this problem, scientists have taken one of four approaches: knowledge-based, learning-based, behavior-based, and evolution-based. However, the autonomous mental development (AMD) of robots has not received sufficient attention.

Motivated by the autonomous mental development of animals and humans from infancy to adulthood, a developmental paradigm for robots has recently been proposed. Under this paradigm, a robot develops its mental skills through real-time, online interactions with the environment. We call such a robot a developmental robot.

My work focuses on the motivational system and sensorimotor learning. The major contributions of this work include the following: (1) We proposed and implemented the Developmental, Observation-driven, Self-Aware, Self-Effecting Markov Decision Process (DOSASE MDP) model, which integrates multimodal sensing, action-imposed learning, reinforcement learning, and communicative learning. (2) We integrated novelty and reinforcement learning into a robotic value system (motivational system) for the first time. As an important part of AMD, a value system signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions (see the sketch following this abstract). (3) We proposed and implemented the Locally Balanced Incremental Hierarchical Discriminant Regression (IHDR) algorithm as the engine of cognitive mapping for learning in non-stationary environments.

Based upon the above architecture and basic techniques, we have designed and implemented a prototype robot that learns the following cognitive behaviors: (1) Visual attention via novelty and rewards. We treat visual attention as a behavior guided by the value system; no salient features are predefined, and saliency instead derives from experience-based novelty, which makes the approach applicable to any task. (2) Covert perceptual capability development for vision-based navigation. An agent develops its covert perceptual capability via reinforcement learning through interactions with trainers. (3) Cross-task learning in a developmental setting. A developmental robot learns multiple tasks incrementally and uses acquired knowledge to speed up learning of new tasks. (4) Audio/visual user presence detection. The developmental learning paradigm has been applied to a developmental agent that detects human activities in an office using multimodal context information.

The work reported in this thesis serves as a starting point for ongoing research on developmental learning.
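
As a concrete illustration of the value-system idea in contribution (2), the sketch below blends a count-based novelty signal with external reward inside a one-step temporal-difference update. This is a minimal, hypothetical example: the class name NoveltyRewardValueSystem, the count-based novelty measure, and the weighting constants are assumptions chosen for illustration, not the DOSASE MDP or IHDR machinery developed in the thesis.

    import math

    class NoveltyRewardValueSystem:
        """Minimal sketch: a value system that adds an experience-based
        novelty bonus to external reward in a one-step Q-learning update.
        Names and constants are illustrative assumptions."""

        def __init__(self, actions, novelty_weight=0.5, lr=0.1, gamma=0.9):
            self.actions = list(actions)
            self.novelty_weight = novelty_weight  # weight of the intrinsic (novelty) signal
            self.lr = lr                          # learning rate
            self.gamma = gamma                    # discount factor
            self.q = {}                           # (state, action) -> value estimate
            self.visits = {}                      # state -> number of times observed

        def novelty(self, state):
            # Count-based novelty: rarely seen sensory states look more novel.
            c = self.visits.get(state, 0)
            self.visits[state] = c + 1
            return 1.0 / math.sqrt(c + 1)

        def select_action(self, state):
            # Evaluate candidate actions using the current value estimates.
            return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

        def update(self, state, action, reward, next_state):
            # Blend extrinsic reward with the novelty of the new observation,
            # then apply a one-step temporal-difference update.
            total = reward + self.novelty_weight * self.novelty(next_state)
            best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
            target = total + self.gamma * best_next
            old = self.q.get((state, action), 0.0)
            self.q[(state, action)] = old + self.lr * (target - old)

In this toy form, the novelty bonus both flags salient (rarely seen) inputs and biases action selection toward them, while external reward shapes the same value estimates; this is the kind of integration of novelty and reinforcement learning that the contribution above refers to.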