EPFL-IST collaboration
ROBOTICS
“EPFL-IST: a proposal for a collaboration program”
1
Robotics initiative
ISR/IST - Who we are
Networked Cognitive Systems
Human action understanding and surveillance
Cognitive robots (humanoids)
Distributed decision making and planning
Collaboration instruments
2
Institute for Systems and Robotics
ISR/IST
Director: Prof. Isabel Ribeiro
3
Institute for Systems and Robotics
ISR/IST
University-based R&D institution founded in 1992, located at IST
Since 2002 – status of Associate Laboratory
Multidisciplinary advanced research activities in
Robotics and Information Processing
SCIENTIFIC DISCIPLINES
Systems and Control Theory
Signal Processing
Computer Vision
Image and Video Processing
Optimization
AI and Intelligent Systems
Biomedical Engineering
APPLICATIONS
Autonomous Ocean Robotics
Land Mobile Robotics
Search and Rescue / Surveillance
Satellite Formation for Space Exploration
Service and Companion Robotics
3D reconstruction
Mobile Communications and Multimedia
4
Institute for Systems and Robotics
ISR/IST
38 Senior Researchers with a PhD (26 IST faculty)
Portugal
Spain
Netherlands
Germany
Italy
USA
Serbia
~8 Post-docs
45 PhD students
24 M.Sc students
28 Research Engineers +
Undergraduate students
~ 140 members
Portugal
Iran
Brazil
Colombia
Sweden
China
Romania
Netherlands
Spain
28% with a scholarship from the Portuguese Science and Technology Foundation
5
Institute for Systems and Robotics
ISR/IST
RESEARCH LABS:
• COMPUTER AND ROBOT VISION (VISLab)
• DYNAMICAL SYSTEMS AND OCEAN ROBOTICS (DSORLab)
• INTELLIGENT SYSTEMS (ISLab)
• MOBILE ROBOTICS (MRLab)
• SIGNAL PROCESSING (SPLab)
• EVOLUTIONARY SYSTEMS AND BIOMEDICAL ENG. (LaSEEB)
6
Institute for Systems and Robotics
ISR/IST
José Santos-Victor – VisLab
Alexandre Bernardino – VisLab
Manuel Lopes – VisLab
Matthijs Spaan – ISLab
Jorge Salvador Marques – SigProc Lab
Isabel Ribeiro – MRLab
Pedro Lima – ISLab
João Sequeira – MRLab
Thematic Area: Robotic Monitoring and Surveillance
7
Robotics initiative
ISR/IST - Who we are
Networked Cognitive Systems
Human action understanding and surveillance
Cognitive robots (humanoids)
Distributed decision making and planning
Collaboration instruments
8
Networked Cognitive Systems
Distributed networked robots and sensors (capable of sensing + acting + computing), able to observe, map and operate in possibly dynamic environments and to interact with humans:
• distributed sensors (cameras)
• ubiquitous comms
• networked personal assistants
• robot assistants (cognitive, social)
9
Robotics initiative
ISR/IST - Who we are
Networked Cognitive Systems
Human action understanding and surveillance
Cognitive robots (humanoids)
Distributed decision making and planning
Collaboration instruments
10
Understanding human activity
Example
EU-Project CAVIAR (U. Edinburgh, IST-Lisbon, INRIA)
11
Understanding human activity
Cameras: 275+
Images: 352×288 pixels @ 25 Hz
EU-Project CAVIAR (U. Edinburgh, IST-Lisbon, INRIA)
12
Understanding human activity
Hierarchical classifier:
[figure: a four-level binary decision tree that first separates Active from Inactive, then discriminates Walking, Running and Fighting among the active classes]
Recognition rate: 98.8%
13
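The cascade above can be sketched as a chain of binary decisions. A minimal sketch, assuming two illustrative motion features (speed and limb jitter) and invented thresholds; this is not the CAVIAR classifier itself:

```python
# A minimal sketch of a hierarchical activity classifier: a cascade of
# binary decisions, as in the slide's tree. The features (speed, limb
# jitter) and all thresholds are illustrative assumptions.

def classify_activity(speed, jitter):
    """Cascade: inactive -> active -> {walking, running, fighting}."""
    # Level 1: active vs. inactive
    if speed < 0.1 and jitter < 0.1:
        return "inactive"
    # Level 2: fighting shows high limb jitter at low displacement
    if jitter > 1.0:
        return "fighting"
    # Level 3: walking vs. running, separated by speed
    return "running" if speed > 2.0 else "walking"

print(classify_activity(0.0, 0.0))   # inactive
print(classify_activity(1.0, 0.2))   # walking
print(classify_activity(3.5, 0.3))   # running
print(classify_activity(0.5, 2.0))   # fighting
```

A real system would learn each split from labelled trajectories rather than hand-set thresholds, but the cascade structure is the same.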
Understanding human activity
Johansson G (1973), “Visual perception of biological motion and a model for its analysis.” Perception & Psychophysics 14:201–211
14
Understanding human activity
Frank Pollick
Dpt Psychology, University of Glasgow
15
Mirror Neurons
[Gallese, Fadiga, Fogassi and Rizzolatti, Brain, 1996]
Active during observation of another
monkey’s or experimenter’s hands
interacting with objects.
Observed & executed actions
are the same:
Observed & executed action
are NOT the same (tool):
16
Action observation/execution resonance
Individual A
Individual B
17
“Motor” Gesture recognition
Training Set:
24 sequences
15 visual features;
15 joint angles
Test Set:
96 seqs.
18
Robotics initiative
ISR/IST - Who we are
Networked Cognitive Systems
Human action understanding and surveillance
Cognitive robots (humanoids)
Distributed decision making and planning
Collaboration instruments
19
Humanoid Robotics
VISLAB – IST/ISR
Challenges
Daily Life Environments
Unstructured and Dynamic
Scenarios
Friendly Interaction
Advanced Recognition and
Expressive Capabilities
Easy Programming and
Adaptation
Learning by exploration and
imitation
21
The ROBOT-CUB Project
- Design and construction of a humanoid robotic platform for research in cognition and cognitive development.
- Consortium: roboticists + neuroscientists + psychologists + …
iCub: a 2.5-year-old child, ~23 kg, ~50 DOFs
22
A Developmental
Approach
Self-Awareness:
Learning about the self
Auto-observation
World-Awareness:
Learning about the world
Object affordances
Imitation:
learning about others
View point transformation
Task imitation metrics
Control and learning with redundant robotic systems
Learn sensory-motor maps
Function approximation methods
Jacobian estimation methods
Control
Optimal control
I - Sensory-motor maps
Sensory
appearance/position/velocity
position/velocity
Motor
joint
Reconstruct (backward model)
Predict (forward model)
Static vs Incremental
To be used in open-loop or closed-loop control
Full vs Partial
Restricting all or part of the available degrees of freedom
Geometric vs Radiometric
Geometric or other kind of features
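The forward map and Jacobian-estimation ideas above can be illustrated on a toy system. A minimal sketch, assuming a planar 2-link arm with made-up link lengths: the forward sensory-motor map is probed by finite differences and the estimated Jacobian drives a closed-loop reach:

```python
import math

# Sketch, under simplified assumptions, of forward-model prediction and
# Jacobian estimation: a 2-link planar arm whose forward map (joint
# angles -> hand position) is differentiated numerically; the estimated
# Jacobian is then used in closed loop. Link lengths are illustrative.

L1, L2 = 1.0, 0.8

def forward(q):
    """Forward sensory-motor map: joint angles -> hand (x, y)."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return [x, y]

def jacobian(q, eps=1e-5):
    """Estimate the 2x2 Jacobian of forward() by finite differences."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    f0 = forward(q)
    for j in range(2):
        qp = list(q)
        qp[j] += eps
        fp = forward(qp)
        for i in range(2):
            J[i][j] = (fp[i] - f0[i]) / eps
    return J

def reach(q, target, gain=0.5, steps=200):
    """Closed-loop control: dq = gain * J^-1 * (target - hand)."""
    for _ in range(steps):
        x = forward(q)
        e = [target[0] - x[0], target[1] - x[1]]
        J = jacobian(q)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dq0 = (J[1][1] * e[0] - J[0][1] * e[1]) / det
        dq1 = (-J[1][0] * e[0] + J[0][0] * e[1]) / det
        q = [q[0] + gain * dq0, q[1] + gain * dq1]
    return q

q = reach([0.3, 0.5], [1.0, 1.0])
print(forward(q))  # hand converges close to the target [1.0, 1.0]
```

In the incremental setting described above, the analytic `forward()` would be replaced by a map learned online from the robot's own sensory-motor experience.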
Head Gaze Control
A Developmental
Approach
Self-Awareness:
Learning about the self
Auto-observation
World-Awareness:
Learning about the world
Object affordances
Imitation:
learning about others
View point transformation
Task imitation metrics
Affordances
Affordances as models for
prediction, action selection and
execution …
“action possibilities” on a
certain object, with
reference to the actor’s
capabilities [James J.
Gibson, 1979]
links Actions, Objects
and the consequences of
acting on objects
(Effects).
Grounded in the particular experience and capabilities of the agent.
30
Example: Grasp, Tap & Touch
Objects have:
Two different shapes
Two sizes
Three colors
Effects:
Contact
Object Motion
31
Exploring the space of actions
32
Using the affordances
Probabilistic inference & planning for recognition, prediction and
decision making
Imitation, action clustering
Hierarchical organization for sequences
33
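The affordance model above can be sketched as a small probabilistic table linking Actions, object features and Effects. All probabilities below are invented for illustration; in the actual work they would be learned from the robot's own motor experiments:

```python
# Sketch of affordances as a probabilistic model linking Actions,
# Object features and Effects. The conditional probabilities are
# invented, standing in for values learned by exploration.

# P(effect | action, shape) -- hypothetical learned table
P_effect = {
    ("grasp", "sphere"): {"contact": 0.3, "motion": 0.7},
    ("grasp", "box"):    {"contact": 0.8, "motion": 0.2},
    ("tap",   "sphere"): {"contact": 0.1, "motion": 0.9},
    ("tap",   "box"):    {"contact": 0.6, "motion": 0.4},
    ("touch", "sphere"): {"contact": 0.9, "motion": 0.1},
    ("touch", "box"):    {"contact": 0.9, "motion": 0.1},
}

def predict_effect(action, shape):
    """Forward use: predict the most likely effect of an action."""
    dist = P_effect[(action, shape)]
    return max(dist, key=dist.get)

def select_action(shape, desired_effect):
    """Inverse use (planning/imitation): pick the action that makes
    the desired effect most probable on this object."""
    actions = {a for (a, s) in P_effect if s == shape}
    return max(actions, key=lambda a: P_effect[(a, shape)][desired_effect])

print(predict_effect("tap", "sphere"))    # motion
print(select_action("sphere", "motion"))  # tap
```

The same model is queried in both directions, which is what makes affordances usable for prediction, action selection and recognition alike.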
A Developmental
Approach
Self-Awareness:
Learning about the self
Auto-observation
World-Awareness:
Learning about the world
Object affordances
Imitation:
learning about others
View point transformation
Task imitation metrics
Affordance based imitation
Affordances
Model & Learning
Imitation framework
Combining affordances
& imitation
Imitation framework
1. Observe the demonstration.
2. Use affordances to interpret which actions would give the same effect.
3. Create a function r that describes the task.
4. Select the actions that accomplish the task r.
5. Perform the imitation.
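The five steps above can be sketched as a schematic loop. Everything here is a stand-in: the `affordances` table plays the role of the learned model, and the reward r is reduced to a trivial match/no-match score:

```python
# Schematic sketch of the affordance-based imitation loop. The
# affordance table and the degenerate reward are illustrative
# placeholders for the learned models.

affordances = {           # hypothetical learned (action, object) -> effect map
    ("tap", "ball"):   "rolls",
    ("grasp", "ball"): "held",
    ("tap", "box"):    "slides",
    ("grasp", "box"):  "held",
}

def imitate(demonstrated_effect, obj):
    # 1. Observe the demonstration -> demonstrated_effect.
    # 2. Use affordances to find actions producing the same effect.
    candidates = [a for (a, o), e in affordances.items()
                  if o == obj and e == demonstrated_effect]
    # 3.-4. Define r over actions and select a maximiser
    # (here r is 1 for effect-matching actions, 0 otherwise).
    if not candidates:
        return None
    action = candidates[0]
    # 5. Perform the imitation.
    return action

print(imitate("rolls", "ball"))  # tap
```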
Experiments
Robot
Kick the balls off the table
Touch the
large ball
Drop the
boxes in the
pile
Inaccurate and incomplete
demonstration
Imitation
Future
Biomimetic Control
Interaction
Behaviour Control
Learning
Attention Modulation
Setups
iCub
Baltazar
Sensors: Data Glove, Flock of Birds, Tobii, …
41
Robotics initiative
ISR/IST - Who we are
Networked Cognitive Systems
Human action understanding and surveillance
Cognitive robots (humanoids)
Distributed decision making and planning
Collaboration instruments
42
Distributed decision making
Research Activities
Distributed Autonomous Sensor and Robot
Networks
Cooperative perception
Cooperative navigation
Cooperative plan representation and task coordination
Modeling, analysis and control of robot swarms
43
Distributed decision making
Research Activities
Distributed Autonomous Sensor and Robot
Networks
Cooperative perception
I’m “dead”,
and I can’t see
the ball…
Bayesian strategies to fuse uncertain information from spatially distributed sensors
Handling disagreement
Making decisions (e.g., using POMDPs) to move mobile sensors to suitable locations to improve perception
“Probabilistic language” which takes into account observation models, dependence on
sensor location w.r.t. object, robot location uncertainty
Possible applications:
• soccer robots
• rescue robot fleets (with aerial and land robots)
• tracking moving objects in distributed sensor networks
44
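The precision-weighted (product-of-Gaussians) fusion at the heart of such Bayesian strategies can be sketched for scalar estimates; the sensor readings below are illustrative:

```python
# Sketch of Bayesian fusion for spatially distributed sensors: each
# robot reports a Gaussian estimate of the ball position (mean,
# variance), fused by precision weighting. Numbers are illustrative.

def fuse(estimates):
    """estimates: list of (mean, variance) from each sensor.
    Returns the fused (mean, variance) -- product of Gaussians."""
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / precision
    return mean, 1.0 / precision

# A robot far from the ball (uncertain) vs. one close to it (confident)
fused_mean, fused_var = fuse([(2.0, 4.0), (3.0, 0.25)])
print(fused_mean, fused_var)  # mean pulled toward the confident sensor
```

Handling disagreement and the dependence of each variance on sensor location (the "probabilistic language" above) sit on top of this basic fusion step.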
Distributed decision making
Research Activities
Distributed Autonomous Sensor and Robot
Networks
Cooperative navigation
Cooperative self-localization
Formation control
Decentralized full-state estimation of formations under low communication
Possible applications:
• formation flying spacecraft
• rescue robot fleets (with aerial and land robots)
45
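A consensus-style sketch of decentralized formation control, assuming each robot sees only its neighbours; the topology, desired offsets and gain are illustrative (1-D for brevity):

```python
# Minimal consensus sketch of decentralized formation control: each
# robot steers toward its desired offset relative to its neighbours
# only. Topology, offsets and gain are illustrative assumptions.

def step(positions, offsets, neighbours, gain=0.2):
    new = []
    for i, p in enumerate(positions):
        u = 0.0
        for j in neighbours[i]:
            # drive the relative position toward the desired offset
            u += (positions[j] + offsets[i] - offsets[j]) - p
        new.append(p + gain * u)
    return new

# three robots on a line, desired spacing of 1.0
offsets = [0.0, 1.0, 2.0]
neighbours = {0: [1], 1: [0, 2], 2: [1]}
pos = [0.0, 0.3, 0.5]
for _ in range(200):
    pos = step(pos, offsets, neighbours)
print(pos[1] - pos[0], pos[2] - pos[1])  # both converge to 1.0
```

No robot ever uses global positions of the whole team, which is the point of the decentralized formulation.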
Distributed decision making
Research Activities
Distributed Autonomous Sensor and Robot
Networks
Plan representation and task coordination
Decentralized Sequential Decision Making methods (MDPs, POMDPs)
Multi-Robot Reinforcement Learning (especially in POMDPs)
Deterministic Discrete Event and Hybrid Systems modeling, analysis and synthesis
Possible applications:
• any multi-robot team with a small number of robots (e.g., up to 10)
46
Distributed decision making
Research Activities
Distributed Autonomous Sensor and Robot
Networks
Modeling, analysis and control of robot swarms
Stochastic Discrete Event and Hybrid Systems modeling, analysis and synthesis
Bio-inspired models and methodologies (e.g., from the immune system)
Possible applications:
• any multi-robot team with a large number of robots (e.g., larger than 100)
• cell population dynamics
• surveillance by networks of sensors + robots
a) [figure: population dynamics, with Sources 1–3 feeding a Population plotted in the state space (x1, x2)]
b) [figure: a three-mode hybrid automaton, q ∈ {1, 2, 3}, with headings θ = π/4, θ = 0 and θ = −π/4 and switching rates λ12, λ21, λ23, λ32; in every mode the continuous dynamics are ẋ1 = v cos(θ), ẋ2 = v sin(θ), with output y = [x1 x2]ᵀ]
47
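The hybrid automaton of figure b) can be simulated directly: each discrete mode fixes the heading θ, the continuous state flows as ẋ1 = v cos(θ), ẋ2 = v sin(θ), and mode switches fire stochastically. The switching rates λij below are illustrative:

```python
import math
import random

# Sketch of one robot of the stochastic hybrid swarm model: modes
# q in {1, 2, 3} fix theta in {pi/4, 0, -pi/4}; the state integrates
# x1' = v cos(theta), x2' = v sin(theta), output y = [x1, x2]^T.
# The switching rates lambda_ij are illustrative assumptions.

random.seed(0)
theta = {1: math.pi / 4, 2: 0.0, 3: -math.pi / 4}
rates = {(1, 2): 0.5, (2, 1): 0.3, (2, 3): 0.3, (3, 2): 0.5}
v, dt = 1.0, 0.01

def simulate(T=10.0):
    q, x1, x2 = 2, 0.0, 0.0
    t = 0.0
    while t < T:
        # continuous flow in the current mode q
        x1 += v * math.cos(theta[q]) * dt
        x2 += v * math.sin(theta[q]) * dt
        # stochastic mode switches (Poisson events, Euler-thinned)
        for (qi, qj), lam in rates.items():
            if qi == q and random.random() < lam * dt:
                q = qj
                break
        t += dt
    return x1, x2  # output y = [x1, x2]^T

print(simulate())
```

Swarm-level analysis would then study the population statistics of many such automata rather than individual trajectories.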
Robotics initiative
ISR/IST - Who we are
Networked Cognitive Systems
Human action understanding and surveillance
Cognitive robots (humanoids)
Distributed decision making and planning
Collaboration instruments
48
Networked Cognitive Systems
Potential collaborators @ EPFL
Prof. Aude Billard, LASA - Learning Algorithms and Systems
Laboratory, Learning and Dynamical Systems, Neural Computation
and Modelling, Human-Machine Interaction, Humanoid Robotics,
Mechatronics, Design of Therapeutic and Educational Robotic
Systems – EU project Robotcub
Prof. Auke Jan Ijspeert, BIRG - Biologically Inspired Robotics
Group, Articulated and biologically inspired robotics, Modular
robotics, Humanoid robotics, Control of locomotion and of
coordinated movements in robots, Computational neuroscience,
neural networks, sensorimotor coordination in animals – EU project
Robotcub
Prof. Alcherio Martinoli, MICS - Mobile Information and
Communication Systems, swarm-intelligence, networked robotic
systems, swarm robotics, multi-robot systems, sensor & actuator
networks – existing personal contacts
49
Networked Cognitive Systems
Potential collaborators @ EPFL
Prof. Dario Floreano, Laboratory of Intelligent Systems (I2S Institut d'Ingénierie des Systèmes, Faculté STI Sciences et
Techniques de l'Ingénieur), Evolutionary systems, Bio-inspired
robots, robot swarms
Prof. Thomas Henzinger, Models and Theory of Computation
Laboratory (IIF - Institute of Core Computing Science, IC School of Computer and Communication Sciences), Hybrid
Automata Verification, Systems Biology
Prof. Hervé Bourlard, LIDIAP/EPFL and IDIAP - Dalle Molle
Institute for Perceptual Artificial Intelligence, Speech
Processing, Computer Vision, Information Retrieval, Biometric
Authentication, Multimodal Interaction and Machine Learning.
EU Project Submission
50
Contact:
José Santos-Victor
[email protected]
Credits
Alexandre Bernardino,
Manuel Cabido Lopes,
Luis Montesano,
Ricardo Beira,
Luis Vargas.
URL
http://vislab.isr.ist.utl.pt
www.robotcub.org
51