Emotional analysis of facial expressions
Facial expressions are one of the most important channels of human communication. Automatic Facial Expression Analysis is a challenging task that aims to enable computers to analyse and understand these expressions. Such systems have many potential applications in designing technology that understands and reacts to people's emotional states.
Universal facial expression recognition
Body activity analysis
The body is involved in most human activities. In order to understand human behaviour automatically, we need to create intelligent systems that understand the motion of the body parts and the relations between them.
Articulated object tracking and automatic pose recognition
Violin player analysis
Human 3D modeling
The capture of 3D information has gained importance in recent years. We propose a fast system for obtaining the 3D model of a person using only a single 3D camera and two mirrors. This setup makes it possible to obtain the model without rotating either the camera or the person, and reduces the space requirements.
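The key geometric operation in such a setup is mapping each mirror's virtual view back onto the real body surface. A minimal sketch, assuming the mirror planes have already been calibrated (the calibration step itself is not shown, and the function name and plane parameterization are illustrative, not the system's actual API):

```python
import numpy as np

def reflect_points(points, normal, offset):
    """Reflect 3D points across the mirror plane n . x + offset = 0.

    The depth camera sees the person's virtual image "behind" each mirror;
    reflecting that partial point cloud across the mirror plane maps it back
    onto the real body surface, so the frontal view and the two mirror views
    can be merged into a single model without moving the camera or the person.
    Plane parameters (normal, offset) are assumed to come from a prior
    mirror-calibration step.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)                  # unit plane normal
    d = (points @ n + offset)[:, None]         # signed distance of each point
    return points - 2.0 * d * n                # mirror each point across plane
```

Applying the same reflection twice returns the original points, which is a convenient sanity check when validating the calibrated plane parameters.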
Human body pose inference from monocular images
Not only detecting the presence of people in an image, but also estimating their body pose configuration, is a challenging task, even more so when only monocular images are available, which lack contextual cues such as motion or repetitive temporal patterns. We propose using pattern recognition techniques and part-based detection methods, together with probabilistic models incorporating prior knowledge, to infer the human body pose from 2D visual information.
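Part-based pose inference of this kind is often formulated in the pictorial-structures style: per-part detection scores are combined with a probabilistic prior on the relative placement of connected parts, and the best configuration is found exactly by dynamic programming. A minimal sketch over a kinematic chain (the chain structure, Gaussian displacement prior, and all names here are illustrative assumptions, not the system's actual model):

```python
import numpy as np

def infer_chain_pose(unary, positions, offsets, sigma=1.0):
    """Pictorial-structures-style inference over a kinematic chain of parts.

    unary:     list of length P; unary[p] is an (N_p,) array of detection
               scores (log-likelihoods) for each candidate location of part p.
    positions: list of (N_p, 2) candidate (x, y) locations per part.
    offsets:   (P-1, 2) expected displacement of part p+1 relative to part p
               (the prior knowledge model, here a Gaussian on displacements).
    Returns the best-scoring candidate index for each part, found by exact
    dynamic programming (Viterbi) over the chain.
    """
    P = len(unary)
    score = [np.asarray(unary[0], dtype=float)]    # best score ending at part 0
    back = []                                      # backpointers per part
    for p in range(1, P):
        prev_pos, cur_pos = positions[p - 1], positions[p]
        # Gaussian log-prior on the displacement between consecutive parts.
        diff = cur_pos[:, None, :] - prev_pos[None, :, :] - offsets[p - 1]
        pair = -np.sum(diff ** 2, axis=2) / (2 * sigma ** 2)
        total = pair + score[-1][None, :]          # shape (N_p, N_{p-1})
        back.append(np.argmax(total, axis=1))      # best predecessor per cand.
        score.append(unary[p] + np.max(total, axis=1))
    # Backtrack the globally best chain configuration.
    best = [int(np.argmax(score[-1]))]
    for p in range(P - 1, 0, -1):
        best.append(int(back[p - 1][best[-1]]))
    return best[::-1]
```

Because the model is a chain, inference is exact and linear in the number of parts; richer tree-structured body models admit the same dynamic-programming treatment.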
Spoken term detection
Spoken term detection searches acoustic documents for specific acoustic patterns. In our query-by-example approach we focus on a fully unsupervised search for an uttered acoustic pattern inside a larger acoustic document. We present a second-level approach for building an unsupervised acoustic model, which is used to represent and effectively compare acoustic frames. We then use segmental dynamic time warping for subsequence matching. The resulting alignments are normalized and filtered to produce the final putative hits.
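The subsequence-matching step can be sketched with a simplified subsequence DTW: the query may start and end anywhere in the document, and the alignment cost is normalized by path length to score putative hits. This is a minimal single-best-match sketch using Euclidean frame distances, not the system's full segmental DTW with alignment filtering:

```python
import numpy as np

def subsequence_dtw(query, doc):
    """Align `query` against any contiguous region of `doc` (subsequence DTW).

    query, doc: (n, d) and (m, d) arrays of acoustic frame features.
    Returns (cost, start, end): the length-normalized alignment cost of the
    best match and its span [start, end) in the document.
    """
    n, m = len(query), len(doc)
    # Pairwise Euclidean distances between query and document frames.
    dist = np.linalg.norm(query[:, None, :] - doc[None, :, :], axis=2)

    acc = np.full((n, m), np.inf)        # accumulated alignment cost
    steps = np.zeros((n, m))             # path length, for normalization
    start = np.zeros((n, m), dtype=int)  # where each path entered the doc
    acc[0], steps[0], start[0] = dist[0], 1, np.arange(m)  # free start point
    for i in range(1, n):
        for j in range(m):
            # Allowed moves: vertical, diagonal, horizontal.
            cands = [(acc[i - 1, j], steps[i - 1, j], start[i - 1, j])]
            if j > 0:
                cands.append((acc[i - 1, j - 1], steps[i - 1, j - 1],
                              start[i - 1, j - 1]))
                cands.append((acc[i, j - 1], steps[i, j - 1], start[i, j - 1]))
            best, length, st = min(cands)
            acc[i, j] = dist[i, j] + best
            steps[i, j] = length + 1
            start[i, j] = st

    # Free end point: best last-row cell, cost normalized by path length.
    norm = acc[-1] / steps[-1]
    end = int(np.argmin(norm))
    return float(norm[end]), int(start[-1, end]), end + 1
```

In a real system this would be run per query against each document, and the normalized scores thresholded and filtered to keep only the final putative hits.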