Even though the extraction of salient and useful information, i.e., observation, is an elementary task for humans and animals, simulating it is still an open problem in computer vision.
I define a process to derive specific and optimal laws for extracting visual information, and thereby to model information without any constraints or a priori assumptions. I use a framework in which I develop an ecologically inspired approach to model visual information extraction. I theoretically demonstrate results previously presented: for instance, despite being fast and highly configurable, the model is as plausible as existing models designed for high biological fidelity, and it offers an adjustable trade-off between nondeterministic attentional behavior and properties of stability, reproducibility, and reactiveness.
I evaluate the model on a benchmark data set containing 300 natural images with eye-tracking data from 39 observers. According to the authors of this benchmark, it is the largest data set with so many viewers per image.
Regarding applications, I have recently focused my work on document analysis.
Document digitization has become a crucial economic issue in our society, and it is therefore necessary to be able to organize this huge volume of documents.
I have proposed a new method to automatically classify documents using a saliency-based segmentation process on the one hand, and terminology extraction and annotation on the other. The saliency-based segmentation is used to extract salient regions, and thereby logos, while the terminology approach is used to annotate them and to automatically classify the document.
The approach does not require human expertise and uses Google Images as a knowledge database. The results obtained on a real database of 1,766 documents show the relevance of the approach.
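As a rough sketch of how the segmentation step could feed the annotation step, the following NumPy snippet extracts the bounding box of the most salient region of a precomputed saliency map (the candidate logo) and hands it to an annotator. The names `salient_bbox`, `classify_document`, and the `annotate_region` callback are illustrative stand-ins, not the published implementation; the Google Images lookup itself is not reproduced here.

```python
import numpy as np

def salient_bbox(saliency_map, rel_thresh=0.5):
    """Bounding box of pixels whose saliency exceeds a fraction of the
    map's maximum -- a crude stand-in for saliency-based segmentation."""
    mask = saliency_map >= rel_thresh * saliency_map.max()
    ys, xs = np.nonzero(mask)
    # (top, left, bottom, right), half-open on bottom/right
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1

def classify_document(saliency_map, annotate_region):
    """Pipeline sketch: segment the salient region, then ask the
    annotator (e.g. an image-search lookup) for a class label."""
    bbox = salient_bbox(saliency_map)
    return annotate_region(bbox)

# Toy example: a synthetic saliency map with one hot region (the "logo").
smap = np.zeros((40, 60))
smap[5:12, 45:58] = 1.0  # top-right corner, where logos often sit
bbox = salient_bbox(smap)
```

In the real system the annotator would query an external knowledge source; here it is just a function argument, which keeps the segmentation step testable in isolation.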
I also work on robotic applications, because there is no doubt that robots are our future; to be realistic, however, they have to develop the competences and abilities to interact with us. This work introduces an attentive computational model for robots, since attention is the first step toward interaction. I propose to enhance and implement an existing real-time computational model. Intensity, color, and orientation are the features usually used; I have added information related to depth and isolation. I have built a robotic system, based on Lego Mindstorms and a Kinect, that is able to take a picture of the most interesting part of the scene.
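The feature-combination step can be sketched as follows: an Itti-style scheme where each channel (intensity, color opponency, and optionally depth) is turned into a center-surround contrast map, normalized, and summed. This is a minimal NumPy illustration under my own simplifying assumptions, not the actual robot implementation; the isolation cue and multi-scale pyramids are omitted for brevity.

```python
import numpy as np

def box_blur(img, r):
    """Mean filter of radius r (edge-padded), computed via shifted sums."""
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.zeros(img.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def center_surround(channel, r_center=1, r_surround=4):
    """Contrast between a fine and a coarse blur of the channel."""
    return np.abs(box_blur(channel, r_center) - box_blur(channel, r_surround))

def saliency(rgb, depth=None):
    """Combine intensity, color-opponency and (optionally) depth
    conspicuity maps into one normalized saliency map."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                    # red/green opponency
    by = b - (r + g) / 2.0        # blue/yellow opponency
    maps = [center_surround(intensity),
            center_surround(rg) + center_surround(by)]
    if depth is not None:         # depth treated as one more feature map
        maps.append(center_surround(depth))
    maps = [m / (m.max() + 1e-9) for m in maps]   # per-channel normalization
    s = sum(maps)
    return s / (s.max() + 1e-9)

# A bright patch on a dark background should attract the maximum.
img = np.zeros((32, 32, 3))
img[12:18, 12:18] = 1.0
s = saliency(img)
y, x = np.unravel_index(np.argmax(s), s.shape)  # most "interesting" pixel
```

In a robot, the `(y, x)` maximum of the map would drive where the camera points; adding a depth channel simply means appending one more conspicuity map before the sum.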