{"id":186,"date":"2017-08-24T15:29:19","date_gmt":"2017-08-24T14:29:19","guid":{"rendered":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/?p=186"},"modified":"2017-08-24T15:29:19","modified_gmt":"2017-08-24T14:29:19","slug":"visual-content-indexing-and-retrieval-with-psycho-visual-models","status":"publish","type":"post","link":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/?p=186","title":{"rendered":"Visual Content Indexing and Retrieval with Psycho-Visual Models"},"content":{"rendered":"<p>I&rsquo;m very proud to announce my chapter \u00a0\u00bb <strong>Information \u2013 theoretical model for saliency prediction. Application to Attentive CBIR<\/strong>\u00a0 \u00a0\u00bb in<\/p>\n<h1><a href=\"https:\/\/www.bookdepository.com\/Visual-Content-Indexing-Retrieval-with-Psycho-Visual-Models-Jenny-Benois-Pineau\/9783319576862\" target=\"_blank\" rel=\"noopener noreferrer\">Visual Content Indexing and Retrieval with Psycho-Visual Models<\/a><\/h1>\n<div class=\"author-info hidden-md\">Edited by\u00a0 <a href=\"https:\/\/www.bookdepository.com\/author\/Jenny-Benois-Pineau\">Jenny Benois-Pineau<\/a> and <a href=\"https:\/\/www.bookdepository.com\/author\/Patrick-Le-Callet\">Patrick Le Callet<\/a><\/div>\n<div class=\"author-info hidden-md\"><\/div>\n<div class=\"author-info hidden-md\">Abstract : \u00ab\u00a0This work presents an original informational approach to extracting visual information, modeling attention and evaluating the efficiency of the results.<br \/>\nAlthough the extraction of salient and useful information, i.e. observation, is an elementary task for humans and animals, its simulation is still an open problem in computer vision. In this article, we define a process to derive specific and optimal laws for extracting visual information and thereby model information without any constraints or a priori assumptions. 
Starting from a definition and measure of saliency through the prism of information theory, we present a framework in which we develop an ecologically inspired approach to modeling visual information extraction. We theoretically demonstrate some results presented previously; for instance, despite being fast and highly configurable, our model is as plausible as existing models designed for high biological fidelity. It offers an adjustable trade-off between nondeterministic attentional behavior and the properties of stability, reproducibility and reactiveness. We apply this approach to enhance performance in an object recognition task.<br \/>\nIn conclusion, this article proposes a theoretical framework to derive an optimal model validated by extensive experiments.\u00a0\u00bb<\/div>\n<div class=\"author-info hidden-md\"><\/div>\n<div class=\"author-info hidden-md\"><a href=\"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/wp-content\/uploads\/2017\/08\/9783319576862.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-medium wp-image-187\" src=\"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/wp-content\/uploads\/2017\/08\/9783319576862-200x300.jpg\" alt=\"Book cover: Visual Content Indexing and Retrieval with Psycho-Visual Models\" width=\"200\" height=\"300\" srcset=\"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/wp-content\/uploads\/2017\/08\/9783319576862-200x300.jpg 200w, https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/wp-content\/uploads\/2017\/08\/9783319576862.jpg 266w\" sizes=\"(max-width: 200px) 100vw, 200px\" \/><\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>I&rsquo;m very proud to announce my chapter \u00a0\u00bb Information \u2013 theoretical model for saliency prediction. 
Application to Attentive CBIR\u00a0 \u00a0\u00bb in Visual Content Indexing and Retrieval with Psycho-Visual Models Edited by\u00a0 Jenny Benois-Pineau and\u00a0 Patrick Le Callet Abstract : \u00ab\u00a0This work presents an original informational approach to extracting visual information, modeling attention and &hellip; <a href=\"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/?p=186\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Visual Content Indexing and Retrieval with Psycho-Visual Models<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[],"_links":{"self":[{"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=\/wp\/v2\/posts\/186"}],"collection":[{"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=186"}],"version-history":[{"count":1,"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=\/wp\/v2\/posts\/186\/revisions"}],"predecessor-version":[{"id":188,"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=\/wp\/v2\/posts\/186\/revisions\/188"}],"wp:attachment":[{"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=186"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=%2Fwp%2
Fv2%2Fcategories&post=186"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pageperso.univ-lr.fr\/vincent.courboulay\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=186"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}