Principal Investigator Pawan Sinha
Perceptual tasks such as estimation of three-dimensional structure, edge detection and image segmentation are considered to be low-level or mid-level vision problems and are traditionally approached in a bottom-up, generic and hard-wired way. However, as described above, we have found experimental evidence that suggests a top-down, learning-based scheme. To complement our empirical results, we have developed a simple computational model for incorporating learned expectations in perceptual tasks. The results generated by our model when tested on edge-detection and view-prediction tasks for three-dimensional objects are consistent with human perception and are more tolerant of input degradations than conventional bottom-up strategies. This lends support to the idea that even some of the supposedly ‘hard-wired’ perceptual skills in the human visual system might, in fact, incorporate learned top-down influences.
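The report does not spell out the model's internals, so the following is only a minimal illustrative sketch of the general idea of blending bottom-up edge evidence with a learned top-down expectation. The functions bottom_up_edges and top_down_edges, the mixing weight alpha, and the synthetic prior map are all assumptions introduced here for illustration; they are not the authors' model.

```python
import numpy as np

def bottom_up_edges(image):
    """Generic bottom-up evidence: normalized gradient-magnitude map."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)   # scale to [0, 1]

def top_down_edges(image, prior, alpha=0.5, threshold=0.3):
    """
    Combine bottom-up edge evidence with a learned prior map of where
    edges are expected for this object class (hypothetical scheme:
    a simple convex combination followed by thresholding).
    """
    evidence = bottom_up_edges(image)
    combined = (1 - alpha) * evidence + alpha * prior
    return combined > threshold

# Usage: a weak square buried in noise is hard to recover bottom-up,
# but a learned expectation of a box outline pulls the edges out.
rng = np.random.default_rng(0)
img = 0.3 * rng.standard_normal((64, 64))
img[16:48, 16:48] += 0.4              # faint square against noise

prior = np.zeros((64, 64))
prior[16:48, [16, 47]] = 1.0          # expected vertical edges
prior[[16, 47], 16:48] = 1.0          # expected horizontal edges

edges = top_down_edges(img, prior, alpha=0.6)
```

In this sketch the learned expectation simply reweights ambiguous local evidence, which is one way such a model could remain consistent with bottom-up output on clean images while degrading more gracefully on noisy ones.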