Abstract
A neural network model with an adaptive structure for image annotation is proposed in this paper. The adaptive structure enables the model to exploit both global and regional visual features, as well as the correlative information among annotated keywords. To reach an approximate global optimum rather than a local one, a genetic algorithm and the traditional back-propagation algorithm are combined for model training. The model is evaluated on a synthetic image dataset with controllable parameters, which has not been used in previous image annotation experiments. Experimental results demonstrate the effectiveness of the proposed model. Copyright © 2010 IEEE.
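The abstract describes combining a genetic algorithm with back-propagation for training. The minimal Python sketch below illustrates one way such a hybrid scheme can work: a genetic algorithm coarsely searches the weight space of a small feed-forward network, and back-propagation then fine-tunes the best candidate. The network dimensions, GA operators, and synthetic data here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of hybrid GA + back-propagation training (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_hidden, n_out):
    # Flat weight vector: input->hidden plus hidden->output (biases omitted for brevity).
    return rng.normal(0, 0.5, size=n_in * n_hidden + n_hidden * n_out)

def forward(w, X, n_in, n_hidden, n_out):
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = w[n_in * n_hidden:].reshape(n_hidden, n_out)
    H = np.tanh(X @ W1)                      # hidden activations
    Y = 1.0 / (1.0 + np.exp(-(H @ W2)))      # sigmoid outputs (keyword scores)
    return H, Y

def mse(w, X, T, dims):
    _, Y = forward(w, X, *dims)
    return float(np.mean((Y - T) ** 2))

def genetic_search(X, T, dims, pop=30, gens=50, mut=0.1):
    # Evolve weight vectors: keep the fittest half, fill the rest by crossover + mutation.
    population = [init_weights(*dims) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: mse(w, X, T, dims))
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(parents[0].size) < 0.5           # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            child = child + rng.normal(0, mut, child.size)     # Gaussian mutation
            children.append(child)
        population = parents + children
    return min(population, key=lambda w: mse(w, X, T, dims))

def backprop_finetune(w, X, T, dims, lr=0.1, epochs=200):
    # Plain gradient descent on the mean-squared error, refining the GA solution.
    n_in, n_hidden, n_out = dims
    for _ in range(epochs):
        W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
        W2 = w[n_in * n_hidden:].reshape(n_hidden, n_out)
        H, Y = forward(w, X, *dims)
        dY = (Y - T) * Y * (1 - Y)                   # sigmoid output delta
        dH = (dY @ W2.T) * (1 - H ** 2)              # tanh hidden delta
        W2 -= lr * H.T @ dY / len(X)
        W1 -= lr * X.T @ dH / len(X)
        w = np.concatenate([W1.ravel(), W2.ravel()])
    return w

if __name__ == "__main__":
    dims = (8, 6, 4)                                       # toy feature/keyword sizes (assumed)
    X = rng.random((100, dims[0]))                         # stand-in for visual features
    T = (rng.random((100, dims[2])) > 0.5).astype(float)   # stand-in for keyword labels
    w = genetic_search(X, T, dims)
    w = backprop_finetune(w, X, T, dims)
    print("final MSE:", mse(w, X, T, dims))
```

The GA stage reduces sensitivity to weight initialization, while the back-propagation stage converges quickly near the region the GA has identified; the paper's actual adaptive-structure network and fitness design differ from this toy setup.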
Original language | English |
---|---|
Title of host publication | 2010 11th International Conference on Control, Automation, Robotics and Vision (ICARCV 2010) |
Place of Publication | Piscataway, NJ |
Publisher | IEEE |
Pages | 1865-1870 |
Volume | 3 |
ISBN (Electronic) | 9781424478156, 9781424478132 |
ISBN (Print) | 9781424478149 |
DOIs | |
Publication status | Published - 2010 |
Citation
Chen, Z., Fu, H., Chi, Z., & Feng, D. (2010). A neural network model with adaptive structure for image annotation. In 2010 11th International Conference on Control, Automation, Robotics and Vision (ICARCV 2010) (Vol. 3, pp. 1865-1870). Piscataway, NJ: IEEE.

Keywords
- Image annotation
- Synthetic image dataset
- Neural networks
- Genetic algorithm
- Back-propagation training algorithm