A Graph-based Bottom-up Salient Object Detection Framework Using Objectness Cues and Multilayer Cellular Automata

Authors

  • S. Vanitha Sivagami
  • G. Ananthi

Keywords

Graph-based saliency, Image segmentation, Multilayer cellular automata, Objectness cues, Saliency map fusion, Salient object detection, Superpixel geodesic distance

Abstract

In the digital era, salient object detection (SOD), which locates and extracts the visually prominent objects in an image, plays a crucial role in various computer vision applications, including image segmentation, object recognition, and scene understanding. This work presents an effective saliency detection framework that fuses multiple complementary cues to improve detection accuracy. First, saliency maps are generated by a bottom-up, graph-based salient object detection method combined with objectness cues, which highlight potential object regions. A graph is then constructed by computing the geodesic distances among all superpixels derived from the saliency map, enabling the modeling of both global and local relationships between image regions. To further enhance the consistency and accuracy of the detected salient regions, the saliency values are optimized with a multilayer cellular automata mechanism, which propagates information across multiple layers and yields well-defined object boundaries and improved foreground–background separation. Extensive experiments on benchmark datasets demonstrate that the proposed method consistently outperforms several existing bottom-up saliency detection approaches in terms of precision, recall, and visual quality.
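The multilayer cellular automata fusion step described above can be illustrated with a minimal sketch in the spirit of Qin et al. (CVPR 2015, cited below): each saliency map is treated as a layer of cells, and every cell is repeatedly nudged toward foreground or background according to how the *other* layers classify it against their thresholds. The function name `mca_fuse`, the per-map mean threshold (the original uses an Otsu-style threshold), and the step strength `lam` are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def mca_fuse(saliency_maps, steps=10, lam=0.5):
    """Fuse several saliency maps (2-D arrays in [0, 1]) with a
    multilayer-cellular-automata-style synchronous update.

    Updates are applied in logit space so that mapping back through the
    sigmoid keeps every cell's saliency strictly inside (0, 1).
    """
    eps = 1e-6
    # convert each map to logits: l = log(p / (1 - p))
    logits = [np.log((m + eps) / (1.0 - m + eps)) for m in saliency_maps]
    n = len(logits)
    for _ in range(steps):
        maps = [1.0 / (1.0 + np.exp(-l)) for l in logits]
        # per-layer binarization threshold (mean here; Otsu in the original)
        thresholds = [m.mean() for m in maps]
        new_logits = []
        for i in range(n):
            # each other layer votes +1 (foreground) or -1 (background)
            vote = sum(np.sign(maps[k] - thresholds[k])
                       for k in range(n) if k != i)
            new_logits.append(logits[i] + lam * vote)
        logits = new_logits
    # average the refined layers into a single fused map
    return sum(1.0 / (1.0 + np.exp(-l)) for l in logits) / n
```

Because consistent layers reinforce each other's votes at every step, regions where the input maps agree are driven toward 0 or 1, which is what sharpens the foreground–background separation.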

References

J. Carreira and C. Sminchisescu, “Constrained parametric min-cuts for automatic object segmentation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 2010, pp. 3241–3248.

T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, and X. Tang, “Learning to detect a salient object,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 353–367, Feb. 2011.

P. Manipoonchelvi and K. Muneeswaran, “Region-based saliency detection,” IET Image Processing, vol. 8, no. 9, pp. 519–527, 2014.

J. Pan, E. Sayrol, X. Giro-i-Nieto, K. McGuinness, and N. E. O’Connor, “Shallow and deep convolutional networks for saliency prediction,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 598–606.

Y. Xie, H. Lu, and M.-H. Yang, “Bayesian saliency via low and mid-level cues,” IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1689–1698, 2013.

Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille, “The secrets of salient object segmentation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 2014, pp. 280–287.

S. Arivazhagan and N. Shebiah, “Object recognition using wavelet-based salient points,” The Open Signal Processing Journal, vol. 2, no. 1, pp. 14–20, Sep. 2009.

D. Zhang, J. Han, and Y. Zhang, “Supervision by fusion: Towards unsupervised learning of a deep salient object detector,” in Proc. IEEE Int. Conf. Computer Vision (ICCV), 2017, pp. 4048–4056.

N. Liu, J. Han, D. Zhang, S. Wen, and T. Liu, “Predicting eye fixations using convolutional neural networks,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 362–370.

N. Liu and J. Han, “DHSNet: Deep hierarchical saliency network for salient object detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 678–686.

V. Sivagami and N. Muthupriya, “Image classification using object attention model,” Journal of Advanced Research in Dynamical & Control Systems, vol. 11, no. 4, pp. 1109–1118, 2019.

A. Gullapelly and B. G. Banik, “Exploring the techniques for object detection, classification, and tracking in video surveillance for crowd analysis,” Indian Journal of Computer Science and Engineering, vol. 11, no. 4, pp. 321–326, 2020.

F. Joy and V. Vijayakumar, “Multiple object detection in surveillance video with domain adaptive incremental Fast R-CNN algorithm,” Indian Journal of Computer Science and Engineering, vol. 12, no. 4, pp. 1018–1026, 2021.

R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, “Frequency-tuned salient region detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 2009, pp. 1597–1604.

C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, “Saliency detection via graph-based manifold ranking,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 2013, pp. 3166–3173.

R. Achanta and S. Süsstrunk, “Saliency detection using maximum symmetric surround,” in Proc. IEEE Int. Conf. Image Processing (ICIP), Hong Kong, China, 2010, pp. 2653–2656.

M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu, “Global contrast-based salient region detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 569–582, 2015.

Y. Qin, H. Lu, Y. Xu, and H. Wang, “Saliency detection via cellular automata,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 110–119.

R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels,” EPFL Technical Report 149300, Jun. 2010.

Published

2026-02-02

How to Cite

Vanitha Sivagami, S., & Ananthi, G. (2026). A Graph-based Bottom-up Salient Object Detection Framework Using Objectness Cues and Multilayer Cellular Automata. Journal of Intelligent Data Analysis and Computational Statistics (p-ISSN: 3049-3056 E-ISSN: 3048-7080), 3(1), 1–15. Retrieved from https://matjournals.net/engineering/index.php/JoIDACS/article/view/3056