Document Type: Research Paper

Authors

1 Department of Biosystems Engineering, Faculty of Agriculture, University of Tabriz, Tabriz, Iran.

2 PhD Student, Department of Computer Engineering, University of Tabriz, Tabriz, Iran.

3 PhD Student, Department of Electrical Engineering, University of Tabriz, Tabriz, Iran.

Abstract

Introduction In many countries, more than 50% of people's food, on average, comes from cereals, and nearly 70% of the world's one billion hectares of cultivated land is devoted to cereal production. A variety of weeds grow alongside cereals in the fields and can reduce crop yield through competition for light, water, and nutrients. Eliminating weeds accurately and with minimal side effects requires timely detection with high accuracy and speed.

One of the main challenges in agriculture is controlling and eliminating weeds in grain fields. Weeds are among the most important factors affecting crop production and are the crops' chief competitors. In conventional agriculture, the entire field is sprayed to eliminate weeds, even though weeds appear scattered and in patches across the field; this heterogeneity highlights the need for precision agriculture. In addition to causing economic damage, the conventional control method can pollute the environment and even the human food chain. Research shows that losses caused by pests, diseases, and weeds can reach 40% of the global crop every year, and this percentage is predicted to increase significantly in the coming years. Moreover, according to the research of Goktoan et al., weeds cost the Australian economy an estimated $4 billion annually in lost agricultural income.

Materials and Methods Among the new approaches in this field is machine vision technology together with related methods such as deep learning object detection algorithms and convolutional neural networks (CNNs). The steps of the project included preparing data for training and evaluating the networks, applying recent object detection algorithms, using different convolutional neural networks with different characteristics to extract image features within those algorithms, and employing the Feature Pyramid Network (FPN) method in the object detection algorithms. The output of the networks was evaluated in terms of the number of detections, the exact location of each detection, and the detection time in the field.

Vision Transformers (ViTs) are based on the Transformer architecture, which was originally developed for NLP tasks. Transformers use self-attention mechanisms that allow the model to capture complex relationships between elements in a sequence; in the case of ViTs, the sequence elements are image patches. To apply the Transformer architecture to visual data, the image is divided into small, non-overlapping patches, each typically a grid of pixels. These patches are treated as the "words" of the image sequence. Positional embeddings are added to the image patches to provide spatial information to the model; they are necessary because Transformers have no built-in notion of order or spatial relationships. ViTs use multi-head self-attention to capture relationships between different image patches, and the representation of each patch is updated by attending to the other patches. Splitting the data is very important when training Vision Transformers for two reasons: (a) the model needs data to learn, and (b) held-out data is needed to measure the model, because the model may not extract the information correctly.
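The ViT pipeline described above (non-overlapping patches, a linear projection, positional embeddings, and self-attention among the patches) can be sketched in NumPy. All sizes, weights, and the single attention step below are hypothetical illustrations, not the networks or parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not from the paper).
img_size, patch_size, embed_dim = 32, 8, 64
n_patches = (img_size // patch_size) ** 2          # 16 non-overlapping patches

image = rng.standard_normal((img_size, img_size, 3))

# 1) Split the image into non-overlapping patches and flatten each one.
patches = image.reshape(img_size // patch_size, patch_size,
                        img_size // patch_size, patch_size, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(n_patches, -1)  # (16, 192)

# 2) Linearly project each flattened patch: the "words" of the image sequence.
W = rng.standard_normal((patches.shape[1], embed_dim)) * 0.02
tokens = patches @ W                               # (16, 64)

# 3) Add positional embeddings so the model knows where each patch came from.
pos_embed = rng.standard_normal((n_patches, embed_dim)) * 0.02
tokens = tokens + pos_embed

# 4) One self-attention step: every patch attends to every other patch,
#    and each patch's representation is updated from the others.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Q = tokens @ rng.standard_normal((embed_dim, embed_dim)) * 0.02
K = tokens @ rng.standard_normal((embed_dim, embed_dim)) * 0.02
V = tokens @ rng.standard_normal((embed_dim, embed_dim)) * 0.02
attn = softmax(Q @ K.T / np.sqrt(embed_dim))       # (16, 16) attention weights
out = attn @ V                                     # updated patch representations
```

A full ViT stacks many such attention blocks with multiple heads and learned weights; this sketch only makes the patch-to-sequence mapping concrete.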

Results and Discussion The best network in terms of positioning accuracy was the Vision Transformer (ViT) model, with an average accuracy of 0.95. In addition, the network considered in this research recognized 503 of the 535 target weeds, meaning it detected approximately 94% of them. The presented method reached the highest accuracy among the methods compared and detected the weeds present in the field in a much shorter time. Among the other methods, the ResNet-50 algorithm detected more than 88% of the weeds, although its execution time was about 2.5 times that of the proposed method.
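The metrics above rest on two simple computations: a detection counts as correct when its predicted box sufficiently overlaps a ground-truth box (intersection over union), and detection rate is the fraction of target weeds found. A minimal sketch, using only the counts reported above (the box coordinates are made-up examples):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Example overlap between a predicted and a ground-truth box.
overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))      # 25 / 175 ≈ 0.143

# Detection rate from the counts reported in the paper.
recall = 503 / 535                                  # ≈ 0.94
```

How exactly the paper thresholds IoU to score a detection is not specified here; the sketch only shows the standard definitions.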

When comparing the efficiency of algorithms, execution time is as important as accuracy. Using 70% of the data for training and 30% for testing, the presented algorithm detected the weeds in the field with an accuracy of over 90% in just 13 seconds.

Conclusion Today, deep learning methods are considerably more efficient than earlier approaches, so the new methods available in deep learning can be applied to good effect in the field of agriculture.

Keywords: Optimization, accuracy, agriculture, weed, deep learning

All rights reserved.

References

  1. Ahmad, A., Saraswat, D., Aggarwal, V., Etienne, A., and Hancock, B. 2021. Performance of deep learning models for classifying and detecting common weeds in corn and soybean production systems. Computers and Electronics in Agriculture, 184, 106081.
  2. Ahmadi, M., Hooshmand, A. R., BoroomandNasab, S., and Sharifi, M. A. 2019. Calibration and validation of WOFOST model for wheat in Qazvin plain. Iranian Journal of Soil and Water Research, 50(2): 329-338. (In Persian with English abstract.)
  3. Alom, M. Z., Taha, T. M., Yakopcic, C., Westberg, S., Sidike, P., Nasrin, M. S., and Asari, V. K. 2019. A state-of-the-art survey on deep learning theory and architectures. Electronics, 8(3): 292.
  4. Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., ... and Farhan, L. 2021. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1): 1-74.
  5. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... and Adam, H. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  6. Asad, M. H., and Bais, A. 2020. Weed detection in canola fields using maximum likelihood classification and deep convolutional neural network. Information Processing in Agriculture, 7(4): 535-545.
  7. Ashok Kumar, D., and Prema, P. 2013. A review on crop and weed segmentation based on digital images. Multimedia processing, communication and computing applications, 213: 279-291.
  8. Barker, J., Sarathy, S., and Tao, A. 2016. DetectNet: Deep neural network for object detection in DIGITS. Nvidia (retrieved 2016-11-30).
  9. dos Santos Ferreira, A., Freitas, D. M., da Silva, G. G., Pistori, H., and Folhes, M. T. 2017. Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture, 143: 314-324.
  10. Dyrmann, M., Jørgensen, R. N., and Midtiby, H. S. 2017. RoboWeedSupport-Detection of weed locations in leaf occluded cereal crops using a fully convolutional neural network. Advances in Animal Biosciences, 8(2): 842-847.
  11. Eyre, M. D., Critchley, C. N. R., Leifert, C., and Wilcockson, S. J. 2011. Crop sequence, crop protection and fertility management effects on weed cover in an organic/conventional farm management trial. European Journal of Agronomy, (3): 153-162.
  12. Ferentinos, K. P. 2018. Deep learning models for plant disease detection and diagnosis. Computers and electronics in agriculture. 145: 311-318.
  13. Gao, J., French, A. P., Pound, M. P., He, Y., Pridmore, T. P., and Pieters, J. G. 2020. Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields. Plant Methods, 16(1): 1-12.
  14. Ghosh, A., Sufian, A., Sultana, F., Chakrabarti, A., and De, D. 2020. Fundamental concepts of convolutional neural network. In Recent trends and advances in artificial intelligence and Internet of Things, 172: 519-567.
  15. Gianessi, L. P., and Reigner, N. P. 2007. The value of herbicides in US crop production. Weed Technology, 21(2): 559-566.
  16. Gupta, G. R., Oomman, N., Grown, C., Conn, K., Hawkes, S., Shawar, Y. R., ... and Darmstadt, G. L. 2019. Gender equality and gender norms: framing the opportunities for health. The Lancet, 393(10190): 2550-2562.
  17. Hamuda, E., Glavin, M., and Jones, E. 2016. A survey of image processing techniques for plant extraction and segmentation in the field. Computers and Electronics in Agriculture, 125: 184-199.
  18. He, K., Zhang, X., Ren, S., and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770-778.
  19. Hernández-Hernández, J. L., Garcia-Mateos, G., González-Esquiva, J. M., Escarabajal-Henarejos, D., Ruiz-Canales, A., and Molina-Martinez, J. M. 2016. Optimal color space selection method for plant/soil segmentation in agriculture. Computers and Electronics in Agriculture, 122: 124-132.
  20. Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 37: 448-456.
  21. Islam, N., Rashid, M. M., Wibowo, S., Xu, C. Y., Morshed, A., Wasimi, S. A., ... and Rahman, S. M. 2021. Early weed detection using image processing and machine learning techniques in an Australian chilli farm. Agriculture, 11(5).
  22. Jabir, B., and Falih, N. 2022. Deep learning-based decision support system for weeds detection in wheat fields. International Journal of Electrical and Computer Engineering, 12(1): 816.
  23. Krizhevsky, A., Sutskever, I., and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25.
  24. Le, V. N. T., Truong, G., and Alameh, K. 2021. Detecting weeds from crops under complex field environments based on Faster RCNN. Eighth International Conference on Communications and Electronics, 21: 350-355.
  25. Radoglou-Grammatikis, P., Sarigiannidis, P., Lagkas, T., and Moscholios, I. 2020. A compilation of UAV applications for precision agriculture. Computer Networks, 172, 107148.
  26. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. IEEE conference on computer vision and pattern recognition, 2818-2826.