
RGB Imaging Combined With Machine Learning For Oilseed Rape Phenotyping

Posted on: 2021-04-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Alwaseela Abdalla Mohamed Hass
Full Text: PDF
GTID: 1483306545468414
Subject: Agricultural Electrification and Automation
Abstract/Summary:
Image-based plant phenotyping plays an essential role in accelerating plant breeding in large-scale experiments. Supported by computer vision and machine learning techniques, it has been widely used to provide high-throughput, non-destructive, and accurate estimation of biochemical and biophysical crop traits such as vegetation fraction, plant biomass, height, and resistance to biotic/abiotic stresses. However, applying computer vision to image-based plant phenotyping remains a great challenge, particularly under field conditions, because of variations in illumination, overlap between plants and weeds, and the complexity of plant architecture. Furthermore, quantifying dynamic traits, such as plant growth and transient responses to environmental stressors, requires large image datasets acquired at high temporal and spatial resolution. Analyzing and handling such large datasets requires sophisticated machine learning algorithms that provide both accurate and fast image processing. Therefore, this study aimed to develop reliable, automated, and fast algorithms for processing RGB images in plant phenotyping applications. To achieve this goal, large-scale oilseed rape experiments were conducted at the Agricultural Research Station of Zhejiang University, Hangzhou, China. Both traditional machine learning algorithms and deep learning techniques were applied to the computer vision problems involved.

One of the most critical and challenging tasks in automated plant phenotyping under field conditions is image color calibration. To fully automate the color calibration process, we developed a deep learning-based framework combined with a k-means algorithm to produce images with consistent color and to minimize the difference between the measured color values and the standard values supplied by X-Rite. Compared with statistical methods, our deep learning framework delivered state-of-the-art calibration performance, with a color error of 16.23. We also demonstrated that the model is suitable for real-time application, with a calibration time of less than 0.15 s per image.

The second challenge is in-field image segmentation. To address it, unsupervised classifiers, including k-means, the Gaussian mixture model, fuzzy c-means, and an artificial neural network (the self-organizing map, SOM), were applied to separate the plants from the background. Their performance was improved by using a genetic algorithm to select superior color features extracted from different color models. The results demonstrated that the improved unsupervised models could efficiently separate plants from the background in an automated fashion. The SOM provided the best segmentation accuracy (96%) among the algorithms tested, and the selected color features outperformed traditional color features and were robust to variations in illumination.

We also proposed an efficient deep learning method for images with high weed density. Specifically, we exploited existing pre-trained convolutional neural networks (CNNs) under three training scenarios: full training from scratch, fine-tuning the weights, and using the CNN only as a feature extractor, i.e., transfer learning. All three scenarios performed well in separating the crops of interest in images with high weed pressure. However, the study showed that fully training a CNN from scratch is rarely feasible, as it typically requires a large annotated image dataset and is computationally expensive. The results indicate that fine-tuning a pre-trained CNN tends to perform best while reducing both the required training data and the training time. In fact, combining features extracted from a fine-tuned model with a support vector machine achieved the best overall segmentation accuracy of 96%, with a segmentation time of less than 0.05 s per image. This scenario is therefore suitable for real-time field applications and can be incorporated into any vision-guided phenotyping system for detecting crops against a complex background.

Finally, a deep learning-enabled dynamic model was used to diagnose the nutrient status of oilseed rape from RGB images collected in the field at different growth stages over a two-year experiment. First, the color of the canopy images was calibrated as described above. Second, the deep learning segmentation method was used to detect the oilseed rape in each image. Different CNN architectures (AlexNet, VGG, SqueezeNet, ResNet18, ResNet101, and Inception) were then used to extract representative features from the sequential image dataset, and these features were fed into a long short-term memory (LSTM) network to classify the plants by nutrient status. The contribution of the deep learning features was investigated by replacing them with traditional hand-crafted features, and the advantage of modeling temporal information was assessed by replacing the LSTM with a conventional multiclass support vector machine (MCSVM). The study revealed that deep features combined with the LSTM discriminated well between plants grown under different nutrient treatments at various levels. Among the model configurations, the highest overall classification accuracy, ranging between 92.14% and 95.37%, was obtained by the Inceptionv3-LSTM, which also generalized well when tested on an independent dataset, with an overall accuracy of 95.37%.

This research introduced state-of-the-art machine learning and computer vision methods for processing RGB images in high-throughput plant phenotyping and demonstrated their applicability for monitoring plant nutrient status under field conditions. More research is needed to integrate these models into platforms for real-time field applications.
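As a simple point of reference for the calibration step, a classical baseline for mapping measured ColorChecker patch values to their reference values is a least-squares affine color-correction matrix. The sketch below is a hypothetical NumPy illustration of that baseline, not the deep learning framework developed in the thesis; the function names and the affine model are assumptions.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit a 4x3 affine color-correction matrix by least squares.

    measured, reference: (N, 3) arrays of RGB patch values in [0, 1],
    e.g. the 24 patches of an X-Rite ColorChecker.
    """
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])  # add bias column
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)           # (4, 3)
    return M

def apply_color_correction(image_rgb, M):
    """Apply the fitted matrix to an (H, W, 3) float image in [0, 1]."""
    h, w, _ = image_rgb.shape
    flat = image_rgb.reshape(-1, 3)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))])
    return np.clip(flat @ M, 0.0, 1.0).reshape(h, w, 3)
```

The deep learning approach in the thesis would replace this single global matrix with a learned, image-dependent correction, which is what makes it robust to changing field illumination.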
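A minimal stand-in for the unsupervised segmentation step is two-cluster k-means on a single greenness feature, taking the greener cluster as plant. This is a deliberately simplified sketch, not the GA-enhanced multi-feature pipeline of the thesis; the choice of the excess-green (ExG) index as the only feature, the deterministic initialization, and the function names are assumptions.

```python
import numpy as np

def excess_green(image_rgb):
    """ExG = 2G - R - B, a common greenness index for vegetation."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    return 2.0 * g - r - b

def kmeans_plant_mask(image_rgb, n_iter=20):
    """Two-cluster k-means on ExG; the greener cluster is labelled plant."""
    feat = excess_green(image_rgb).reshape(-1, 1)
    # Deterministic init: one center at the least green value, one at the greenest.
    centers = np.array([[feat.min()], [feat.max()]])
    for _ in range(n_iter):
        labels = np.abs(feat - centers.T).argmin(axis=1)  # assignment step
        for k in range(2):                                # update step
            if np.any(labels == k):
                centers[k] = feat[labels == k].mean()
    plant_cluster = centers.ravel().argmax()              # greener center = plant
    return (labels == plant_cluster).reshape(image_rgb.shape[:2])
```

The SOM, GMM, and fuzzy c-means variants studied in the thesis differ mainly in the assignment rule; the overall assign/update loop has the same shape.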
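The transfer-learning scenario, i.e. a frozen or fine-tuned CNN producing feature vectors that a shallow classifier then separates, can be sketched by training only the classifier head. The thesis uses a support vector machine; below, a logistic-regression head trained by gradient descent stands in for it, with the CNN features assumed to be precomputed. All names here are illustrative assumptions.

```python
import numpy as np

def train_linear_classifier(features, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on precomputed CNN feature vectors.

    features: (N, D) array of deep features; labels: (N,) array of 0/1
    (e.g. 0 = weed, 1 = crop). Stands in for the SVM head in the thesis.
    """
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # predicted probabilities
        grad = p - labels                              # gradient of log loss
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Classify feature vectors with the trained head (0/1 labels)."""
    return (features @ w + b > 0).astype(int)
```

Because only the small head is trained, this scenario needs far less annotated data and compute than training the CNN from scratch, which is the trade-off the abstract highlights.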
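For the dynamic nutrient-status model, the LSTM stage consumes one CNN feature vector per growth-stage image and summarizes the sequence in its final hidden state. The forward pass below sketches the standard LSTM gate arithmetic only; the weight shapes, gate ordering, and omission of training are assumptions, not details from the thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, Wx, Wh, bias):
    """Run one LSTM layer over a sequence of per-image CNN features.

    x_seq: (T, D) features, one row per growth-stage image.
    Wx: (D, 4H), Wh: (H, 4H), bias: (4H,). Gate order assumed: i, f, g, o.
    Returns the final hidden state (H,), which a softmax head would classify.
    """
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in x_seq:
        i, f, g, o = np.split(x @ Wx + h @ Wh + bias, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
        c = f * c + i * np.tanh(g)                    # update cell state
        h = o * np.tanh(c)                            # emit hidden state
    return h
```

Replacing this recurrence with an MCSVM on pooled features, as the comparison in the abstract does, discards exactly the temporal dependence that the cell state carries across growth stages.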
Keywords/Search Tags: Machine Learning, Deep Learning, RGB Image, Plant Phenotyping, Long Short-Term Memory