Precision measurement techniques are growing in popularity in the agricultural industry, and many companies, such as John Deere, are pushing to help farmers improve product quality and the effectiveness of land use. Though novel, targeted measurement techniques and automated equipment have been made commercially available to farmers, UAV-based imaging techniques remain a research concept with few examples of practical implementation. This project explores an image processing strategy that could greatly improve the practicality of existing research concepts in UAV imaging as applied to agriculture.
The code used for this project can be found on GitHub.
This project was done with guidance from the Ohio State University ReRout Laboratory, and the assumptions made in this analysis reflect that lab’s research practices.
The most developed precision agriculture practice involving UAV and drone technology is the automated location of potential crop disease within a field. In this process, aerial images are collected from above the crop field during a UAV mission, the UAV returns to the mission launch location, and the images are processed at an edge device to detect areas of the field that could be affected by one of many diseases.
This process is effective at collecting valuable research data; however, it is not practical for commercial implementation for several reasons:
- A skilled researcher to operate the UAV
- Time-consuming missions
- Crop-specific algorithms
A Skilled Researcher to Operate the UAV
In order to properly fly a UAV over the crop field under test, a skilled researcher with domain knowledge must fly the drone, offload the collected data to an edge device, and run classification algorithms on that data. Having a skilled researcher present for every mission is expensive and hard to justify in a commercial application.
Time-Consuming Missions
In order to cover the entirety of a crop field, many individual missions must be conducted because of computing resource constraints, crop rotation, and field domain knowledge. The cost and time of analyzing a field for a customer depend heavily on the number of missions required to cover the entire area.
Crop-Specific Algorithms
The algorithms run at the edge for disease detection are crop specific, and many fields contain multiple species due to crop rotation and mixed-use agricultural land. If a single mission or image to be analyzed contains multiple crops, the time required to classify each image increases because multiple models are needed to fully classify it.
Given these constraints, it would be more practical in a commercial application for researchers to be able to differentiate between crops while the drone is flying, ensuring that each mission only contains images of a particular type of crop. To achieve this, an object detection algorithm could be used in real time to direct the UAV so that it only captures images of the desired species. Object detection is a type of image processing algorithm that can localize a target object within a given image.
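To make the idea concrete, here is a minimal, hypothetical sketch of how a detection result could drive a course correction. The detection tuple format and the `course_correction` function are assumptions for illustration, not part of this project's codebase; they mimic YOLO-style normalized bounding-box outputs.

```python
# Hypothetical sketch: steer the UAV toward detections of the target crop.
# Assumes each detection is (class_name, confidence, x_center, y_center, w, h)
# with coordinates normalized to [0, 1], as YOLO-style outputs typically are.

def course_correction(detections, target_class):
    """Return a horizontal offset in [-0.5, 0.5] toward the highest-confidence
    detection of target_class, or None if the class is not detected."""
    candidates = [d for d in detections if d[0] == target_class]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d[1])  # highest confidence wins
    x_center = best[2]
    return x_center - 0.5                       # negative = steer left

detections = [
    ("broccoli", 0.91, 0.30, 0.40, 0.10, 0.12),
    ("soy",      0.88, 0.70, 0.55, 0.20, 0.15),
]
offset = course_correction(detections, "broccoli")  # negative: steer left
```

A real controller would translate this offset into a yaw or lateral velocity command through the drone's flight SDK, but the filtering-by-class step is the part this project's detector enables.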
There are many existing object detection algorithms that could be used to achieve this. After computing constraints were considered, YOLOv4 Tiny was chosen as the foundation for this problem. YOLOv4 Tiny is available via the Darknet framework, and pretrained weights are available to speed up transfer learning. There are several advantages to using YOLOv4 Tiny:
- High frame rate on the COCO dataset
- Low training time
- Fewer layers than other architectures
These attributes are extremely important for a UAV-native application given the computing hardware constraints of commercially available drone technology. The figure below compares YOLOv4 Tiny’s detection speed on the COCO dataset with that of other versions of YOLO.
The dataset used to train the YOLOv4 Tiny model was obtained from the Eden Library, an organization that hosts agricultural image data. The specific dataset used contained 67 unique images of a mixed-use crop field that included broccoli crops, soy crops, and non-agricultural land. An example of one of the images in this dataset can be seen below.
Each image from this dataset was labeled manually to define regions containing soy crops and regions containing broccoli crops. This dataset is rather small in comparison to similar datasets used with object detection algorithms, so a process called image augmentation was used to create additional images with variations in pixel noise, rotation, and contrast. The resulting dataset after augmentation consisted of 134 unique images used to train the YOLOv4 Tiny model.
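The three augmentations mentioned above can be sketched with plain NumPy array operations. This is only an illustration of the transformations; the actual augmentation in this project was performed by tooling (and a rotated image would also require rotating its bounding-box labels).

```python
# Illustrative sketch of the three augmentations described above, using NumPy.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, sigma=10.0):
    """Add Gaussian pixel noise, clipped back to valid 8-bit range."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def rotate_90(img):
    """Rotate the image 90 degrees (labels must be rotated to match)."""
    return np.rot90(img)

def adjust_contrast(img, factor=1.3):
    """Scale pixel values about the image mean to raise contrast."""
    mean = img.mean()
    out = (img.astype(np.float64) - mean) * factor + mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Dummy 416x416 RGB image standing in for a field photograph
image = rng.integers(0, 256, size=(416, 416, 3), dtype=np.uint8)
augmented = [add_noise(image), rotate_90(image), adjust_contrast(image)]
```

Each original image thus yields several variants, which is how 67 source images grew into the 134-image training set.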
The model was trained using Google Colab, and the data was stored and transferred to the Google Colab environment using Roboflow, an image processing and storage tool. The training was conducted with the following division of data:
- 70% – used in training
- 20% – used for validation during training
- 10% – reserved for final test results
The hardware used to test these results was a Tesla P100 graphics processor with 16 GB of RAM and an NVIDIA Pascal architecture with 3920 theoretical cores. This hardware has significantly more computing power than the DJI Manifold, but it was the closest comparable architecture I could access for this project. The mean average precision (mAP) score, which averages the area under each class’s precision–recall curve across all classes, was used to evaluate the accuracy of each iteration. The mAP score per iteration can be seen in the figure below.
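For a single class, the average precision underlying the mAP score can be sketched as below. This is a simplified, non-interpolated version of the metric (Darknet's own mAP calculation uses interpolated precision), shown only to make the definition concrete.

```python
# Simplified sketch of average precision (AP) for a single class; mAP is the
# mean of this value over all classes. Detections are assumed sorted by
# confidence, and `is_tp` marks whether each matched a ground-truth box.

def average_precision(is_tp, n_ground_truth):
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for hit in is_tp:                    # iterate in confidence order
        tp += hit
        fp += 1 - hit
        precision = tp / (tp + fp)
        recall = tp / n_ground_truth
        ap += (recall - prev_recall) * precision  # area under the PR curve
        prev_recall = recall
    return ap

# Four detections scored against three ground-truth boxes:
print(average_precision([1, 1, 0, 1], n_ground_truth=3))  # ≈ 0.917
```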
After 4000 iterations of training and validation, the best-performing weights achieved an mAP of 91.34%. The weights from this iteration were used to classify the remaining 10% of images reserved for testing, and an example of the classification results can be seen below.
After the test image results were collected, the model was also applied to a video containing no frames used in training. The model classified the video frames at a rate greater than 60 frames per second (the speed of the recording), suggesting that, with a Tesla P100, the model could classify video data in real time.
Though this model achieved high-speed classification on a Tesla P100, the DJI Manifold hardware used in UAV-based precision agriculture applications does not have access to such a GPU. The Manifold contains a processor with an NVIDIA Kepler architecture offering 192 theoretical CUDA cores, as opposed to the Tesla P100’s 3920. With this limitation, it can be estimated that the DJI Manifold would achieve a frame rate of up to about 3 frames per second. While this is much slower than the flashy real-time object detection demo videos found in other projects, it is still fast enough to correct the course of a drone and ensure that it remains on target. With research and development consistently improving UAV and drone hardware, the performance of such models on drones can only be expected to improve. For now, this technique seems like a viable option to bridge the gap between precision agriculture research and the practicality of customer needs.
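The 3 frames-per-second figure above follows from a back-of-envelope scaling argument, assuming inference throughput is roughly proportional to CUDA core count. That is a simplification (clock speed, memory bandwidth, and GPU generation all matter), but the arithmetic is:

```python
# Back-of-envelope frame-rate estimate: scale the measured P100 throughput
# by the ratio of CUDA core counts. A simplification, since clock speed,
# memory bandwidth, and architecture generation also affect throughput.

p100_cores = 3920        # Tesla P100 core count used in this project
manifold_cores = 192     # DJI Manifold's Kepler-based GPU
p100_fps = 60            # measured classification rate on the P100

estimated_fps = p100_fps * manifold_cores / p100_cores
print(round(estimated_fps, 2))   # → 2.94
```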