GLORIA

GEOMAR Library Ocean Research Information Access


Filter
  • MDPI AG  (10)
  • dos Santos, Filipe Neves  (10)
  • 1
    In: Computation, MDPI AG, Vol. 9, No. 12 ( 2021-11-29), p. 127-
    Abstract: Robotics navigation and perception for forest management are challenging due to the many obstacles to detect and avoid and the sharp illumination changes. Advanced perception systems are needed because they enable the development of robotic and machinery solutions for smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.
    Type of Medium: Online Resource
    ISSN: 2079-3197
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    ZDB ID: 2723192-6
  • 2
    In: Robotics, MDPI AG, Vol. 11, No. 6 ( 2022-11-27), p. 136-
    Abstract: Object identification, such as tree trunk detection, is fundamental for forest robotics. Intelligent vision systems are of paramount importance for improving robotic perception, thus enhancing the autonomy of forest robots. To that end, this paper presents three contributions: an open dataset of 5325 annotated forest images; a tree trunk detection Edge AI benchmark of 13 deep learning models evaluated on four edge devices (CPU, TPU, GPU and VPU); and a tree trunk mapping experiment using an OAK-D as the sensing device. The results showed that YOLOR was the most reliable trunk detector, achieving a maximum F1 score of around 90% while maintaining high scores across different confidence levels; in terms of inference time, YOLOv4 Tiny was the fastest model, attaining 1.93 ms on the GPU. YOLOv7 Tiny presented the best trade-off between detection accuracy and speed, with average inference times under 4 ms on the GPU across different input resolutions while achieving an F1 score similar to YOLOR's. This work will enable the development of advanced artificial vision systems for robotics in forestry monitoring operations.
    Type of Medium: Online Resource
    ISSN: 2218-6581
    Language: English
    Publisher: MDPI AG
    Publication Date: 2022
    ZDB ID: 2662587-8
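The F1 score used to compare the trunk detectors above is the harmonic mean of precision and recall. As a minimal sketch (the detection counts below are illustrative, not taken from the paper), it can be computed directly from true positives, false positives and false negatives at a given confidence threshold:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall from raw detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts at one confidence threshold:
# 900 trunks found, 150 false alarms, 50 trunks missed.
score = f1_score(900, 150, 50)  # ≈ 0.9
```

Repeating this at several confidence thresholds yields the per-threshold scores that such benchmarks compare across models.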
  • 3
    In: Journal of Imaging, MDPI AG, Vol. 7, No. 9 ( 2021-09-03), p. 176-
    Abstract: Mobile robotics in forests is currently a hugely important topic due to recurring forest wildfires, which make on-site management of forest inventory and biomass necessary. To tackle this issue, this work presents a study on ground-level detection of forest tree trunks in visible and thermal images using deep learning-based object detection methods. For this purpose, a forestry dataset composed of 2895 images was built and made publicly available. Using this dataset, five models were trained and benchmarked to detect the tree trunks: SSD MobileNetV2, SSD Inception-v2, SSD ResNet50, SSDLite MobileDet and YOLOv4 Tiny. Promising results were obtained; for instance, YOLOv4 Tiny achieved the highest AP (90%) and F1 score (89%). The inference time of these models was also evaluated on CPU and GPU; the results showed that YOLOv4 Tiny was the fastest detector running on GPU (8 ms). This work will enhance the development of vision perception systems for smarter forestry robots.
    Type of Medium: Online Resource
    ISSN: 2313-433X
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    ZDB ID: 2824270-1
  • 4
    In: Agronomy, MDPI AG, Vol. 11, No. 2 ( 2021-02-02), p. 279-
    Abstract: Sap flow measurements of trees are today the most common method to determine evapotranspiration at the tree and forest/crop canopy level. They provide independent measurements for flux comparisons and model validation. The most common approach to measuring sap flow is based on intrusive solutions with heaters and thermal sensors. This sap flow sensor technology is not very reliable beyond a single growing season; it is intrusive and not adequate for trees with low trunk diameters. The non-invasive methods comprise mostly radio-frequency (RF) technologies, typically using satellite or airborne sources. Such systems can monitor large fields but cannot measure sap levels of a single plant (precision agriculture). This article studies the hypothesis of using the RF signal attenuation principle to detect variations in the quantity of water present in a single plant. It presents a well-defined experiment to measure water content in leaves by means of high-gain RF antennas, a spectrometer, and a robotic arm. Moreover, a similar concept is studied with an off-the-shelf radar solution from the automotive industry to detect changes in the water present in a single plant and leaf. The conclusions indicate a novel potential application of this technology to precision agriculture, as the experimental data are directly related to sap flow variations in the plant.
    Type of Medium: Online Resource
    ISSN: 2073-4395
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    ZDB ID: 2607043-1
    SSG: 23
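The attenuation principle the article builds on is straightforward: water absorbs RF energy, so the more water a plant holds, the less power reaches the receiving antenna. A hedged sketch of the decibel computation only (the power values are invented for illustration):

```python
import math

def attenuation_db(p_transmitted_mw: float, p_received_mw: float) -> float:
    """Signal attenuation in decibels from transmitted vs. received power."""
    return 10.0 * math.log10(p_transmitted_mw / p_received_mw)

# A wetter leaf absorbs more energy, so measured attenuation rises.
drier = attenuation_db(100.0, 25.0)   # ≈ 6.0 dB
wetter = attenuation_db(100.0, 5.0)   # ≈ 13.0 dB
```

Relating such attenuation readings to actual water content would still require calibration against a reference measurement, which is what the article's experimental setup investigates.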
  • 5
    In: Agronomy, MDPI AG, Vol. 12, No. 2 ( 2022-01-31), p. 356-
    Abstract: The harvesting operation is a recurring task in the production of any crop, making it an excellent candidate for automation. In protected horticulture, one of the crops with high added value is the tomato; however, its robotic harvesting is still far from maturity. That said, the development of an accurate fruit detection system is a crucial step towards fully automated robotic harvesting. Deep Learning (DL) and detection frameworks like the Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO) are robust and accurate alternatives that respond better to highly complex scenarios. DL can easily be used to detect tomatoes, but when their classification is intended the task becomes harder, demanding a huge amount of data. Therefore, this paper proposes the use of DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect the tomatoes and compares those systems with a proposed histogram-based HSV colour space model that classifies each tomato and determines its ripening stage, using two acquired image datasets. Regarding detection, both models obtained promising results, with the YOLOv4 model standing out with an F1-score of 85.81%. For the classification task, YOLOv4 was again the best model, with a macro F1-score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to the YOLOv4 model, with a balanced accuracy of 68.10%.
    Type of Medium: Online Resource
    ISSN: 2073-4395
    Language: English
    Publisher: MDPI AG
    Publication Date: 2022
    ZDB ID: 2607043-1
    SSG: 23
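The histogram-based HSV colour space model itself is not detailed in the abstract; as a rough illustration of the general idea only, here is a per-pixel hue vote over assumed ripeness bands (the hue thresholds and pixel values are invented, not the paper's):

```python
import colorsys

# Hypothetical hue bands in degrees for each ripening stage; the paper's
# actual histogram thresholds are not given here.
STAGE_HUE_BANDS = {"green": (60, 180), "turning": (20, 60), "ripe": (0, 20)}

def classify_ripeness(rgb_pixels):
    """Vote each pixel's hue into a stage band; return the majority stage."""
    votes = {stage: 0 for stage in STAGE_HUE_BANDS}
    for r, g, b in rgb_pixels:
        hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hue_deg = hue * 360.0
        for stage, (lo, hi) in STAGE_HUE_BANDS.items():
            if lo <= hue_deg < hi:
                votes[stage] += 1
                break
    return max(votes, key=votes.get)

# A mostly red patch with a little foliage green lands in "ripe".
patch = [(200, 30, 30)] * 8 + [(40, 160, 40)] * 2
```

A real implementation would build a hue histogram per detected tomato region rather than voting raw pixels, but the decision rule is the same in spirit.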
  • 6
    In: Agriculture, MDPI AG, Vol. 11, No. 3 ( 2021-03-04), p. 208-
    Abstract: Smart and precision agriculture concepts require the farmer to measure all relevant variables continuously and process this information to build better prescription maps and predict crop yield. These maps feed machinery with variable rate technology to apply the correct amount of product at the right time and place, improving farm profitability. One of the most relevant pieces of information for estimating farm yield is the Leaf Area Index. Traditionally, this index can be obtained from manual measurements or from aerial imagery: the former is time-consuming and the latter requires drones or aerial services. This work presents an optical sensing-based hardware module that can be attached to existing autonomous or guided terrestrial vehicles. During normal operation, the module collects periodic geo-referenced monocular images and laser data. From that data, a suggested processing pipeline, based on open-source software and composed of Structure from Motion, Multi-View Stereo and point cloud registration stages, can extract the Leaf Area Index and other crop-related features. Additionally, a benchmark of software tools is made. The hardware module and pipeline were validated on real data acquired in two vineyards, in Portugal and Italy. A dataset with sensory data collected by the module was made publicly available. Results demonstrated that the system provides reliable and precise data on the surrounding environment and that the pipeline is capable of computing volume and occupancy area from the acquired data.
    Type of Medium: Online Resource
    ISSN: 2077-0472
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    ZDB ID: 2651678-0
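The pipeline's final volume and occupancy outputs can be obtained in more than one way; one simple stand-in (not necessarily the authors' method) is to voxelise the registered point cloud and count occupied cells. The grid size and points below are illustrative:

```python
def occupancy_stats(points, voxel_size=0.1):
    """Estimate occupied volume and ground-projected area of a point cloud
    by counting distinct voxels and distinct ground cells."""
    voxels = {tuple(int(c // voxel_size) for c in p) for p in points}
    ground_cells = {(vx, vy) for vx, vy, _ in voxels}
    volume = len(voxels) * voxel_size ** 3
    area = len(ground_cells) * voxel_size ** 2
    return volume, area

# Five illustrative points: four share one 10 cm voxel, one sits next door.
pts = [(0.01, 0.02, 0.03), (0.04, 0.05, 0.06), (0.02, 0.08, 0.01),
       (0.09, 0.01, 0.09), (0.15, 0.02, 0.03)]
```

Finer grids trade noise sensitivity for resolution; canopy-volume tools typically expose the cell size as a tuning parameter.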
  • 7
    In: Robotics, MDPI AG, Vol. 9, No. 4 ( 2020-11-21), p. 97-
    Abstract: Research and development of autonomous mobile robotic solutions that can perform several active agricultural tasks (pruning, harvesting, mowing) has been growing. Robots are now used for a variety of tasks such as planting, harvesting, environmental monitoring, and the supply of water and nutrients. To do so, robots need to be able to perform online localization and, if desired, mapping. The most used approach for localization in agricultural applications is based on standalone Global Navigation Satellite System-based systems. However, in many agricultural and forest environments, satellite signals are unavailable or inaccurate, which leads to the need for advanced solutions independent of these signals. Approaches like simultaneous localization and mapping and visual odometry are the most promising solutions for increasing localization reliability and availability. In this context, this work proposes an analysis of the current state of the art of localization and mapping approaches in agriculture and forest environments. Additionally, an overview of the datasets available for developing and testing these approaches is performed. Finally, a critical analysis of this research field is done, with the characterization of the literature using a variety of metrics. The main conclusion is that few methods can simultaneously achieve the desired goals of scalability, availability, and accuracy, due to the challenges imposed by these harsh environments. In the near future, novel contributions based on 3D localization and on semantic and topological mapping are expected to help achieve these goals.
    Type of Medium: Online Resource
    ISSN: 2218-6581
    Language: English
    Publisher: MDPI AG
    Publication Date: 2020
    ZDB ID: 2662587-8
  • 8
    In: Agronomy, MDPI AG, Vol. 11, No. 9 ( 2021-09-21), p. 1890-
    Abstract: The agricultural sector plays a fundamental role in our society, and it is increasingly important to automate its processes, which can have beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards at different growth stages: the early stage just after the bloom and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision of up to 66.96%. Since this approach uses a low-cost, low-power hardware device that requires simplified models with 8-bit quantization, the obtained performance was satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than just after the bloom, since the second class represents smaller grape bunches, with a colour and texture more similar to the surrounding foliage, which complicates their detection.
    Type of Medium: Online Resource
    ISSN: 2073-4395
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    ZDB ID: 2607043-1
    SSG: 23
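The intersection-over-union (IoU) threshold that the benchmark varies decides when a predicted box counts as a match for a ground-truth grape bunch. A minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2), with illustrative boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two unit squares overlapping in a 0.5 x 1.0 strip: IoU = 0.5 / 1.5 ≈ 0.33,
# so this pair matches at a 0.25 threshold but not at 0.5.
overlap = iou((0, 0, 1, 1), (0.5, 0, 1.5, 1))
```

Raising the threshold demands tighter localisation, which is why precision figures are usually reported as a function of it.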
  • 9
    In: Agronomy, MDPI AG, Vol. 13, No. 2 ( 2023-02-04), p. 463-
    Abstract: The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops’ phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be error-prone, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot MultiBox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark pairs each model individually with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approaching a real mixed cropping system. Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crop phenological research, a pivotal step towards automating decision support systems for precision horticulture.
    Type of Medium: Online Resource
    ISSN: 2073-4395
    Language: English
    Publisher: MDPI AG
    Publication Date: 2023
    ZDB ID: 2607043-1
    SSG: 23
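Balanced accuracy, one of the metrics reported for YOLO v4, is the unweighted mean of per-class recalls, so a rare crop class counts as much as a common one. A minimal sketch with made-up labels (two hypothetical phenological stages):

```python
def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recalls."""
    classes = sorted(set(y_true))
    recalls = []
    for cls in classes:
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        support = sum(1 for t in y_true if t == cls)
        recalls.append(hits / support)
    return sum(recalls) / len(classes)

# Illustrative labels: the common class is predicted well, the rare one
# poorly, and balanced accuracy exposes that: (7/8 + 1/2) / 2 = 0.6875.
y_true = ["emergence"] * 8 + ["harvest"] * 2
y_pred = ["emergence"] * 7 + ["harvest", "harvest", "emergence"]
score = balanced_accuracy(y_true, y_pred)  # 0.6875
```

Plain accuracy on the same labels would be 0.8, hiding the weak rare-class performance that balanced accuracy surfaces.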
  • 10
    In: Sensors, MDPI AG, Vol. 21, No. 10 ( 2021-05-20), p. 3569-
    Abstract: The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably at any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato at any stage of its life cycle (from flower to ripe tomato). The state of the art for visual tomato detection focuses mainly on ripe tomatoes, whose colour is distinct from the background. This paper contributes an annotated visual dataset of green and reddish tomatoes; this kind of dataset is uncommon and was not available for research purposes. It will enable further developments in edge artificial intelligence for the in situ, real-time visual tomato detection required for the development of harvesting robots. Using this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Considering our robotic platform specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46% and an inference time of 16.44 ms on an NVIDIA Turing architecture platform (an NVIDIA Tesla T4 with 12 GB). YOLOv4 Tiny also had impressive results, particularly its inference times of about 5 ms.
    Type of Medium: Online Resource
    ISSN: 1424-8220
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    ZDB ID: 2052857-7