CLC number: TP391.4    Document code: A    Article ID: 1674-2605(2023)05-0001-07
DOI:10.3969/j.issn.1674-2605.2023.05.001
Picking Location and Navigation Methods for Vision-based Picking Robots
MENG Hewei1, ZHOU Xinzhao1,2, WU Fengyun3,4, ZOU Tianlong2
(1.College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832000, China
2.Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture, Foshan 528010, China
3.Guangzhou College of Commerce, Guangzhou 511363, China
4.College of Engineering, South China Agricultural University, Guangzhou 510642, China)
Abstract: Autonomous navigation and picking localization, as key tasks of picking robots, can effectively reduce manual labor intensity and improve operating accuracy and efficiency. This article elaborates and analyzes vision-based picking localization and autonomous navigation methods for picking robots, mainly involving travelable area detection for visual navigation, fruit target recognition, and picking point localization. Based on the current state of research at home and abroad, it looks ahead to the latest developments and future trends of machine vision.
Keywords: picking robots; machine vision; autonomous navigation; travelable area detection; fruit target recognition; picking point localization
0 Introduction
Fruit occupies an increasingly important position in agricultural economies worldwide. According to statistics from the Food and Agriculture Organization of the United Nations, the gross production value of grapes, apples, citrus, and other fruits grew steadily from 1991 to 2021 [1]. Fruit harvesting is characterized by short work windows, high labor intensity, and long working hours. With an aging population and a shortage of rural labor, labor costs are rising year by year, and the growing tension between labor demand and labor cost constrains the development of traditional agriculture in China. With the rapid development of modern information technology and artificial intelligence, picking robots and related technologies [14-15] for apples [2-3], tomatoes [4-5], litchis [6-8], pitayas [9], tea [10-11], sweet peppers [12-13], and other crops have attracted the attention of researchers at home and abroad. The application of picking robots is of great significance for improving productivity, operating efficiency, and the sustainable development of agriculture. A litchi picking robot is shown in Figure 1.
Compared with industrial robots, picking robots operate in far more complex environments, with numerous disturbances, many obstacles, and a high degree of irregularity; occlusion by fruit tree foliage further degrades the positioning accuracy of the global positioning system (GPS). Machine vision is low-cost, simple to operate, and information-rich, making it better suited to complex environments such as mountainous terrain and farmland where GPS signals are blocked. The key technology of machine vision navigation is travelable area detection, whose methods generally fall into two categories: machine learning-based segmentation and image feature-based segmentation.
Beyond autonomous travel in orchard environments, picking robots must also pick fruit automatically under complex conditions. Achieving low-damage, intelligent, human-like picking is the focus of picking robot applications. Current research concentrates mainly on fruit target recognition and picking point localization.
This paper analyzes the research progress of vision-based autonomous navigation and picking localization for picking robots. Building on a summary and analysis of machine learning-based and image feature-based travelable area segmentation methods, it further reviews the state of the art in fruit target recognition and picking point localization. Finally, in the context of unmanned farms and smart agriculture, it looks ahead to future application scenarios of picking robot localization and navigation technologies.
1 Machine Vision-based Autonomous Navigation
The main purpose of travelable area detection is to extract obstacle-free travelable areas from complex scenes, laying the foundation for determining the navigation path. According to their characteristics, travelable areas can be divided into two categories: structured and unstructured. Structured travelable areas resemble standardized roads such as urban roads and highways, with clear lane markings, regular road edges, and distinct geometric features. Unstructured travelable areas resemble orchard and rural roads and inter-row crop regions, with irregular edges, unclear boundaries, and no lane markings. Compared with structured areas, unstructured travelable areas have more complex backgrounds: most have uneven road surfaces with randomly distributed weeds.
1.1 Machine Learning-based Travelable Area Segmentation Methods
Machine learning-based travelable area segmentation methods include clustering [16], support vector machines (support vector machine, SVM) [17], and deep learning [18]. YANG et al. [19] proposed a visual navigation path extraction method based on neural networks and pixel scanning, introducing the SegNet and U-Net networks to improve the segmentation of orchard road information from the background environment, and fitting the final navigation path with a sliding filter algorithm, a scanning method, and weighted averaging. LEI et al. [20] combined an improved seed SVM with 2D lidar point cloud data to detect and recognize unstructured roads. WANG et al. [21] combined illumination-invariant images with a joint analysis of probability maps and gradient information to extract roads in complex scenes. KIM et al. [22] used a lightweight patch-based convolutional neural network (convolutional neural network, CNN) to recognize paths autonomously in semi-structured orchard environments. ALAM et al. [23] combined a nearest neighbor (nearest neighbor, NN) classifier with soft-voting aggregation to extract roads in both structured and unstructured environments. Some researchers [24-26] have studied machine learning-based road extraction from remote sensing imagery, but such methods are not suitable for picking robots.
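The scanning-and-fitting step described above can be sketched as follows: given a binary travelable-area mask produced by any of the segmentation methods, take the midpoint of the road pixels in each image row and fit a straight navigation line through the midpoints by least squares. The function and the synthetic mask below are illustrative assumptions, not the implementation of [19]:

```python
import numpy as np

def navigation_line(road_mask: np.ndarray):
    """Fit a navigation line to a binary road mask (1 = travelable).

    For each image row that contains road pixels, the midpoint of the
    leftmost and rightmost road pixels is taken as a path point; a
    straight line x = a*y + b is then fitted to these points by least
    squares. Returns (a, b).
    """
    ys, xs = [], []
    for y in range(road_mask.shape[0]):
        cols = np.flatnonzero(road_mask[y])
        if cols.size:                              # row intersects the road
            ys.append(y)
            xs.append(0.5 * (cols[0] + cols[-1]))  # row midpoint
    a, b = np.polyfit(ys, xs, 1)                   # least-squares line fit
    return a, b

# Synthetic mask: a straight road of constant width centred on column 10.
mask = np.zeros((8, 21), dtype=np.uint8)
mask[:, 8:13] = 1
a, b = navigation_line(mask)
```

In practice the mask would come from a segmentation network such as SegNet or U-Net, and the fitted line would additionally be smoothed across frames.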
1.2 Image Feature-based Travelable Area Segmentation Methods
Image feature-based travelable area segmentation methods build models that distinguish road from non-road regions using features such as color and texture. ZHOU et al. [27] used the H component to extract the target path in the sky region. CHEN et al. [28-29] used an improved grayscale factor and the maximum between-class variance method to extract grayscale images of soil and plants, achieving soil-plant segmentation in greenhouse environments. ZHOU et al. [30] optimized the grayscale factor on the basis of image preprocessing algorithms to extract unstructured roads through dual-space fusion, and on this basis achieved synchronous recognition of unstructured roads and roadside fruit, as shown in Figure 2.
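For reference, the maximum between-class variance (Otsu) thresholding used in [28-29] can be written compactly over the image histogram. This minimal NumPy version is a generic sketch of the classic criterion, not the authors' improved variant:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold maximizing between-class variance (Otsu).

    gray: 2-D array of 8-bit intensity values. Pixels <= t form one
    class (e.g. soil), pixels > t the other (e.g. plant).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # intensity probabilities
    omega = np.cumsum(p)                   # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))     # first moment up to t
    mu_t = mu[-1]                          # global mean intensity
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # undefined where a class is empty
    return int(np.argmax(sigma_b))

# Two well-separated intensity populations: threshold falls between them.
img = np.array([[20] * 8 + [200] * 8] * 4, dtype=np.uint8)
t = otsu_threshold(img)
```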
QI et al. [31] segmented road regions with a graph-based manifold ranking method and fitted a road region model with a binomial function to recognize roads in rural environments. Some researchers have incorporated spatial structure features such as vanishing points into road extraction: SU et al. [32] used the Dijkstra method combined with single-line lidar, under illumination-invariant image and vanishing point constraints, to extract roads; PHUNG et al. [33] detected pedestrian lanes with an improved vanishing point estimation method combining geometry and color. However, vanishing point detection is time-consuming [34] and is mostly applied to structured road detection, making it ill-suited to unstructured roads.
2 Machine Vision-based Picking Localization
2.1 Fruit Target Recognition
Fruit target recognition methods fall into two main categories: traditional image feature analysis and deep learning.
Traditional image feature analysis methods recognize fruit mainly through color [35], shape and texture [36], or combinations of features [37-38]. PÉREZ-ZAVALA et al. [39] separated regions of clustered pixels into grape bunches based on shape and texture information, achieving an average detection precision of 88.61% and an average recall of 80.34%. ZHOU et al. [40] segmented grape berries from the image background using the K-nearest neighbor algorithm and the maximum between-class variance method, and recognized the berries with a circular Hough transform. LIU et al. [41] separated and counted grape bunches using color and texture information with an SVM. WU et al. [42] extracted candidate foreground regions of bayberry fruit using the Cb-Cr color difference method and a region growing strategy.
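The Cb-Cr color difference idea in [42] can be illustrated with a minimal sketch: convert RGB to the BT.601 chroma channels and keep pixels whose Cr - Cb difference exceeds a threshold, since reddish fruit has high Cr and low Cb while green foliage does not. The threshold value and the toy patch are illustrative assumptions, not parameters from the cited paper:

```python
import numpy as np

def red_fruit_mask(rgb: np.ndarray, diff_thresh: float = 30.0) -> np.ndarray:
    """Segment reddish fruit pixels with a Cb/Cr color-difference rule.

    rgb: H x W x 3 array of 8-bit RGB values. Returns a boolean mask
    where Cr - Cb (ITU-R BT.601 chroma) exceeds diff_thresh.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cr - cb) > diff_thresh

# One red pixel (fruit-like) and one green pixel (leaf-like).
patch = np.array([[[200, 30, 30], [40, 160, 40]]], dtype=np.uint8)
mask = red_fruit_mask(patch)
```

A region growing step would then expand the surviving seed pixels into connected fruit regions, as in [42].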
Traditional image feature analysis methods have limitations in variable environments, so deep learning-based methods have developed rapidly and are widely applied in smart agriculture [43-47]. They have received close attention in crop growth morphology recognition [48-53], classification and localization [54-57], tracking and counting [58-60], and pest and disease recognition [61-64], and related deep learning techniques have been studied in depth for fruit and vegetable detection and recognition. LI et al. [65] combined a Faster R-CNN model with color difference and color difference ratio to detect and segment apples against cluttered orchard backgrounds. WANG et al. [66] developed an apple detection method based on a channel-pruned YOLOv5s algorithm, yielding a model that is small and fast. FU et al. [67] modified the YOLOv4 model to rapidly detect banana bunches and stalks in natural orchard environments. HAYDAR et al. [68] developed a deep learning-enabled machine vision control system based on the OpenCV AI Kit (OAK-D) and a YOLOv4-tiny model, detecting fruit height and automatically adjusting the position of the harvester header's picking tines. LI et al. [69] proposed Strawberry R-CNN based on an improved Faster R-CNN, and designed a strawberry recognition and counting evaluation method by building a strawberry counting error set. SUNIL et al. [70] classified tomato plant leaf images using ResNet50, a multi-view feature fusion network (multi-view feature fusion network, MFFN), and an adaptive attention mechanism combining channel, spatial, and pixel attention, achieving MFFN-based tomato plant disease classification. CHENG et al. [71] segmented fruit from background regions with a deep object detection network and obtained fruit 3D point clouds and spatial positions via stereo matching and triangulation.
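The triangulation step mentioned for [71] follows the standard pinhole relation Z = fB/d for a rectified stereo pair, where d is the disparity between matched pixels. The sketch below uses illustrative camera parameters, not values from the paper:

```python
import numpy as np

def triangulate(u_left: float, u_right: float, v: float,
                f: float, baseline: float, cx: float, cy: float):
    """Recover a 3D point from a rectified stereo pair by triangulation.

    u_left/u_right: column of the same fruit pixel in the left/right
    image; v: its row; f: focal length in pixels; baseline: camera
    separation in metres; (cx, cy): principal point.
    """
    d = u_left - u_right                 # disparity (pixels)
    if d <= 0:
        raise ValueError("disparity must be positive for a valid match")
    z = f * baseline / d                 # depth: Z = f * B / d
    x = (u_left - cx) * z / f            # back-project to camera frame
    y = (v - cy) * z / f
    return np.array([x, y, z])

# f = 700 px, 6 cm baseline, 35 px disparity -> fruit 1.2 m away.
p = triangulate(u_left=675.0, u_right=640.0, v=360.0,
                f=700.0, baseline=0.06, cx=640.0, cy=360.0)
```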
2.2 Picking Point Localization
After fruit targets are recognized, the picking point is localized according to the characteristics of the fruit, providing the picking end-effector with the operating information needed for low-damage, accurate harvesting.
ZHANG et al. [72] coarsely segmented tomato stems with a YOLACT model, refined the segmentation using the positional matching of regions of interest (region of interest, ROI), a thinning algorithm, dilation operations, and stem morphological features, and finally combined depth information to compute picking point coordinates. XU et al. [73] used an improved YOLOv4-Dense algorithm and OpenCV image processing to detect tea buds, then located tea bud picking points on the idea that the intersection of the fitted row line and the target picking region is the ideal picking point. SONG et al. [74] constructed RMHSA-NeXt, a semantic segmentation algorithm based on multi-head self-attention and multi-scale feature fusion, to segment tea picking points with high accuracy and fast inference. NING et al. [75] recognized grape stems with a mask region-based convolutional neural network and threshold segmentation, taking the stem centroid as the picking point. DU et al. [76] detected table grapes and located clamping points with an improved Mask R-CNN model and a set-logic algorithm. LIANG et al. [77] determined the stem ROI from the centroid and contour boundary of the tomato cluster, and fixed the picking point using the first fruit bifurcation point and the corner point of the stem skeleton. BI et al. [78] estimated strawberry pose by segmenting the target point cloud of ripe strawberries, and determined the picking point from the pose centroid and strawberry height. ZHAO et al. [79] used the idea that the picking point obeys the bounding box, combined with an improved YOLOv4, to detect grapes and predict picking points simultaneously. ZHANG et al. [80] screened pickable tomato clusters based on the YOLOv4 algorithm and the connectivity between tomato clusters and their stems, and determined picking points using depth information and color features. WU et al. [81] proposed a top-down grape stem localization approach integrating object and keypoint detection to locate stems and their picking points. JIN et al. [82] built a far-and-near-range stereo vision system to recognize and locate grape clusters and stems, localizing picking points with a stem centroid recognition algorithm. TANG et al. [83] improved the YOLOv4-tiny model with k-means++ anchor box clustering and proposed a binocular stereo matching strategy on the extracted target ROI, reducing computational cost while detecting Camellia oleifera fruit and locating picking points in complex environments. WU et al. [84-85], to enable intelligent banana harvesting, proposed an improved YOLOv5-B model, built a stereo vision experimental platform for a banana inflorescence-axis-cutting robot, and obtained the 3D coordinates of the rachis cutting point. These methods lay the foundation for unmanned operation and low-damage harvesting by picking robots.
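As a minimal illustration of the stem-centroid strategy used in, e.g., [75] and [82]: take the centroid of a binary stem mask as the picking point and back-project it to 3D with an aligned depth map. The intrinsics and toy inputs are placeholder assumptions, not values from those papers:

```python
import numpy as np

def stem_picking_point(stem_mask, depth, fx=700.0, fy=700.0,
                       cx=320.0, cy=240.0):
    """Pick the centroid of a binary stem mask as the picking point,
    then back-project it to camera coordinates using an aligned depth
    map and the pinhole model. Intrinsics are placeholders."""
    ys, xs = np.nonzero(stem_mask)
    u, v = xs.mean(), ys.mean()                  # pixel centroid of the stem
    z = depth[int(round(v)), int(round(u))]      # depth at the centroid
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Toy vertical stem 0.8 m from the camera.
mask = np.zeros((8, 21), dtype=bool)
mask[2:6, 10] = True
depth = np.full((8, 21), 0.8)
p = stem_picking_point(mask, depth, cx=10.0, cy=4.0)
```

Real systems refine this with skeletonization or keypoint detection, since the raw centroid can fall off a curved stem.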
3 Conclusion and Outlook
This paper has reviewed the application of machine vision to picking localization and autonomous navigation in picking robots, covering machine vision-based autonomous navigation, fruit target recognition, and picking point localization.
Although artificial intelligence and deep learning have improved the accuracy and reliability of picking robot operations, the complexity and uncertainty of agricultural environments still cause considerable localization error in machine vision applications. Innovative design of picking robot control systems and mechanisms should therefore be combined with further development of active error-tolerance mechanisms between machine vision and the end-effector, to reduce target localization and operating errors.
During autonomous travel, picking robots are subject to dynamic disturbances such as uneven ground and vibration, which degrade image quality and reduce navigation accuracy. Image preprocessing techniques should therefore be combined with further development of navigation algorithms featuring real-time denoising and multi-source information fusion, to improve the robustness and reliability of picking robots operating in open-field orchards. In addition, coordinated decision-making across picking and travel behaviors is a direction worth studying.
References
[1]FAO. Value of agricultural production [EB/OL]. (2023-09-23) [2023-09-23]. https://www.fao.org/faostat/en/#data/QV/visualize.
[2]DING Y, JI W, XU B, et al. Parameter self-tuning impedance control for compliant grasping of apple harvesting robot[J]. Transactions of the Chinese Society of Agricultural Engineering, 2019,35(22):257-266. (in Chinese)
[3]LI T, XIE F, ZHAO Z, et al. A multi-arm robot system for efficient apple harvesting: Perception, task plan and control[J]. Computers and Electronics in Agriculture, 2023,211:107979.
[4]LI Y, FENG Q, LIU C, et al. MTA-YOLACT: Multitask-aware network on fruit bunch identification for cherry tomato robotic harvesting[J]. European Journal of Agronomy, 2023,146:126812.
[5]YU F, ZHOU C, YANG X, et al. Design and experiment of tomato picking robot in solar greenhouse[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022,53(1):41-49. (in Chinese)
[6]ZHONG Z, XIONG J, ZHENG Z, et al. A method for litchi picking points calculation in natural environment based on main fruit bearing branch detection[J]. Computers and Electronics in Agriculture, 2021,189:106398.
[7]CHEN Y, JIANG Z, LI J, et al. Design and performance experiment of litchi picking end-effector integrating clamping and cutting[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018,49(1):35-41. (in Chinese)
[8]LI J, TANG Y, ZOU X, et al. Detection of fruit-bearing branches and localization of litchi clusters for vision-based harvesting robots[J]. IEEE Access, 2020,8:117746-117758.
[9]ZHANG F, CAO W, WANG S, et al. Improved YOLOv4 recognition algorithm for pitaya based on coordinate attention and combinational convolution[J]. Frontiers in Plant Science, 2022,13:1030021.
[10]CHEN C, LU J, ZHOU M, et al. A YOLOv3-based computer vision system for identification of tea buds and the picking point[J]. Computers and Electronics in Agriculture, 2022,198: 107116.
[11]YANG H, ZHONG Y, JIANG Y, et al. Hybrid trajectory planning strategy of parallel tea picking robot based on time and jerk optimality[J]. Journal of Mechanical Engineering, 2022,58(9):62-70. (in Chinese)
[12]BARTH R, HEMMING J, VAN HENTEN E J. Angle estimation between plant parts for grasp optimisation in harvest robots[J]. Biosystems Engineering, 2019,183:26-46.
[13]HESPELER S C, NEMATI H, DEHGHAN-NIRI E. Non-destructive thermal imaging for object detection via advanced deep learning for robotic inspection and harvesting of chili peppers[J]. Artificial Intelligence in Agriculture, 2021,5:102-117.
[14]LIN J, WANG H, ZOU X, et al. Obstacle avoidance path planning and simulation of mobile picking robot based on DPPO[J]. Journal of System Simulation, 2023,35(8):1692-1704. (in Chinese)
[15]HUO H, ZOU X, CHEN Y, et al. Obstacle avoidance planning and simulation of vision-based robot with obstacle point cloud mapping[J/OL]. Journal of System Simulation:1-12[2023-09-20]. http://kns.cnki.net/kcms/detail/11.3092.V.20230823.0932.002.html. (in Chinese)
[16]ZHANG Z, ZHANG X, CAO R, et al. Cut-edge detection method for wheat harvesting based on stereo vision[J]. Computers and Electronics in Agriculture, 2022,197:106910.
[17]LIU Y, XU W, DOBAIE A M, et al. Autonomous road detection and modeling for UGVs using vision-laser data fusion[J]. Neurocomputing, 2018,275:2752-2761.
[18]LI Y, HONG Z, CAI D, et al. A SVM and SLIC based detection method for paddy field boundary line[J]. Sensors, 2020,20(9):2610.
[19]YANG Z, OUYANG L, ZHANG Z, et al. Visual navigation path extraction of orchard hard pavement based on scanning method and neural network[J]. Computers and Electronics in Agriculture, 2022,197:106964.
[20]LEI G, YAO R, ZHAO Y, et al. Detection and modeling of unstructured roads in forest areas based on visual-2D lidar data fusion[J]. Forests, 2021,12(7):820.
[21]WANG E, LI Y, SUN A, et al. Road detection based on illuminant invariance and quadratic estimation[J]. Optik, 2019, 185:672-684.
[22]KIM W S, LEE D H, KIM Y J, et al. Path detection for autonomous traveling in orchards using patch-based CNN[J]. Computers and Electronics in Agriculture, 2020,175:105620.
[23]ALAM A, SINGH L, JAFFERY Z A, et al. Distance-based confidence generation and aggregation of classifier for unstructured road detection[J]. Journal of King Saud University - Computer and Information Sciences, 2022,34(10):8727-8738.
[24]XIN J, ZHANG X, ZHANG Z, et al. Road extraction of high-resolution remote sensing images derived from DenseUNet[J]. Remote Sensing, 2019,11(21):2499.
[25]GUAN H, LEI X, YU Y, et al. Road marking extraction in UAV imagery using attentive capsule feature pyramid network[J]. International Journal of Applied Earth Observation and Geoinformation, 2022,107:102677.
[26]YANG M, YUAN Y, LIU G. SDUNet: Road extraction via spatial enhanced and densely connected UNet[J]. Pattern Recognition, 2022,126:108549.
[27]ZHOU M, XIA J, YANG F, et al. Design and experiment of visual navigated UGV for orchard based on Hough matrix and RANSAC[J]. International Journal of Agricultural and Biological Engineering, 2021,14(6):176-184.
[28]CHEN J, QIANG H, WU J, et al. Extracting the navigation path of a tomato-cucumber greenhouse robot based on a median point Hough transform[J]. Computers and Electronics in Agriculture, 2020,174:105472.
[29]CHEN J, QIANG H, WU J, et al. Navigation path extraction for greenhouse cucumber-picking robots using the prediction-point Hough transform[J]. Computers and Electronics in Agriculture, 2021,180:105911.
[30]ZHOU X, ZOU X, TANG W, et al. Unstructured road extraction and roadside fruit recognition in grape orchards based on a synchronous detection algorithm[J]. Frontiers in Plant Science, 2023,14:1103276.
[31]QI N, YANG X, LI C, et al. Unstructured road detection via combining the model-based and feature-based methods[J]. IET Intelligent Transport Systems, 2019,13(10):1533-1544.
[32]SU Y, ZHANG Y, ALVAREZ J M, et al. An illumination-invariant nonparametric model for urban road detection using monocular camera and single-line lidar[C]//2017 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2017:68-73.
[33]PHUNG S L, LE M C, BOUZERDOUM A. Pedestrian lane detection in unstructured scenes for assistive navigation[J]. Computer Vision and Image Understanding, 2016,149:186-196.
[34]XU F, HU B, CHEN L, et al. An illumination robust road detection method based on color names and geometric information[J]. Cognitive Systems Research, 2018,52:240-250.
[35]WANG Y, ZHANG X. Segmentation algorithm for muskmelon fruit in complex background[J]. Transactions of the Chinese Society of Agricultural Engineering, 2014,30(2):176-181. (in Chinese)
[36]TIAN Y, LI T, LI C, et al. Image recognition method of grape diseases based on support vector machine[J]. Transactions of the Chinese Society of Agricultural Engineering, 2007(6):175-180. (in Chinese)
[37]XIE Z, JI C. Color fruit image segmentation method based on color model and texture features[J]. Journal of Xihua University (Natural Science Edition), 2009,28(4):41-45. (in Chinese)
[38]LU J, SANG N. On-tree citrus detection and occluded contour recovery under variable illumination[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014,45(4):76-81;60. (in Chinese)
[39]PÉREZ-ZAVALA R, TORRES-TORRITI M, CHEEIN F A, et al. A pattern recognition strategy for visual grape bunch detection in vineyards[J]. Computers and Electronics in Agriculture, 2018,151:136-149.
[40]ZHOU W, ZHA Z, WU J. Maturity discrimination of Red Globe grape clusters in the field based on improved circular Hough transform[J]. Transactions of the Chinese Society of Agricultural Engineering, 2020,36(9):205-213. (in Chinese)
[41]LIU S, WHITTY M. Automatic grape bunch detection in vineyards with an SVM classifier[J]. Journal of Applied Logic, 2015, 13(4): 643-653.
[42]WU L, LEI H, CHEN Z, et al. Recognition and localization method for bayberry based on local sliding window technique[J]. Automation & Information Engineering, 2021,42(6):30-35;48. (in Chinese)
[43]TANG Y, CHEN M, WANG C, et al. Recognition and localization methods for vision-based fruit picking robots: A review[J]. Frontiers in Plant Science, 2020, 11: 510.
[44]SANAEIFAR A, GUINDO M L, BAKHSHIPOUR A, et al. Advancing precision agriculture: The potential of deep learning for cereal plant head detection[J]. Computers and Electronics in Agriculture, 2023,209:107875.
[45]WENG S, TANG L, QIU M, et al. Surface-enhanced Raman spectroscopy charged probes under inverted superhydrophobic platform for detection of agricultural chemicals residues in rice combined with lightweight deep learning network[J]. Analytica Chimica Acta, 2023,1262:341264.
[46]KHAN S, ALSUWAIDAN L. Agricultural monitoring system in video surveillance object detection using feature extraction and classification by deep learning techniques[J]. Computers and Electrical Engineering, 2022, 102:108201.
[47]GUO R, XIE J, ZHU J, et al. Improved 3D point cloud segmentation for accurate phenotypic analysis of cabbage plants using deep learning and clustering algorithms[J]. Computers and Electronics in Agriculture, 2023,211:108014.
[48]YU S, FAN J, LU X, et al. Deep learning models based on hyperspectral data and time-series phenotypes for predicting quality attributes in lettuces under water stress[J]. Computers and Electronics in Agriculture, 2023, 211:108034.
[49]PAN Y, ZHANG Y, WANG X, et al. Low-cost livestock sorting information management system based on deep learning[J]. Artificial Intelligence in Agriculture, 2023,9:110-126.
[50]LIANG J, HUANG B, PAN D. Design of a guidance system for logistics parking lots based on image recognition[J]. Mechanical & Electrical Engineering Technology, 2022,51(11):163-166. (in Chinese)
[51]CHEN C, JIA W, LIU S, et al. Qualitative identification and quantitative analysis of silicon species in high-salinity wastewater[J]. China Measurement & Test, 2023,49(2):87-92. (in Chinese)
[52]GUO L, SHEN D, MAO H, et al. A big data analysis method based on morphological similarity recognition for lithology identification in well logging[J]. Computer Knowledge and Technology, 2023,19(3):54-56. (in Chinese)
[53]GONG Z, DONG C, YU H, et al. Development of machine vision-based software for measuring winter wheat leaf morphology[J]. Chinese Journal of Agrometeorology, 2022,43(11):935-944. (in Chinese)
[54]SUNIL G C, ZHANG Y, KOPARAN C, et al. Weed and crop species classification using computer vision and deep learning technologies in greenhouse conditions[J]. Journal of Agriculture and Food Research, 2022,9:100325.
[55]PUTRA Y C, WIJAYANTO A W. Automatic detection and counting of oil palm trees using remote sensing and object-based deep learning[J]. Remote Sensing Applications: Society and Environment, 2023, 29:100914.
[56]LIU B, LONG J, CHENG F, et al. Research on image classification of logistics goods based on convolutional neural networks[J]. Mechanical & Electrical Engineering Technology, 2021,50(12):79-82;175. (in Chinese)
[57]LIN J, XU Y. Design of an instrument for automatic acquisition, training, and detection of fruit posture images[J]. China Measurement & Test, 2021,47(7):119-124. (in Chinese)
[58]WU Z, SUN X, JIANG H, et al. NDMFCS: An automatic fruit counting system in modern apple orchard using abatement of abnormal fruit detection[J]. Computers and Electronics in Agriculture, 2023, 211:108036.
[59]WU F, YANG Z, MO X, et al. Detection and counting of banana bunches by integrating deep learning and classic image-processing algorithms[J]. Computers and Electronics in Agriculture, 2023,209:107827.
[60]CHENG H, CHEN H, CAO H, et al. Design and implementation of UAV target tracking system[J]. Mechanical & Electrical Engineering Technology, 2020,49(11):165-167. (in Chinese)
[61]ALSHAMMARI H H, TALOBA A I, SHAHIN O R. Identification of olive leaf disease through optimized deep learning approach[J]. Alexandria Engineering Journal, 2023, 72:213-224.
[62]GIAKOUMOGLOU N, PECHLIVANI E M, SAKELLIOU A, et al. Deep learning-based multi-spectral identification of grey mould[J]. Smart Agricultural Technology, 2023,4:100174.
[63]KAUR P, HARNAL S, GAUTAM V, et al. An approach for characterization of infected area in tomato leaf disease based on deep learning and object detection technique[J]. Engineering Applications of Artificial Intelligence, 2022,115:105210.
[64]ZHU D J, XIE L Z, CHEN B X, et al. Knowledge graph and deep learning based pest detection and identification system for fruit quality[J]. Internet of Things, 2023,21:100649.
[65]LI T, FANG W, ZHAO G, et al. An improved binocular localization method for apple based on fruit detection using deep learning[J]. Information Processing in Agriculture, 2023,10(2):276-287.
[66]WANG D, HE D. Channel pruned YOLO V5s-based deep learning approach for rapid and accurate apple fruitlet detection before fruit thinning[J]. Biosystems Engineering, 2021,210:271-281.
[67]FU L, WU F, ZOU X, et al. Fast detection of banana bunches and stalks in the natural environment based on deep learning[J]. Computers and Electronics in Agriculture, 2022,194:106800.
[68]HAYDAR Z, ESAU T J, FAROOQUE A A, et al. Deep learning supported machine vision system to precisely auto-mate the wild blueberry harvester header[J]. Scientific Reports, 2023,13(1):10198.
[69]LI J, ZHU Z, LIU H, et al. Strawberry R-CNN: Recognition and counting model of strawberry based on improved faster R-CNN[J]. Ecological Informatics, 2023,77:102210.
[70]SUNIL C K, JAIDHAR C D, PATIL N. Tomato plant disease classification using multilevel feature fusion with adaptive channel spatial and pixel attention mechanism[J]. Expert Systems with Applications, 2023,228:120381.
[71]CHENG J, ZOU X, CHEN M, et al. General 3D perception framework for multiple classes of complex fruit targets[J]. Automation & Information Engineering, 2021,42(3):15-20. (in Chinese)
[72]ZHANG Q, PANG Y, LI B. Visual localization and picking pose estimation of tomato clusters based on instance segmentation[J/OL]. Transactions of the Chinese Society for Agricultural Machinery:1-13[2023-09-18]. http://kns.cnki.net/kcms/detail/11.1964.S.20230808.1632.024.html. (in Chinese)
[73]XU F, ZHANG K, ZHANG W, et al. A method for recognizing and locating picking points of tea buds based on improved YOLOv4 algorithm[J]. Journal of Fudan University (Natural Science), 2022,61(4):460-471. (in Chinese)
[74]SONG Y, YANG S, ZHENG Z, et al. Semantic segmentation algorithm for tea picking points based on multi-head self-attention mechanism[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023,54(9):297-305. (in Chinese)
[75]NING Z, LUO L, LIAO J, et al. Recognition of grape stems and optimal picking localization based on deep learning[J]. Transactions of the Chinese Society of Agricultural Engineering, 2021,37(9):222-229. (in Chinese)
[76]DU W, WANG C, ZHU Y, et al. Locating clamping points for table grape flower thinning using improved Mask R-CNN algorithm[J]. Transactions of the Chinese Society of Agricultural Engineering, 2022,38(1):169-177. (in Chinese)
[77]LIANG X, JIN C, NI M, et al. Acquisition and experiment of picking point position information for tomato fruit clusters[J]. Transactions of the Chinese Society of Agricultural Engineering, 2018,34(16):163-169. (in Chinese)
[78]BI S, WEI P, LIU R. Spatial pose recognition and picking point localization method for strawberries in greenhouse elevated cultivation[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023,54(9):53-64;84. (in Chinese)
[79]ZHAO R, ZHU Y, LI Y. An end-to-end lightweight model for grape and picking point simultaneous detection[J]. Biosystems Engineering, 2022,223:174-188.
[80]ZHANG Q, CHEN J, LI B, et al. Recognition and localization method of tomato cluster picking points based on RGB-D information fusion and object detection[J]. Transactions of the Chinese Society of Agricultural Engineering, 2021,37(18):143-152. (in Chinese)
[81]WU Z, XIA F, ZHOU S, et al. A method for identifying grape stems using keypoints[J]. Computers and Electronics in Agriculture, 2023,209:107825.
[82]JIN Y, YU C, YIN J, et al. Detection method for table grape ears and stems based on a far-close-range combined vision system and hand-eye-coordinated picking test[J]. Computers and Electronics in Agriculture, 2022,202:107364.
[83]TANG Y, ZHOU H, WANG H, et al. Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision[J]. Expert Systems with Applications, 2023,211:118573.
[84]WU F, DUAN J, AI P, et al. Rachis detection and three-dimensional localization of cut off point for vision-based banana robot[J]. Computers and Electronics in Agriculture, 2022,198:107079.
[85]WU F, DUAN J, CHEN S, et al. Multi-target recognition of bananas and automatic positioning for the inflorescence axis cutting point[J]. Frontiers in Plant Science, 2021,12:705021.
About the authors:
MENG Hewei, male, born in 1982, PhD, professor and doctoral supervisor. Main research interests: agricultural mechanization, agricultural electrification and automation. E-mail: mhw_mac@126.com
ZHOU Xinzhao, female, born in 1997, PhD. Main research interests: smart agriculture, machine vision, agricultural electrification and automation. E-mail: zxinzhao@126.com
WU Fengyun (corresponding author), female, born in 1988, PhD. Main research interests: smart agriculture, machine vision. E-mail: fyseagull@163.com
ZOU Tianlong, male, born in 1986, associate degree. Main research interest: integrated application of measurement and control systems. E-mail: 84174619@qq.com