Recognition of weeds at seedling stage in paddy fields using multi-feature fusion and deep belief networks

Deng Xiangwu1, Qi Long1※, Ma Xu1, Jiang Yu1,2, Chen Xueshen1, Liu Haiyun1, Chen Weifeng1

(1. 華南農(nóng)業(yè)大學(xué)工程學(xué)院,廣州 510642;2. 華南農(nóng)業(yè)大學(xué)現(xiàn)代教育技術(shù)中心,廣州 510642)

雜草的準(zhǔn)確識(shí)別是田間雜草精準(zhǔn)防控管理的前提,機(jī)器視覺(jué)技術(shù)是實(shí)現(xiàn)雜草準(zhǔn)確識(shí)別的有效手段。該文以水稻苗期雜草為研究對(duì)象,采集稻田自然背景下和不同光照條件下的6種雜草圖像共928幅,包括空心蓮子草、丁香蓼、鱧腸、野慈姑、稗草和千金子。采用1.1顏色因子將雜草RGB圖像進(jìn)行灰度化,選擇自動(dòng)閾值自動(dòng)分割得到雜草前景二值圖像,通過(guò)腐蝕膨脹形態(tài)學(xué)操作進(jìn)行葉片內(nèi)部孔洞填充,應(yīng)用面積濾波去除其他干擾目標(biāo),最后將雜草二值圖像與RGB圖像進(jìn)行掩膜運(yùn)算得到去除背景的雜草圖像;提取雜草圖像的顏色特征、形狀特征和紋理特征共101維特征,并對(duì)其進(jìn)行歸一化處理。在雙隱含層和單隱含層的深度置信網(wǎng)絡(luò)(deep belief networks,DBN)結(jié)構(gòu)基礎(chǔ)上,對(duì)DBN隱含層節(jié)點(diǎn)數(shù)選擇方法進(jìn)行研究。針對(duì)雙隱含層DBN節(jié)點(diǎn)數(shù),選擇恒值型、升值型和降值型3種節(jié)點(diǎn)組合方式進(jìn)行優(yōu)化研究,當(dāng)網(wǎng)絡(luò)結(jié)構(gòu)為101-210-55-6時(shí)雜草識(shí)別率為83.55%;通過(guò)對(duì)單隱含層節(jié)點(diǎn)參數(shù)優(yōu)化得到網(wǎng)絡(luò)結(jié)構(gòu)為101-200-6時(shí)雜草識(shí)別率達(dá)到91.13%。以同一測(cè)試樣本的運(yùn)行時(shí)間值作為模型的測(cè)試時(shí)間對(duì)3種不同模型進(jìn)行耗時(shí)測(cè)試,SVM模型、BP模型和DBN模型測(cè)試結(jié)果分別為0.029 7、0.030 6和0.034 1 s,試驗(yàn)結(jié)果表明基于多特征融合的DBN模型的識(shí)別精度最高,且耗時(shí)較其他2種模型相差不大,可滿(mǎn)足實(shí)時(shí)檢測(cè)的速度要求,所以在實(shí)際應(yīng)用中應(yīng)優(yōu)先選擇基于多特征融合的DBN模型。該研究可為稻田雜草識(shí)別與藥劑選擇性噴施提供參考。

機(jī)器視覺(jué);圖像處理;雜草識(shí)別;深度置信網(wǎng)絡(luò);多特征融合;特征提取

0 Introduction

中國(guó)水稻種植面積常年穩(wěn)定在3 000萬(wàn)hm2,超過(guò)65%的人口以稻米為主食[1]。在水稻生長(zhǎng)過(guò)程中,稻田雜草與秧苗競(jìng)爭(zhēng)養(yǎng)分、水分和生長(zhǎng)空間,影響水稻秧苗的正常生長(zhǎng),降低水稻產(chǎn)量和質(zhì)量;同時(shí)雜草為病蟲(chóng)害提供滋生和蔓延的條件,導(dǎo)致水稻病蟲(chóng)害爆發(fā)引起大量減產(chǎn)。目前農(nóng)田雜草防控方式主要為化學(xué)除草[2-3],采用除草劑大面積均勻噴施方式,然而這種傳統(tǒng)的除草劑施用方式常常造成化學(xué)藥劑的過(guò)量施用,引起作物藥害、土壤和水源污染、稻米農(nóng)藥殘留等問(wèn)題[4]。農(nóng)藥的精準(zhǔn)噴施可在不影響雜草防控效果的前提下,有效節(jié)約40%~60%的農(nóng)藥用量[5]。稻田雜草種類(lèi)識(shí)別是選擇高效除草劑的依據(jù),是稻田雜草精確防控管理的基礎(chǔ),因此,如何快速準(zhǔn)確對(duì)稻田雜草自動(dòng)識(shí)別具有重要意義。

機(jī)器視覺(jué)技術(shù)具有成本低和便捷易操作等特點(diǎn),在精準(zhǔn)農(nóng)業(yè)方面得到了廣泛應(yīng)用。基于機(jī)器視覺(jué)技術(shù)的雜草識(shí)別[6]過(guò)程包括2大部分:1)特征選擇和提?。?)分類(lèi)器選擇和訓(xùn)練。如何選擇和設(shè)計(jì)有效的識(shí)別特征是雜草識(shí)別技術(shù)的關(guān)鍵所在,國(guó)內(nèi)外學(xué)者分別對(duì)雜草的顏色[7-9]、形狀[10-11]、紋理[12]等特征進(jìn)行了研究和探索。為避免單一特征的局限性,進(jìn)一步提高雜草識(shí)別精度和魯棒性,顏色、形狀和紋理等特征的融合[13-16]得到廣泛應(yīng)用。分類(lèi)器選擇是基于機(jī)器視覺(jué)技術(shù)雜草識(shí)別的另一個(gè)重點(diǎn),在分類(lèi)器選擇上,當(dāng)前分類(lèi)學(xué)習(xí)算法多為淺層結(jié)構(gòu)算法,包括常見(jiàn)的支持向量機(jī)(support vector machine,SVM)[17]、Booting[18]和邏輯回歸[19]等。

In recent years, deep learning, with its strong feature-extraction capability, has achieved great success in tasks such as image classification. Convolutional neural network (CNN) based recognition of dryland weeds [20] and rice diseases [21] has performed well. Building a CNN directly on raw images requires a large dataset; for small datasets, features can be extracted with pre-trained classic CNNs (e.g., AlexNet [22], GoogLeNet [23] and ResNet [24]) to build classification models, or transfer learning can be applied [25], but the convolution computation of CNN models demands considerable hardware resources [26]. For the small paddy weed image dataset of this paper, another widely used model, the deep belief network (DBN) [27], is therefore adopted: it needs fewer hardware resources and can be used on its own as a feature classifier. DBNs have already been applied successfully in agriculture to pig-cough-based disease warning [28], non-destructive detection of apple moldy core [29], prediction of greenhouse winter-jujube diseases and pests [30], and early sex identification of chicken embryos [31].

The studies above mostly build DBN models on the extracted features, but the DBN structure parameters are chosen empirically through repeated trials, and research on methods for selecting these parameters is lacking. Paddy weeds differ little in color, weeds of the same family or genus are somewhat similar in shape, and reflections and mirror images on the paddy water surface add further difficulty to image-based recognition of weed species. This paper proposes a DBN weed image classification algorithm: the color, shape and texture features of paddy weed images are extracted, their fusion is fed to a DBN for training, and the method for choosing the DBN structure parameters is studied. Fusing color, texture and shape features addresses the low recognition rate of any single feature, and DBN training overcomes the poor classification performance of single features and shallow-structure algorithms (SVM, Boosting, etc.).

1 Materials and methods

1.1 Image acquisition and preprocessing

1.1.1 Image acquisition

Images were collected on April 10-11, 2017 and August 7-8 of the same year in paddy fields under natural conditions at the agricultural experiment base of the Jiangmen Institute of Agricultural Sciences, Guangdong; the trial fields were machine-transplanted with a fixed row spacing of 30 cm. Because herbicide selectivity gives each herbicide a different weed-control spectrum, accurate identification of paddy weed species provides a scientific basis for herbicide choice and mixed formulations. In precise chemical weed control, herbicides are selectively sprayed mainly on the weeds between seedling rows, so all weed images were collected between the rice rows and no rice-seedling images were taken. Single-plant weed images were acquired 7-15 d after transplanting of early and late rice, with the weeds at the seedling stage. To cover natural weather conditions, images were collected on sunny, cloudy and overcast days. For representativeness, images were shot 20-40 cm from the weed with the camera lens perpendicular to the paddy water surface. The device was a Canon digital camera, model IXUS 1000 HS (EF-S 36-360 mm f/3.4-5.6 IS STM), with automatic focusing and an image resolution of 640 × 480 pixels [32]. In total, 928 images of common pernicious weeds in South China paddies were collected, forming a paddy weed image library: 165 of Alternanthera philoxeroides, 90 of Ludwigia prostrata, 243 of Eclipta prostrata, 98 of Sagittaria trifolia, 165 of Echinochloa crus-galli and 167 of Leptochloa chinensis, used for training and testing the recognition model. The 6 weed species are shown in Fig.1.

1.1.2 圖像預(yù)處理

Because all weed images were acquired in the natural field environment described above, the background and illumination differed from shot to shot: backgrounds included water, mud, shadows and straw, and illumination covered direct sunlight, oblique sunlight, overcast and cloudy weather, as shown in Fig.2.

To highlight the green color of the paddy weeds themselves, the background of the RGB weed image is removed as follows: first, the 1.1 color factor in Eq. (1) converts the RGB image of Alternanthera philoxeroides (Fig.3a) to a grayscale image (Fig.3b); the maximum between-class variance (OTSU) method [33] then computes a threshold that automatically segments the grayscale image into a binary image (Fig.3c); erosion and dilation fill the holes inside the leaves, area filtering removes other interfering objects, and masking the binary image with the RGB image yields the weed foreground image (Fig.3d).

Fig.1 Images of weeds in paddy fields

圖2 復(fù)雜背景和光照的空心蓮子草圖像

$$gray(x, y) = 1.1G(x, y) - R(x, y) \quad (1)$$

where gray(x, y) is the gray value at pixel coordinate (x, y), and R(x, y) and G(x, y) are the red and green color components of the pixel, respectively.
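The experiments in this paper were run in MATLAB (see section 2); purely as an illustrative analogue, the following Python/OpenCV sketch wires the same preprocessing chain together. The file name, the 5×5 structuring element and the 500-pixel area threshold are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def segment_weed(path, min_area=500):  # min_area is an assumed filter threshold
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    b, g, r = cv2.split(bgr.astype(np.float32))
    gray = 1.1 * g - r                           # Eq. (1): the 1.1 color factor
    gray = np.clip(gray, 0, 255).astype(np.uint8)
    # OTSU automatic threshold -> binary foreground image
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # dilation followed by erosion (closing) to fill small holes inside leaves
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # area filtering: keep only connected components larger than min_area
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    # mask the original RGB image to remove the background
    return cv2.bitwise_and(bgr, bgr, mask=mask), mask

foreground, mask = segment_weed("weed.png")      # "weed.png" is a placeholder
```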

1.2 Feature extraction

Because paddy weeds of the same family or genus are somewhat similar in color and shape, 3 types of features (color, texture and shape) are extracted from the weed images as recognition features to improve recognition performance.

1.2.1 Color features

Color features are extracted from image pixels and are invariant to rotation, scale and translation. Since all 6 paddy weed species are green at the seedling stage, the color component values of their RGB images overlap considerably; because color moments reflect the color distribution of an image, the color-moment statistics [34] are analyzed to improve feature discriminability.

The HSV (hue, saturation, value) color space is converted from the RGB space and is better suited to machine-vision color description, so it supplements the RGB color features of the weed images. The V component of the HSV model is unrelated to chroma, however, so only the color moments (first, second and third moments) of the R, G, B, H and S components are extracted, 15 color features in total, computed as Eq. (2)-(4).

$$M_{1} = \frac{1}{N}\sum_{j=1}^{N} p_{ij} \quad (2)$$

$$M_{2} = \left(\frac{1}{N}\sum_{j=1}^{N}\left(p_{ij}-M_{1}\right)^{2}\right)^{1/2} \quad (3)$$

$$M_{3} = \left(\frac{1}{N}\sum_{j=1}^{N}\left(p_{ij}-M_{1}\right)^{3}\right)^{1/3} \quad (4)$$

where M1, M2 and M3 are the first moment (mean), second moment (variance) and third moment (skewness) of the color, N is the number of pixels in the image, and p_ij is the probability of occurrence of a pixel with gray value j in the i-th color channel of the image.
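As a sketch of Eq. (2)-(4), the 15 color moments over the R, G, B, H and S channels can be computed as below; restricting the statistics to the masked foreground pixels is an assumption consistent with the background removal of section 1.1.2.

```python
import cv2
import numpy as np

def color_moments(bgr, mask):
    """15 features: mean, std and skewness of R, G, B, H, S (Eq. (2)-(4))."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    b, g, r = cv2.split(bgr)
    h, s, _ = cv2.split(hsv)                    # V is discarded: unrelated to chroma
    feats = []
    for chan in (r, g, b, h, s):
        p = chan[mask > 0].astype(np.float64)   # foreground pixels only
        m1 = p.mean()                           # first moment (mean)
        m2 = np.sqrt(((p - m1) ** 2).mean())    # second moment (standard deviation)
        m3 = np.cbrt(((p - m1) ** 3).mean())    # third moment (skewness)
        feats += [m1, m2, m3]
    return np.array(feats)                      # shape (15,)
```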

1.2.2 Shape features

形態(tài)特征主要描述對(duì)象的形狀參數(shù),與人的視覺(jué)感知系統(tǒng)具有較好的關(guān)聯(lián)性。由于不同種類(lèi)雜草具有很大的形態(tài)差異,因此本文選擇7個(gè)Hu不變矩[35-36]特征和幾何特征作為雜草的形狀特征,本文共采用4種幾何特征[16]。

1) Circularity (form factor) is calculated as

$$e = \frac{4\pi \cdot area}{perimeter^{2}} \quad (5)$$

where area is the area of the object, i.e., the total number of its pixels, and perimeter is the perimeter of the object, i.e., the length of its outermost contour.

2) Elongatedness is calculated as

$$elongatedness = \frac{area}{thickness^{2}} \quad (6)$$

where thickness is the width of the object's minimum bounding rectangle.

3) Convexity is calculated as

$$convexity = \frac{convex\_perimeter}{perimeter} \quad (7)$$

where convex_perimeter is the perimeter of the object's minimum convex polygon.

4) Solidity is calculated as

$$solidity = \frac{area}{convex\_area} \quad (8)$$

where convex_area is the area of the object's minimum convex polygon.
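A possible OpenCV rendering of the 11 shape features (the 4 geometric features of Eq. (5)-(8) plus the 7 Hu invariant moments) is sketched below; treating the largest contour in the binary mask as the weed object is an assumption.

```python
import cv2
import numpy as np

def shape_features(mask):
    """4 geometric features (Eq. (5)-(8)) + 7 Hu invariant moments."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)       # largest object = the weed
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, closed=True)
    hull = cv2.convexHull(c)
    convex_area = cv2.contourArea(hull)
    convex_perimeter = cv2.arcLength(hull, closed=True)
    (_, _), (w, hgt), _ = cv2.minAreaRect(c)     # minimum bounding rectangle
    thickness = min(w, hgt)                      # width of the rectangle
    geom = [4 * np.pi * area / perimeter ** 2,   # circularity, Eq. (5)
            area / thickness ** 2,               # elongatedness, Eq. (6)
            convex_perimeter / perimeter,        # convexity, Eq. (7)
            area / convex_area]                  # solidity, Eq. (8)
    hu = cv2.HuMoments(cv2.moments(c)).flatten() # 7 Hu invariant moments
    return np.array(geom + list(hu))             # shape (11,)
```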

1.2.3 Texture features

Texture is a regional feature that reflects the spatial distribution of pixels and the periodic variation of surface structure. Texture can be regular or random, and weed image texture combines both. Since statistical texture analysis is the most common and most studied extraction approach [37], the gray-level co-occurrence matrix (GLCM) [38] and local binary patterns (LBP) [39] are chosen, and the 2 texture descriptors are fused as the weed image texture feature.

1) Gray-level co-occurrence matrix

Gotlieb et al. [40] studied the statistics of the GLCM and identified 4 key features experimentally: contrast, energy (ASM), entropy and correlation. The GLCM is normalized before these are computed; the calculations are given in Eq. (9)-(13).

$$p(i, j) = \frac{\mathrm{GLCM}(i, j)}{\sum_{i}\sum_{j}\mathrm{GLCM}(i, j)} \quad (9)$$

$$Con = \sum_{i}\sum_{j}(i - j)^{2}\, p(i, j) \quad (10)$$

$$Asm = \sum_{i}\sum_{j}p(i, j)^{2} \quad (11)$$

$$Ent = -\sum_{i}\sum_{j}p(i, j)\log p(i, j) \quad (12)$$

$$Corr = \sum_{i}\sum_{j}\frac{(i - \mu_{i})(j - \mu_{j})\, p(i, j)}{\sigma_{i}\sigma_{j}} \quad (13)$$

where p(i, j) is the normalized gray-level co-occurrence matrix, GLCM(i, j) is the original co-occurrence value, and μ_i, μ_j, σ_i and σ_j are the means and standard deviations of the row and column marginals of p.

在試驗(yàn)中首先將預(yù)處理后的圖像轉(zhuǎn)換為16個(gè)灰度級(jí),然后計(jì)算4個(gè)方向(0°,45°,90°和135°)的4個(gè)GLCM關(guān)鍵特征,得到共計(jì)16個(gè)特征值。

2) LBP features

LBP具有原理簡(jiǎn)單、計(jì)算量小、灰度不變性和旋轉(zhuǎn)不變性等優(yōu)點(diǎn)[39]。LBP相關(guān)的基本符號(hào)定義如下:g表示局部區(qū)域中心點(diǎn)的灰度值,g(= 0,1,…,7)對(duì)應(yīng)于中心點(diǎn)周?chē)染喾植嫉狞c(diǎn),(x,y)表示中心點(diǎn)的坐標(biāo)。以(x,y)為中心的LBP局部區(qū)域紋理計(jì)算方法如式(14)~(15)所示。

對(duì)于鄰域有8個(gè)點(diǎn)的LBP算子將會(huì)產(chǎn)生28種LBP值,鄰域內(nèi)采樣點(diǎn)的數(shù)量決定紋理特征的維度。本文采用Uniform模式[41]對(duì)LBP特征維度進(jìn)行降維,對(duì)于8個(gè)采樣點(diǎn),LBP特征維度減少為59種,如圖4所示。

1.3 Building the paddy weed recognition model

A DBN is a structural model composed of stacked restricted Boltzmann machines (RBMs) [42], so its layers share the RBM structure. This paper focuses on how to select the DBN structure parameters, which mainly comprise the network depth and the hidden-layer node numbers.

On the basis of the extracted weed image features, the DBN serves as the classifier of the recognition model. With only 6 weed classes the DBN structure is simple, so the dropout algorithm is unnecessary. The input vectors have 101 dimensions and there are 928 samples in total, forming a 928 × 101 sample space. The ratio of the smallest class (Ludwigia prostrata) to the largest class (Eclipta prostrata) is 1:2.7, which still falls within the balanced-sample range [43] and therefore does not degrade the performance of the classifiers used in this paper.

Fig.4 Local binary pattern features of weeds

由于DBN網(wǎng)絡(luò)模型深度對(duì)模型識(shí)別精度影響很大,網(wǎng)絡(luò)深度過(guò)大容易陷入局部最優(yōu),網(wǎng)絡(luò)深度過(guò)小易造成表達(dá)能力不足。結(jié)合樣本數(shù)量和輸入輸出特征維度[28-31],在2種不同的DBN模型深度的基礎(chǔ)上,即單隱含層和雙隱含層的DBN網(wǎng)絡(luò)結(jié)構(gòu)模型對(duì)DBN隱含層節(jié)點(diǎn)數(shù)選擇方法進(jìn)行研究,2種DBN網(wǎng)絡(luò)結(jié)構(gòu)如圖5所示。

DBN網(wǎng)絡(luò)結(jié)構(gòu)主要包括輸入層、隱含層和輸出層,其中隱含層數(shù)目決定DBN網(wǎng)絡(luò)的深度,為表述的簡(jiǎn)潔性,下文中DBN網(wǎng)絡(luò)結(jié)構(gòu)可簡(jiǎn)化為[輸入層節(jié)點(diǎn)數(shù),隱含層節(jié)點(diǎn)數(shù),輸出層節(jié)點(diǎn)數(shù)]。由于輸入樣本特征有101維,將輸入層節(jié)點(diǎn)數(shù)設(shè)置為101;最后一層為雜草類(lèi)別數(shù)量,所以輸出層節(jié)點(diǎn)數(shù)量設(shè)置為6。為獲得最終分類(lèi)結(jié)果,在最后一層RBM中加入sigmoid函數(shù)進(jìn)行回歸,作為最終結(jié)果輸出層。

sigmoid函數(shù)為:

2 試驗(yàn)與結(jié)果分析

整個(gè)測(cè)試過(guò)程操作系統(tǒng)為windows10,開(kāi)發(fā)軟件為MATLAB 2017b。計(jì)算機(jī)內(nèi)存8 GB,搭載Intel@Core(TM) i7-7700HQCPU @ CPU 2.80GHz ×8處理器。

2.1 Feature selection

Multiple algorithms were used to extract the weed image features: the color-moment algorithm gave the first, second and third moments of the color components, 15 color features in total; the GLCM and LBP algorithms gave 16 GLCM features and 59 LBP features; and 4 common geometric features (circularity, elongatedness, convexity and solidity) plus 7 Hu invariant moment features were extracted. All features were then fused to provide the data for subsequent training and testing.

2.2 雜草識(shí)別流程

70% of the paddy weed image samples were used as the training set and the remaining samples as the test set; the algorithm workflow is shown in Fig.6.

圖6 雜草識(shí)別算法流程圖

The specific steps are as follows:

1)特征表達(dá)與融合:對(duì)每一幅雜草圖像提取其顏色、紋理和形狀特征,共101個(gè)特征,形成101維的特征向量。

2) Normalization: to keep the scales of the different weed features consistent, every feature value is normalized as

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \quad (17)$$

where x is a feature element and x_min and x_max are the minimum and maximum values of that feature.

3)數(shù)據(jù)分類(lèi):從928個(gè)101維的樣本空間中選擇650個(gè)作為訓(xùn)練集,其余278個(gè)的作為測(cè)試集。

4)訓(xùn)練過(guò)程:利用文獻(xiàn)[44]中所述方法對(duì)本文的DBN模型進(jìn)行訓(xùn)練。

5)測(cè)試過(guò)程:采用DBN訓(xùn)練過(guò)程得到的權(quán)重和偏置對(duì)測(cè)試集進(jìn)行測(cè)試得出分類(lèi)結(jié)果。

2.3 結(jié)果與分析

由于DBN隱含層節(jié)點(diǎn)參數(shù)需要依據(jù)經(jīng)驗(yàn)選擇,而隱含層節(jié)點(diǎn)數(shù)對(duì)訓(xùn)練時(shí)間和精度影響很大,所以網(wǎng)絡(luò)節(jié)點(diǎn)如選擇過(guò)小無(wú)法滿(mǎn)足精度要求;選擇過(guò)大會(huì)導(dǎo)致網(wǎng)絡(luò)陷入局部最優(yōu),同時(shí)也給算法帶來(lái)多余的隱含層節(jié)點(diǎn),這些隱含層節(jié)點(diǎn)的數(shù)目超過(guò)必須所使用的隱含層節(jié)點(diǎn)點(diǎn)數(shù),也會(huì)增加網(wǎng)絡(luò)的復(fù)雜度[45]。因此,本文在單隱含層和雙隱含層的DBN網(wǎng)絡(luò)結(jié)構(gòu)模型基礎(chǔ)上,對(duì)DBN隱含層節(jié)點(diǎn)數(shù)選擇方法進(jìn)行研究。

2.3.1 Determining the node numbers of the double-hidden-layer DBN

For the double-hidden-layer DBN, node numbers must be fixed for 2 hidden layers, and the node number can vary across layers in 3 ways: increase, decrease or stay constant. Three node-combination types (ascending, descending and constant) were therefore used to optimize the double-hidden-layer DBN parameters: constant means both hidden layers have the same node number, ascending means the node number increases with layer depth, and descending means it decreases with layer depth.

迭代次數(shù)設(shè)置為250次,樣本批次為42,學(xué)習(xí)率為0.001,試驗(yàn)結(jié)果取10次運(yùn)算的平均值。3種隱含層節(jié)點(diǎn)數(shù)組合方式的DBN模型識(shí)別率如表1所示,恒值型和升值型雙隱含層的識(shí)別率不如降值型,表明節(jié)點(diǎn)數(shù)為降值型的隱含層模型能夠較好學(xué)習(xí)到原始特征數(shù)據(jù)的分布式特征。在選擇降值型的隱含層節(jié)點(diǎn)數(shù)時(shí),當(dāng)隱含層節(jié)點(diǎn)數(shù)過(guò)少(25-12和50-25)或過(guò)多(300-150)模型參數(shù)得不到充分訓(xùn)練而導(dǎo)致識(shí)別率低,當(dāng)隱含層節(jié)點(diǎn)數(shù)在一定范圍內(nèi),模型識(shí)別率隨降值型隱含層節(jié)點(diǎn)數(shù)的增加而得到提升。

為確定最優(yōu)的雙隱含層DBN網(wǎng)絡(luò)隱含層節(jié)點(diǎn)參數(shù),根據(jù)表1在區(qū)間[150,300]內(nèi),按固定間隔10依次選擇第1層隱含層節(jié)點(diǎn)數(shù);第2層隱含層節(jié)點(diǎn)數(shù)分別按第一層隱含層節(jié)點(diǎn)數(shù)的1/2、1/4和1/6計(jì)算取整數(shù)值。通過(guò)對(duì)不同雙隱含層節(jié)點(diǎn)數(shù)組合進(jìn)行試驗(yàn)驗(yàn)證,最終得到雙隱含層DBN的網(wǎng)絡(luò)結(jié)構(gòu)為[101,210,55,6]時(shí),模型的識(shí)別率最優(yōu),達(dá)到83.55%。

Table 1 Weed recognition rates of double-hidden-layer DBN models with different hidden-layer node combinations

Note: 25-25 denotes the node numbers of the first and the second hidden layer, respectively.

2.3.2 Determining the node number of the single-hidden-layer DBN

For the single-hidden-layer DBN, only 1 hidden layer's node number must be fixed; based on Table 1, a reasonable interval [150, 300] was searched in steps of 10. As shown in Fig.7, the recognition rate peaked at 91.13% with 200 hidden-layer nodes, i.e., with the DBN structure [101, 200, 6].

本文單隱含層DBN結(jié)構(gòu)一共由2層RBM疊加而成,如圖5a所示,輸入層和隱含層構(gòu)成RBM1,隱含層和輸出層構(gòu)成RBM2。網(wǎng)絡(luò)每一層RBM的預(yù)訓(xùn)練是通過(guò)無(wú)監(jiān)督學(xué)習(xí)來(lái)完成,其訓(xùn)練結(jié)果作為高一層RBM的輸入,最后通過(guò)監(jiān)督學(xué)習(xí)去調(diào)整所有RBM層的網(wǎng)絡(luò)參數(shù)。如圖8所示,訓(xùn)練錯(cuò)誤率在第一層RBM的前20次迭代訓(xùn)練過(guò)程中大幅下降,但是在25次迭代訓(xùn)練后錯(cuò)誤率不再變化,RBM內(nèi)的參數(shù)值都趨向穩(wěn)定;第二層RBM內(nèi)參數(shù)的訓(xùn)練是在第一層RBM參數(shù)的基礎(chǔ)上進(jìn)行訓(xùn)練,參數(shù)的變化比較平緩,模型趨向于穩(wěn)定;最后進(jìn)行一次有監(jiān)督網(wǎng)絡(luò)參數(shù)微調(diào)并進(jìn)行分類(lèi)訓(xùn)練,得到最終的訓(xùn)練結(jié)果。

Fig.7 Recognition rates of DBN models with different numbers of single-hidden-layer nodes

Fig.8 Training error curve of the single-hidden-layer DBN

Since the DBN is a data-driven model structure [46], each dataset has its own optimal network structure. On the basis of the double-hidden-layer and single-hidden-layer structures, the hidden-layer node numbers were optimized for the 928 × 101 sample space of this paper, with 650 training samples and 278 test samples. The experiments showed that the single-hidden-layer DBN achieves a higher recognition rate than the double-hidden-layer DBN, indicating that it better fits the data structure of the extracted weed feature samples and can mine the distribution of the data. The DBN models built below on single and fused weed features therefore all use the single-hidden-layer structure.

2.4 雜草識(shí)別方法比較

Single color, shape and texture features of the paddy weed images were extracted and fused into a combined feature; SVM, BP artificial neural network [47] and DBN weed recognition models were then built on the single and fused features. Table 2 shows that with the single color or shape feature, the DBN model's recognition rate is below that of the SVM and BP models: these feature dimensions are small and cannot exploit the DBN's strength in feature representation. With the single texture feature and the fused feature, the DBN outperforms the SVM and BP models: the DBN is a network structure that mines distributed features of data and adapts well to high-dimensional structure, and the texture and fused features have enough dimensions to drive the DBN effectively so that it can represent their feature data.

表2 基于不同分類(lèi)器模型和不同特征的雜草識(shí)別率

針對(duì)278個(gè)雜草圖像測(cè)試樣本的形狀、顏色、紋理及融合特征,以同一測(cè)試樣本的運(yùn)行時(shí)間值作為模型的測(cè)試時(shí)間。由表3可知,針對(duì)101維融合特征測(cè)試樣本的3種模型,SVM模型、BP模型和DBN模型的測(cè)試時(shí)間分別為0.029 7、0.030 6和0.034 1 s,即SVM+fusion

表3 基于不同分類(lèi)器模型和不同特征的雜草測(cè)試集識(shí)別時(shí)間

3 結(jié) 論

本文根據(jù)稻田復(fù)雜背景及伴生雜草顏色特點(diǎn),提出1.1顏色因子灰度化雜草圖像,并采用閾值分割及形態(tài)學(xué)處理得到雜草目標(biāo)圖像;根據(jù)雜草種類(lèi)間的顏色形態(tài)等特點(diǎn)選擇顏色矩特征、幾何特征、Hu不變矩、灰度共生矩陣和LBP紋理特征作為雜草的識(shí)別特征。

1) Two DBN network models of different depths were built, and different node combinations for the double-hidden-layer DBN were explored. The experiments showed that the single-hidden-layer structure achieves a higher recognition rate than the double-hidden-layer structure, the best being the single-hidden-layer structure [101, 200, 6] with a recognition rate of 91.13%.

2)針對(duì)測(cè)試樣本的形狀、顏色、紋理及融合特征,分別構(gòu)建了SVM、BP和DBN共3種雜草特征識(shí)別模型,通過(guò)模型準(zhǔn)確率和識(shí)別時(shí)間的比較,基于多特征融合的DBN模型的識(shí)別率最高為91.13%,且耗時(shí)較其他2種模型相差不大,可滿(mǎn)足實(shí)時(shí)檢測(cè)的速度要求,所以在實(shí)際應(yīng)用中應(yīng)優(yōu)先選擇基于多特征融合的DBN模型。

[1] Zhao Ling, Zhao Chunfang, Zhou Lihui, et al. Production status and developing trend of rice in China[J]. Jiangsu Agricultural Sciences, 2015, 43(10): 105-107. (in Chinese with English abstract)

[2] Sogaard H T, Lund I, Graglia E. Real-time application of herbicides in seed lines by computer vision and micro-spray system[J]. American Society of Agricultural and Biological Engineers, 2006, 12: 118-127.

[3] Ma Xu, Qi Long, Liang Bai, et al. Present status and prospects of mechanical weeding equipment and technology in paddy field[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2011, 27(6): 162-168. (in Chinese with English abstract)

[4] Liu Yan, Liu Bo, Wang Xianfeng, et al. Problems and solutions for chemical weed control in China[J]. Chinese Journal of Pesticides, 2005, 44(7): 289-293. (in Chinese with English abstract)

[5] Jensen H G, Jacobsen L B, Pedersen S M, et al. Socioeconomic impact of widespread adoption of precision farming and controlled traffic systems in Denmark[J]. Precision Agriculture, 2012, 13(6): 661-677.

[6] Fan Deyao, Yao Qing, Yang Baojun, et al. Progress in research on intelligentization of field weed recognition and weed control technology[J]. Scientia Agricultura Sinica, 2010, 43(9): 1823-1833. (in Chinese with English abstract)

[7] Shen Baoguo, Chen Shuren, Yin Jianjun, et al. Image recognition of green weeds in cotton fields based on color feature[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2009, 25(6): 163-167. (in Chinese with English abstract)

[8] Zhang Xiaolong, Xie Zhengchun, Zhang Niansheng, et al. Weed recognition from pea seedling images and variable spraying control system[J]. Transactions of the Chinese Society for Agricultural Machinery, 2012, 43(11): 220-225. (in Chinese with English abstract)

[9] Han Ding, Wu Pei, Zhang Qiang, et al. Feature extraction and image recognition of typical grassland forage based on color moment[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(23): 168-175. (in Chinese with English abstract)

[10] Li Xianfeng, Zhu Weixing, Ji Bin, et al. Shape feature selection and weed recognition based on image processing and ant colony optimization[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2010, 26(10): 178-182. (in Chinese with English abstract)

[11] Herrera P J, Dorado J, Ribeiro A. A novel approach for weed type classification based on shape descriptors and a fuzzy decision-making method[J]. Sensors, 2014, 14: 15304-15324.

[12] Pahikkala T, Kari K, Mattila H, et al. Classification of plant species from images of overlapping leaves[J]. Computers and Electronics in Agriculture, 2015, 118: 186-192.

[13] He Dongjian, Qiao Yongliang, Li Pan, et al. Weed recognition based on SVM-DS multi-feature fusion[J]. Transactions of the Chinese Society for Agricultural Machinery, 2013, 44(2): 182-187. (in Chinese with English abstract)

[14] Wang Can, Li Zhiwei. Weed recognition using SVM model with fusion height and monocular image features[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(15): 165-174. (in Chinese with English abstract)

[15] Zhao Peng, Wei Xingzhu. Weed recognition in agricultural field using multiple feature fusions[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014, 45(3): 275-281. (in Chinese with English abstract)

[16] Ahmed F, Al-Mamun H A, Hossain-Bari A S M, et al. Classification of crops and weeds from digital images: A support vector machine approach[J]. Crop Protection, 2012, 40: 98-104.

[17] Suykens J A K, Vandewalle J. Least squares support vector machine classifiers[J]. Neural Processing Letters, 1999, 9(3): 293-300.

[18] Freund Y, Schapire R E. Experiments with a new boosting algorithm[C]// Proceedings of the Thirteenth ICML. Washington D.C., USA:IEEE Press, 1996: 148-156.

[19] Lin C Y, Tsai C H, Lee C P, et al. Large-scale logistic regression and linear support vector machines using spark [C]//Proc of IEEE International Conference on Big Data. IEEE, 2014: 519-528.

[20] Dyrmann M, Karstoft H, Midtiby H S. Plant species classification using deep convolutional neural network[J]. Biosystems Engineering, 2016, 151: 72-80.

[21] Lu Yang, Yi Shujuan, Zeng Nianyin. Identification of rice diseases using deep convolutional neural networks[J]. Neurocomputing, 2017, 267: 378-384.

[22] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]// Proceedings of the 2012 Advances in Neural Information Processing Systems, NIPS, 2012: 1097-1105.

[23] Szegedy C, Wei L, Yangqing J, et al. Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition, CVPR,2015:1-9.

[24] He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2016: 770-778.

[25] Duan Meng, Wang Gongpeng, Niu Changyong. Method of small sample size image recognition based on convolution neural network[J]. Computer Engineering and Design, 2018, 39(1): 224-229. (in Chinese with English abstract)

[26] Lu Ye, Chen Yao, Li Tao, et al. Convolutional neural network construction method for embedded FPGAs oriented edge computing[J]. Journal of Computer Research and Development, 2018, 55(3): 551-562. (in Chinese with English abstract)

[27] Zhou Shusen, Chen Qingcai, Wang Xiaolong. Discriminative deep belief networks for image classification[C]// IEEE 17th International Conference on Image Processing, Hong Kong, 2010: 1561-1564.

[28] Li Xuan, Zhao Jian, Gao Yun, et al. Recognition of pig cough sound based on deep belief nets[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(3): 179-186. (in Chinese with English abstract)

[29] Zhou Zhaoyong, He Dongjian, Zhang Haihui, et al. Non-destructive detection of moldy core in apple fruit based on deep belief network[J]. Food Science, 2017, 38(14): 297-303. (in Chinese with English abstract)

[30] Zhang Shanwen, Zhang Chuanlei, Ding Jun. Disease and insect pest forecasting model of greenhouse winter jujube based on modified deep belief network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(19): 202-208. (in Chinese with English abstract)

[31] Zhu Zhihui, Tang Yong, Hong Qi, et al. Female and male identification of early chicken embryo based on blood line features of hatching egg image and deep belief networks[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(6): 197-203. (in Chinese with English abstract)

[32] Wang Can, Wu Xinhui, Li Zhiwei. Recognition of maize and weed based on multi-scale hierarchical features extracted by convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(5): 144-151. (in Chinese with English abstract)

[33] Otsu N. A threshold selection method from gray-level histogram[J]. IEEE Trans. Syst. Man Cybern, 1979, 9(1): 62-66.

[34] Zhang Lei, Lin Fuzong, Zhang Bo. A CBIR method based on color spatial feature[C]// Proceedings of the IEEE Region 10 Conference. IEEE, 1999: 166-169.

[35] Wong Y R. Scene matching with invariant moments[J]. Computer Graphics and Image Processing, 1978, 8(1): 16-24.

[36] Hu M K. Visual pattern recognition by moment invariants[J]. IRE Transactions on Information Theory, 1962, 8(2): 179-187.

[37] Zhao Yang. Local Descriptor Methods for Texture Classification and Leaves Recognition[D]. Hefei: University of Science and Technology of China, 2013. (in Chinese with English abstract)

[38] Haralick R M, Shanmugam K, Dinstein I H. Textural features for image classification[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1973, 3(6): 610-621.

[39] Liu L, Fieguth P, Guo Y, et al. Local binary features for texture classification: Taxonomy and experimental study[J]. Pattern Recognition, 2017, 62: 135-160.

[40] Gotlieb C C, Kreyszig H E. Texture descriptors based on co-occurrence matrices[J]. Computer Vision, Graphics, and Image Processing, 1990, 51: 70-86.

[41] Ojala T, Pietikäinen M, Mäenpää T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2002, 24(7): 971-987.

[42] Lecun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.

[43] Weiss G M, Provost F. Learning when training data are costly: The effect of class distribution on tree induction[J]. Journal of Artificial Intelligence Research, 2003, 19(1): 315-354.

[44] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554.

[45] Pan Guangyuan, Chai Wei, Qiao Junfei. Calculation for depth of deep belief network[J]. Control and Decision, 2015, 30(2): 256-260. (in Chinese with English abstract)

[46] Ge Qiangqiang. Research on Data-driven Fault Diagnosis Method Based on Deep Belief Network[D]. Harbin: Harbin Institute of Technology, 2016. (in Chinese with English abstract)

[47] Debabrata M, Surjya K P, Partha S. Back propagation neural network based modeling of multiresponse of an electrical discharge machining process[J]. International Journal of Knowledge-based and Intelligent Engineering Systems, 2007(11): 105-113.

Recognition of weeds at seedling stage in paddy fields using multi-feature fusion and deep belief networks

Deng Xiangwu1, Qi Long1※, Ma Xu1, Jiang Yu1,2, Chen Xueshen1, Liu Haiyun1, Chen Weifeng1

(1. College of Engineering, South China Agricultural University, Guangzhou 510642, China; 2. Modern Educational Technology Center, South China Agricultural University, Guangzhou 510642, China)

Weed identification was the key to the site-specific weed management in the field. The machine vision method was adopted to realize automatic and rapid detection of weeds. This paper selected 6 weed species in paddy fields, including Alternanthera philoxeroides, Ludwigia prostrata, Eclipta prostrata, Sagittaria trifolia, Echinochloa crus-galli and Leptochloa chinensis, which were captured in early growth stages with natural background and variable illumination. A total of 928 images were taken. The Alternanthera philoxeroides, Ludwigia prostrata and Eclipta prostrata were dicotyledonous weeds which had large heart-shaped opposite leaves, and the other 3 weed species were monocotyledonous weeds which had narrow leaves. The image was 640×480 pixels and only a single seedling of weed was in the scene, and the acquisition format was color images of RGB (red, green, blue). The 1.1G-R component was applied to gray-level transformation of the original RGB images. The OTSU adaptive segmentation method was adopted to realize the image segmentation of the grayscale image. The morphological operation was used to fill vacancies in weed images. The noises and small targets were eliminated based on the area-reconstruction operator. The background was removed by a masking algorithm between the binary image and the original RGB images. The 101-dimensional features were extracted from the foreground image of the weed, including color, shape and texture features. The color feature was composed of the first, second and third moments, the shape feature was composed of geometric features and Hu invariant moment features, and the texture feature was composed of gray-level co-occurrence matrix and local binary pattern (LBP) features. The weighting matrix of color, shape and texture features served as the input after unitary processing. A three-step method for model updating consisting of model structure tuning, model parameter updating and model validation was presented in this article. Firstly, the deep belief networks (DBN) of double hidden layers and single hidden layer were established. Secondly, the influence of the 3 types of constant, rising and descending nodes of the double hidden layers in the DBN was analyzed. The experimental results showed that the descending nodes of the double hidden layers in the DBN could learn the distributed characteristics of the original characteristic data better than the other node types. Finally, the optimized parameters of the double hidden layers and the single hidden layer were obtained by experiment. The recognition rate of the double-hidden-layer DBN was 83.55% when the number of nodes stood at 101-210-55-6, and the recognition rate of the single-hidden-layer DBN was 91.13% when the number of nodes stood at 101-200-6. The single-hidden-layer DBN structure was better able to excavate the distribution rule of weed features than the DBN with double hidden layers. The single color, shape, texture and fusion features were used to construct 3 types of weed classification models, which were support vector machine (SVM), BP (back propagation) neural network and DBN. In the experiment, the recognition rate of the DBN model with single color and shape features was lower than that of the SVM and BP neural network models. The dimensions of the color and shape features were relatively small, which could not reflect the advantage of characteristic representation with the DBN. On the other hand, the recognition of the DBN model with the single texture and fusion features was more accurate than that of the SVM and BP neural network models, and the recognition rate of the DBN model reached 86.58% and 91.13% with the single texture and fusion feature, respectively.
The results demonstrate that the method put forward in the paper can improve the classification accuracy of weeds with the complex background and variable illumination in paddy fields.

machine vision; image processing; weed classification; deep belief networks (DBN); multi-feature fusion; feature extraction

doi: 10.11975/j.issn.1002-6819.2018.14.021

CLC number: TP391.41; Document code: A; Article ID: 1002-6819(2018)-14-0165-08

Received: 2018-03-28; Revised: 2018-05-30

國(guó)家自然科學(xué)基金(51575195);現(xiàn)代農(nóng)業(yè)產(chǎn)業(yè)技術(shù)體系建設(shè)專(zhuān)項(xiàng)資金(CARS-01-43);廣東省自然科學(xué)基金(2015A030313402);廣州市科技計(jì)劃項(xiàng)目(201803020021)

Deng Xiangwu, male, Ph.D. candidate, research interests: image processing and machine learning. Email: dengxiangwu123456@163.com

※Corresponding author: Qi Long, male, researcher, doctoral supervisor, research interests: image processing and image recognition. Email: qilong@scau.edu.cn


Deng Xiangwu, Qi Long, Ma Xu, Jiang Yu, Chen Xueshen, Liu Haiyun, Chen Weifeng. Recognition of weeds at seedling stage in paddy fields using multi-feature fusion and deep belief networks[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(14): 165-172. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2018.14.021 http://www.tcsae.org
