Abstract
[Objective] To achieve rapid identification of green tea species. [Methods] A rapid detection method based on electronic tongue and electronic nose signals combined with a CNN-Transformer composite model was proposed. The electronic tongue and electronic nose were used to collect taste and smell fingerprint information for five kinds of green tea. The one-dimensional electronic tongue and electronic nose signals were transformed into two-dimensional time-frequency maps using the short-time Fourier transform (STFT), fully revealing the distribution of signal energy in the time-frequency domain. A CNN-Transformer composite model was designed to fuse the electronic tongue and electronic nose information and perform pattern recognition. In the convolution module, selective kernel convolution and normalized attention replaced the convolution layers of a traditional CNN to dynamically extract local features from the time-frequency maps. The multi-head self-attention mechanism in the Transformer encoder was used to extract global temporal information from the electronic tongue and electronic nose features and to achieve their weighted fusion. Finally, classification was carried out by a fully connected layer. [Results] The information fusion method based on the electronic tongue and electronic nose effectively extracted deep features of the taste and smell signals of green tea samples and provided richer fused feature representations, enabling highly accurate recognition of different species: the test-set accuracy, precision, recall, and F1-score were 99.00%, 99.05%, 99.00%, and 99.00%, respectively. [Conclusion] This study provides a low-cost, fast, and efficient detection method for green tea species recognition.
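A minimal illustrative sketch of the pipeline described above is given below in PyTorch. The STFT parameters, signal lengths, layer sizes, and the plain convolution blocks that stand in for the paper's selective kernel convolution and normalized attention modules are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch of the abstract's pipeline: STFT time-frequency maps ->
# CNN branches per modality -> Transformer-encoder fusion -> FC classifier.
# Plain Conv2d blocks stand in for the paper's selective kernel convolution
# and normalized attention; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

def signal_to_tf_map(x_1d, n_fft=64, hop_length=16):
    """Convert 1-D sensor signals (batch, time) into 2-D time-frequency
    magnitude maps (batch, 1, freq, frames) via the STFT."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(x_1d, n_fft=n_fft, hop_length=hop_length,
                      window=window, return_complex=True)
    return spec.abs().unsqueeze(1)  # magnitude spectrogram with a channel dim

class ConvBranch(nn.Module):
    """CNN branch extracting local features from one modality's TF maps."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.proj = nn.Linear(64 * 4 * 4, out_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))  # (batch, out_dim)

class ETongueENoseClassifier(nn.Module):
    """Fuses e-tongue and e-nose features with a Transformer encoder
    (multi-head self-attention) and classifies with a fully connected layer."""
    def __init__(self, embed_dim=128, n_classes=5):
        super().__init__()
        self.tongue_branch = ConvBranch(embed_dim)
        self.nose_branch = ConvBranch(embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.fc = nn.Linear(embed_dim, n_classes)

    def forward(self, tf_tongue, tf_nose):
        # Each modality contributes one feature token; self-attention mixes them.
        tokens = torch.stack(
            [self.tongue_branch(tf_tongue), self.nose_branch(tf_nose)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)  # pooled fused representation
        return self.fc(fused)                     # logits over 5 tea species

# Usage with random data: 8 samples, 1024-point signals per modality.
tongue_sig, nose_sig = torch.randn(8, 1024), torch.randn(8, 1024)
model = ETongueENoseClassifier()
logits = model(signal_to_tf_map(tongue_sig), signal_to_tf_map(nose_sig))
print(logits.shape)  # torch.Size([8, 5])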
Publication Date
7-22-2024
First Page
34
Last Page
42,52
DOI
10.13652/j.spjx.1003.5788.2023.81091
Recommended Citation
Chuanzheng, LIU; Jingyu, MA; Xuerui, BAI; Wanqing, ZENG; and Zhiqiang, WANG (2024) "Green tea species recognition based on electronic tongue and electronic nose combined with CNN-Transformer model," Food and Machinery: Vol. 40: Iss. 6, Article 5.
DOI: 10.13652/j.spjx.1003.5788.2023.81091
Available at: https://www.ifoodmm.cn/journal/vol40/iss6/5
References
[1] LU D B, SU Z C. Current situation and problems of geographical indication (product) protection in China: the case of tea[J]. China Tea, 2009, 31(4): 21-23, 36.
[2] TONG Y, AI S R, WU R M, et al. Sensory evaluation of tea appearance using computer vision classification[J]. Jiangsu Agricultural Sciences, 2019, 47(5): 170-173.
[3] GUO Y D. A review and commentary on technologies and methods for tea quality evaluation[J]. Acta Tea Sinica, 2023, 64(1): 10-20.
[4] CHEN M L, TANG D S, GONG S Y, et al. Quantitative analysis and correlation evaluation on taste quality of green tea[J]. Journal of Zhejiang University (Agriculture and Life Sciences), 2014, 40(6): 670-678.
[5] LONG L M, SONG S S, LI N, et al. Comparisons of characteristic aroma components and cultivar discriminant analysis of three varieties of famous green tea[J]. Food Science, 2015, 36(2): 114-119.
[6] LIU Q, OUYANG J, LIU C W, et al. Research progress of tea quality evaluation technology[J]. Journal of Tea Science, 2022, 42(3): 316-330.
[7] LIU T, CHEN Y, LI D, et al. Electronic tongue recognition with feature specificity enhancement[J]. Sensors, 2020, 20(3): 772.
[8] HINES E L, LLOBET E, GARDNER J. Electronic noses: a review of signal processing techniques[J]. IEE Proceedings-Circuits, Devices and Systems, 1999, 146(6): 297-310.
[9] KUMAR S, GHOSH A. A feature extraction method using linear model identification of voltammetric electronic tongue[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(11): 9243-9250.
[10] CHEN J, YANG Y, DENG Y, et al. Aroma quality evaluation of Dianhong black tea infusions by the combination of rapid gas phase electronic nose and multivariate statistical analysis[J]. LWT, 2022, 153: 112496.
[11] LU L, DENG S, ZHU Z, et al. Classification of rice by combining electronic tongue and nose[J]. Food Analytical Methods, 2015, 8(8): 1893-1902.
[12] BANERJEE M B, ROY R B, TUDU B, et al. Black tea classification employing feature fusion of e-nose and e-tongue responses[J]. Journal of Food Engineering, 2019, 244: 55-63.
[13] WEI Z, YANG Y, WANG J, et al. The measurement principles, working parameters and configurations of voltammetric electronic tongues and its applications for foodstuff analysis[J]. Journal of Food Engineering, 2018, 217: 75-92.
[14] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[15] AZAD R, AL-ANTARY M, HEIDARI M, et al. TransNorm: transformer provides a strong spatial normalization mechanism for a deep segmentation model[J]. IEEE Access, 2022, 10: 108205.
[16] WANG E, YU Q, CHEN Y, et al. Multi-modal knowledge graphs representation learning via multi-headed self-attention[J]. Information Fusion, 2022, 88: 78-85.
[17] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30: 5998-6008.
[18] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[J/OL]. arXiv, 2020. (2020-10-22) [2023-09-20]. https://arxiv.org/abs/2010.11929.
[19] GAO G W, WANG Z X, LI J C, et al. Lightweight bimodal network for single-image super-resolution via symmetric CNN and recursive transformer[J/OL]. arXiv, 2022. (2022-04-28) [2023-09-20]. https://arxiv.org/abs/2204.13286.
[20] YANG L, YANG Y, YANG J, et al. FusionNet: A convolution-transformer fusion network for hyperspectral image classification[J]. Remote Sensing, 2022, 14(16): 4066.
[21] ZHANG Q, DENG L. An intelligent fault diagnosis method of rolling bearings based on short-time Fourier transform and convolutional neural network[J]. Journal of Failure Analysis and Prevention, 2023, 23(2): 795-811.
[22] CHINCHOR N. MUC-4 evaluation metrics[C]// Proceedings of the 4th Conference on Message Understanding. [S.l.]: Association for Computational Linguistics, 1992: 22-29.
[23] WANG S C, YU X Y, GAO J Y, et al. Age detection of mature vinegar based on electronic tongue and electronic nose combined with DenseNet-ELM[J]. Food & Machinery, 2022, 38(4): 72-80, 133.