| Graduate Student: | 侯建全 JIAN-CHIUAN HOU |
|---|---|
| Thesis Title: | 基於深度強化學習之多相機陣列協作機制:以智慧家庭跌倒偵測為實施例 (Camera array collaboration based on deep reinforcement learning: A practical case of fall detection in smart home space) |
| Advisor: | 胡誌麟 Chih-Lin Hu |
| Degree: | Master |
| Department: | Department of Communication Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2021 |
| Academic Year: | 109 |
| Language: | English |
| Number of Pages: | 73 |
| Keywords: | Internet of Things, Reinforcement Learning |
With the aging of the global population and the shortage of medical care manpower, home health care has become an important livelihood issue. For the elderly or people living alone, falling at home is a common risk; in particular, if an elderly person who falls does not receive timely assistance, serious injury may result. In recent years, many fall-alert systems and wearable fall-alert devices have been proposed, among which camera-based fall event detection techniques and applications have attracted wide research attention. However, such fall detection methods face many limitations in home living environments, such as occlusion by obstacles and the camera's field of view and viewing angle.

Therefore, this thesis proposes a multi-camera collaborative fall detection mechanism based on deep reinforcement learning. Multiple camera devices collaborate and make joint judgments to overcome the difficulties a single camera encounters in fall event detection, and deep reinforcement learning is applied to learn the dynamic groups used in multi-camera collaboration, with the goals of improving the accuracy of the multi-camera system and speeding up its decision-making. We built an actual experimental environment, implemented a prototype of the system, and used fall detection as our practical case. We then compared the actual performance of three schemes: single-camera decision-making, multi-camera decision-making without dynamic groups, and multi-camera decision-making with dynamic groups.
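The idea of learning which cameras to group for a joint decision can be sketched, in heavily simplified form, as tabular Q-learning. This is an illustrative assumption, not the thesis's actual design: the thesis uses deep reinforcement learning (a neural network in place of the table), and the state encoding, action encoding, and reward shaping below are all hypothetical. Here a state is a bitmask of cameras with an unoccluded view of the subject, an action is the bitmask of cameras activated as a group, and the reward trades coverage against a per-camera cost standing in for decision latency.

```python
import random

# Hypothetical sketch of dynamic camera-group selection via tabular
# Q-learning. N_CAMERAS, the bitmask encodings, and the reward shaping
# are illustrative assumptions, not the thesis's actual model.
N_CAMERAS = 3
STATES = range(1, 2 ** N_CAMERAS)   # at least one camera sees the subject
ACTIONS = range(1, 2 ** N_CAMERAS)  # activate a non-empty camera group

def reward(state, action):
    """+1 if the activated group contains an unoccluded camera,
    minus a small per-camera cost modeling decision latency."""
    covered = 1.0 if (state & action) else 0.0
    cost = 0.1 * bin(action).count("1")
    return covered - cost

def train(episodes=20000, alpha=0.2, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        if rng.random() < epsilon:                      # explore
            a = rng.choice(list(ACTIONS))
        else:                                           # exploit
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        # One-step update: each detection event is treated as independent,
        # so there is no bootstrapped next-state term.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
# With only camera 0 unoccluded (state 0b001), the learned policy should
# activate just camera 0 (action 0b001): full coverage at minimum cost.
best = max(ACTIONS, key=lambda act: q[(0b001, act)])
```

In the deep-RL version described by the abstract, the Q-table would be replaced by a network that generalizes across states, which is what makes the approach viable when the number of cameras (and thus the state space) grows.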