| 研究生 (Author): | 陳泓年 Hong-Nien Chen |
|---|---|
| 論文名稱 (Title): | 混合實境中虛擬服裝的即時光雕投影系統 A Real-Time Projection Mapping System for Virtual Clothes in Mixed Reality |
| 指導教授 (Advisor): | 施國琛 Timothy K. Shih |
| 口試委員 (Committee Members): | |
| 學位類別 (Degree): | 碩士 Master |
| 系所名稱 (Department): | 資訊電機學院 - 資訊工程學系 Department of Computer Science & Information Engineering |
| 論文出版年 (Year of Publication): | 2019 |
| 畢業學年度 (Academic Year of Graduation): | 107 |
| 語文別 (Language): | 英文 English |
| 論文頁數 (Pages): | 76 |
| 中文關鍵詞 (Chinese Keywords): | 虛擬劇院、光雕秀、人體追踪、骨架、Kinect、投影校正 |
| 外文關鍵詞 (English Keywords): | Virtual theater, projection mapping, human tracking, skeleton, Kinect, projection correction |
Human-computer interaction has drawn particular attention in recent years. Combined with mixed-reality applications, it is a technology that will be inseparable from our lives in the future. All of these applications aim to let people interact with computers more naturally.
In many performances, costumes are one of the key factors that make a show impressive, so most productions must prepare many costumes and change them frequently between acts. Recently, demand has grown for stage applications that combine technology with projection-mapping effects, but the use of light on stage has long been limited. To put on a show, one must set up the stage and prepare multiple outfits, which is a great burden for the performers, while young audiences favor things that change quickly. This thesis therefore seeks to extend projection applications to the stage so that performances become more brilliant and fluent. When every show requires a costume change, changing clothes during the performance and even transporting the clothes are both problems; we aim to alleviate these problems and to increase the variability of performances.
This thesis proposes a real-time virtual-clothing tracking and projection system. It develops a new projection application that broadens the possibilities of stage performance and saves performers the cost of costumes, the inconvenience of carrying them, and the time spent changing them. By combining performance with technology, the system presents new styles, reduces the high expense of staging, and lets a person produce effects easily in real time. We use a Kinect as the sole sensor for human detection: it tracks the user and detects the skeleton, a fast and efficient coordinate transformation converts the virtual 3D coordinates into the 3D coordinates of the projection space, and the skeleton is linked to the virtual clothes. For each body part, the rotation angle of a child joint is computed from kinematics so that natural joint poses are simulated and the virtual clothes follow and fit the user. Several costumes are provided, together with interaction conditions such as raising a hand or performing a specific gesture to change the costume or trigger a stage effect in time, giving users ample possibilities in various kinds of performances. The results meet expectations, a demonstration film was produced, and the system is simple and convenient to use.
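The abstract mentions gesture-triggered costume changes, such as raising a hand to switch outfits. A minimal sketch of such a trigger, working from skeleton joint heights alone, is shown below; the joint names, threshold, and wardrobe list are illustrative assumptions, not the thesis's actual rule.

```python
def hand_raised(hand_y, head_y, margin=0.05):
    """True when the hand joint is above the head joint by `margin` metres.

    hand_y / head_y are assumed vertical coordinates of Kinect skeleton
    joints; the 5 cm margin is a hypothetical debounce threshold.
    """
    return hand_y > head_y + margin

def next_costume(current, costumes):
    """Cycle to the next costume in the wardrobe list."""
    return costumes[(costumes.index(current) + 1) % len(costumes)]

# Example: hand (1.85 m) is raised above the head (1.70 m), so the
# costume advances from "dress" to "suit".
costumes = ["dress", "suit", "kimono"]
current = "dress"
if hand_raised(hand_y=1.85, head_y=1.70):
    current = next_costume(current, costumes)
# current is now "suit"
```

In a live system this check would run once per Kinect skeleton frame, with some debouncing so one raised hand does not cycle through the whole wardrobe.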
The issue of human-computer interaction (HCI) has been particularly popular in recent years. Its application to mixed reality is a technology that will be inseparable from our lives in the future. Users want to interact with computers more naturally, so HCI has become an important topic in computer science and a development priority for VR. HCI makes it easier for people to communicate with computers; the goal is to present as interactive an experience as possible.
In many performances, costumes are often seen as one of the important factors affecting the quality of a show, so many productions require a lot of costumes to be prepared and changed frequently between acts. Facing the audience, the actors rely on the outline of the story and add their own creative improvisation to complete a perfect performance. Recently, driven by the professional stage, demand has grown for performance applications combined with projection-mapping effects, but the application of light has always been limited. To start a show, one must set up the stage and prepare multiple pieces of clothing, which is very troublesome for the performers. Young people always like fashionable things and hope they can change quickly. This thesis therefore extends the application to other levels for the brilliance and fluency of the performance: if we need to change clothes for every show, changing clothes during the performance and even carrying the clothes are both problems, and we aim to improve these problems and increase performance variability.
In this paper, we propose a method that lets users project virtual clothing onto themselves using a computer and a projector. A Kinect captures the user's body and skeleton, including the position and orientation of each joint. The skeleton is mapped to each part of the virtual garment, and the rotation angle of each child joint is calculated by kinematics to simulate the natural posture of the joints, so that the virtual garment follows and conforms to the user. We implement a fast and efficient three-step coordinate transformation from camera coordinates to real-world three-dimensional coordinates in order to project and control the virtual clothing in real time. Users can choose different costumes in our system, and anyone can easily wear virtual costumes during a show.
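The two core steps described above, mapping Kinect camera-space joints into the projection space and deriving a bone's rotation from a parent-child joint pair, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the calibration matrix `T_CAM_TO_PROJ` is a placeholder for whatever transform the system's three-step calibration produces, and the reference "down" axis is an assumption.

```python
import numpy as np

# Hypothetical 4x4 rigid transform from Kinect camera space to projector
# space, as would be produced by an offline calibration step.
T_CAM_TO_PROJ = np.array([
    [1.0, 0.0, 0.0,  0.10],
    [0.0, 1.0, 0.0, -0.05],
    [0.0, 0.0, 1.0,  0.00],
    [0.0, 0.0, 0.0,  1.00],
])

def cam_to_proj(p_cam):
    """Map a 3D point from camera coordinates to projection-space coordinates
    using homogeneous coordinates."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)
    return (T_CAM_TO_PROJ @ p)[:3]

def joint_rotation(parent, child):
    """Angle (radians) of the parent->child bone relative to the downward
    vertical axis, used to pose the matching part of the virtual garment."""
    v = np.asarray(child, dtype=float) - np.asarray(parent, dtype=float)
    v /= np.linalg.norm(v)
    down = np.array([0.0, -1.0, 0.0])
    return float(np.arccos(np.clip(np.dot(v, down), -1.0, 1.0)))
```

For example, a shoulder at (0, 1.4, 2) with the elbow directly below it at (0, 1.1, 2) yields a bone angle of 0, i.e. the garment sleeve hangs straight down; in the real system the same computation would run per joint on every skeleton frame before rendering.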