| Author: | 駱鍇頡 (Kai-Jie Lo) |
|---|---|
| Thesis Title: | 基於分散式運算架構探索時序性小行星軌跡 (Exploration of Sequential Asteroid Trajectories with a Distributed Computing System) |
| Advisor: | 蔡孟峰 (Meng-Feng Tsai) |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Computer Science & Information Engineering |
| Year of Publication: | 2018 |
| Academic Year: | 106 |
| Language: | Chinese |
| Pages: | 76 |
| Keywords (Chinese): | 大數據, 分散式運算, 小行星軌跡, Hough Transform, Transitive Closure |
| Keywords (English): | Big Data, Distributed Computing, Asteroid Trajectory, Hough Transform, Transitive Closure |
Astronomical observations produce enormous amounts of data; surveys run over long periods readily accumulate datasets at the petabyte (PB) scale or beyond. This not only burdens astronomers during analysis but also makes the analysis itself extremely time-consuming. Although computer hardware keeps improving, an ordinary computer still cannot process all of the data on its own, as it may run out of memory or disk space and the computation takes prohibitively long. This thesis therefore proposes processing astronomical data with distributed algorithms while keeping the results consistent with the data's sequential (time-ordered) nature, so that the data can be processed both efficiently and accurately. The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) serves as the experimental data source. The Hadoop Distributed File System is used as the storage layer for its scalability and reliability, and Apache Spark serves as the distributed computing framework, allowing distributed algorithms to search for asteroid trajectories in the sky more efficiently. To integrate Spark more tightly with the Hadoop ecosystem, Hadoop YARN is used as the cluster resource manager.
In the preprocessing stage, a k-d tree range search is performed on the raw data to remove noise. Next, a distributed Hough transform algorithm finds lines that may correspond to trajectories, which serve as the condition for the first grouping. Then, based on the first grouping result, detections are paired in time order to find pairs of points that may form trajectory segments, and each pair's velocity and direction are computed as the condition for the second grouping. After that, an adapted Floyd-Warshall algorithm computes the transitive closure of the second grouping result to obtain the maximal patterns of the trajectories. Finally, before the trajectories are output, the similarity of the maximal patterns must be checked in order to remove duplicate trajectories introduced by the Hough transform's discretized sampling.
The amount of astronomical observational data is growing rapidly, and long-term survey data has entered the petabyte (PB) scale. This makes analysis both difficult and extremely time-consuming. Although computer hardware continues to improve, an ordinary personal computer cannot process the full dataset on its own, running into memory and disk exhaustion as well as prohibitive computation time. The purpose of this thesis is to process astronomical observational data with distributed algorithms while preserving its sequential property; the data comes from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS).

The Hadoop Distributed File System (HDFS) is used for storage because of its excellent scalability and reliability. Apache Spark is adopted as the distributed computing framework so that distributed algorithms can be applied effectively to explore asteroid trajectories, and Hadoop YARN serves as the cluster resource manager for the system.

This approach can be split into seven stages. First, range queries over k-dimensional (k-d) trees filter out noise. Second, a distributed Hough transform algorithm determines candidate lines for grouping. Third, detections are filtered by the standard deviation of their magnitudes. Fourth, every two detections are paired according to the sequential property, and each pair's velocity and direction are calculated as conditions for the next grouping stage. Fifth, pairs are grouped by the Hough transform's rho and theta together with velocity and direction. Sixth, an adapted Floyd-Warshall algorithm computes the transitive closure and establishes the maximal patterns. Finally, asteroid trajectories are deduplicated before the result is output.
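A Spark-on-YARN setup such as the one described above is typically driven through `spark-submit`. The fragment below is a minimal, hypothetical invocation: the jar name, class name, executor sizing, and HDFS paths are all illustrative assumptions, not values taken from the thesis.

```shell
# Submit the trajectory-search job to a YARN-managed cluster.
# Jar, class, and hdfs:// paths below are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 8 \
  --executor-memory 4g \
  --class AsteroidTrajectory \
  asteroid-trajectory.jar \
  hdfs:///panstarrs/detections hdfs:///panstarrs/trajectories
```

With `--deploy-mode cluster`, the Spark driver itself runs inside a YARN container, so the job survives the submitting shell disconnecting.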
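As a concrete illustration of the first stage, the sketch below builds a small 2-d tree over detection coordinates and answers axis-aligned range queries against it. The `is_noise` criterion (too few neighbors inside an `eps` box) is a hypothetical stand-in; the thesis's actual noise rule is not reproduced in the abstract.

```python
from collections import namedtuple

Node = namedtuple("Node", "point left right axis")

def build(points, depth=0):
    """Build a 2-d tree over (x, y)-like detection coordinates."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid],
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1),
                axis)

def range_search(node, lo, hi, out):
    """Collect every point inside the axis-aligned box [lo, hi]."""
    if node is None:
        return
    p = node.point
    if all(lo[a] <= p[a] <= hi[a] for a in range(2)):
        out.append(p)
    if lo[node.axis] <= p[node.axis]:       # box may extend into left subtree
        range_search(node.left, lo, hi, out)
    if p[node.axis] <= hi[node.axis]:       # box may extend into right subtree
        range_search(node.right, lo, hi, out)

def is_noise(tree, p, eps, min_neighbors):
    """Hypothetical filter: flag a detection with too few neighbors
    inside its eps-box as noise."""
    out = []
    range_search(tree, (p[0] - eps, p[1] - eps), (p[0] + eps, p[1] + eps), out)
    return len(out) - 1 < min_neighbors     # exclude the point itself
```

Each level of the tree splits on an alternating axis, so a range query only descends into subtrees the query box can actually intersect.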
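The second stage's line detection can be sketched with a plain (non-distributed) Hough accumulator: each detection votes for every discretized (rho, theta) cell whose line could pass through it, and cells with enough votes become candidate lines for the first grouping. The bin counts and vote threshold here are illustrative, not the thesis's parameters.

```python
import math
from collections import defaultdict

def hough_vote(points, theta_bins=180, rho_step=1.0):
    """Vote each detection (x, y) into a discretized (rho, theta)
    accumulator; collinear detections pile up in the same cell."""
    acc = defaultdict(list)
    for (x, y) in points:
        for t in range(theta_bins):
            theta = math.pi * t / theta_bins
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)].append((x, y))
    return acc

def candidate_lines(acc, min_points=3):
    """Cells with enough votes become first-stage groups."""
    return {cell: pts for cell, pts in acc.items() if len(pts) >= min_points}
```

Because rho and theta are discretized, nearly-collinear points can land in several adjacent cells; this is the duplication the final deduplication stage has to clean up.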
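Stages four through six culminate in a transitive-closure computation over time-ordered pair links. The sketch below uses the textbook Floyd-Warshall-style closure; the thesis's specific adaptation is not detailed in the abstract, and the assumptions that detection indices are time-sorted and that each head extends into a single linear chain are mine.

```python
def transitive_closure(n, pairs):
    """Floyd-Warshall-style closure: reach[i][j] is True when detection j
    is reachable from detection i through time-ordered pair links."""
    reach = [[False] * n for _ in range(n)]
    for i, j in pairs:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

def maximal_patterns(n, pairs):
    """A pattern is maximal when no pair extends it on either end:
    its head has no incoming link and its tail no outgoing one."""
    reach = transitive_closure(n, pairs)
    has_in = {j for _, j in pairs}
    has_out = {i for i, _ in pairs}
    patterns = []
    for s in range(n):
        if s in has_out and s not in has_in:            # trajectory head
            # Assumes time-sorted indices forming one chain per head.
            patterns.append([s] + [j for j in range(n) if reach[s][j]])
    return patterns
```

Chaining pairs this way turns locally consistent two-point segments into full candidate trajectories without re-examining the raw detections.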