| Graduate Student: | 范振維 Chen-Wei Fan |
|---|---|
| Thesis Title: | CRUnit - Capture / Replay Based Unit Testing |
| Advisor: | 鄭永斌 Yung-Pin Cheng |
| Committee Members: | |
| Degree: | 博士 Doctor |
| Department: | 資訊電機學院 - 軟體工程研究所 Graduate Institute of Software Engineering |
| Academic Year of Graduation: | 100 |
| Language: | Chinese |
| Pages: | 44 |
| Chinese Keywords: | 單元測試 (unit testing) |
| English Keywords: | Capture / Replay, unit testing |
When an xUnit testing framework is introduced into the software development process, a great deal of refactoring effort on the SUT (System Under Test) is often required before unit testing can proceed smoothly. Although it is also a form of software testing, unit testing cannot be automated with tools in the way system integration testing can; instead, developers must manually write large amounts of additional test code to accomplish it. Unfortunately, this hand-written test code, just like the system code itself, easily falls into a maintenance nightmare.
In this thesis, we propose a new approach to the problem of the extra cost developers must pay when adopting an xUnit testing framework, and based on this approach we build a tool called CRUnit (Capture / Replay based Unit Testing). CRUnit is an extension of the JUnit tool in the Eclipse Integrated Development Environment (IDE). With the help of the debugger, it divides unit testing into two phases, capture and replay, to replace the large number of assertions that must otherwise be written under a traditional xUnit testing framework. CRUnit breaks through the limitation of traditional xUnit frameworks, which must treat the SUT as a black box: with the debugger's help it can probe and verify the internal state of the SUT, so we no longer need to write long and tedious assertions in the test code, nor break the encapsulation of the CUT (Class Under Test) in violation of the black-box principle just for the sake of testing. We exploit the natural human aptitude for judging and verifying what the eye sees, together with the help of visualization tools (visualizers), to achieve this form of semi-automated unit testing.
Adopting an xUnit testing framework in software development often requires a lot of refactoring of the system under test (SUT). In contrast to system testing, which can be performed by tools, xUnit testing is a coding activity whose deliverable is test code. Unfortunately, test code often suffers from the same maintenance problems as system code.
In this paper, a prototype tool called CRUnit is proposed to reduce the overhead of adopting an xUnit testing framework. CRUnit is an extension of the JUnit module in the Eclipse IDE, which replaces hand-crafted assertions with a Capture / Replay process carried out with help from the debugger. In contrast to xUnit testing frameworks, which treat the SUT as a black box, CRUnit probes the internal states of the SUT, so complicated hand-crafted assertions can be removed from test methods and class encapsulation principles are no longer compromised. This semi-automated process is achieved by leveraging the verification power of the human eye and brain, aided by "visualizers".
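The contrast the abstract draws can be illustrated with a small plain-Java sketch. All class and method names below are hypothetical and are not CRUnit's actual API: the idea is that instead of asserting each field of the CUT's state by hand, a capture/replay approach records one snapshot of the internal state (as a debugger could observe it) on a first "capture" run, and later "replay" runs are checked against that single recorded snapshot.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical class under test (CUT): a tiny integer stack.
class IntStack {
    private final Deque<Integer> items = new ArrayDeque<>();
    void push(int v) { items.push(v); }
    int pop() { return items.pop(); }
    // Textual snapshot of the internal state, as a debugger could expose it.
    String snapshot() { return items.toString(); }
}

public class CaptureReplayDemo {
    // The test scenario, run identically in both phases.
    static IntStack runScenario() {
        IntStack s = new IntStack();
        s.push(1); s.push(2); s.push(3);
        s.pop();
        return s;
    }

    public static void main(String[] args) {
        // Capture phase: execute once and record the resulting state.
        String captured = runScenario().snapshot();

        // Replay phase: rerun the scenario and compare snapshots.
        // One state comparison stands in for many per-field assertEquals calls.
        boolean ok = captured.equals(runScenario().snapshot());
        System.out.println(ok ? "replay matches capture" : "state diverged");
    }
}
```

In CRUnit itself the snapshot is taken by the debugger at a breakpoint and judged by a human with the aid of visualizers, rather than by string comparison; the sketch only shows why a recorded state can substitute for hand-written assertions.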