Cognitive Estimation of Development Effort, Time, Errors, and the Defects of Software

Authors

  • Amit Kumar JAKHAR Department of Computer Science and Engineering, Birla Institute of Technology, Ranchi, Jharkhand
  • Kumar RAJNISH Department of Computer Science and Engineering, Birla Institute of Technology, Ranchi, Jharkhand

Keywords

Cognitive weight, basic control structures, operators, operands, machine learning techniques

Abstract

In the software industry, estimating the effort and time required to develop software is very challenging. Development spans several phases, and measuring the effort consumed in each phase is problematic. It is also observed that effort estimates for a project may be over- or under-estimated, which can cause enormous damage to an organization's budget and schedule. To address these problems, a cognitive technique is proposed for measuring development effort, time, and errors. After the development effort is measured, machine learning techniques (Bayesian Net, Logistic Regression, Multilayer Perceptron, SMO, and LibSVM) are applied for software defect prediction. To estimate software development effort and defects, the NASA PROMISE datasets CM1, KC3, PC1, PC2, and JM1 are used, along with devised datasets obtained by applying the proposed cognitive technique's parameters to the original datasets. The experimental results of both experiments demonstrate the effectiveness of the proposed work.
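The second stage of the pipeline described above can be illustrated with a minimal sketch: module-level static metrics (here, synthetic stand-ins for PROMISE-style features such as lines of code, operator and operand counts, and a cognitive-weight total) are fed to standard classifiers for defect prediction. The feature construction, the toy ground truth, and the mapping of the paper's WEKA classifiers to scikit-learn equivalents (SMO/LibSVM to `SVC`, Logistic Regression to `LogisticRegression`) are assumptions of this sketch, not the authors' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-ins for PROMISE-style module metrics (hypothetical values).
loc = rng.integers(10, 500, n).astype(float)          # lines of code
operators = loc * rng.uniform(0.5, 2.0, n)            # operator count
operands = loc * rng.uniform(0.5, 2.0, n)             # operand count
cognitive_weight = loc * rng.uniform(1.0, 3.0, n)     # total cognitive weight
X = np.column_stack([loc, operators, operands, cognitive_weight])

# Toy ground truth: larger, cognitively heavier modules are defect-prone.
score = cognitive_weight + 0.1 * loc
y = (score > np.median(score)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
results = {}
for name, clf in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                  ("SVC", SVC(kernel="rbf"))]:
    # Scale features before fitting, since the metrics differ in magnitude.
    pipe = make_pipeline(StandardScaler(), clf)
    pipe.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, pipe.predict(X_te))
    print(f"{name}: defect-prediction accuracy {results[name]:.2f}")
```

On real PROMISE data the features would come from the dataset's ARFF/CSV files rather than being generated, and evaluation would typically use cross-validation rather than a single split.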



Author Biography

Amit Kumar JAKHAR, Department of Computer Science and Engineering, Birla Institute of Technology, Ranchi, Jharkhand

Department of Computer Science and Engineering, BIT Mesra, Ranchi, Jharkhand 835215



Published

2015-04-03

How to Cite

JAKHAR, A. K., & RAJNISH, K. (2015). Cognitive Estimation of Development Effort, Time, Errors, and the Defects of Software. Walailak Journal of Science and Technology (WJST), 13(6), 465–478. Retrieved from https://wjst.wu.ac.th/index.php/wjst/article/view/1367

Issue

Section

Research Article