Displaying results 1-50 of 88.
Smarter Software Engineering: Practical Data Mining Approaches
Found in: Software Engineering Workshop, Annual IEEE/NASA Goddard
By Tim Menzies, Gary D. Boetticher
Issue Date:December 2002
pp. 1
No summary available.
 
The Many Faces of Software Analytics
Found in: IEEE Software
By Tim Menzies, Thomas Zimmermann
Issue Date:September 2013
pp. 28-29
Articles regarding the many faces of software analytics highlight the power of analytics across different settings: large organizations and open source projects, as well as small- to medium-sized projects.
 
When to Test Less
Found in: IEEE Software
By Tim Menzies,Bojan Cukic
Issue Date:September 2000
pp. 107-112
The authors argue that, for many small-scale projects, a small number of randomly selected tests will adequately probe the software. They discuss how a program's shape can help determine whether rapid testing with limited resources will be as effective as ...
 
Learning Project Management Decisions: A Case Study with Case-Based Reasoning versus Data Farming
Found in: IEEE Transactions on Software Engineering
By Tim Menzies, Adam Brady, Jacky Keung, Jairus Hihn, Steven Williams, Oussama El-Rawas, Phillip Green, Barry Boehm
Issue Date:December 2013
pp. 1698-1713
Background: Given information on just a few prior projects, how do we learn the best and fewest changes for current projects? Aim: To conduct a case study comparing two ways to recommend project changes. 1) Data farmers use Monte Carlo sampling to survey a...
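For readers unfamiliar with the data-farming half of this comparison, the sketch below illustrates the core idea in Python: Monte Carlo sample a space of project options against a simulated cost model and keep the cheapest settings. The option names, ranges, and cost function are invented for illustration; they are not the process model used in the paper.

    import random

    # Hypothetical project options and ranges (illustrative stand-ins only).
    OPTIONS = {
        "staff_experience": (1, 5),
        "process_maturity": (1, 5),
        "reuse_percent":    (0, 60),
    }

    def simulated_cost(opts):
        # Toy stand-in for a software process model; lower is better.
        return (100
                - 8 * opts["staff_experience"]
                - 5 * opts["process_maturity"]
                - 0.3 * opts["reuse_percent"]
                + random.gauss(0, 5))          # model uncertainty

    def data_farm(n=10_000, keep=20):
        # Monte Carlo sample the option space; keep the cheapest simulated runs.
        runs = []
        for _ in range(n):
            opts = {k: random.uniform(lo, hi) for k, (lo, hi) in OPTIONS.items()}
            runs.append((simulated_cost(opts), opts))
        return sorted(runs, key=lambda r: r[0])[:keep]

    for cost, opts in data_farm()[:5]:
        print(round(cost, 1), {k: round(v, 1) for k, v in opts.items()})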
 
Learning from Open-Source Projects: An Empirical Study on Defect Prediction
Found in: 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)
By Zhimin He, Fayola Peters, Tim Menzies, Ye Yang
Issue Date:October 2013
pp. 45-54
The fundamental issue in cross project defect prediction is selecting the most appropriate training data for creating quality defect predictors. Another concern is whether historical data of open-source projects can be used to create quality predictors for...
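One widely used way to select cross-project training data (not necessarily the approach evaluated in this paper) is relevance filtering: train only on the cross-project modules nearest to the target project's modules. A minimal sketch with synthetic data and scikit-learn:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    cross_X = rng.random((500, 6))                        # pooled open-source modules (toy data)
    cross_y = (cross_X[:, 0] + cross_X[:, 1] > 1).astype(int)
    local_X = rng.random((40, 6))                         # the target project's modules

    # Relevance filtering: for each local module, borrow its k nearest
    # cross-project neighbours, then train only on that union.
    nn = NearestNeighbors(n_neighbors=10).fit(cross_X)
    idx = np.unique(nn.kneighbors(local_X, return_distance=False).ravel())
    predictor = GaussianNB().fit(cross_X[idx], cross_y[idx])
    print("predicted defect-proneness:", predictor.predict(local_X)[:10])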
 
On Parameter Tuning in Search Based Software Engineering: A Replicated Empirical Study
Found in: 2013 3rd International Workshop on Replication in Empirical Software Engineering Research (RESER)
By Abdel Salam Sayyad, Katerina Goseva-Popstojanova, Tim Menzies, Hany Ammar
Issue Date:October 2013
pp. 84-90
Multiobjective Evolutionary Algorithms are increasingly used to solve optimization problems in software engineering. The choice of parameters for those algorithms usually follows the "default" settings, often accepted as a "rule of thumb"...
 
Software Analytics: So What?
Found in: IEEE Software
By Tim Menzies, Thomas Zimmermann
Issue Date:July 2013
pp. 31-37
The guest editors of this special issue of IEEE Software invited submissions that reflected the benefits (and drawbacks) of software analytics, an area of explosive growth. They had so many excellent submissions that they had to split this special issue in...
 
Beyond Data Mining
Found in: IEEE Software
By Tim Menzies
Issue Date:May 2013
pp. 92
Last century, it wasn't known if data miners could find structure within software projects. This century, we know better: data mining has been successfully applied to many different artifacts from software projects. So it's time to move on to "What's ...
   
Local vs. global models for effort estimation and defect prediction
Found in: Automated Software Engineering, International Conference on
By Tim Menzies, Andrew Butcher, Andrian Marcus, Thomas Zimmermann, David Cok
Issue Date:November 2011
pp. 343-351
Data miners can infer rules showing how to improve either (a) the effort estimates of a project or (b) the defect predictions of a software module. Such studies often exhibit conclusion instability regarding what is the most effective action for different ...
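The local-versus-global mechanic is easy to sketch: learn one model over all the data, or cluster first and learn one model per cluster, then compare. The fragment below uses synthetic data and scikit-learn and illustrates only the setup, not the paper's method or results.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LinearRegression

    # Toy data: four regions whose response to the first attribute has a different slope.
    X, region = make_blobs(n_samples=200, centers=4, n_features=5, random_state=1)
    rng = np.random.default_rng(1)
    slopes = np.array([3.0, -2.0, 5.0, 0.5])
    y = slopes[region] * X[:, 0] + rng.normal(0, 0.3, 200)

    # Global model: one learner over all the data.
    global_r2 = LinearRegression().fit(X, y).score(X, y)

    # Local models: cluster first, then learn one model per cluster.
    labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)
    local_r2 = np.mean([LinearRegression()
                        .fit(X[labels == k], y[labels == k])
                        .score(X[labels == k], y[labels == k])
                        for k in range(4)])
    print(f"global R^2 = {global_r2:.2f}   mean local R^2 = {local_r2:.2f}")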
 
How to Find Relevant Data for Effort Estimation?
Found in: Empirical Software Engineering and Measurement, International Symposium on
By Ekrem Kocaguneli, Tim Menzies
Issue Date:September 2011
pp. 255-264
Background: Building effort estimators requires training data. How can we find that data? It is tempting to cross the boundaries of development type, location, language, application and hardware to use existing datasets of other organizations. However,...
 
Understanding the Value of Software Engineering Technologies
Found in: Automated Software Engineering, International Conference on
By Phillip Green II, Tim Menzies, Steven Williams, Oussama El-Rawas
Issue Date:November 2009
pp. 52-61
When AI search methods are applied to software process models, appropriate technologies can be discovered for a software project. We show that those recommendations are greatly affected by the business context of their use. For example, the automatic de...
 
Applications of Simulation and AI Search: Assessing the Relative Merits of Agile vs Traditional Software Development
Found in: Automated Software Engineering, International Conference on
By Bryan Lemon, Aaron Riesbeck, Tim Menzies, Justin Price, Joseph D'Alessandro, Rikard Carlsson, Tomi Prifiti, Fayola Peters, Hiuhua Lu, Dan Port
Issue Date:November 2009
pp. 580-584
This paper augments Boehm-Turner's model of agile and plan-based software development with an AI search algorithm. The AI search finds the key factors that predict the success of agile or traditional plan-based software developments. Accordin...
 
How to avoid drastic software process change (using stochastic stability)
Found in: Software Engineering, International Conference on
By Tim Menzies, Steve Williams, Barry Boehm, Jairus Hihn
Issue Date:May 2009
pp. 540-550
Before performing drastic changes to a project, it is worthwhile to thoroughly explore the available options within the current structure of a project. An alternative to drastic change is internal changes that adjust current options within a software proj...
 
Cost Curve Evaluation of Fault Prediction Models
Found in: Software Reliability Engineering, International Symposium on
By Yue Jiang, Bojan Cukic, Tim Menzies
Issue Date:November 2008
pp. 197-206
Prediction of fault-prone software components is one of the most researched problems in software engineering. Many statistical techniques have been proposed, but there is no consensus on the methodology to select the...
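For readers unfamiliar with cost curves, the underlying arithmetic is small: plot normalized expected cost against a probability-cost value that folds together the class prior and the misclassification costs. The operating point below (70% recall, 25% false alarms, a missed fault costing five times a false alarm) is invented, not a result from this paper.

    def probability_cost(p_pos, c_fn, c_fp):
        # Probability-cost value PC(+): combines class prior and misclassification costs.
        return (p_pos * c_fn) / (p_pos * c_fn + (1 - p_pos) * c_fp)

    def normalized_expected_cost(tpr, fpr, pc_pos):
        # Normalized expected cost of a classifier at one operating condition.
        return (1 - tpr) * pc_pos + fpr * (1 - pc_pos)

    tpr, fpr = 0.70, 0.25    # invented fault-predictor performance

    # Sweep operating conditions: fault-prone modules rare (10%) to common (50%).
    for p_pos in (0.10, 0.25, 0.50):
        pc = probability_cost(p_pos, c_fn=5.0, c_fp=1.0)
        print(f"p(+)={p_pos:.2f}  PC(+)={pc:.2f}  "
              f"cost={normalized_expected_cost(tpr, fpr, pc):.2f}")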
 
A Broad, Quantitative Model for Making Early Requirements Decisions
Found in: IEEE Software
By Martin S. Feather, Steven L. Cornford, Kenneth A. Hicks, James D. Kiper, Tim Menzies
Issue Date:March 2008
pp. 49-56
During the early phases of project life cycles, detailed information is scarce, yet developers frequently need to make key decisions, especially concerning trade-offs among quality requirements. Such trading among competing concerns occurs in many fields, ...
 
Fault Prediction using Early Lifecycle Data
Found in: Software Reliability Engineering, International Symposium on
By Yue Jiang, Bojan Cukic, Tim Menzies
Issue Date:November 2007
pp. 237-246
The prediction of fault-prone modules in a software project has been the topic of many studies. In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely,...
 
Column Pruning Beats Stratification in Effort Estimation
Found in: Predictor Models in Software Engineering, International Workshop on
By Omid Jalali, Tim Menzies, Dan Baker, Jairus Hihn
Issue Date:May 2007
pp. 7
Local calibration combined with stratification, also known as row pruning, is a common technique used by cost estimation professionals to improve model performance. The results presented in this paper raise several serious questions concerning the benefits...
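The two pruning styles being compared are easy to illustrate: column pruning keeps only the most informative attributes, while stratification (row pruning) keeps only a relevant subset of the projects. The sketch below uses a synthetic dataset and scikit-learn, not the cost data studied in the paper.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.random((100, 10))                                  # 100 toy projects, 10 attributes
    y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.1, 100)    # effort driven by two attributes
    stratum = X[:, 9] > 0.5                                    # a made-up stratification attribute

    # Column pruning: keep only the most informative attributes, use all rows.
    cols = SelectKBest(f_regression, k=2).fit(X, y).get_support()
    col_model = LinearRegression().fit(X[:, cols], y)

    # Row pruning (stratification): keep only rows from one stratum, use all columns.
    row_model = LinearRegression().fit(X[stratum], y[stratum])

    print("column pruning R^2:", round(col_model.score(X[:, cols], y), 2))
    print("row pruning    R^2:", round(row_model.score(X[stratum], y[stratum]), 2))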
 
The Strangest Thing About Software
Found in: Computer
By Tim Menzies, David Owen, Julian Richardson
Issue Date:January 2007
pp. 54-60
Although there are times when random search is dangerous and should be avoided, software analysis should start with random methods because they are so cheap, moving to the more complex methods only when random methods fail.
 
Making Sense of Requirements, Sooner
Found in: Computer
By Tim Menzies, Julian Richardson
Issue Date:October 2006
pp. 112-114
Early requirements models can be built quickly using simulation to tease out key decisions.
 
Experiences using Visualization Techniques to Present Requirements, Risks to Them, and Options for Risk Mitigation
Found in: Requirements Engineering Visualization, First International Workshop on
By Martin S. Feather, Steven L. Cornford, James D. Kiper, Tim Menzies
Issue Date:September 2006
pp. 10
For several years we have been employing a risk-based decision process to guide development and application of advanced technologies, and for research and technology portfolio planning. The process is supported by custom software, in which visualiz...
 
Evidence-Based Cost Estimation for Better-Quality Software
Found in: IEEE Software
By Tim Menzies, Jairus Hihn
Issue Date:July 2006
pp. 64-66
Evidence-based reasoning is becoming common in many fields. It's widely enshrined in the practice and teaching of medicine, law, and management, for example. Evidence-based approaches demand that, among other things, practitioners systematically track down...
 
Qualitative Modeling for Requirements Engineering
Found in: Software Engineering Workshop, Annual IEEE/NASA Goddard
By Tim Menzies, Julian Richardson
Issue Date:April 2006
pp. 11-20
Acquisition of...
 
Finding the Right Data for Software Cost Modeling
Found in: IEEE Software
By Zhihao Chen, Barry Boehm, Tim Menzies, Daniel Port
Issue Date:November 2005
pp. 38-46
Strange to say, when building a software cost model, sometimes it's useful to ignore much of the available cost data. One way to do this is to perform data-pruning experiments after data collection and before model building. Experiments involving a set of ...
 
How Good Is Your Blind Spot Sampling Policy?
Found in: High-Assurance Systems Engineering, IEEE International Symposium on
By Tim Menzies, Justin S. Di Stefano
Issue Date:March 2004
pp. 129-138
Assessing software costs money and better assessment costs exponentially more money. Given finite budgets, assessment resources are typically skewed towards areas that are believed to be mission critical. This leaves blind spots: portions of the system tha...
 
Model-Based Software Testing via Incremental Treatment Learning
Found in: Software Engineering Workshop, Annual IEEE/NASA Goddard
By Dustin Geletko, Tim Menzies
Issue Date:December 2003
pp. 82
Model-based software has become quite popular in recent years, making its way into a broad range of areas, including the aerospace industry. The models provide an easy graphical interface to develop systems, which can generate the sometimes tedious code th...
 
On the Advantages of Approximate vs. Complete Verification: Bigger Models, Faster, Less Memory, Usually Accurate
Found in: Software Engineering Workshop, Annual IEEE/NASA Goddard
By David Owen, Tim Menzies, Mats Heimdahl, Jimin Gao
Issue Date:December 2003
pp. 75
As software grows increasingly complex, verification becomes more and more challenging. Automatic verification by model checking has been effective in many domains including computer hardware design, networking, security and telecommunications protocols, a...
 
Matching Software Practitioner Needs to Researcher Activities
Found in: Asia-Pacific Software Engineering Conference
By Martin S. Feather, Tim Menzies, Judith R. Connelly
Issue Date:December 2003
pp. 6
We present an approach to matching software practitioners' needs to software researchers' activities. It uses an accepted taxonomical software classification scheme as intermediary, in terms of which practitioners express needs, and researchers express act...
 
Data Mining for Very Busy People
Found in: Computer
By Tim Menzies, Ying Hu
Issue Date:November 2003
pp. 22-29
Most modern businesses can access mountains of data electronically—the trick is effectively using that data. In practice, this means summarizing large data sets to find the data that really matters. Most data miners are zealous hunters seeking det...
 
Relating Practitioner Needs to Research Activities
Found in: Requirements Engineering, IEEE International Conference on
By Martin S. Feather, Tim Menzies, Judith R. Connelly
Issue Date:September 2003
pp. 352
Many organizations look to research to yield new and improved products and practices. Connecting practitioners who have the need for research results to the researchers producing those results is important to guiding research and utilizing its results. Lik...
   
When Can We Test Less?
Found in: Software Metrics, IEEE International Symposium on
By Tim Menzies, Justin Di Stefano, Kareem Ammar, Kenneth McGill, Pat Callis, Robert (Mike) Chapman, John Davis
Issue Date:September 2003
pp. 98
When it is impractical to rigorously assess all parts of complex systems, test engineers use defect detectors to focus their limited resources. In this article, we define some properties of an ideal defect detector and assess different methods of generatin...
 
Learning Early Lifecycle IV&V Quality Indicators
Found in: Software Metrics, IEEE International Symposium on
By Tim Menzies, Justin S. Di Stefano, Mike Chapman
Issue Date:September 2003
pp. 88
Traditional methods of generating quality code indicators (e.g. linear regression, decision tree induction) can be demonstrated to be inappropriate for IV&V purposes. IV&V is a unique aspect of the software lifecycle, and different methods are nece...
 
Guest Editor's Introduction: 21st Century AI--Proud, Not Smug
Found in: IEEE Intelligent Systems
By Tim Menzies
Issue Date:May 2003
pp. 18-24
No summary available.
 
Metrics That Matter
Found in: Software Engineering Workshop, Annual IEEE/NASA Goddard
By Tim Menzies, Justin S. Di Stefano, Mike Chapman, Ken McGill
Issue Date:December 2002
pp. 51
Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires exp...
 
Data Sniffing — Monitoring of Machine Learning for Online Adaptive Systems
Found in: Tools with Artificial Intelligence, IEEE International Conference on
By Yan Liu, Tim Menzies, Bojan Cukic
Issue Date:November 2002
pp. 16
Adaptive systems are systems whose function evolves while adapting to current environmental conditions. Due to the real-time adaptation, newly learned data have a significant impact on system behavior. When online adaptation is included in system ...
 
Saturation Effects in Testing of Formal Models
Found in: Software Reliability Engineering, International Symposium on
By Tim Menzies, David Owen, Bojan Cukic
Issue Date:November 2002
pp. 15
Formal analysis of software is a powerful tool, but it can be too costly. Random search of formal models can reduce that cost, but is theoretically incomplete. However, random search of finite-state machines exhibits an early saturation effect, i.e.,...
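The saturation effect is easy to reproduce on a toy model: random walks over a randomly generated finite-state machine quickly stop reaching new states. The sketch below only illustrates that effect; it is not the authors' experimental setup.

    import random

    random.seed(1)
    N_STATES, FANOUT = 2000, 3

    # A random finite-state machine: each state has a few outgoing transitions.
    fsm = {s: [random.randrange(N_STATES) for _ in range(FANOUT)] for s in range(N_STATES)}

    seen = set()
    for walk in range(1, 201):
        state = 0
        for _ in range(50):                    # one bounded random walk from the start state
            seen.add(state)
            state = random.choice(fsm[state])
        if walk % 40 == 0:
            print(f"after {walk:3d} walks: {len(seen)} states reached")
    # Coverage climbs quickly, then plateaus: later walks add few new states.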
 
Machine Learning for Software Engineering: Case Studies in Software Reuse
Found in: Tools with Artificial Intelligence, IEEE International Conference on
By Justin S. Di Stefano, Tim Menzies
Issue Date:November 2002
pp. 246
There are many machine learning algorithms currently available. In the 21st century, the problem no longer lies in writing the learner, but in choosing which learners to run on a given data set. In this paper, we argue that the final choice of lea...
 
An Alternative to Model Checking: Verification by Random Search of AND-OR Graphs Representing Finite-State Models
Found in: High-Assurance Systems Engineering, IEEE International Symposium on
By David Owen, Bojan Cukic, Tim Menzies
Issue Date:October 2002
pp. 119
In the development of high-assurance systems, formal modeling, analysis and verification techniques are playing an increasingly important role. In spite of significant advances, formal modeling and verification using model checking still suffer f...
 
Converging on the Optimal Attainment of Requirements
Found in: Requirements Engineering, IEEE International Conference on
By Martin S. Feather, Tim Menzies
Issue Date:September 2002
pp. 263
Planning for the optimal attainment of requirements is an important early lifecycle activity. However, such planning is difficult when dealing with competing requirements, limited resources, and the incompleteness of information available at requi...
 
Model-Based Tests of Truisms
Found in: Automated Software Engineering, International Conference on
By Tim Menzies, David Raffo, Siri-on Setamanit, Ying Hu, Sina Tootoonian
Issue Date:September 2002
pp. 183
Software engineering (SE) truisms capture broadly-applicable principles of software construction. The trouble with truisms is that such general principles may not apply in specific cases. This paper tests the specificity of two SE truisms: (a) inc...
 
Better Reasoning About Software Engineering Activities
Found in: Automated Software Engineering, International Conference on
By Tim Menzies, James D. Kiper
Issue Date:November 2001
pp. 391
Software management oracles often contain numerous subjective features. At each subjective point, a range of behaviors is possible. Stochastic simulation samples a subset of the possible behaviors. After many such stochastic simulations, the TAR2 treatment...
 
Fast Formal Analysis of Requirements via 'Topoi Diagrams'
Found in: Software Engineering, International Conference on
By Tim Menzies, John Powell, Michael E. Houle
Issue Date:May 2001
pp. 391
No summary available.
 
The Complexity of TRMCS-like Spiral Specification
Found in: Software Specification and Design, International Workshop on
By Tim Menzies
Issue Date:November 2000
pp. 183
Modern software is often constructed using “spiral specification”; i.e. the specification is a dynamic document that is altered by experience with the current version of the system. Mathematically, many of the sub-tasks within spiral specification belong t...
 
Testing Nondeterminate Systems
Found in: Software Reliability Engineering, International Symposium on
By Tim Menzies, Bojan Cukic, Harshinder Singh, John Powell
Issue Date:October 2000
pp. 222
The behavior of nondeterminate systems can be hard to predict, since similar inputs at different times can generate different outputs. In other words, the behavior seen during the testing process may not be seen at runtime. Due to the uncertainties associated ...
 
Practical Large Scale What-if Queries: Case Studies with Software Risk Assessment
Found in: Automated Software Engineering, International Conference on
By Tim Menzies, Erik Sinsel
Issue Date:September 2000
pp. 165
When a lack of data inhibits decision making, large-scale what-if queries can be conducted over the uncertain parameter ranges. Such what-if queries can generate an overwhelming amount of data. We describe here a general method for understanding that data....
 
On the Sufficiency of Limited Testing for Knowledge Based Systems
Found in: Tools with Artificial Intelligence, IEEE International Conference on
By Tim Menzies, Bojan Cukic
Issue Date:November 1999
pp. 431
Knowledge-based engineering and computational intelligence are expected to become core technologies in the design and manufacturing of the next generation of space exploration missions. Yet, if one is concerned with the reliability of knowledge-based syst...
 
An Empirical Investigation of Multiple Viewpoint Reasoning in Requirements Engineering
Found in: Requirements Engineering, IEEE International Conference on
By Tim Menzies, Steve Easterbrook, Bashar Nuseibeh, Sam Waugh
Issue Date:June 1999
pp. 100
Multiple viewpoints are often used in Requirements Engineering to facilitate traceability to stakeholders, to structure the requirements process, and to provide richer modelling by incorporating multiple conflicting descriptions. In the latter case, the ne...
 
Balancing Privacy and Utility in Cross-Company Defect Prediction
Found in: IEEE Transactions on Software Engineering
By Fayola Peters, Tim Menzies, Liang Gong, Hongyu Zhang
Issue Date:August 2013
pp. 1054-1068
Background: Cross-company defect prediction (CCDP) is a field of study where an organization lacking enough local data can use data from other organizations for building defect predictors. To support CCDP, data must be shared. Such shared data must be priv...
 
On the Value of Ensemble Effort Estimation
Found in: IEEE Transactions on Software Engineering
By Ekrem Kocaguneli, Tim Menzies, Jacky W. Keung
Issue Date:November 2012
pp. 1403-1416
Background: Despite decades of research, there is no consensus on which software effort estimation methods produce the most accurate models. Aim: Prior work has reported that, given M estimation methods, no single method consistently outperforms all others...
 
Genetic Algorithms for Randomized Unit Testing
Found in: IEEE Transactions on Software Engineering
By James H. Andrews, Tim Menzies, Felix C.H. Li
Issue Date:January 2011
pp. 80-94
Randomized testing is an effective method for testing software units. The thoroughness of randomized unit testing varies widely according to the settings of certain parameters, such as the relative frequencies with which methods are called. In this paper, ...
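As a rough illustration of the idea, the sketch below uses a bare-bones evolutionary loop (selection plus mutation only, far simpler than the authors' system) to tune the relative call frequencies used to drive random tests of a toy stack API; every name and number in it is invented.

    import random

    random.seed(0)
    METHODS = ["push", "pop", "peek", "clear"]     # hypothetical API of a unit under test

    def run_random_test(weights, length=50):
        # One weighted-random call sequence against a toy stack; returns the branches hit.
        stack, hit = [], set()
        for _ in range(length):
            m = random.choices(METHODS, weights=weights)[0]
            if m == "push":
                stack.append(1); hit.add("push")
            elif m == "pop":
                hit.add("pop_nonempty" if stack else "pop_empty")
                if stack: stack.pop()
            elif m == "peek":
                hit.add("peek_nonempty" if stack else "peek_empty")
            else:
                hit.add("clear"); stack.clear()
        return hit

    def fitness(weights, repeats=5):
        # Coverage obtained by several random tests run with these call frequencies.
        return len(set().union(*(run_random_test(weights) for _ in range(repeats))))

    def evolve(pop_size=20, gens=15):
        # Rank candidate frequency vectors by coverage, keep the best half, mutate them.
        pop = [[random.random() for _ in METHODS] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = [[max(0.01, w + random.gauss(0, 0.1)) for w in p] for p in parents]
            pop = parents + children
        return max(pop, key=fitness)

    print("evolved call-frequency weights:", [round(w, 2) for w in evolve()])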
 
Is Continuous Compliance Assurance Possible?
Found in: Information Technology: New Generations, Third International Conference on
By Joseph M. D'Alessandro, Cynthia D. Tanner, Bonnie W. Morris, Tim Menzies
Issue Date:April 2009
pp. 1599
The increased threat of legal sanctions or fines for failure to comply with laws and regulations makes it imperative that auditors assess the level of compliance with information sharing policies and regulations in a timely manner. Embedding a monitoring me...
 