Jorge Diaz-Herrera, Ph.D.
Professor and President
Keuka College
Keuka Park, N.Y. 14478
USA
Phone: +1 315 279 5201
Email: jdiazh@keuka.edu

DVP Term Expires December 2013


Dr. Jorge L. Diaz-Herrera has been Professor and President of Keuka College since July 1, 2011. Prior to this position, he was Professor and founding Dean of the College of Computing and Information Sciences at Rochester Institute of Technology in Rochester, New York, a post he had held since July 2002. Before that appointment, he was Professor of Computer Science and Department Head at Southern Polytechnic State University (SPSU) in Atlanta and Yamacraw project coordinator with Georgia Tech. He has held other academic appointments with Carnegie Mellon’s Software Engineering Institute, Monmouth University in NJ, George Mason University in VA, and SUNY Binghamton, NY.

Dr. Diaz-Herrera has provided extensive consulting services to a number of firms and government agencies, including the New York Stock Exchange (SIAC), MITRE Corp., the Institute for Defense Analyses, General Electric, Singer-Link, TRW, EG&G, and IBM, among others. He has also provided professional expertise to international organizations including the European Software Institute, the Australian Defense Science and Technology Office, Kyoto Computing Gaikum, Kuwait University, Cairo University, Instituto Politecnico Santo Domingo (INTEC), and Malaysia University of Technology, among others.

Dr. Diaz-Herrera has chaired several national and international conferences and has been a technical reviewer for the National Science Foundation, the American Society for Engineering Education, and several conferences and journals. He has more than 90 publications. He served as a writer for the IEEE-CS Software Engineering Professional Examination and co-edited the Software Engineering volume of the ACM/IEEE Computing Curricula 2004. He is also an active member of the CRA-Deans group of the Computing Research Association in Washington, D.C. He serves, or has served, on various technical advisory committees and national governing boards, including the SEI Technical Advisory Group, the NSF/CISE Advisory Committee, and the NY State Universal Broadband Council, among others.

Dr. Diaz-Herrera completed his undergraduate education in Venezuela, and holds both a Master’s and Ph.D. in Computing Studies from Lancaster University, in the UK. He recently completed the Graduate Certificate in Management Leadership in Education from Harvard University’s Graduate School of Education.


Software Product Lines
For software-intensive organizations, maintaining leadership depends increasingly on the ability to improve design and development processes faster than their competitors. A promising approach is to move the focus from building single systems to orchestrating families of systems across a range of similar products, by identifying “reusable” solutions that support the future development of multiple systems and thus potentially taking advantage of economies of scope, the benefit that comes from developing one asset and using it in multiple contexts. A group of related software-intensive systems sharing a common, managed set of features is considered a software product line (SPL). An SPL requires planned, large-scale, systematic software reuse that makes it possible to derive, rather than create, individual systems in a prescribed way, so that quality products are produced consistently and predictably. An underlying assumption is that the benefits offset any extra costs from increased organizational or design complexity. For example, organizations reduce cycle time and cost by eliminating redundancy; building systems from a common component base reduces risk and improves quality by repeatedly using trusted, proven components; an asset-based approach allows legacy systems to be managed more efficiently, increasing the likelihood of a longer time in market; and finally, the organization evolves a common marketing strategy and strengthens its core competency around strategic business interests and goals.

In this presentation we introduce the SEI-sponsored Framework for Software Product Line Practice. We illustrate the approach with a systematic methodology for facilitating rapid and efficient software development for embedded systems; the method draws on system-level notations and standards such as UML 2. We conclude with an analysis of SPL adoption and research challenges. We know from experience that this approach has not been widely adopted in industry. Why is this the case? What are the impediments to adopting SPL in industry? What are the main issues and unsolved problems that impede the widespread adoption of the SPL approach as a way to deliver software products more effectively? We report on our findings from a survey directed at understanding some of these questions and identifying the top research problems. We also present a mapping study and a textual analysis of key papers to infer how actively the research community has been working on the reported top research problems in SPL.
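To make the idea of feature-based derivation concrete, here is a minimal sketch, not taken from the talk itself, of how members of a small, hypothetical product line might be derived from a shared asset base; every feature, component, and product name below is illustrative rather than drawn from the presentation.

# Hypothetical sketch of software product line derivation (Python).
# Core assets: reusable components keyed by the feature they realize.
CORE_ASSETS = {
    "engine_control": "reusable engine-control component",
    "cruise_control": "reusable cruise-control component",
    "lane_assist":    "reusable lane-assist component",
    "telemetry":      "reusable telemetry component",
}

# A simple feature model: mandatory features, optional features, and
# dependency constraints between features.
MANDATORY = {"engine_control"}
OPTIONAL = {"cruise_control", "lane_assist", "telemetry"}
REQUIRES = {"lane_assist": {"telemetry"}}  # lane_assist depends on telemetry


def derive_product(name, selected_features):
    """Derive (rather than hand-build) a product from the shared assets."""
    features = MANDATORY | set(selected_features)

    # Validate the selection against the feature model.
    unknown = features - (MANDATORY | OPTIONAL)
    if unknown:
        raise ValueError(f"{name}: unknown features {unknown}")
    for feature, deps in REQUIRES.items():
        if feature in features and not deps <= features:
            raise ValueError(f"{name}: {feature} requires {deps}")

    # Assemble the product from trusted, proven core assets.
    return {"product": name,
            "components": [CORE_ASSETS[f] for f in sorted(features)]}


if __name__ == "__main__":
    # Two members of the same product line, derived from one asset base.
    print(derive_product("economy_model", []))
    print(derive_product("premium_model",
                         ["cruise_control", "lane_assist", "telemetry"]))

The point of the sketch is the workflow itself: each product is configured against a managed feature model and assembled from proven core assets, rather than written from scratch.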

The Future of Software Engineering
Software is increasingly of public importance, and high-quality software is becoming critical to our daily lives, our safety and security, and national and global economies. When the term Software Engineering was introduced in the late 1960s, it was deliberately chosen to be provocative, implying the need for an approach similar to that of the established branches of engineering. The term is now widely used in industry, government, and academia: thousands of computing professionals go by the title software engineer; numerous publications, groups, organizations, and professional conferences use the term software engineering in their names; and there are many educational courses and programs on software engineering.

In spite of the increasing plea for an engineering approach to software development, software engineering is not generally recognized as an engineering discipline in the traditional sense, and the large majority of software engineers are NOT registered engineers. The question becomes whether the practice and education of software engineers are adequate from the point of view of engineering education and the practice of engineering in the traditional sense. If the set of activities associated with the creation of software is to be considered engineering, we must establish appropriate relationships with the more conventional engineering disciplines. How do we establish these similarities and differences more clearly? What does a software developer do that can be considered engineering? Are there basic universal principles and laws for software? Could a scientific theory of software exist?

Engineering disciplines have emerged from ad hoc practice through the exploitation and management of technology and the application of maturing science. The performance and behavior of engineered artifacts can be predicted through the application of scientific knowledge and principles; engineering is thus critically dependent on matured science. There is, however, no (mature) science of software, and its scientific knowledge base is minimal; as a result, we cannot make predictions about software behavior with any real confidence. It also remains true today that we do not have a coherent engineering design method for the systematic production of software, specifically for large and complex systems, and that the great majority of software is developed following ad hoc methods. Furthermore, most software professionals are not specifically trained for the work they do, since companies are willing to accept self-taught programmers, particularly if they have other skills relevant to the business. Given this situation, there is a need for appropriate definitions. In this presentation we describe the nature of software and its basic constraints and attributes, highlighting the difficulty of making measurements that can predict its behavior. We summarize what has been accomplished, what is taking place today (e.g., SWEBOK, SEMAT, and the ACM/IEEE curricula), and how we may better prepare for the challenges of building ever more complex software systems.

Cyberinfrastructure: Computing in the 21st Century
“All cars were trucks because that’s what you needed on the farm,” said Steve Jobs comparing the role of the PC, the workhorse of computing for the past three decades, with that of the truck, when America was primarily an agrarian nation [New York Times, June 3, 2010]. That was the infrastructure of the 20th century. What should it be for the 21st century, the century of information?

The advent of powerful calculating machines in the mid-20th century made possible many scientific discoveries and engineering feats that would not have been feasible by hand or slide rule. Information is the lifeblood of modern times. It is the raw material from which understanding and decisions arise; it is an asset in the form of an organization’s knowledge; it is the intellectual capital that transforms our world. More information, with more value, is being generated due to the widespread use of computer systems and networks. Today your ability to meet business requirements is tied directly to information assurance and computer security. However, it is also a fact of life that productivity is hampered by faulty systems and software that does not work. As Dijkstra once said, “When we did not have computers, we did not have problems; when we had a few computers, we had a few problems; now we have lots of problems.” With the advent of the future Internet and new multi-modal connecting devices, the way we “compute” will change drastically.

This computing-enabled infrastructure of the 21st century has been termed cyberinfrastructure, and it refers to the networks and communications technologies, the distributed and parallel computation algorithms, and the software development required to function in a knowledge economy in today’s information age. The term was coined to describe global information technology environments in which the capabilities of the highest level of computing tools would be available in an interoperable network. The goal is “to join the [computing] community with scientific and engineering disciplines to build a high-performance, networked system of distributed computing, storage, visualization capabilities, and sensors on an unprecedented scale … with national, and ultimately global, presence.”

This talk focuses on four principal points, namely: (a) the requirements for the infrastructure of the 21st-century economy; (b) current advances in the design of a new Internet, i.e., the GENI project; (c) the notion of a totally connected world, with examples of cloud computing environments; and (d) information assurance and computer security, the former referring to the set of technologies and methods that protect the confidentiality, integrity, and availability of information, and the latter to the methods and tools in place to make sure that the cyberinfrastructure that creates and transports information can be trusted.
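As a purely illustrative aside, not drawn from the talk itself, the short Python sketch below shows one narrow facet of information assurance, namely integrity: a received copy of some data is checked against a previously recorded cryptographic digest. The data values and variable names are hypothetical.

# Hypothetical sketch: verifying data integrity with a SHA-256 digest.
import hashlib


def digest(data: bytes) -> str:
    """Return a hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()


# Digest recorded when the information was created (the trusted reference).
original = b"quarterly earnings report"
recorded_digest = digest(original)

# Copy received later over the network; integrity is checked by recomputing
# the digest and comparing it with the recorded one.
received = b"quarterly earnings report"
if digest(received) == recorded_digest:
    print("integrity verified: the data is unchanged")
else:
    print("integrity violation: the data was altered in transit")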