Chapter 3
A Preview of Software Development and Maintenance in 2049
Introduction
From the 1960s through 2009, software development has been essen-
tially a craft where complicated applications are designed as unique
artifacts and then constructed from source code on a line-by-line basis.
This method of custom development using custom code written line by
line can never be efficient, economical, or achieve consistent levels of
quality and security.
Composing and painting a portrait in oil paint and developing a soft-
ware application are very similar in their essential nature. Each of these
artifacts is unique, and each is produced using individual "brushstrokes"
that need to be perfectly placed and formed in order for the overall
results to be aesthetic and effective. Neither portraits nor software
applications are engineering disciplines.
Hopefully, by 2049, a true engineering discipline will emerge that will
allow software to evolve from a form of artistic expression to a solid engi-
neering discipline. This section presents a hypothetical analysis of the way
software applications might be designed and constructed circa 2049.
If software should become a true engineering discipline, then much
more than code development needs to be included. Architecture, require-
ments, design, code development, maintenance, customer support, train-
ing, documentation, metrics and measurements, project management,
security, quality, change control, benchmarks, and many other topics
need to be considered.
The starting point in both 2009 and 2049 will of course be the require-
ments for the new application. In 2009 users are interviewed to develop
the requirements for new applications, but in 2049 a different method
may be available.
Let us assume that the application to be developed circa 2049 is a new
form of software planning and cost-estimating tool. The tool will provide
software cost estimates, schedule estimates, quality estimates, and staff-
ing estimates as do a number of existing tools. However, the tool will also
introduce a number of new features, such as:
1. Early sizing prior to knowledge of full requirements
2. Estimates of requirements changes during development
3. Estimates of defect quantities in creeping requirements
4. Integrated risk analysis
5. Integrated value analysis
6. Integrated security analysis
7. Prediction of effects of any CMMI level on productivity and quality
8. Prediction of effects of various quantities of reusable materials
9. Prediction of effects of intelligent agents on software development
10. Prediction of effects of intelligent agents on software maintenance
11. Prediction of effects of intelligent agents on software documentation
12. Prediction of effects of intelligent agents on software customer
support
13. Prediction of effects of intelligent agents on software failures
14. Automated conversion between function points, LOC, story points,
and so on (a minimal conversion sketch follows this list)
15. Estimates of learning curves on the part of users of the application
16. Estimates of mistakes made while users learn the application
17. Estimates of customer support and maintenance for 10+ years after
deployment
18. Estimates of application growth for 10+ years after initial deploy-
ment
19. Integrated capture of historical data during development and main-
tenance
20. Automated creation of benchmarks for productivity and quality
21. Expert advice on software quality control
22. Expert advice on software security control
23. Expert advice on software governance
24. Expert advice on intellectual property
25. Expert advice on relevant standards and regulations
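As a minimal illustration of feature 14 in the list above, the sketch below
converts an application size expressed in function points into approximate
logical code statements and story points. The "backfiring" ratios and the
story-point factor are rough, illustrative values chosen only for this
example, and the function names are invented; a real tool would load
calibrated tables rather than hard-coded constants.

# Illustrative conversion between size metrics (feature 14 above).
# The ratios are approximate "backfiring" values and the story-point
# factor is purely hypothetical; both would be calibrated in practice.
LOC_PER_FUNCTION_POINT = {
    "C": 128,      # roughly 128 logical statements per function point
    "C++": 55,
    "Java": 53,
}
STORY_POINTS_PER_FUNCTION_POINT = 0.5   # hypothetical calibration factor

def function_points_to_loc(size_fp: float, language: str) -> float:
    """Convert a size in function points to approximate logical LOC."""
    return size_fp * LOC_PER_FUNCTION_POINT[language]

def function_points_to_story_points(size_fp: float) -> float:
    """Convert function points to story points with the assumed factor."""
    return size_fp * STORY_POINTS_PER_FUNCTION_POINT

print(function_points_to_loc(2500, "Java"))        # 132500.0
print(function_points_to_story_points(2500))       # 1250.0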
The 19th and 20th new features of the estimating tool would
involve establishing an overall license with the International Software
Benchmarking Standards Group (ISBSG) so that customers would be
able to use the tool to gather and analyze benchmarks of similar applica-
tions while estimating new applications. Each client would have to pay
for this service, but it should be integrated into the tool itself. Thus, not
only would estimates be produced by the tool, but also benchmarks for
similar applications would be gathered and used to support the estimate
by providing historical data about similar applications.
The new estimating tool is intended to be used to collect historical
data and create benchmarks semiautomatically. These benchmarks
would utilize the ISBSG question set, with some additional questions
included for special topics such as security, defect removal efficiency, and
customer support not included in the ISBSG questions.
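A benchmark record of this kind might be captured with a structure along
the following lines; the field names below are assumptions made for
illustration and represent only a small subset of the ISBSG question set
plus the proposed extensions for security, defect removal efficiency, and
customer support.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchmarkRecord:
    """One project benchmark: a few ISBSG-style core fields plus the
    extra topics mentioned above. Field names are illustrative only."""
    project_name: str
    size_function_points: float
    effort_staff_months: float
    schedule_calendar_months: float
    defect_potential_per_fp: float
    defect_removal_efficiency: float        # 0.0 through 1.0
    security_flaws_reported: int = 0
    customer_support_calls_year1: Optional[int] = None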
Because the tool will be used to both predict and store confidential and
perhaps classified information, security is a stringent requirement, and
a number of security features will be implemented, including encryption
of all stored information.
We can also assume that the company building the new estimating
tool has already produced at least one prior tool in the same business
area; in other words, existing products are available for analysis within
the company.
Requirements Analysis Circa 2049
The first step in gathering requirements circa 2049 will be to dispatch
an intelligent agent or avatar to extract all relevant information about
software estimating and planning tools from the Web. All technical
articles and marketing information will be gathered and analyzed for
similar tools such as Application Insight, Artemis Views, Checkpoint,
COCOMO and its clones, KnowledgePlan, Microsoft Project, Price-S,
SEER, SLIM, SPQR/20, SoftCost, and all other such tools.
The intelligent agent will also produce a consolidated list of all of the
functions currently available in all similar tools; that is, sizing methods,
currency conversion, inflation-rate adjustments, quality predictions,
total cost of ownership, and so on.
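A very small sketch of the kind of web gathering such an agent might
perform is shown below. The vendor page addresses and the feature keywords
are placeholders invented for this example, and a genuine intelligent agent
circa 2049 would obviously go far beyond a simple keyword scan.

# Toy "intelligent agent": fetch a few placeholder vendor pages and build
# a consolidated list of which estimating-tool features each one mentions.
# Requires the third-party packages requests and beautifulsoup4.
import requests
from bs4 import BeautifulSoup

TOOL_PAGES = {                      # placeholder URLs, not real addresses
    "Example Tool A": "https://example.com/tool-a",
    "Example Tool B": "https://example.com/tool-b",
}
FEATURE_KEYWORDS = ["sizing", "currency conversion", "inflation",
                    "quality prediction", "total cost of ownership"]

def features_mentioned(url):
    """Return the feature keywords found in the page's visible text."""
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text().lower()
    return {keyword for keyword in FEATURE_KEYWORDS if keyword in text}

consolidated = {name: features_mentioned(url)
                for name, url in TOOL_PAGES.items()}
print(consolidated)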
Hopefully, by 2049, software reuse will have reached a level of maturity
so that comprehensive catalogs of reusable artifacts will be available; cer-
tification for quality and security will be commonplace; and architecture
and design will have reached the point where standard structural descrip-
tions for applications, attachment points, and other relevant issues will
be easily accessible.
The intelligent agent will also gather information from public records
about numbers of copies of such tools sold, revenues from the tools, user
associations for the tools, litigation against tool vendors, and other rel-
evant business topics.
If the tool is used to estimate financial software applications, the intel-
ligent agent will also scan the Web for all government regulations that
may be applicable such as Sarbanes-Oxley and other relevant rules. Due
to the financial crisis and recession, scores of new regulations are about to
surface, and only an intelligent agent and expert system can keep up.
For other forms of software, the intelligent agent might also scan the
Web for regulations, standards, and other topics that affect governance
and also government mandates--for example, software applications
that deal with medical devices, process medical records, or that need
legal privacy protection.
Once the universe of existing tools and feature sets has been analyzed,
the next step is to consider the new features that will add value over and
above what is already available in existing project planning and estimat-
ing tools. Here, requirements in 2049 will resemble those of 2009, in that
inputs from a number of stakeholders will be collected and analyzed.
Since the application to be developed is an expert system, much of
the information about new features must come from experts in software
planning and estimating. Although the views of customers gathered via
surveys or focus groups will be helpful, as will the views of the company's
marketing organization, only experts are qualified to specify the details
of the more unique features.
That being said, as requirements for new features are being planned,
a parallel effort will take place to develop patent applications for some
or all of the unique features. Here too an intelligent agent will be dis-
patched to gather and analyze all existing patents that cover features
that might be similar to those planned for the new estimating tool.
Assuming that most of the new features truly are unique and not
present in any current estimating tool, somewhere between half a dozen
and perhaps 20 new patent applications will probably be prepared as
the requirements are assembled. This is an important step in building
applications that contain new intellectual content: violating a patent
can cause huge expenses and stop development cold. In particular, the
patents of companies such as Intellectual Ventures, whose main busi-
ness is patent licensing, need to be considered.
In addition to or perhaps in place of patents, there may also be trade
secrets, invention disclosures, copyrights, and other forms of protection
for confidential and proprietary information and algorithms.
For the tool discussed in this example, patent protection will be needed
for the early sizing feature, for the feature that predicts requirements
changes during development, and also for the feature that predicts
customer learning-curve costs. Other topics might also require patent
protection, but the three just cited are novel and unique and not found
in competitive tools. For example, no current estimating tools have any
algorithms that deal with the impacts of intelligent agents.
The requirements analysis phase will also examine the possible plat-
forms for the new estimating tool; that is, what operating systems will
host the tool, what hardware platforms, and so on. No doubt a tool of this
nature would be a good candidate for personal computers, but perhaps
a subset of the features might also be developed for hand-held devices.
In any case, a tool of this sort will probably run on multiple platforms
and therefore needs to be planned for Windows, Apple, Linux, Unix,
and so on.
Not only will the tool operate on multiple platforms, but also it is
obviously a tool that would be valuable in many countries. Here too
an intelligent agent would be dispatched to look for similar tools that
are available in countries such as China, Japan, Russia, South Korea,
Brazil, Mexico, and so on. This information will be part of market plan-
ning and also will be used to ascertain how many versions must be built
with information translated into other natural languages.
Using information gathered via intelligent agents on current market
size, another aspect of requirements analysis will be to predict the
market potentials of the new tool and its new features in terms of cus-
tomers, revenue, competitive advantages, and so forth. As with any other
company, the new features will have to promise revenues perhaps ten times
greater than development and maintenance costs before funds are committed
for the new product.
The outputs from the requirements phase would include the require-
ments for the new tool, summary data on all patents that are relevant to
the application area, and a summary of the current market for estimat-
ing and project planning tools in every country where the tool is likely
to generate significant revenues. Summaries of relevant government
regulations would also be included. It is interesting that about 85 per-
cent of these outputs could be produced by intelligent agents and expert
systems with little human effort other than setting up search criteria.
Superficially, applications designed for service-oriented architecture
(SOA) also envision collections of standard reusable components. The
object-oriented (OO) paradigm has incorporated reusable objects for
more than 30 years. However, neither SOA nor the OO paradigm includes
formal mining of legacy applications for algorithms and business rules.
Neither uses intelligent agents for searching the Web. Neither SOA nor
OO envisions developing all-new features as reusable objects, although
the OO paradigm comes close. Also, neither the quality control nor the
security practices of the SOA and OO methods are as rigorous as needed
for truly safe applications. For example, certification of the reused code
is spotty in both domains.
Design Circa 2049
Because many similar applications already exist, and because the com-
pany itself has built similar applications, design does not start with a
clean piece of paper or a clean screen. Instead design starts by a careful
analysis of the architecture and design of all similar applications.
One very important difference between design circa 2009 and design
circa 2049 will be the use of many standard reusable features from in-
house sources, commercial sources, or possibly from libraries of certified
reusable functions.
For example, since the application is a cost-estimating tool, no doubt
currency conversion, inflation rate adjustments, internal and accounting
rates of return, and many other features are available in reusable form
from either commercial vendors or in-house tools already developed.
Some of the printed output may use report generation tools such as
Crystal Reports or something similar. Some application data may be
stored in normal commercial databases such as Access, Bento, or similar
packages.
Since the company building the application already has similar appli-
cations, no doubt many features such as quality estimation, schedule
estimation, and basic cost estimation will be available. The caveat is
that reuse needs to be certified to almost zero-defect levels to be eco-
nomically successful.
Ideally, at least 85 percent of the features and design elements will
be available in reusable form, and only 15 percent will be truly new and
require custom design. For the new features, it is important to ensure
high levels of quality and security, so design inspections would be per-
formed on all new features that are to be added.
However, custom development for a single application is never cost-
effective. Therefore, a major difference in design circa 2049 from design
circa 2009 is that almost every new feature will be designed as a reus-
able artifact, rather than being designed as a unique artifact for a single
application.
Along with formal reuse as a design goal for all important features,
security, quality, and portability among platforms (Windows, Apple,
Unix, Linux, etc.) are fundamental aspects of design. Custom design
for a single application needs to be eliminated as a general practice,
and replaced by design for reuse that supports many applications and
many platforms.
For example, the new feature that permits early sizing without knowl-
edge of full requirements is obviously a feature that might be licensed to
other companies or used in many other applications. Therefore it needs
to be designed for multiple uses and multiple platforms. It would also
need patent protection.
It may be that the design environment circa 2049 will be quite differ-
ent from 2009. For example, since most applications are based on prior
applications, descriptions of previous features will be extracted from
the legacy applications. The extraction of design and algorithms from
legacy code can be done automatically via data mining of the source
code, since past specifications have probably not been kept fully updated
and may even be missing.
Therefore in the future, software designers can concentrate more on
what is new and novel rather than dealing with common generic topics
from legacy applications. The design of the carryover features from
legacy applications will be generated by means of an expert system,
augmented by web searches for similar applications by using an intel-
ligent agent.
An expert-system design tool will be needed in order to mine informa-
tion from similar legacy applications. This tool will include the features
of static analysis, complexity analysis, security analysis, architecture
and design structural analysis, and also the capability of extracting
algorithms and business rules from legacy code.
Outputs from the tool will include structural design graphs, control
flow information, information on dead code, and also textual and math-
ematical descriptions of business rules and algorithms embedded in the
legacy code.
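A genuine expert design-mining tool is far beyond a short example, but the
sketch below hints at the flavor: it parses a legacy Python module, lists
the functions it defines, and flags functions never called within the
module as candidate dead code. The module contents are invented; a real
tool would handle many languages and also extract business rules, control
flow, and structural graphs as described above.

import ast

def mine_module(source: str) -> dict:
    """Crude design mining: list defined functions and flag candidate
    dead code (functions defined here but never called from here)."""
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, ast.FunctionDef)}
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call)
              and isinstance(node.func, ast.Name)}
    return {"functions": sorted(defined),
            "candidate_dead_code": sorted(defined - called)}

legacy = """
def convert_currency(amount, rate):
    return amount * rate

def unused_helper(x):
    return x + 1

print(convert_currency(100, 1.3))
"""
print(mine_module(legacy))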
Even sample use cases and "user stories" could be constructed auto-
matically by an intelligent agent based on examining information avail-
able on the Web and from published literature. Data dictionaries of all
applications could also be constructed using expert systems with little
human involvement.
Because software is dynamic, it can be expected that animation and
simulation will also be part of design circa 2049. Perhaps a 3-D dynamic
model of the application might be created to deal with issues such as
performance, security vulnerabilities, and quality that are not easily
understood using static representations on paper.
The completed design would show both old and new features, and
would even include comparisons between the new application and com-
petitive applications, with most of this work being done automatically
through the aid of intelligent agents and the design engine. Manual
design and construction of new algorithms by human experts would
be primarily for the new features such as early sizing, requirements
growth, and customer learning curves.
For software engineering to become a true engineering discipline, it
will be necessary to have effective methods for analyzing and identifying
optimal designs of software applications. Designing every application
as a unique custom product is not really engineering. An expert system
that can analyze the structure, features, performance, and usability of
existing applications is a fundamental part of moving software from a
craft to an engineering discipline.
Indeed, catalogs of hundreds of optimal designs augmented by cata-
logs of certified reusable components should be standard features of
software architecture and design circa 2049. To do this, a taxonomy of
application types and a taxonomy of features are needed. Also, standard
architectural structures are needed and may perhaps follow the method
of the Zachman architectural approach.
Software Development Circa 2049
Assuming that perhaps 85 percent of software application features will
be in the form of standard reusable components, software development
circa 2049 will be quite different from today's line-by-line coding for
unique applications.
The first stage of software development circa 2049 is to accumulate all
existing reusable components and put them together into a working pro-
totype, with placeholders for the new features that will be added later.
This prototype can be used to evaluate basic issues such as usability,
performance, security, quality, and the like.
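One way to picture this assembly step is a simple registry in which
certified reusable components are plugged in and missing features are
represented by placeholder stubs until the new code is ready. The component
and feature names below are invented for illustration; a real assembly
environment circa 2049 would be vastly more elaborate.

from typing import Callable, Dict

class PrototypeAssembly:
    """Working prototype built from reusable components, with placeholder
    stubs standing in for features that have not been developed yet."""

    def __init__(self) -> None:
        self.features: Dict[str, Callable] = {}

    def register_reusable(self, name: str, component: Callable) -> None:
        self.features[name] = component

    def register_placeholder(self, name: str) -> None:
        def stub(*args, **kwargs):
            raise NotImplementedError(f"feature '{name}' is not built yet")
        self.features[name] = stub

    def run(self, name: str, *args, **kwargs):
        return self.features[name](*args, **kwargs)

# Hypothetical usage: reuse an existing currency converter, stub early sizing.
prototype = PrototypeAssembly()
prototype.register_reusable("currency_conversion",
                            lambda amount, rate: amount * rate)
prototype.register_placeholder("early_sizing")
print(prototype.run("currency_conversion", 100, 1.3))   # 130.0 today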
As new features are created and tested, they can be appended to the
initial working prototype. This approach is somewhat similar to Agile
development, except that most instances of Agile do not start by data
mining of legacy applications.
Some of the logistical portions of Agile development such as daily
progress meetings or Scrum sessions may also be of use.
However, because development is aimed at constructing reusable
objects rather than unique single-use objects, other techniques that
emphasize and measure quality will also be utilized. The Team Software
Process (TSP) and Personal Software Process (PSP) approaches, for
example, have demonstrated very high levels of quality control.
Due to very stringent security and quality requirements for the new
application, these reusable components must be certified to near zero-
defect levels. If such certification is not available, then the candidate
reusable components must be put through a very thorough examination
that will include automated static analysis, dynamic analysis, testing,
and perhaps inspections. In addition, the histories of all reusable compo-
nents will be collected and analyzed to evaluate any quality and security
flaws that might have been previously reported.
Because the new features for the application are not intended for a
single use, but are planned to become reusable components themselves,
it is obvious that they need to be developed very carefully. Of the avail-
able development methods for new development, the Team Software
Process (TSP) and the Personal Software Process (PSP) seem to have
the rigor needed for creating reusable artifacts. Some of the logistical
methods of Agile or other approaches may be utilized, but rigor and high
quality levels are the primary goals for successful reuse.
Because of the need for quality, automated static and dynamic analy-
sis, careful testing, and live inspections will also be needed. In particu-
lar, special kinds of inspections such as those concentrating on security
flaws and vulnerabilities will be needed.
Because of security issues, languages such as E that support secu-
rity might be used for development. However, some of the older reus-
able components will no doubt be in other languages such as C, Java,
C++, and so on, so language conversions may be required. However,
by 2049, hopefully, secure versions of all reusable components may be
available.
Software cost-estimating applications of the type discussed in this
example are usually about 2,500 function points in size circa 2009. Such
applications typically require about two and a half calendar years to
build and achieve productivity rates between 10 and 15 function points
per staff month.
Defect potentials for such applications average about 4.5 per function
point, while defect removal efficiency is only about 87 percent. As a
result, about 1,400 defects are still present when the software first goes
to users. Of these, about 20 percent, or 280, would be serious enough to
cause user problems.
By switching from custom design and custom code to construction
based on certified reusable components, it can be anticipated that pro-
ductivity rates will be in the range of 45 to 50 function points per staff
month. Schedules would be reduced by about one year, for a develop-
ment cycle of 1.5 calendar years instead of 2.5 calendar years.
Defect potentials would be only about 1.25 per function point, while
defect removal efficiency would be about 98 percent. As a result, only
about 60 latent defects would remain at delivery. Of these, only about
10 percent would be serious, so users might encounter as few as six
significant defects after release.
These improvements in quality will of course benefit customer sup-
port and maintenance as well as initial development.
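The arithmetic behind the two scenarios just described is simple enough to
express directly. A minimal sketch, using the defect potentials and removal
efficiencies quoted above:

def delivered_defects(size_fp, defects_per_fp, removal_efficiency,
                      serious_fraction):
    """Delivered defects = size x defect potential x (1 - removal efficiency);
    some fraction of those are serious enough to disturb users."""
    delivered = size_fp * defects_per_fp * (1.0 - removal_efficiency)
    return delivered, delivered * serious_fraction

# Circa 2009: custom development of a 2,500 function point estimating tool.
print(delivered_defects(2500, 4.5, 0.87, 0.20))
# about 1,460 and 290 (the text rounds to 1,400 and 280)

# Circa 2049: construction from certified reusable components.
print(delivered_defects(2500, 1.25, 0.98, 0.10))
# about 62 and 6 (rounded to 60 and 6 above)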
Since the tool used as an example is designed to capture historical data
and create a superset of ISBSG benchmarks, obviously the development
of the tool itself will include productivity, schedule, staffing, and quality
benchmarks. In fact, it is envisioned that every major software application
would include such benchmark data, and that it would routinely be added
to the ISBSG data collection. However, some applications' benchmark data
may not be made publicly available due to competitive situations, classi-
fied military security, or for some other overriding factor.
It is interesting to speculate on what would be needed to develop
100 percent of a new application entirely from reusable materials. First,
an expert system would have to analyze the code and structure of a
significant number of existing legacy applications: perhaps 100 or more.
The idea of this analysis is to derive software structures and architecture
from examination of the code, and then to use pattern matching to
assemble optimal design patterns.
Another criterion for 100 percent development would be to have
access to all major sources of reusable code, and, for that matter, access
to reusable test cases, reusable user documentation, reusable HELP
text, and other deliverables. Not all of these would come from a single
source, so a dynamic and constantly updated catalog would be needed
with links to the major sources of reusable materials.
Needless to say, interfaces among reusable components need to be
rigorously defined and standardized for large-scale reuse to be feasible
when components are available from multiple companies and are cre-
ated using multiple methods and languages.
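Standardized interfaces of the kind described above could be expressed, for
example, as a typed contract that every reusable component must satisfy,
together with its certification record. The method names and fields here
are assumptions made purely for illustration.

from dataclasses import dataclass
from typing import Any, Protocol

@dataclass
class Certification:
    """Quality and security vetting record attached to a reusable component."""
    static_analysis_passed: bool
    dynamic_analysis_passed: bool
    security_review_passed: bool
    observed_defects_per_fp: float

class ReusableComponent(Protocol):
    """Minimal contract a reusable component exposes, regardless of the
    vendor, method, or language in which it was originally built."""
    name: str
    version: str
    certification: Certification

    def interface_schema(self) -> dict: ...
    def execute(self, **inputs: Any) -> Any: ...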
Because quality and security are critical issues, selected code seg-
ments would either have to be certified to high standards of excellence,
or run through a very careful quality vetting process that included static
analysis, dynamic analysis, security analysis, and usability analysis.
Assuming all of these criteria were in place, the results would be
impressive. Productivity rates might top 100 function points per month
for an application of 2,500 function points, while development schedules
would probably be in the range of three to six calendar months.
Defect potentials would drop below one per function point, while defect
removal efficiency might hit 99 percent. At these levels, an application
of 2500 function points would contain about 25 defects still present at
delivery, of which perhaps 10 percent would be serious. Therefore, only
about three serious defects would be present at delivery.
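The same delivered-defect arithmetic used earlier in this section applies
to this fully reuse-based scenario:

# 2,500 FP, defect potential just under 1.0 per FP, 99 percent removal.
delivered = 2500 * 1.0 * (1 - 0.99)     # about 25 latent defects at delivery
serious = delivered * 0.10              # about 2.5, i.e., roughly three serious
print(delivered, serious)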
It is unlikely that automatic development of sophisticated applica-
tions will occur even by 2049, but at least the technologies that would be
needed can be envisioned. It is even possible to envision a kind of robotic
assembly line for software where intelligent agents and expert systems
perform more than 90 percent of the tasks now performed by humans.
User Documentation Circa 2049
In 2009 both customer support and user documentation are weak links
for software applications, and usually range between "unacceptable"
and "marginal." A few companies such as Apple, IBM, and Lenovo occa-
sionally reach levels of "good," but not very often.
Since applications constructed from reusable components will have
HELP text and user information as part of the package, the first step
is to assemble all of the document sections for the reusable materials
that are planned for the new application. However, documentation for
specific functions lacks any kind of overall information for the entire
application with dozens or hundreds of features, so quite a lot of new
information must be created.
For user documentation and HELP text, the next step would be to
dispatch an intelligent agent or avatar to check the user reviews of all
customer manuals, third-party user guides, and HELP text as discussed
on the Web. Obviously, both praise and complaints about these topics are
plentiful in forums and discussion groups, but an intelligent agent will
be needed to gather and assemble a full picture. The reviews of third-
party books at web sites such as Amazon will also be analyzed.
Once the intelligent agent has finished collecting information, the
sample of books and text with the highest and most favorable reviews
from customers will be analyzed, using both automated tools such as
the Gunning Fog and Flesch indexes, and also reviews by human writers and
authors.
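The Gunning Fog and Flesch indexes mentioned above are straightforward to
compute. The sketch below uses the standard published formulas with a
deliberately crude syllable counter, which is the usual weak point of
automated implementations.

import re

def estimated_syllables(word):
    """Very crude syllable estimate: count groups of adjacent vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Flesch Reading Ease and Gunning Fog index for a block of text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(estimated_syllables(w) for w in words)
    complex_words = sum(1 for w in words if estimated_syllables(w) >= 3)
    words_per_sentence = len(words) / sentences
    flesch = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables / len(words)
    fog = 0.4 * (words_per_sentence + 100.0 * complex_words / len(words))
    return {"flesch_reading_ease": flesch, "gunning_fog": fog}

print(readability("The quick brown fox jumps over the lazy dog. It runs far."))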
The goal of this exercise is to find the structures and patterns of books
and user information that provide the best results, based on evalu-
ations of similar applications by live customers. Once excellent docu-
ments have been identified, it might be a good idea to subcontract the
work of producing user information to the authors whose books have
received the best reviews for similar applications.
If these authors are not available, then at least their books can be
provided to the authors who are available and who will create the
user guides. The purpose is to establish a solid and successful pattern
to follow for all publications. Note that violation of copyrights is not
intended. It is the overall structure and sequence of information that
is important.
Some years ago IBM did this kind of analysis for their own users'
guides. Customer evaluation reports were analyzed, and all IBM techni-
cal writers received a box of books and guides that users had given the
highest evaluation scores.
Other kinds of tutorial material include instructional DVDs, webi-
nars, and perhaps live instruction for really large and complex applica-
tions such as ERP packages, operating systems, telephone switching
systems, weapon systems, and the like. Unless such material is on the
Web, it would be hard to analyze using intelligent agents. Therefore,
human insight will probably still play a major part in developing train-
ing materials.
Since the application is intended to be marketed in a number of coun-
tries, documentation and training materials will have to be translated
into several national languages, using automated translation as the
starting point. Hopefully, in 2049, automated translation will result in
smoother and more idiomatic text than translations circa 2009. However,
a final edit by a human author may be needed.
Because tools such as this have global markets, it can be expected
that documentation routinely will be converted into Japanese, Russian,
German, French, Korean, Chinese, Spanish, Portuguese, and Arabic ver-
sions. In some cases, other languages such as Polish, Danish, Norwegian,
Swedish, or Lithuanian may also occur.
Customer Support in 2049
Customer support circa 2009 is in even worse shape than user informa-
tion. The main problems with customer support include, but are not
limited to:
1. Long wait times when attempting to reach customer support by
phone
2. Limited phone support for deaf or hard-of-hearing customers
3. Poorly trained first-line support personnel who can't resolve many
questions
4. Limited hours for customer support; that is, working hours for one
time zone
5. Slow responses to e-mail queries for support
6. Charges for customer support even to report bugs in the vendor's
software
7. Lack of analysis of frequently reported bugs or defects
8. Lack of analysis for "frequently asked questions" and responses
Some of these issues are due to software being routinely released
with so many serious bugs or defects that about 75 percent of customer
service calls for the first year of application usage are about bugs and
problems. When software is developed from certified reusable materials,
and when new development aims at near zero-defect quality levels, the
numbers of bug-related calls circa 2049 should be reduced by at least
65 percent compared with 2009 norms. This should help in terms of
response times for phone and e-mail customer queries.
The next issue is inadequate support for deaf and hard-of-hearing
customers. This issue needs more substantial work on the part
of software vendors. Automatic translation of voice to text should be
available using technologies that resemble Dragon Naturally Speaking
or other voice translators, but hopefully will have improved in speed
and accuracy by 2049.
While TTY devices and telephone companies may offer assistance
for the deaf and hard of hearing, these approaches are inconvenient for
dealing with software trouble reports and customer service. Long wait
times before vendor support phones answer and the need to deal with
technical terms make such support awkward at best.
Ideally, cell phones and landlines might have a special key combina-
tion that indicates usage by a deaf or hard-of-hearing person. When
this occurs, automatic translation of voice into screen text might be
provided by the vendors, or perhaps even made available by cell phone
manufacturers.
The main point is that there are millions of deaf and hard-of-hearing
computer users, and the poor quality of today's software combined with
marginal user guides and HELP text makes access to software customer
support very difficult for deaf customers.
Other forms of physical disability such as blindness or loss of limbs
may also require special assistive tools.
Because some bugs and issues occur for hundreds or thousands of
customers, all bug reports need an effective taxonomy of symptoms so
they can be entered into a repository and analyzed by an expert system
for common causes and symptoms. These high-frequency problems need
to be conveyed to everyone in the customer-support organization. As the
bugs or problems are fixed or temporary solutions are developed, these
need to be provided to all support personnel in real time.
Some vendors charge for customer support calls. The main reason for
such charges is to cut down on the numbers of calls and thereby reduce
the need for customer support staff. Charging customers to report bugs
or for help in fixing bugs is a cynical and misguided policy. Companies
that do this usually have very unhappy customers who would gladly
migrate to other vendors. Better quality control is a more effective solu-
tion than charging for customer support.
All incoming problem reports that seem to be indicative of real bugs
in the software should trigger an immediate set of actions on the part
of the vendors (a minimal triage sketch follows the list):
1. The symptoms of the bug need to be analyzed using a standard tax-
onomy.
2. Analysis of the bug via static or dynamic analysis should be per-
formed at once.
3. The location of the bug in the application should be narrowed
down.
4. The bug should be immediately routed to the responsible change
team.
5. Customers reporting the same bug should be alerted about its
status.
6. Repairs should be made available to customers as soon as possible.
7. If the bug is in reusable code from an external source, notification
should be made.
8. Severity levels and other topics should be included in monthly defect
reports.
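A minimal sketch of steps 1 through 5 of this workflow (symptom taxonomy,
routing to the responsible change team, and alerting the customers who
reported the same bug) might look like the following; the taxonomy
categories, component names, and team names are invented for illustration.

from dataclasses import dataclass, field
from typing import List

SYMPTOM_TAXONOMY = {"crash", "wrong_result", "performance",
                    "security", "usability"}
CHANGE_TEAMS = {            # component -> responsible change team (illustrative)
    "sizing_engine": "team-estimation",
    "currency_module": "team-reuse-vendor-a",
}

@dataclass
class BugReport:
    bug_id: int
    component: str
    symptom: str                               # must be a taxonomy category
    reporters: List[str] = field(default_factory=list)
    status: str = "open"

def triage(bug: BugReport) -> str:
    """Classify the symptom, route to the responsible team, alert reporters."""
    if bug.symptom not in SYMPTOM_TAXONOMY:
        raise ValueError(f"unknown symptom category: {bug.symptom}")
    team = CHANGE_TEAMS.get(bug.component, "team-general-maintenance")
    bug.status = f"routed to {team}"
    for customer in bug.reporters:             # step 5: keep reporters informed
        print(f"notify {customer}: bug {bug.bug_id} {bug.status}")
    return team

triage(BugReport(101, "currency_module", "wrong_result",
                 ["client-x", "client-y"]))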
Some large software companies such as IBM have fairly sophisticated
defect reporting tools that analyze bugs, catalog symptoms, route bugs to
the appropriate change team, and update defect and quality statistics.
Incidentally, since the example discussed here includes quality and
defect estimation capabilities, the tool should of course be used recur-
sively to estimate its own defect levels. That brings up the corollary
point that development methods such as TSP and PSP, static analysis,
and inspections that improve quality should also be used.
It is technically feasible to construct a customer-support expert system
that includes voice recognition; voice to text translation; and an arti-
ficial intelligence engine that could speak to customers, listen to their
problems, match the problems against other reports, provide status to
the customer, and for unique or special cases, transfer the customer to
a live human expert for additional consultation and support.
Indeed if expert analysis of reported defects and previous customer
calls were included in the mix, the AI engine could probably outperform
human customer support personnel.
Since this kind of an expert system does not depend upon human
specialists to answer the initial phone calls, it could lower the wait
time from the roughly 10 minutes that is typical circa 2009 to
perhaps three rings of the phone, or less than 3 seconds.
A combination of high-quality reusable materials and support of
expert systems to analyze software defects could make significant
improvements in customer support.
Deployment and Customer Training in 2049
Applications such as the estimating tool used in this example are nor-
mally deployed in one (or more) of four different ways:
They are released on CD or DVD.
They are downloaded from the Web and installed by customers.
They can be run from the Web without installation (software as a
service).
They are installed by vendors or by vendor agents.
In 2009, the distribution among these four methods is shifting. The
relative proportions are CD installation about 60 percent, downloads
about 25 percent, vendor installs 10 percent, and web access about
5 percent.
If current trends continue, by 2049 the distribution might be web
access 40 percent, downloads 25 percent, CD installation 20 percent,
and vendor installation 15 percent. (Vendor installation usually is for
very large or specialized applications such as ERP packages, telephone
switching systems, robotic manufacturing, process control, medical
equipment, weapons systems, and the like. These require extensive
customization during the installation process.)
Although some applications are simple enough for customers to use
with only minimal training, a significant number of applications are
complicated and difficult to learn. Therefore, tutorial information and
training courses are necessary adjuncts for most large software pack-
ages. This training may be provided by the vendors, but a significant
third-party market exists of books and training materials created by
other companies such as book publishers and specialized education
groups.
Because of the high costs of live instruction, it can be anticipated
that most training circa 2049 will be done using prerecorded webinars,
DVDs, or other methods that allow training material to be used many
times and scheduled at the convenience of the customers.
However, it is also possible to envision expert systems and avatars
that operate in virtual environments. Such avatars might appear to be
live instructors and even answer questions from students and interact
with them, but in reality they would be AI constructs.
Because of the high cost of producing and distributing paper books
and manuals, by 2049 it can be expected that close to 100 percent of
instructional materials will be available either online, or in portable
forms such as e-book readers, and even cell phones and hand-held
devices. Paper versions could be produced on demand, but by 2049 the
need for paper versions should be much lower than in 2009.
Maintenance and Enhancement in 2049
Since the average life expectancy of a software application runs from 10 to
more than 30 years, a development process by itself is not adequate for a
true engineering discipline. It is also necessary to include maintenance
(defect repairs) and enhancements (new features) for the entire life of
applications once they are initially developed and deployed.
In the software cost-estimating field discussed in this section,
COCOMO first came out in 1981, while Price-S is even older, and many
estimating tools were first marketed in the mid-1980s. As can be seen,
this business sector is already approaching 30 years of age. In fact, the
maximum life expectancy for large applications is currently unknown,
because many of them are still in service. A few applications, such as air
traffic control, may eventually top 50 years of continuous service.
Incidentally, the growth rate of software applications after their ini-
tial deployment is about 8 percent per calendar year, so after 20 to
30 years of usage, applications have ballooned to more than twice their
original size. Unfortunately, this growth is usually accompanied by
serious increases in cyclomatic and essential complexity; as a result
maintenance becomes progressively more expensive and "bad fixes" or
secondary defect injections made during changes increase over time.
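Assuming the 8 percent annual growth figure compounds, the size trajectory
is easy to work out; at that rate an application more than doubles in
roughly nine years, which is consistent with the "more than twice" after 20
to 30 years noted above.

import math

def size_after(initial_fp, years, annual_growth=0.08):
    """Application size in function points after compound annual growth."""
    return initial_fp * (1 + annual_growth) ** years

print(round(math.log(2) / math.log(1.08), 1))   # doubling time: about 9 years
print(round(size_after(2500, 20)))              # about 11,650 FP after 20 years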
To slow down the entropy or decay of aging legacy applications,
they need to be renovated after perhaps five to seven years of ser-
vice. Renovation would eliminate error-prone modules, refactor the
applications to reduce the complexity of the code, eliminate security
flaws, and possibly even convert the code to more modern languages
such as E. Automated renovation tools are available from several
vendors and seem to work well. One of these tools includes the abil-
ity to calculate the function point totals of applications as renovation
takes place, which is useful for benchmarks and studies of productiv-
ity and quality.
For the example estimating tool used here, new features will be added
at least once a year and possibly more often. These releases will also
include bug repairs, as they occur.
Because new programming languages come out at rates of about one
per month, and because there are already more than 700 programming
languages in existence, it is obvious that any estimating tool that sup-
ports estimates for coding must keep current on new languages as they
occur. Therefore, an intelligent agent will be kept busy scanning the Web
for descriptions of new languages, and for published reports on their
effects on quality and productivity.
Other new features will be gathered as an intelligent agent scans the
release histories of competitive estimating tools. For any commercial
application, it is important to be cognizant of the feature sets of direct
competitors and to match their offerings.
Of course, to achieve a position near the top of the market for software
estimating, mere passive replication of competitive features is not an
effective strategy. It is necessary to plan novel and advanced features
that are not currently offered by competitive estimating tools.
For the estimating example used in this discussion, a suite of new
and interesting features is being planned for several years out. These
include but are not limited to:
1. Side-by-side comparison of development methods (Agile, RUP, TSP,
etc.)
2. Inclusion of "design to cost" and "staff to cost" estimates
3. Inclusion of earned-value estimates and tracking
4. Estimates of impact of Six Sigma, quality function deployment, and
so on
5. Estimates of impact of ISO9000 and other standards
6. Estimates of impact of certification of personnel for testing, QA, and
so on
7. Estimates of impact of specialists versus generalists
8. Estimates of impact of large teams versus small teams
9. Estimates of impact of distributed and international development
10. Estimates of impact of multinational, multiplatform applications
11. Estimates of impact of released defects on customer support
12. Estimates of deployment costs for large ERP and SOA projects
13. Estimates of recovery costs for denial of service and other security
attacks
14. Estimates of odds of litigation occurring for outsource projects
15. Estimates of costs of litigation should it occur (breach of contract)
16. Estimates of patent licensing costs
17. Estimates of cost of patent litigation should it occur
18. Estimates of consequential damages for major business software
defects
19. Estimates of odds of litigation due to serious bugs in application
20. Integration of project history with cost accounting packages
It should be obvious that maintenance of software applications that
are constructed almost completely from reusable components derived
from a number of sources is going to be more complicated than main-
tenance in 2009. For the example application in this section, features
and code may have been acquired from more than a dozen vendors and
possibly from half a dozen in-house applications as well.
Whenever a bug is reported against the application, that same bug
may also be relevant to scores of other applications that utilize the same
reusable component. Therefore, it is necessary to have accurate informa-
tion on the sources of every feature in the application. When bugs occur,
the original source of the feature needs to be notified. If the bug is from
an existing in-house application, the owners and maintenance teams of
that application need to be notified.
Because the example application operates on multiple platforms
(Windows, Apple, Linux, Unix, etc.), there is also a good chance that a
defect reported on one platform may also be present in the versions that
operate on the other platforms. Therefore, a key kind of analysis would
involve running static and dynamic analysis tools for every version when-
ever a significant bug is reported. Obviously, the change teams for all ver-
sions need to be alerted if a bug appears to have widespread impact.
Of course, this requires very sophisticated analysis of bugs to identify
which specific feature is the cause. In 2009, this kind of analysis is done
by maintenance programming personnel, but in 2049, extended forms
of static and dynamic analysis tools should be able to pin down bugs
faster and more reliably than today.
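Tracking the source of every reused feature, and fanning a single bug
report out to every application and platform that embeds the same
component, could rest on a provenance registry along these lines; the
component names, vendors, and platform lists are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Provenance:
    vendor: str                          # who supplied the reusable component
    used_by: List[Tuple[str, str]]       # (application, platform) pairs

REGISTRY: Dict[str, Provenance] = {
    "currency_module": Provenance(
        vendor="Example Reuse Vendor A",
        used_by=[("EstimatorPro", "Windows"), ("EstimatorPro", "Linux"),
                 ("InHouseCostTool", "Windows")],
    ),
}

def fan_out(component: str) -> None:
    """Notify the component's source and every version that reuses it."""
    provenance = REGISTRY[component]
    print(f"notify source: {provenance.vendor}")
    for application, platform in provenance.used_by:
        print(f"re-run static and dynamic analysis: {application} on {platform}")

fan_out("currency_module")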
Maintenance or defect repairs circa 2049 should have access to a pow-
erful workbench that integrates bug reporting and routing, automated
static and dynamic analysis, links to test libraries and test cases, test
coverage analyzers, and complexity analysis tools. There may also be
automatic test case generators, and perhaps more specialized tools such
as code restructuring tools and language translators.
Because function point metrics are standard practices for benchmarks,
no doubt the maintenance workbench will also generate automated
function point counts for legacy applications and also for enhancements
that are large enough to change the function point totals.
Historically, software applications tend to grow at about 8 percent
per calendar year, using the size of the initial release in function points
as the starting point. There is no reason to think that growth in 2049
will be slower than in 2009, but there's some reason to think it might
be even faster.
For one thing, the utilization of intelligent agents will identify possible
features very rapidly. Development using standard reusable components
is quick enough so that the lag between identifying a useful feature and
adding it to an application will probably be less than 6 months circa
2049, as opposed to about 18 months circa 2009.
It is not uncommon circa 2009 for the original requirements and
design materials to fall out of use as applications age over the years.
In 2049, a combination of intelligent agents and expert systems will
keep the design current for as long as the application is utilized. The
same kinds of expert systems that are used to mine business rules
and algorithms could be kept in continuous use to ensure that the
software and its supporting materials are always at the same levels
of completeness.
This brings up the point that benchmarks for productivity and qual-
ity may eventually include more than 30 years of history and perhaps
even more than 50 years. Therefore, submission of data to benchmark
repositories such as ISBSG will be a continuous activity rather than a
one-time event.
Software Outsourcing in 2049
Dozens of outsourcing companies are in the United States, India, China,
Russia, and scores of other countries. Not only do outsource compa-
nies have to be evaluated, but larger economic issues such as inflation
rates, government stability, and intellectual property protection need
to be considered too. In today's world of financial fraud, due diligence in
selecting an outsourcer will also need to consider the financial integrity
of the outsource company (as demonstrated by the financial irregulari-
ties of Satyam Consulting in India).
In 2009, potential clients of outsource companies are bombarded
by exaggerated claims of excellence and good results, often without
any real history to back them up. From working as an expert witness in
a dozen lawsuits involving breach of contract by outsourcers, the author
has found a startling gap between the marketing claims made by the
vendors and the way the projects at issue in court were actually developed.
The marketing claims enumerated best practices throughout, but in
reality most of the actual practices were astonishingly bad: inadequate
estimating, deceitful progress reports, inadequate quality control,
poor change management, and a host of other failures tended to be
rampant.
By 2049, a combination of intelligent agents and expert systems
should add some rigor and solid business insight into the topic of
finding suitable outsource partners. Outsourcing is a business deci-
sion with two parts: (1) whether outsourcing is the right strategy for a
specific company to follow, and (2) if outsourcing is the right strategy,
how the company can select a really competent and capable outsource
vendor.
The first step in determining if outsourcing is a suitable strategy is to
evaluate your current software effectiveness and strategic direction.
As software operations become larger, more expensive, and more
widespread, the executives of many large corporations are asking a
fundamental question: Should software be part of our core business?
This is not a simple question to answer, and the exploration of some
of the possibilities is the purpose of this section. You would probably
want to make software a key component of your core business operations
under these conditions:
1. You sell products that depend upon your own proprietary software.
2. Your software is currently giving your company significant competi-
tive advantage.
3. Your company's software development and maintenance effective-
ness are far better than your competitors'.
You might do well to consider outsourcing of software if its relation-
ship to your core business is along the following lines:
1. Software is primarily used for corporate operations, not as a product.
2. Your software does not give you any particular advantage over
your competitors.
3. Your development and maintenance effectiveness are marginal.
Over the past few years, the Information Technology Infrastructure
Library (ITIL) and service-oriented architecture (SOA) have emerged.
These methods emphasize the business value of software and lead to
thinking about software as providing a useful service for users and
executives, rather than as an expensive corporate luxury.
Some of the initial considerations for dealing with the topic of whether
software should be an integral part of corporate operations or perhaps
outsourced include the following 20 points:
1. Are you gaining significant competitive advantage from your current
software?
2. Does your current software contain trade secrets or valuable pro-
prietary data?
3. Are your company's products dependent upon your proprietary
software?
4. How much does your current software benefit these business func-
tions:
A. Corporate management
B. Finance
C. Manufacturing and distribution
D. Sales and marketing
E. Customer support
F. Human resources
5. How much software does your company currently own?
6. How much new software will your company need in the next five
years?
7. How much of your software is in the form of aging legacy systems?
8. How many of your aging legacy systems are ITIL-compliant?
9. How many of your aging legacy systems are SOA-ready?
10. Is your software development productivity rate better than your
competitors'?
11. Is your software maintenance more efficient than your competitors'?
12. Is your time to market for software-related products better than
your competitors'?
13. Is your software quality level better than your competitors'?
14. Are you able to use substantial volumes of reusable artifacts?
15. How many software employees are currently on board?
16. How many software employees will be hired over the next five
years?
17. How many users of software are there in your company?
18. How many users of software will there be in five years?
19. Are you considering enterprise software packages such as SAP or
Oracle?
20. Are you finding it hard to hire new staff due to the personnel short-
age?
The patterns of answers can vary widely from company to company,
but will fall within this spectrum of possibilities:
A. If your company is a software "top gun" and a notable leader
within your industry, then you probably would not consider out-
sourcing at all.
B. At the opposite extreme, if your company trails all major com-
petitors in software topics, then outsourcing should be on the
critical path for immediate action.
In two other situations, the pros and cons of outsourcing are
more ambiguous:
C. Your software operations seem to be average within your indus-
try, neither better nor worse than your competitors in most
respects. In this case, outsourcing can perhaps offer you some
cost reductions or at least a stable software budget in the future,
if you select the right outsourcing partner.
D. Another ambiguous outsourcing situation is this: you don't have
the vaguest idea whether your software operations are better or
worse than your competitors due to a chronic lack of data about
software in your industry or in your company.
In this situation, ignorance is dangerous. If you don't know in a quan-
titative way whether your software operations are good, bad, or indiffer-
ent, then you can be very sure that your company is not a top gun and
is probably no better than mediocre in overall software performance. It
may be much worse, of course. This harsh statement is because all of
the really good top-gun software groups have quality and productivity
measurement programs in place, so they know how good they are.
Your company might also compare a sample of recent in-house soft-
ware projects against industry benchmarks from a public source such as
the International Software Benchmarking Standards Group (ISBSG).
Once a company decides that outsourcing is a suitable business strat-
egy, the second part of the problem is to find a really competent out-
source partner. All outsource companies claim to be competent, and
many really are competent, but not all of them. Because outsourcing is
a long-term arrangement, companies need to perform serious due-dili-
gence studies when selecting outsource partners.
You may choose to evaluate potential outsource partners with your
own staff, or you can choose one or more of the external management
consultants who specialize in this area. In either case, the first step is
to dispatch an intelligent agent to bring back information on all of the
outsourcing companies whose business lines are similar to your busi-
ness needs: Computer Aid Incorporated (CAI), Electronic Data Systems,
IBM, Lockheed, Tata, Satyam (if it still exists), and many others.
Some of the information brought back by the intelligent agent would
include financial data if the company is public, information on past or
current lawsuits filed by customers, regulatory investigations against
the company by the SEC or state governments, and also benchmarks
that show productivity and quality results.
A fundamental decision in outsourcing in 2009 is whether a
domestic or an international outsource partner is preferred. The interna-
tional outsource companies from countries such as India, China, or Russia
can sometimes offer attractive short-term cost reductions. However, com-
munication with international outsource partners is more complex than
with domestic partners, and other issues should be evaluated as well.
Recent economic trends have raised the inflation rates in India, China,
and Russia. The decline in the value of the dollar against foreign cur-
rencies such as the yen and the pound has led to the United States itself
now being considered a major outsource location.
For example, IBM is about to open up a large new outsource center in
Dubuque, Iowa, which is a good choice because of the favorable business
climate and low labor costs.
Already costs in the United States are lower than in Japan, Germany,
France, and other major trading partners. If these trends continue (and
if the United States enters a recessionary period), the United States
might end up with cost structures that are very competitive in global
outsourcing markets.
However, by 2049, a completely different set of players may be involved
in global outsourcing. For example, as this is written, Vietnam is devel-
oping software methods fairly rapidly, and software expertise is expand-
ing in Mexico, Brazil, Argentina, Venezuela, and many other countries
south of the United States.
In fact, assuming some sort of lasting peace can be arranged for the
Middle East, by 2049, Iraq, Iran, Syria, and Lebanon may be signifi-
cant players in global technology markets. The same might occur for
Sri Lanka, Bangladesh, and possibly a dozen other countries.
By 2049, you should be able to dispatch an intelligent agent to bring
back information on every country's inflation rates, intellectual property
protection laws, numbers of outsource companies, software engineer-
ing populations, software engineering schools and graduates, local tax
structures, outsource company turnover rates, and other information for
helping to select an optimum location for long-range contracts.
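One plausible way for such an agent to turn country-level data into a ranking is a simple weighted-score model. The criteria weights and the sample scores in the sketch below are invented for illustration and would have to be calibrated against real data.

# Minimal sketch: rank candidate outsource locations by weighted criteria.
# Criteria are scored 0-10 (higher is better); weights are illustrative guesses.
WEIGHTS = {
    "ip_protection": 0.30,
    "labor_cost_advantage": 0.25,
    "engineering_talent_pool": 0.20,
    "inflation_stability": 0.15,
    "staff_retention": 0.10,
}

candidates = {   # invented sample scores, not real country data
    "Country A": {"ip_protection": 8, "labor_cost_advantage": 5,
                  "engineering_talent_pool": 7, "inflation_stability": 8,
                  "staff_retention": 6},
    "Country B": {"ip_protection": 5, "labor_cost_advantage": 9,
                  "engineering_talent_pool": 8, "inflation_stability": 5,
                  "staff_retention": 4},
}

def weighted_score(scores: dict) -> float:
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")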
If you are considering an international outsource partner, some of the
factors to include in your evaluation are (1) the expertise of the candi-
date partners for the kinds of software your company utilizes; (2) the
availability of satellite or reliable broadband communication between
your sites and the outsource location; (3) the local copyright, patent, and
intellectual property protection within the country where the outsource
vendor is located; (4) the probability of political upheavals or factors that
might interfere with transnational information flow; and (5) the basic
stability and economic soundness of the outsource vendor, and what
might occur should the vendor encounter a severe financial downturn.
The domestic outsource companies can usually offer some level of
cost reduction or cost stabilization, and also fairly convenient commu-
nication arrangements. Also, one sensitive aspect of outsourcing is the
future employment of your current software personnel. The domestic
outsource companies may offer an arrangement where some or all of
your personnel become their employees.
One notable aspect of outsourcing is that outsource vendors who spe-
cialize within particular industries such as banking, insurance, telecom-
munications, or some other sector may have substantial quantities of
reusable material available. Since reuse is the technology that gives
the best overall efficiency for software, the reuse factor is one of the key
reasons why some outsource vendors may be able to offer cost savings.
There are ten software artifacts where reuse is valuable, and some of
the outsource vendors may have reusable material from many of these
ten categories: reusable architecture, plans, estimates, requirements,
design, source code, data, human interfaces, user documentation, and
test materials.
Some of the general topics to consider when evaluating potential out-
source partners that are either domestic or international include the
following:
The expertise of the outsource vendor within your industry, and for
the kinds of software your company utilizes. (If the outsource vendor
serves your direct competitors, be sure that adequate confidentiality
can be assured.)
The satisfaction levels of current clients who use the outsource ven-
dor's services. You may wish to contact several clients and find out
their firsthand experiences. It is particularly useful to speak with
clients who have had outsource contracts in place for more than two
or three years, and hence who can talk about long-term satisfaction.
An intelligent agent might be able to locate such companies, or you
can ask the vendors for lists of clients (with the caveat that only happy
clients will be provided by the vendors).
Whether any active or recent litigation exists between the outsource
company and either current or past clients. Although active litigation
may not be a "showstopper" in dealing with an outsource vendor, it is
certainly a factor you will want to find out more about if the situation
exists.
How the vendor's own software performance compares against indus-
try norms in terms of productivity, quality, reuse, and other quantita-
tive factors using standard benchmarks such as those provided by the
ISBSG. For this kind of analysis, the function point metric is now the
most widely used in the world and is far superior to alternative
metrics. You should require that outsource vendors have
comprehensive productivity and quality measurements and use func-
tion points as their main metric. If the outsource vendor has no data on
their own quality or productivity, be cautious. You might also require
some kind of proof of capability, such as requiring that the outsource
vendor be at or higher than level 3 on the capability maturity model
integration (CMMI) of the Software Engineering Institute (SEI).
The kinds of project management tools that the vendor utilizes. Project
management is a weak link of the software industry, and the leaders
tend to utilize a suite of software project management tools, includ-
ing cost estimation tools, quality estimation tools, software planning
tools, software tracking tools, "project office" tools, risk management
tools, and several others. If your candidate outsource vendor has no
quantitative estimating or measurement capabilities, it is unlikely
that their performance will be much better than your own.
These five topics are only the tip of the iceberg. Some of the topics
covered in contractor evaluation assessments include (1) the project
management tools and methods used by the vendor, (2) the software
engineering tools and methods used by the vendor, (3) the kinds of qual-
ity assurance approaches used by the vendor, (4) the availability or lack
of availability of reusable materials, (5) the configuration control and
maintenance approaches used by the vendor, (6) the turnover or attri-
tion rate of the vendor's management and technical staff, and (7) the
basic measurements and metrics used by the vendor for cost control,
schedule control, quality control, and so on.
The International Software Benchmarking Standards Group (ISBSG)
has collected data on more than 5,000 software projects. New data is
being collected at a rate of perhaps 500 projects per year. This data is
commercially available and provides useful background information for
ascertaining whether your company's costs and productivity rates are
better or worse than average.
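The comparison itself is simple arithmetic: express each completed project as function points per staff-month and compare against the benchmark median. The benchmark figure and project data below are placeholders for illustration, not actual ISBSG values.

# Compare in-house productivity against an assumed benchmark median.
in_house_projects = [
    {"name": "Billing rewrite", "function_points": 2500, "staff_months": 310},
    {"name": "Claims portal",   "function_points": 1200, "staff_months": 130},
]
BENCHMARK_FP_PER_STAFF_MONTH = 9.0   # assumed industry median for this size range

for project in in_house_projects:
    productivity = project["function_points"] / project["staff_months"]
    delta_pct = 100.0 * (productivity - BENCHMARK_FP_PER_STAFF_MONTH) \
                / BENCHMARK_FP_PER_STAFF_MONTH
    print(f"{project['name']}: {productivity:.1f} FP/staff-month "
          f"({delta_pct:+.0f}% vs. benchmark)")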
Before signing a long-term outsource agreement, customers should
request and receive quantitative data on these topics from potential
outsource vendors:
1. Sizes of prior applications built in both function points and lines of
code
2. Defect removal efficiency levels (average, maximum, minimum)
3. Any certification such as CMMI levels
4. Staff turnover rates on an annual basis
5. Any past or current litigation against the outsourcer
6. Any past or present government investigations against the out-
sourcer
7. References to other clients
8. Quality control methods utilized by the outsourcer
9. Security control methods utilized by the outsourcer
10. Progress tracking methods utilized by the outsourcer
11. Cost-tracking methods utilized by the outsourcer
12. Certified reusable materials utilized by the outsourcer
Automated software cost-estimating tools are available (such as the
example tool used in this chapter) that allow side-by-side estimates for
the same project, with one version showing the cost and schedule profile
using your current in-house development approaches, and the second
version giving the results based on how the outsource contractor would
build the same product using their proprietary or unique approaches
and reusable materials.
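The core of such a side-by-side comparison can be sketched with a highly simplified parametric model in which certified reusable material directly reduces the volume of work to be built. The productivity rates, labor costs, and reuse fractions below are assumptions for illustration, not calibrated values from any commercial estimating tool.

# Simplified side-by-side estimate: in-house vs. outsource vendor with reuse.
def estimate(size_fp: float, reuse_fraction: float,
             fp_per_staff_month: float, cost_per_staff_month: float):
    effective_fp = size_fp * (1.0 - reuse_fraction)   # work remaining after reuse
    effort = effective_fp / fp_per_staff_month        # staff-months
    return effort, effort * cost_per_staff_month

size = 10_000   # function points
inhouse = estimate(size, reuse_fraction=0.05, fp_per_staff_month=7.0,
                   cost_per_staff_month=12_000)
vendor = estimate(size, reuse_fraction=0.35, fp_per_staff_month=10.0,
                  cost_per_staff_month=10_000)

for label, (effort, cost) in [("In-house", inhouse), ("Outsource", vendor)]:
    print(f"{label}: {effort:,.0f} staff-months, ${cost:,.0f}")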
From working as an expert witness in a dozen lawsuits between out-
source vendors and their dissatisfied clients, the author has found sev-
eral key topics that should be clearly defined in outsource contracts:
1. Include anticipated learning curves for bringing the outsource vendor
up to speed on all of the applications included in the agreement.
Assume about one-third of an hour per function point for each
outsource team member to get up to speed. In terms of schedule,
assume about two weeks for 1,000 function points, or six weeks for
10,000 function points. (A small arithmetic sketch covering this
clause and clause 3 follows the list.)
2. Clear language is needed to define how changing requirements will be
handled and funded. All changes larger than 50 function points will
need updated cost and schedule estimates, and also updated quality
estimates. Requirements churn, meaning changes that do not affect
function point totals, also needs to be included in agreements.
3. The quality control methods used by the outsource vendor should
be provably effective. A requirement to achieve higher than 95 per-
cent defect removal efficiency would be a useful clause in outsource
agreements. Defect tracking and quality measurements should be
required. For applications in C, Java, or other supported languages,
static analysis should also be required.
4. Tracking and reporting progress during software development proj-
ects has been a weak link in outsource agreements. Every project
should be tracked monthly, and the reports to the client should
address all issues that may affect the schedule, costs, or quality of
the projects under development. If litigation does occur, these reports
will be part of the discovery process, and the vendors will be deposed
about any inaccuracies or concealment of problems.
5. Rules for termination of the agreement by either party should be
included, and these rules need to be understood by both parties
before the agreement is signed.
6. If penalties for late delivery and cost overruns are included in the
agreement, they should be balanced by rewards and bonuses for fin-
ishing early. However, quality and schedule clauses need to be linked
together.
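The ramp-up rule of thumb in clause 1 and the defect removal target in clause 3 reduce to simple arithmetic. The Python sketch below applies the figures stated above; the sample defect counts are invented for illustration.

# Clause 1: ramp-up effort of roughly one-third of an hour per function point
# for each outsource team member (rule of thumb stated above).
def rampup_hours_per_person(size_fp: float, hours_per_fp: float = 1.0 / 3.0) -> float:
    return size_fp * hours_per_fp

# Clause 3: defect removal efficiency (DRE) = defects removed before release
# divided by total defects (removed before release plus reported afterward).
def defect_removal_efficiency_pct(removed_before_release: int,
                                  reported_after_release: int) -> float:
    total = removed_before_release + reported_after_release
    return 100.0 * removed_before_release / total

print(rampup_hours_per_person(1_000))             # ~333 hours of ramp-up effort per team member
print(defect_removal_efficiency_pct(4_750, 250))  # 95.0 -- just meets a 95% contract clause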
Many outsource contracts are vague and difficult to administer.
Outsource agreements should clearly state the anticipated quality
results, methods for handling requirements changes, and methods of
monitoring progress.
Some of the software your company owns may have such a significant
competitive value that you may not want to outsource it, or even to let
any other company know of its existence. One of the basic prepara-
tory steps before initiating an outsource arrangement is to survey your
major current systems and to arrange security or protection for valuable
software assets with high competitive value.
This survey of current systems will have multiple benefits for your
company, and you might want to undertake such a survey even if you
are not considering outsource arrangements at all. The survey of current
and planned software assets should deal with the following important
topics, which are listed after the next paragraph.
This is an area where intelligent agents and automated business-rule
extraction tools should be able to offer great assistance by 2049. In fact,
most of the business rules, algorithms, and proprietary data should have
been mined from legacy applications and put into expandable and acces-
sible forms by means of AI tools and intelligent agents.
Identification of systems and programs that have high competitive
value, or that utilize proprietary or trade-secret algorithms. These
systems may well be excluded from more general outsource arrange-
ments. If they are to be included in an outsource contract, then special
safeguards for confidential factors should be negotiated. Note also
that preservation of proprietary or competitive software and data is
very delicate when international outsource contracts are utilized. Be
sure that local patent, copyright, and intellectual property laws are
sufficient to safeguard your sensitive materials. You may need attor-
neys in several countries.
Analysis of the databases and files utilized by your software appli-
cations, and the development of a strategy for preservation of con-
fidential data under the outsource arrangement. If your databases
contain valuable and proprietary information on topics such as trade
secrets, competitors, specific customers, employee appraisals, pending
or active litigation, or the like, you need to ensure that this data is
carefully protected under any outsource arrangement.
Quantification of the number of users of your key systems, and their
current levels of satisfaction and dissatisfaction with key applications.
In particular, you will want to identify any urgent enhancements that
may need to be passed on to an outsource vendor.
Quantification of the size of the portion of your current portfolio that
is to be included in the outsource contract. Normally, this quantifica-
tion will be based on the function point metric and will include the size
in function points of all current systems and applications for which
the outsource vendor will assume maintenance responsibility.
Analysis of the plans and estimates for future or partly completed
software projects that are to be included in the outsource arrange-
ment and hence developed by the outsource vendor. You will want to
understand your own productivity and quality rates, and then com-
pare your anticipated results against those the outsource vendor will
commit to. Here, too, usage of the function point metric is now the
most common and the best choice for outsourcing contracts.
Because outsource contracts may last for many years and cost mil-
lions of dollars, it is well to proceed with care and thoroughness before
completing an outsource contract.
As of 2009, there is no overall census of how long typical outsource
agreements last, how many are mutually satisfactory, how many are
terminated, and how many end up in court. However, the author's
work in litigation and with many customers indicates that 75 percent
of outsource agreements are mutually satisfactory; about 15 percent are
troubled; and perhaps 10 percent may end up in court.
By utilizing careful due diligence augmented by intelligent agents
and expert systems, it is hoped that by 2049 more than 90 percent of
outsource agreements will be mutually satisfactory, and fewer than
1 percent will end up in litigation.
As the global recession lengthens and deepens, outsourcing may be
affected in unpredictable ways. On the downside, some outsource
companies or their clients (or both) may go bankrupt. On the upside,
cost-effective outsourcing is a way to save money for companies that are
experiencing revenue and profitability drops.
A major new topic that should be added to outsource agreements from
2009 forward is what happens to the contract and to the software
under development if one or both partners go bankrupt.
Software Package Evaluation
and Acquisition in 2049
In 2009, buying or leasing a software package is a troublesome area.
Vendor claims tend to be exaggerated and unreliable; software war-
ranties and guarantees are close to being nonexistent, and many are
actually harmful to clients; quality control even on the part of major
vendors such as Microsoft is poor to marginal; and customer support is
both difficult to access and not very good when it is accessed. There may
also be serious security vulnerabilities that invite hacking and theft of
proprietary data, or that facilitate denial of service attacks, as discussed
in Chapter 2 of this book.
In spite of these problems, more than 50 percent of the software run on
a daily basis in large corporations comes from external vendors or from
open-source providers. Almost all systems software such as operating
systems and telephone switching systems comes from vendors, as does
embedded software. Other large commercial packages include databases,
repositories, and enterprise-resource planning (ERP) applications.
Will this situation be much better in 2049 than it is in 2009? Hopefully,
a migration to construction from certified components (as discussed ear-
lier) will improve commercial software quality, security, and reliability
by 2049. It is hoped that improvements in customer support will occur
due to methods also discussed earlier in this chapter.
Prior to acquiring a software package in 2049, the starting point
would be to dispatch an intelligent agent that would scan the Web and
bring back information on these topics:
1. Information on all packages that provide the same or similar services
as needed
2. Reviews of all packages by journals and review organizations
3. Lists of all user associations for packages that have such associations
4. Information on the finances of public software vendors
5. Information on current and past litigation filed against software
vendors
6. Information on government investigations against software vendors
7. Information on quality results by static analysis tools and other
methods
8. Information on security flaws or vulnerabilities in the package
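Given information of this kind, the agent's last step would be to reduce the candidates to a shortlist. The record fields and acceptance thresholds in the following sketch are hypothetical, chosen only to illustrate the filtering.

# Filter candidate packages returned by the agent into a shortlist.
packages = [
    {"name": "Package X", "avg_review_score": 4.2, "open_security_flaws": 0,
     "active_litigation": False, "has_user_association": True},
    {"name": "Package Y", "avg_review_score": 3.1, "open_security_flaws": 3,
     "active_litigation": True, "has_user_association": False},
]

def acceptable(pkg: dict) -> bool:
    return (pkg["avg_review_score"] >= 4.0
            and pkg["open_security_flaws"] == 0
            and not pkg["active_litigation"])

shortlist = [p["name"] for p in packages if acceptable(p)]
print(shortlist)   # -> ['Package X']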
In 2009, software vendors usually refuse to provide any quantita-
tive data at all. Information on the size of applications, on productivity,
on customer-reported bugs, and even on the results of running static
analysis tools is not released to customers, with the exception of some
open-source packages. Vendors also refuse to provide anything that passes
for a warranty or guarantee, other than something trivial or possibly
harmful (such as selling client information). Almost all software war-
ranties include specific disclaimers of any responsibilities for harm or
damages caused by bugs or security flaws.
A hidden but fundamental reason for poor software warranties is that
software controls so many key aspects of business, medicine, govern-
ment, and military operations that software failures can cause more
problems and expense than failures of almost any other kind of prod-
uct. Software bugs can cause death with medical equipment failures,
airplane and rocket malfunctions, air-traffic control failures, weapons system
failure, manufacturing shutdowns, errors in critical business data, and
scores of other really serious problems. If software companies should
ever become liable for consequential damages or business losses due to
software bugs, successful litigation could wipe out even major software
vendors.
Individuals and small companies that buy software packages at the
retail level have no power to change the very unprofessional marketing
approaches of software vendors. However, large companies, military
agencies, federal and state governments, and other large enterprises
do have enough clout to insist on changes in software package devel-
opment, warranties, guarantees, security control, quality control, and
other pertinent issues.
While intelligent agents and expert systems can help in minimizing
risks of buying packages with major quality and security flaws, it may
take government intervention to improve warranties and guarantees.
However, a good warranty would be such a powerful marketing tool
that if a major vendor such as IBM were to start to offer meaningful
warranties, all competitors would be forced to follow suit or lose most
of their business.
At the very least, software vendors should offer a full refund to dissat-
isfied customers for at least 90 days after purchase. While the vendors
might lose a small amount of money, they would probably make quite
a bit of additional revenue if this warranty were featured in their ads
and packaging.
For large clients that are acquiring major software packages from
vendors such as Microsoft, IBM, SAP, Oracle, and so forth, the following
information should be a precursor to actually leasing or purchasing a
commercial software product in 2049:
1. Size of the application in function points and lines of code
2. Quality control steps used during development
3. Security control steps used during development
4. Numbers of bugs and defects found prior to release of the product
5. Numbers of bugs and defects reported by customers of the product
6. Litigation against the vendor by dissatisfied customers
7. Anticipated customer support for major defect repairs
8. Anticipated defect repair turnaround after defects are reported
9. Guarantee of no charges to customers for reporting defects
10. Guarantee of no charges to customers for support by phone or e-mail
11. Guarantee of refund for product returns within 90 days of instal-
lation
Much of this information would rightly be regarded by the vendors
as being proprietary and confidential. However, since the information
would be going to major customers, no doubt it could be provided under
nondisclosure agreements.
The deepening and lengthening global recession is going to add new
problems to the software industry, including to commercial vendors. A
new clause that needs to be included in major software contracts from
2009 forward is what happens to the software, the warranty, and the
maintenance agreements should either the vendor or the client go
bankrupt.
Technology Selection and Technology
Transfer in 2049
Two major weaknesses of the software industry since its inception have
been that of technology selection and technology transfer. The software
industry seldom selects development methods based on solid empirical
data of success. Instead, the software industry has operated more or less
like a collection of cults, with various methods being developed by char-
ismatic leaders. Once developed, these methods then acquire converts
and disciples who defend the methods, often with little or no historical
data to demonstrate either success or failure.
Of course, some of these methods turn out to be fairly effective, or at
least effective for certain sizes and types of software. Examples of effec-
tive methods include (in alphabetical order) Agile development, code
inspections, design inspections, iterative development, object-oriented
development (OO), Rational Unified Process (RUP), and Team Software
Process (TSP). Other methods that do not seem to accomplish much
include CASE, I-CASE, ISO quality standards, and of course the tradi-
tional waterfall method. For a number of newer methods, there is not yet
enough data to be certain of effectiveness. These include extreme pro-
gramming, service-oriented architecture (SOA), and perhaps 20 more.
That few projects actually measure either productivity or quality is one
of the reasons why it is difficult to judge effectiveness.
If software is to make real progress as an engineering discipline,
rather than an art form, then measurement and empirical results need
to be more common than they have been. What would be useful for the
software industry is a nonprofit evaluation laboratory that resembles
the Consumers Union or the Underwriters Laboratory, or even the Food
and Drug Administration.
This organization would evaluate methods under controlled condi-
tions and then report on how well they operate for various kinds of
software, various sizes of applications, and various technical areas such
as requirements, design, development, defect removal, and the like.
It would be very interesting and useful to have side-by-side compari-
sons of the results of using Agile development, clean-room development,
intelligent-agent development, iterative development, object-oriented
development, rapid application development, the Rational Unified
Process (RUP), the Team Software Process (TSP), various ISO stan-
dards, and other approaches compared against standard benchmark
examples.
In the absence of a formal evaluation laboratory, a second tier for
improving software selection would be for every software project to col-
lect reliable benchmark data on productivity and quality, and to submit
it to a nonprofit clearinghouse such as the International Software
Benchmarking Standards Group (ISBSG).
Historical data and benchmarks take several years to accumulate
enough information for statistical studies and multiple regression anal-
ysis. However, benchmarks are extremely useful for measuring progress
over time, whereas evaluations at a consumer lab only deal with a fixed
point in time.
Even if development methods are proven to be visibly successful, that
fact by itself does not guarantee adoption or utilization. Normally, social
factors are involved, and most people are reluctant to abandon current
methods unless their colleagues have done so.
This is not just a software problem, but has been an issue with inno-
vation and new practices in every field of human endeavor: medical
practice, military science, physics, geology, and scores of others.
Several important books deal with the issues of technology selection
and technology transfer. Although these books are not about software,
they have much to offer to the software community. One is Thomas
Kuhn's The Structure of Scientific Revolutions. Another is
Paul Starr's The Social Transformation of American Medicine (winner
of the Pulitzer Prize in 1982). A third and very important book is Leon
Festinger's A Theory of Cognitive Dissonance, which deals with the
psychology of opinion formation.
Another social problem with technology transfer is the misguided
attempts of some executives and managers to force methodologies on
unwilling participants. Forced adoption of methodologies usually fails
and causes resentment as well.
A more effective approach to methodology deployment is to start using
the method as a controlled experiment, with the understanding that
after a suitable trial period (six weeks to six months), the method will
be evaluated and either rejected or accepted.
When this experimental approach is used with methods such as formal
inspections, it almost always results in adoption of the technique.
Another troubling issue with technology selection is the fact that
many development methods are narrow in focus. Some work best for
small applications, but are ineffective for large systems. Others were
designed with large systems in mind and are too cumbersome for small
projects and small companies (such as the higher levels of the CMMI).
It is a mistake to assume that because a methodology gives good results
for a small sample, that it will give good results for every known size
and type of software application.
One of the valuable aspects of dispatching intelligent agents is that
they may have the ability to capture and display information about
the pros and cons of popular development methods such as Agile and
TSP, as well as related topics such as CMMI, TickIT, ISO standards,
and so on.
It would be good if software practices were based on actual data and
empirical results in 2049, but this is by no means certain. Moving to
actual data will take at least 15 years, because hundreds of companies
will need to establish measurement programs and train practitioners
in effective measurement methods. Automated tools will need to be
acquired, too, and of course their costs need to be justified.
Another sociological issue that affects the software industry is that
a number of widely used measures either violate the assumptions of
standard economics, or are so ambiguous that they can't be used for
benchmarks and comparative studies. Both "lines of code" and "cost
per defect" violate economic principles, and their use for economic
analysis should probably be viewed as professional malpractice. Other metrics such
as "story points" and "use-case points" may have limited usefulness for
specific projects, but cannot be used for wide-scale economic analysis.
Neither can such measures be used for side-by-side comparisons with
projects that don't utilize user stories or use-cases.
For meaningful benchmarks and economic studies to be carried out,
either the data must be collected initially using standard metrics such
as IFPUG function points, or there should be automated conversion
tools so that metrics such as "lines of code," "story points," or "COSMIC
function points" can be converted into standard metrics. It is obvi-
ous that large-scale economic studies of either portfolios or the entire
software industry need to have all data expressed in terms of standard
metrics.
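A conversion layer of the kind described here reduces to a table of ratios. The sample ratios in the sketch below (logical source statements per IFPUG function point, and a COSMIC-to-IFPUG factor) are illustrative placeholders only, since published conversion factors vary by study, by language, and by project type.

# Sketch of a metric-normalization step: convert assorted size measures
# into IFPUG function points.
LOC_PER_FP = {"C": 128, "Java": 53, "COBOL": 107}   # assumed ratios
COSMIC_TO_IFPUG = 0.9                               # assumed factor

def to_ifpug_fp(size: float, unit: str, language: str = "") -> float:
    if unit == "IFPUG":
        return size
    if unit == "LOC":
        return size / LOC_PER_FP[language]
    if unit == "COSMIC":
        return size * COSMIC_TO_IFPUG
    raise ValueError(f"No conversion rule for unit {unit!r}")

print(to_ifpug_fp(53_000, "LOC", "Java"))   # -> 1000.0
print(to_ifpug_fp(1_200, "COSMIC"))         # -> 1080.0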
The common practice circa 2009 of using quirky and nonstandard
metrics is a sign that the software industry is not really an engineer-
ing discipline. The best that can be said about software in 2009 is
that it is a craft or art form that sometimes yields valuable results,
but often fails.
A study of technology transfer in IBM some years ago found that only
about one-third of applications were using what at the time were viewed
as being best practices. This led IBM to expend considerable resources
on improving technology transfer within the company.
Similar studies at Hewlett-Packard and ITT also revealed rather
sluggish technology transfer and extremely subjective technology acqui-
sition. These are chronic problems that need a great deal more study
on the part of sociologists, industrial psychologists, and of course the
software engineering community itself.
Enterprise Architecture and Portfolio
Analysis in 2049
Once intelligent agents, expert design tools, and expert maintenance
workbenches become widely deployed, these will open up new forms of
work that deal with higher levels of software ownership at the enter-
prise and portfolio levels.
Today in 2009, corporations and government agencies own thou-
sands of software applications developed over many years and using
scores of different architectural approaches, design methods, devel-
opment methods, and programming languages. In addition, many
applications in the portfolios may be commercial packages such as
ERP packages, office suites, financial applications, and the like. These
applications are maintained at random intervals. Most contain sig-
nificant quantities of latent bugs. Some even contain "error-prone
modules," which are highly complex and very buggy code segments
where bad-fix injection rates of new bugs introduced via changes may
top 50 percent.
It would make good business sense to dispatch the same intelligent
agents and use the same expert systems to perform a full and careful
analysis of entire portfolios. The goal of this exercise is to identify
quality and security flaws in all current applications, to map out how
current applications interact, and to place every application and its
feature set on the map of standard taxonomies and standard features
that are being used to support development from reusable components.
An additional feature that needs expert analysis and intelligent
agents is identifying the portions of software that might need updates
due to changes in various government regulations and laws, such as
changes in tax laws, changes in governance policies, changes in privacy
requirements, and scores of others. Hardly a day goes by without some
change in either state or federal laws and regulations, so only a combi-
nation of intelligent agents and expert systems could keep track of what
might be needed in a portfolio of thousands of applications.
In other words, it would be possible to perform large-scale data mining
of entire portfolios and extract all algorithms and business rules utilized
by entire corporations or government agencies. Corporate data diction-
aries would also be constructed via data mining. Since large portfolios
may include more than 10,000 applications and 10 million function
points in their entirety, this work cannot easily be done by human beings
and requires automation to be performed at all.
No doubt there would be many thousands of business rules and many
thousands of algorithms. Once extracted, these obviously need to be
classified and assembled into meaningful patterns based on various
taxonomies such as the Zachman architectural approach and also other
taxonomies such as those that define application types, feature types,
and a number of others.
Not only would this form of data mining consolidate business rules
and assist in rationalizing portfolio maintenance and governance, but
it would also introduce much better rigor in terms of economic analysis,
governance, quality control, and security controls.
A huge data dictionary and catalog could be created that showed the
impacts of all known government regulations on every application in the
corporate portfolio. This kind of work exceeds the unaided capabilities
of human beings, and only expert systems and AI tools and intelligent
agents are likely to be able to do it at all.
Few companies actually know the sizes of their portfolios in terms
of either function points or lines of code. Few companies actually know
their maintenance cost breakdowns in terms of defect repairs, enhance-
ments, and other kinds of work. Few companies know current quality
levels and security flaws in existing software. Few companies know how
many users utilize each application, or the value of the applications to
the organization.
By 2049, it is possible to envision a suite of intelligent agents and
expert systems constantly at work identifying flaws and sections of
legacy applications that need attention due to quality and security
flaws. The agents would be geographically dispersed among perhaps 50
different corporate development and maintenance locations. However,
the results of these tools would be consolidated at the enterprise level.
As this data is gathered and analyzed, it would have to be stored in
an active repository so that it could be updated essentially every day as
new applications were added, current applications were updated, and
old applications were retired. Some of the kinds of data stored in this
repository would include application size in function points and LOC,
defect and change histories, security status and known vulnerabilities,
numbers of users, features based on standard taxonomies, and relation-
ships to other applications owned by the enterprise or by suppliers or
customers to which it connects.
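A minimal sketch of one record in such a repository, mirroring the data items just listed, might look like the following; the field names and sample values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PortfolioRecord:
    """Hypothetical per-application record in an enterprise portfolio repository."""
    application_id: str
    size_function_points: int
    size_loc: int
    defect_history: List[str] = field(default_factory=list)       # defect/change log entries
    known_vulnerabilities: List[str] = field(default_factory=list)
    active_users: int = 0
    taxonomy_features: List[str] = field(default_factory=list)    # standard feature taxonomy tags
    connected_applications: List[str] = field(default_factory=list)

record = PortfolioRecord("billing-core", 12_500, 660_000,
                         known_vulnerabilities=["placeholder security finding"],
                         active_users=4_200,
                         connected_applications=["crm-main", "ledger"])
print(record.application_id, record.size_function_points)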
It is also possible to envision much better planning at the level of
enterprise architecture and portfolio management when corporate busi-
ness needs and corporate software portfolios are reliably mapped and all
known business rules and business algorithms have been consolidated
from existing portfolios via automated tools.
Software portfolios and the data they contain are simultaneously the
most valuable assets that most corporations own, and also the most trou-
blesome, error-prone, and expensive to develop, replace, and maintain.
It is obvious that software needs to migrate from a craft that builds
applications line by line to an engineering discipline that can construct
high-quality and high-security applications from standard components.
A combination of intelligent agents, expert systems, architectural meth-
ods, and several kinds of taxonomies is needed to accomplish this. In
addition, automated methods of security analysis and quality analysis
using both static and dynamic analysis should be in constant use to keep
applications secure and reliable.
Some of the business purposes for this kind of automated portfolio
analysis would include corporate governance, mergers and acquisitions,
assessing the taxable value of software assets, maintenance planning,
litigation for intellectual property and breach of contract, and of course
security and quality improvement. As the economy moves through
another recessionary year, every company needs to find ways of lower-
ing portfolio maintenance costs. Only when portfolios can be completely
scanned and analyzed by expert applications rather than by human
beings can really significant economies be realized.
Portfolio analysis is especially important in the case of mergers and
acquisitions between large corporations. Attempting to merge the port-
folios and software organizations of two large companies is a daunt-
ing task that often damages both partners. Careful analysis of both
portfolios, both data dictionaries, and both sets of business rules and
algorithms needs to be carried out, but is very difficult for unaided
human beings. Obviously, intelligent agents and expert systems would
be very helpful both for due diligence and later when the merger actu-
ally occurs.
At the level of enterprise architecture and portfolio analysis, graphi-
cal representations would be valuable for showing software usage and
status throughout the enterprise. A capability similar to that used today
for Google Earth might start with a high-level view of the entire cor-
poration and portfolio, and then narrow the view down to the level of
individual applications, individual business units, and possibly even
individual functions and users.
The main difference between Google Earth and an overall representa-
tion of a corporate portfolio is that the portfolio would be shown using
animation and real-time information. The idea is to have continuous
animated representation of the flow of business information from unit
to unit, from the company to and from suppliers, and also to and from
customers.
One additional point is significant. Software portfolios are taxable
assets in the view of the Internal Revenue Service. There is frequent
tax litigation after mergers and acquisitions that deals with the origi-
nal development costs of legacy applications. It would be prudent from
the point of view of minimizing tax consequences for every company to
know the size of each application in the portfolio, the original develop-
ment cost, and the continuous costs of maintenance and enhancements
over time.
A Preview of Software Learning in 2049
Because technology transfer is a weak link in 2009, it is interesting to
consider how software topics might be learned by software profession-
als in 2049.
Considering technologies that are available in 2009 and projecting
them forward, education and learning are likely to be very different
in the future. This short discussion provides a hypothetical scenario of
learning circa 2049.
Assume that you are interested in learning about current software
benchmarks for productivity and quality circa 2049.
By 2049, almost 100 percent of all published material will be available
online in various formats. Conversion from one format to another will
be common and automatic. Automatic translation from one language to
another such as Russian to English will no doubt be available, too.
Copyrights and payments for published material will hopefully be
resolved by 2049. Ideally, text mining of this huge mass of material will
have established useful cross-references and indexing across millions
of documents.
First, your computer in 2049 will probably be somewhat different
from today's normal computers. It will perhaps have several screens and
also independent processors. One will be highly secure and deal primar-
ily with web access, while the other, also secure, will not be directly con-
nected to the Web. The second unit is available for writing, spreadsheets,
graphics, and other activities. Hardware security will be a feature of
both processors.
Computer keyboards may still exist, but no doubt voice commands
and touch-screens will be universally available. Since the technology
of creating 3-D images exists today, you may also have the capability
of looking at information in 3-D form, with or without using special
glasses. Virtual reality will no doubt be available as a teaching aid.
Because reading in a fixed position is soon tiring, one of the screens
or a supplemental screen will be detachable and can be held like a book.
The most probable format is for a screen similar to today's Amazon
Kindle or Sony PRS-505. These devices are about the size and shape of
a paperback book. No doubt by 2049, high-resolution graphics and full
colors will also be available for e-books, and probably animation as well.
Voice commands and touch screens will probably be standard, too.
Batteries will be more effective in 2049 as well, and using a hand-held
device for eight to ten hours on battery power should be the norm rather
than an exception as it is in 2009.
Other technical changes might modify the physical appearance of
computers. For example, flat and flexible screens exist in 2009, as
do eyeglasses that can show images on the lenses. Regardless of the
physical shape of computers, access to the Web and to online infor-
mation will remain a major function; security will remain a major
problem.
By 2049, basically all information will be online, and you will have a
personal avatar librarian available to you that is programmed with all
of your major interests. On a daily basis you will have real-time sum-
maries of changes in the topics that concern you.
You start your search for benchmark information by entering your
personal learning area. The area might appear to be a 3-D image of your
favorite campus with trees, buildings, and avatars of other students
and colleagues.
You might begin by using a voice or keyed query such as "Show me
current software productivity and quality benchmarks."
Your avatar might respond by asking for additional information to
narrow the search, such as: "Do you want development, maintenance,
customer support, quality, or security benchmarks?" You might narrow
the focus to "development productivity benchmarks."
A further narrowing of the search might be the question, "Do you want
web applications, embedded software, military software, commercial
applications, or some specific form of software?"
You might narrow the issue to "embedded software." Your avatar
might then state, "The International Software Benchmarking Standards
Group has 5,000 embedded applications from the United States, 7,500
from China, 6,000 from Japan, 3,500 from Russia, and 12,000 from other
countries. There are also 5,000 embedded benchmarks from other orga-
nizations. Do you want overall benchmarks, or do you wish to compare
one country with another?"
You might respond by saying "I'm interested in comparing the United
States, Japan, China, India, and Russia. For consistency, use only the
ISBSG benchmark data."
The avatar might also ask, "Are you interested in specific languages
such as E, Java, Objective C, or in all languages?" In this case, you might
respond with "all languages."
The avatar might also ask, "Are you interested in specific methods
such as Agile and Team Software Process, or in capability maturity
levels?" You might respond by saying, "I'm interested in comparing Agile
against Team Software Process."
Your avatar might then say, "For embedded applications about 1,000
in each country used Agile methods and about 2,000 used TSP methods.
Almost all embedded applications were at CMMI level 3 or higher."
At this point, you might say something like, "Create graphs that com-
pare productivity levels by country for embedded applications
between 1,000 and 25,000 function points in size. Show a comparison
of Agile and TSP methods. Also show the highest productivity levels for
embedded applications of 1,000, 5,000, and 10,000 function points."
Within a few seconds, your initial set of graphs will be displayed. You
might then decide to refine your search by asking for annual trends
for the past ten years, or by including other factors such as looking at
military versus civilian embedded applications.
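Underneath the conversational interface, the avatar's final step is ordinary filtering and grouping of benchmark records. A rough sketch follows, using invented sample data rather than real ISBSG benchmarks.

# Rough sketch of the filtering/grouping behind the avatar's benchmark query.
records = [
    {"country": "United States", "type": "embedded", "method": "TSP",
     "size_fp": 5_000, "fp_per_staff_month": 11.0},
    {"country": "Japan", "type": "embedded", "method": "Agile",
     "size_fp": 2_000, "fp_per_staff_month": 9.5},
    {"country": "United States", "type": "embedded", "method": "Agile",
     "size_fp": 1_500, "fp_per_staff_month": 8.8},
]

def query(records, countries, methods, min_fp, max_fp):
    selected = [r for r in records
                if r["country"] in countries
                and r["type"] == "embedded"
                and r["method"] in methods
                and min_fp <= r["size_fp"] <= max_fp]
    groups = {}
    for r in selected:   # average productivity per (country, method) pair
        groups.setdefault((r["country"], r["method"]), []).append(r["fp_per_staff_month"])
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

print(query(records, {"United States", "Japan"}, {"Agile", "TSP"}, 1_000, 25_000))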
You might also ask your avatar librarian for the schedules of upcom-
ing webinars and seminars on benchmarks. You might also ask for sum-
mary highlights of webinars and seminars on benchmarks held within
the past six months.
At this point, you might also ask your avatar to send copies of
the graphs to selected colleagues who are working in the same area
of research. No doubt by 2049, all professionals will be linked into a
number of social networks that deal with topics of shared interest.
These networks occur already in 2009 using commercial services such
as Plaxo, LinkedIn, various forums, wiki groups, and other means. But
today's networks are somewhat awkward for sharing large volumes of
information.
Although this scenario is hypothetical and may not occur, the major
differences between learning in 2049 and learning in 2009 are likely to
include these key topics:
1. Much better security of computers than what is available in 2009.
2. The existence of AI avatars or intelligent agents that can assist
in dealing with vast quantities of information based on profiles of
personal interests.
3. Much better indexing and cross-referencing capabilities among
documents than what is available in 2009.
4. Workable methods for dealing with copyrights and payments across
millions of documents.
5. The accumulation of private "libraries" of online information that
meet your personal criteria. To be useful, intelligent agents will
create cross-references and indexes across your entire collection.
The bulk of the information will be available online, and much of
it can be accessed from hand-held devices equivalent to the Kindle
as well as from your computers, smart phones, and other wireless
devices.
6. Schedules of all webinars, seminars, and other forms of communication
on topics that match your personal interest profiles.
These can either be viewed as they occur, or stored for later viewing.
Your personal avatar librarian can also extract relevant informa-
tion in summary form.
7. Existence of specialized social networks that allow colleagues to
communicate and share research and data in topics such as soft-
ware productivity, security, quality, and other key issues.
8. Existence of virtual communities associated with social networks so
that you and your colleagues can participate in online discussions
and meetings in virtual environments.
9. Utilization of standard taxonomies of knowledge to facilitate orga-
nizing millions of documents that cover thousands of topics.
10. The development of fairly sophisticated filters to separate low-value
information from high-value information. For example, articles on
productivity that lack quantitative data would probably be of lower
value than articles containing quantitative data.
In 2009, vast quantities of data and information are available on
the Web and Internet. But the data is chaotic, unstructured, and very
inconsistent in terms of intellectual content. Hopefully by 2049, a com-
bination of standard taxonomies, metadata, and the use of avatars and
intelligent agents will make it possible to gather useful information on
any known topic by filtering out low-value data and condensing high-
value data into meaningful collections.
Also by 2049, hundreds of colleagues in various fields will be linked
together into social networks that enable them to share data on a
daily basis, and to rapidly examine the state of the art in any field of
knowledge.
With so much information available, copyright and payment methods
must be robust and reliable. Also, security of both personal data collections
and libraries of online documents must be very robust compared with 2009
norms. Much of the information may be encrypted. Hardware security
methods will probably augment software security methods. But the key
topic for extracting useful information from billions of source documents
will be the creation of intelligent agents that can act on your behalf.
Due Diligence in 2049
Although the recession has slowed down venture capital investments and
brought software IPOs almost to a standstill, it has not slowed down merg-
ers and acquisitions. In fact, several merger companies such as Corum had
record years in 2008, which is counter to the recessionary trend.
Whenever due diligence is required, and it is always required for
mergers and acquisitions and private investments, it is obvious that
the combination of intelligent agents and expert systems would be dis-
patched to evaluate the portfolios and applications of both parties.
If the companies are medium to large in size, then they will each own
more than 1,000 applications that total to more than 1 million function
points. Really big companies can own ten times as much software as
this. Due diligence of an entire portfolio is far too difficult for unaided
humans; only intelligent agents and expert systems can handle such
large volumes of software.
After a merger is complete, both the software portfolios and software
development organizations of both parties will need to be consolidated,
or at least some applications will need to operate jointly.
Therefore, every application should be examined by intelligent agents
and expert systems for security flaws, latent defects, interfaces, pres-
ence of data dictionaries, reusable materials, and many other topics.
For venture investments in startup companies with perhaps only
one or two software applications, expert analysis of the software's qual-
ity, security vulnerabilities, and other topics would assist in judging
whether the investment is likely to be profitable, or may end up with
negative returns.
As previously mentioned, software is a taxable asset. Therefore, every
software application needs to keep permanent records of size, original
development costs, maintenance and enhancement costs, marketing costs,
and other financial data. Quality and reliability data should be kept too,
for aid in defense against possible lawsuits from clients or users.
Some of the topics that need to be evaluated during due diligence
activities include, but are not limited to, the following:
1. Protection of intellectual property in software assets (patents, trade
secrets)
2. On-going litigation (if any) for breach of contract, taxes, and so on
3. Benchmarks of productivity and quality for past applications
4. Quality control methods used for software development
5. Data on defects and reliability of legacy software
6. Data on customer satisfaction of legacy software
7. Security control methods used in software applications (encryption,
E, etc.)
8. Security control methods used at the enterprise level (firewalls,
antivirus, etc.)
9. Existence of business rules, algorithms, and so on, for legacy appli-
cations
10. Enterprise architectural schema
11. Open-source applications used by the companies
12. Similar applications owned by both companies
13. How easily applications can be modified
14. Architectural compatibilities or differences
15. Compensation differences between organizations
Unless a company is a conglomerate and frequently acquires other
companies, the logistics of due diligence can be daunting. Professional
advice is needed from attorneys and also from specialists in mergers
and acquisitions. Additional advice may be needed from security and
quality consultants, and also advice from architecture specialists may
be needed.
By 2049, a combination of intelligent agents and AI tools should
also be available to assist in due diligence for mergers, venture capital
investments, and other key business purposes.
Certification and Licensing in 2049
Certification and licensing of software personnel are controversial topics.
Certification and licensing were controversial in the medical and legal
fields as well. If certification exists, then the opposite case
of decertification for malpractice would also exist, which is even more
contentious and controversial.
The history of medical certification is spelled out in Paul Starr's book
The Social Transformation of American Medicine, which won a Pulitzer
Prize in 1982. Since medicine in 2009 is the most prestigious learned
profession, it is interesting to read Starr's book and consider how medi-
cal practice in the 1850s resembled software circa 2009.
Curricula for training physicians were two-year programs, and there
were no residencies or internships. Many medical schools were run for
profit and did not require college degrees or even high school diplomas
for entry. Over half of U.S. physicians never went to college.
During training in medical schools, most physicians never entered
a hospital or dealt with actual patients. In addition to medical schools
that taught "standard" medical topics, a host of arcane medical schools
taught nonstandard medicine such as homeopathy. There were no legal
distinctions among any of these schools.
Hospitals themselves were not certified or regulated either, nor were
they connected to medical schools. Many hospitals required that all
patients be treated only by the hospital's staff physicians. When patients
entered a hospital, they could not be treated or even visited by their
regular physicians.
These small excerpts from Paul Starr's book illustrate why the
American Medical Association was formed, and why it wished to improve
physician training and also introduce formal specialties, licensing, and
certification. As it happened, it required about 50 years for the AMA to
achieve these goals.
If certification and licensing should occur for software, the approach
used for medical certification is probably the best model. As with early
medical certification, some form of "grandfathering" would be needed
for existing practitioners who entered various fields before certification
began.
An interesting question to consider is: What are the actual
topics that are so important to software engineering that certification
and licensing might be of value? In the medical field, general practitio-
ners and internists deal with the majority of patients, but when certain
conditions are found, patients are referred to specialists: oncology for
cancer, cardiology, obstetrics, and so forth. There are currently 24 board-
certified medical specialties and about 60 total specialties.
For software engineering, the topics that seem important enough to
require specialized training and perhaps examinations and board cer-
tification are the following:
1. General software engineering
2. Software maintenance engineering
3. Software security engineering
4. Software quality engineering
5. Large-system engineering (greater than 10,000 function points)
6. Embedded software engineering
7. Business software engineering
8. Medical software engineering
9. Weapons-system software engineering
10. Artificial-intelligence software engineering
There would also be some specialized topics where the work might or
might not be performed by software engineers:
1. Software metrics and measurement
2. Software contracts and litigation
3. Software patents and intellectual property
4. Software customer training
5. Software documentation and HELP information
6. Software customer support
7. Software testing and static analysis
8. Software configuration control
9. Software reusability
10. Software pathology and forensic analysis
11. Software due diligence
12. Data and business rule mining
13. Deployment of intelligent agents
As time goes by, other topics would probably be added to these lists.
The current set considers topics where formal training is needed, and
where either certification or licensing might possibly be valuable.
As of 2009, more than a dozen software topics have various forms of
voluntary certification available. Some of these include software project
management, function point counting (for several flavors of function
points), Six Sigma, testing (several different certificates by different
groups), Zachman architectural method, and quality assurance.
As of 2009, there seems to be no legal distinction between certified
and uncertified practitioners in the same fields. There is not a great
deal of empirical data on the value of certification in terms of improved
performance. An exception is that some controlled studies have dem-
onstrated that certified function-point counters have higher accuracy
levels than uncertified function-point counters.
By 2049, no doubt other forms of certification will exist for software,
but whether software will achieve the same level of training, licensing,
and certification as medicine is uncertain.
In 2009, about one-third of large software projects are terminated
due to excessive cost and schedule overruns. A majority of those that
are finished run late and exceed their budgets. When delivered, almost
all software applications contain excessive quantities of defects and
numerous very serious security flaws.
It is obvious from the current situation that software is not a true
engineering discipline in 2009. If software engineering were a true dis-
cipline, there would not be so many failures, disasters, quality problems,
security flaws, and cost overruns.
If software engineering should become a licensed and certified occupa-
tion, then the issue of professional malpractice will become an impor-
tant one. Only when the training and performance of software personnel
reaches the point where project failures drop below 1 percent and defect
removal efficiency approaches 99 percent would "software engineering"
performance be good enough to lower the odds of wiping out the industry
due to malpractice charges. In fact, even 2049 may be an optimistic date.
Software Litigation in 2049
Litigation seems to operate outside the realm of the rest of the economy;
lawsuits for various complaints will apparently increase no matter
what the recession is doing. The author often works as an expert wit-
ness in software breach-of-contract litigation, but many other kinds of
litigation occur, including, but not limited to, the following:
1. Patent or copyright violations
2. Tax litigation on the value of software assets
3. Theft of intellectual property
4. Plagiarism or copying code and document segments
5. Violations of noncompetition agreements
6. Violations of nondisclosure agreements
7. Fraud and misrepresentation by software vendors
8. Fraud and misrepresentation by software outsourcers
9. Damages, death, or injuries caused by faulty software
10. Recovery of stolen assets due to computer fraud
11. Warranty violations for excessive time to repair defects
12. Litigation against executives for improper governance of software
13. Litigation against companies whose lax security led to data theft
14. Antitrust suits against major companies such as Microsoft
15. Fraud charges and suits against executives for financial
irregularities
The legal and litigation arena has much to offer the software com-
munity when it comes to searching and consolidating information. The
legal reference firm of Lexis-Nexis is already able to search more than
5 million documents from more than 30,000 sources in 2009. Not only
that, but legal information is already cross-indexed and much easier to
use for tracing relevant topics than software literature is.
From working as an expert witness in a number of lawsuits, the
author finds it very interesting to see how trial attorneys go about
their preparation. On the whole, a good litigator will know much more
about the issues of a case than almost any software engineer or software
manager knows about the issues of a new software application. In part
this is due to the excellent automation already available for searching
legal materials, and in part it is due to the organizations and support
teams in law firms, where paralegals support practicing attorneys in
gathering key data.
Even the structure of a lawsuit might be a useful model for structur-
ing software development. The first document in a lawsuit is a com-
plaint filed by the plaintiff. Since most software applications are started
because of dissatisfaction with older legacy applications or dissatis-
faction with particular business practices, using the format of a legal
complaint might be a good model for initial requirements.
During the discovery period of a lawsuit, the defendants and the
plaintiffs are asked to provide written answers to written questions
prepared by the attorneys, often with the assistance of expert wit-
nesses. A discovery phase would be a good model for gathering more
detailed requirements and initial design information for software
projects.
At some point between the initial complaint and the completion of
the discovery phase, expert witnesses are usually hired to deal with
specific topics and to assist the lawyers in writing the deposition ques-
tions. The experts also write their own expert-opinion reports that draw
upon their knowledge of industry topics. For software litigation, experts
in quality control and software costs are often used. During software
projects, it would also be useful to bring in outside experts for critical topics such as security and quality, where in-house personnel may not be fully qualified.
After the discovery phase is complete, the next phase of a lawsuit
involves depositions, where the defendants, plaintiffs, witnesses, and
experts are interviewed and examined by attorneys for both sides of the
case. There is no exact equivalent to depositions in most software devel-
opment projects, although some aspects of quality function deployment
(QFD) and joint application design (JAD) do have slight similarities in
that they involve personnel with many points of view trying to zero in
on critical issues in face-to-face meetings.
Depositions are where the real issues of a lawsuit tend to surface.
Good litigators use depositions to find out all of the possible weaknesses
of the opposing side's case and personnel. It might be very useful to have
a form of deposition for large software projects, where stakeholders and
software architects and designers were interviewed by consultants who
played the parts of both plaintiff and defendant attorneys.
The value of this approach for software is that someone would play the
role of a devil's advocate and look for weaknesses in architecture, devel-
opment plans, cost estimates, security plans, quality plans, and other
topics that often cause major software projects to fail later on. Usually,
software projects are one-sided and tend to be driven by enthusiasts
who don't have any interest in negative facts. The adversarial roles of
plaintiff and defendant attorneys and expert witnesses might stop a lot
of risky software projects before they get out of control or consume so much money that cancellation would be a major financial loss.
For software, it would be useful if we could achieve the same level of
sophistication in searching out facts and insights about similar projects
that lawyers have for searching out facts about similar cases.
Once intelligent agents and expert systems begin to play a role in soft-
ware development and software maintenance, they will of course also play
a role in software litigation. A few examples of how intelligent agents and
expert systems can support software litigation are shown next:
The search engines used by Lexis-Nexis and other litigation support
groups are already somewhat in advance of equivalent search capa-
bilities for software information.
Software cost-estimating tools are already in use for tax cases, where
they are used to model the original development costs of applications for which no historical cost data was collected.
Static analysis of code segments in litigation where allegations of
poor quality or damages are part of the plaintiff claims should add a
great deal of rigor to either the side of the plaintiff or the side of the
defendant.
A new kind of litigation may soon appear. This is litigation against com-
panies whose data has been stolen, thus exposing thousands of custom-
ers or patients to identity theft or other losses. Since the actual criminals
may be difficult to apprehend, may live in other countries, or may even be other national governments, such litigation may instead blame the company whose data was stolen for inadequate security precautions. This is a
form of consequential damages, which are seldom allowed by U.S. courts.
But if such litigation should start, it would probably increase rapidly.
Static analysis and other expert systems could analyze the applications
from which the data was stolen and identify security flaws.
Automatic sizing methods for legacy applications that create func-
tion point totals can be used for several kinds of litigation (tax cases,
breach of contract) to provide comparative information about the
application involved in the case and similar applications. Size corre-
lates with both quality and productivity, so ascertaining size is useful
for several kinds of litigation.
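To make the preceding point concrete, a rough size approximation for a legacy application can be produced by "backfiring" from logical code size to function points. The sketch below, in Python, is a minimal illustration under assumed conversion ratios; the ratios, language list, and function names are placeholders for demonstration, not calibrated values or the interface of any actual sizing tool.

# Hypothetical backfiring sketch: approximate function points from
# logical source statements. The ratios are illustrative assumptions.
LOC_PER_FUNCTION_POINT = {
    "cobol": 105,   # assumed logical statements per function point
    "c": 128,
    "java": 53,
}

def backfire_function_points(logical_loc: int, language: str) -> float:
    """Convert a logical statement count into an approximate function point total."""
    return logical_loc / LOC_PER_FUNCTION_POINT[language.lower()]

# Example: a 250,000-statement COBOL system involved in a tax case.
print(round(backfire_function_points(250_000, "cobol")))  # about 2381

An approximation of this kind is cruder than a certified function point count, but it is often sufficient to place an application in the right size range for comparison with industry benchmarks.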
A new form of software cost-estimating tool (used as an example in
this chapter) can predict the odds of litigation occurring for outsource
and contract software development. The same tool predicts delivered
defects and problems encountered by users when attempting to install
and use buggy software.
The same software cost-estimating tool used in this chapter, already
operational in prototype form in 2009, can predict the costs of litigation
for both the plaintiff and defendant. It often happens that neither party
entering litigation has any idea of the effort involved, the costs involved,
the interruption of normal business activities, and the possible freezing
of software projects. The prototype estimates legal effort, expert-witness
effort, employee and executive effort, and the probable duration of the
trial unless it settles out of court.
Static analysis tools can be used to find identical code segments
in different applications, in cases involving illegal copying of code.
(Occasionally, software companies deliberately insert harmless errors
or unusual code combinations that can serve as telltale triggers in
case of theft. These can be identified using intelligent agents or as
special factors for static analysis tools.)
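The segment-matching idea described above can be illustrated with a very small amount of tooling. The Python sketch below assumes a fingerprinting approach: source text is normalized so that renamed identifiers still match, and hashes of fixed-length token windows are compared across two code bases. The function names, window length, and toy fragments are invented for illustration and do not represent the workings of any specific static analysis product.

import re

def normalize(source: str) -> list[str]:
    """Strip comments and collapse identifiers/numbers so renamed copies still match."""
    source = re.sub(r"/\*.*?\*/", " ", source, flags=re.DOTALL)  # block comments
    source = re.sub(r"//[^\n]*", " ", source)                    # line comments
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", source)
    return ["ID" if re.match(r"[A-Za-z_]", t) else "NUM" if t.isdigit() else t
            for t in tokens]

def fingerprints(tokens: list[str], window: int = 12) -> set[int]:
    """Hash every fixed-length token window; shared hashes flag identical segments."""
    return {hash(tuple(tokens[i:i + window])) for i in range(len(tokens) - window + 1)}

def shared_segment_ratio(code_a: str, code_b: str) -> float:
    """Fraction of A's windows that also appear in B (rough copying indicator)."""
    fa, fb = fingerprints(normalize(code_a)), fingerprints(normalize(code_b))
    return len(fa & fb) / max(len(fa), 1)

# Toy example: the second fragment is a renamed copy of the first.
original = "int total(int x, int y) { return x + y; }"
suspect = "int sum(int a, int b) { return a + b; } /* renamed copy */"
print(shared_segment_ratio(original, suspect))  # 1.0 for this toy case

In a real dispute such scores would only be a starting point; an expert would still need to examine the matching segments and rule out material that both parties legitimately obtained from common sources.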
A combination of static analysis tools and other forms of intelligent
agents can be used to search out prior knowledge and similar designs
in patent violation cases.
Software benchmarks and software quality benchmarks can be used
to buttress expert opinions in cases of breach of contract or cases
involving claims of unacceptable quality levels.
For litigation, depositions are held face-to-face, and the statements
are taken down by a court stenographer. However, for software meet-
ings and fact-gathering in 2049, more convenient methods might be
used. Many meetings could take place in virtual environments where
the participants interact through avatars, which could either be
symbolic or actually based on images of the real participants. Court
stenographers would of course not be necessary for ordinary discus-
sions of requirements and design for software, but it might be of
interest to record at least key discussions using technologies such
as Dragon NaturallySpeaking. The raw text of the discussions could
then be analyzed by an expert system to derive business rules, key
algorithms, security and quality issues, and other relevant facts.
A powerful analytical engine that could examine source code, perform
static analysis, perform cyclomatic and essential complexity analysis,
seek out segments of code that might be copied illegally, quantify size
in terms of function points, examine test coverage, find error-prone
modules, look for security flaws, look for performance bottlenecks, and
perform other kinds of serious analysis would be a very useful support
tool for litigation, and also for maintenance of legacy applications.
The pieces of such a tool exist in 2009, but are not all owned by one
company, nor are they yet fully integrated into a single tool.
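One building block of such an engine, cyclomatic complexity, is simple enough to sketch. The Python fragment below assumes a control-flow graph supplied as an edge list and applies McCabe's formula M = E - N + 2P, where P is the number of connected components; the graph representation and node names are invented for illustration.

from collections import defaultdict

def cyclomatic_complexity(edges: list[tuple[str, str]]) -> int:
    """McCabe complexity M = E - N + 2P for a control-flow graph given as an edge list."""
    nodes = {n for edge in edges for n in edge}
    adjacency = defaultdict(set)
    for a, b in edges:             # treat the graph as undirected for component counting
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:               # depth-first traversal of one component
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adjacency[node] - seen)
    return len(edges) - len(nodes) + 2 * components

# Routine with one if/else decision and one loop: E=8, N=7, P=1, so M=3.
flow = [("entry", "if"), ("if", "then"), ("if", "else"),
        ("then", "loop"), ("else", "loop"),
        ("loop", "body"), ("body", "loop"), ("loop", "exit")]
print(cyclomatic_complexity(flow))  # 3

The other capabilities listed above, such as function point quantification, copied-segment detection, and security scanning, would each require far more machinery, which is exactly why assembling them into one integrated engine remains an open task.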
Software litigation is unfortunate when it occurs, and also expensive
and disruptive of normal business. Hopefully, improvements in quality
control and the utilization of certified reusable material will reduce breach
of contract and warranty cases. However, tax cases, patent violations, theft
of intellectual property, and violations of employment agreements can
occur no matter how the software is built and maintained.
In conclusion, the software industry should take a close look at the legal
profession in terms of how information is gathered, analyzed, and used.
Summary and Conclusions
A major change in development between 2009 and 2049 will be that the
starting point circa 2049 assumes the existence of similar applications that
can be examined and mined for business rules and algorithms. Another
major change is the switch from custom design and line-by-line coding to
construction from reusable designs and reusable code segments.
For these changes to occur, new kinds of design and development support tools will be needed that can analyze existing applications and extract
valuable information via data mining and pattern matching. Intelligent
agents that can scan the Web for useful data and patent information are
also needed. Not only patents, but government rules, laws, international
standards, and other topics also need intelligent agents.
A final change is that every application circa 2049 should routinely collect data for productivity, quality, and other benchmarks. Some tools are available for these purposes in 2009, as are the ISBSG questionnaires, but they are not yet as widely utilized as they should be.
The goal of the software industry should be to replace custom design
and labor-intensive line-by-line coding with automated construction
from zero-defect materials.
As the global economy continues another year of recession, all com-
panies need to find ways of reducing software development and main-
tenance costs. Line-by-line software development is near the limit of its effective productivity, and it has seldom achieved adequate quality or security. New methods are needed that replace custom design and custom
line-by-line coding with more automated approaches.
Maintenance and portfolio costs also need to be reduced, and here too
intelligent agents and expert systems that can extract latent business
rules and find quality and security flaws are on the critical path for
improving software portfolio economics and security.
Readings and References
Festinger, Leon. A Theory of Cognitive Dissonance. Palo Alto, CA: Stanford University
Press, 1957.
Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago
Press, 1970.
Pressman, Roger. Software Engineering: A Practitioner's Approach, Sixth Edition. New York: McGraw-Hill, 2005.
Starr, Paul. The Social Transformation of American Medicine. New York: Basic Books, 1982.
Strassmann, Paul. The Squandered Computer. Stamford, CT: Information Economics
Press, 1997.