Research Evolution in Software Development
Introduction
Software development research has changed significantly over the past 20 years. While in the late 1980s researchers were concerned with user acceptance of computers in general (Stegelin & Novak, 1986), in recent years researchers have been more concerned with why people accept certain software products and reject others (Khanfar et al., 2008; Garrity et al., 2007).
This literature review explores the changes in research methods by taking a sample of research articles from the late 1980s and articles from recent years.
This literature review examines 27 peer-reviewed articles relating to software development. Sixteen of these articles were written in the past five years and eleven were written in the late 1980s. Using these articles, I will examine how the software development research process has changed over time.
By reviewing the differences in software development research, I will get a picture of where software development research is going in the future. In this review I will look at information such as sample size, methodology, and the statistical analysis being performed, as well as other factors such as the use of students in research. This paper will show the trends in software development research and how they affect researchers.
Literature Review
Research Methods
Although the basic way people conduct research has not changed over the past 20 years, the tools available to researchers and the methods they tend to use have. In my study of 27 articles, I found that 54% of the articles written before 1990 were qualitative, while only 6% of the current articles written in the past five years used a qualitative method.
Quantitative methods are being used more frequently in recent years. This trend may reflect the ease with which data can now be tabulated by computer, or it may be due to a preference by researchers for hard data. When Montazemi researched user satisfaction (1988), he chose to interview people. In these seminal interviews he found information that quantitative research could have missed. Montazemi, for example, found that 20% of people would have preferred a query system that gave them "what-if" scenarios. Today, this type of research would fall under the separate topic of decision support systems. In contrast, when Garrity et al. (2007) researched user satisfaction with websites, they chose a quantitative study. With modern tools, Garrity et al. were able to perform a more extensive statistical analysis using techniques such as average variance extracted, sum of squares, and the F-test (p. 27). This type of analysis speaks to the growing maturity of software development research.
Modern researchers have more tools to analyze data, and quantitative data lends itself more readily to this type of analysis. Modern tools make it possible to analyze more data; many of these tools did not exist 20 years ago. There are also more categories of research than there were 20 years ago. Within software development, researchers are now looking at project management, testing, security, websites, and usability, to name a few.
Sample Populations
Comparing research between the two periods shows that surveys today can have far larger populations. The sample sizes from twenty years ago, shown in Table 1, are far smaller than the sample sizes from more recent times, shown in Table 2. The average sample size for research from 20 years ago is 113; the average sample size for research done today is 7,150. This is primarily due to the availability of data. Most of the data collected 20 years ago was obtained through face-to-face surveys. One exception was a museum experiment in which museums were equipped with a hypertext display that allowed visitors to review information about the museum. The data collection was limited when one of the two monitors being used broke (Shneiderman et al., 1989, p. 49). The price of monitors has come down in recent years, and this problem would most likely be fixable in today's environment.
Today researchers have access to more tools that give them a much greater ability to review data. Crowston and Scozzi (2008), for example, were able to collect data on over 100,000 open source projects. This type of data collection would have been unheard of 20 years ago. Large-scale systems like the Internet were not in place then, and the ability to harvest vast quantities of data was nonexistent.
Researchers now have new techniques for distributing surveys. Phone and mail surveys can be expensive in both time and money; modern surveys can be done more cheaply and efficiently. When one group of researchers, for example, wanted a large sample size, they sent emails to 10,000 people, and 3,276 people responded (Tam & Ho, 2005, p. 280). First-class postage at the time of this writing is 42 cents, so sending this survey through the mail would cost $4,200. This type of expense is out of the range of many research projects.
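As a quick back-of-the-envelope check of those figures (a minimal sketch; the per-response comparison assumes, purely for illustration, that a mail survey would draw the same number of responses as the email one):

```python
# Back-of-the-envelope check of the survey cost figures discussed above.
recipients = 10_000            # people contacted (Tam & Ho, 2005)
responses = 3_276              # responses reported (Tam & Ho, 2005, p. 280)
postage_per_letter = 0.42      # first-class postage in USD at the time of writing

mail_cost = recipients * postage_per_letter
print(f"Total postage: ${mail_cost:,.2f}")                     # $4,200.00
# Assumes, only for illustration, the same response count as the email survey.
print(f"Postage per response: ${mail_cost / responses:.2f}")   # about $1.28
```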
When looking only at direct surveys of people, without the help of modern data collection methods, there is virtually no difference between the two time periods. The data collected in the 1980s surveyed, on average, 123 people, while the data collected in the past few years surveyed, on average, 124. Interviewing and surveying people clearly takes longer than more modern methods. While these interviews may be necessary in qualitative research, the amount of data collected can be quite small compared with mining already existing information systems or conducting email surveys.
Statistical Analysis
The literature from the two periods did not show a change in the way statistics are done; the articles contained a wide variety of methods. Earlier research involved more averages. This can be attributed, however, to the number of qualitative studies, which lend themselves more readily to simple averages. One thing that is apparent in the studies is that modern research employs more statistics.
With the advent of research software, researchers are able to take the same data and quickly run multiple statistical analyses to help determine the best analysis. Table 3 shows the statistical methods used in research from the 1980s; in Table 3 there is only one test performed on each set of data. In more recent studies, as seen in Table 4, researchers performed more kinds of statistical analysis. In the case of Banker et al. (2006), the group performed three tests to analyze their data.
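To illustrate what this looks like in practice (a minimal sketch with invented data, not the analysis from any of the reviewed studies), a researcher today can run several tests over the same dataset in a few lines of code:

```python
# Illustrative sketch only: invented data, not from any reviewed study.
# Shows how modern tooling lets a researcher run several tests on one dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)   # e.g., satisfaction scores, system A
group_b = rng.normal(loc=5.5, scale=1.2, size=30)   # e.g., satisfaction scores, system B

# Test 1: two-sample t-test on the group means
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Test 2: F-test via one-way ANOVA (equivalent for two groups)
f_stat, f_p = stats.f_oneway(group_a, group_b)

# Test 3: Pearson correlation between paired observations
r, r_p = stats.pearsonr(group_a, group_b)

print(f"t-test:       t={t_stat:.2f}, p={t_p:.3f}")
print(f"ANOVA F-test: F={f_stat:.2f}, p={f_p:.3f}")
print(f"correlation:  r={r:.2f}, p={r_p:.3f}")
```

What once required separate hand calculations can now be rerun in seconds, which is consistent with the pattern of multiple tests per dataset seen in the more recent articles.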
The analysis showed a propensity toward using multiple statistical analyses in more recent research. This may be due to more detailed work on the subject. Modern research delves into subjects such as the usability of websites, while research from 20 years ago delved into broader questions such as the acceptance of computers in the workplace.
Software products such as SPSS make it possible to take a dataset and mine it for correlations. Garrity et al. (2007) did this when they looked at three different statistics to come to their conclusion. The trend now is to do a more in-depth analysis. Researchers have the ability to gather vast amounts of data and perform complex statistical analyses on them. Researchers 20 years ago did not have this option; for them, analysis often had to be done by hand. This naturally led to more work on the part of the researchers. Performing statistical analysis by hand is a time-consuming process: while a complex statistical analysis might take hours today, in the past it could take days. Statistical analysis without the benefit of statistical software can also be riddled with mathematical mistakes. Human error plays a larger role when surveys are hand coded and the data is then analyzed by hand.
Response Rates
Response rates between the two time periods vary. In the 1980s, researchers relied more on college students who were required to take the surveys (Jarvenpaa et al., 1988; Dos Santos & Bariff, 1988; Kirs et al., 1989). They also relied more often on people volunteering (Jarvenpaa et al., 1988). These types of participants often do not represent the average person in the population. When a general call is made for volunteers, or when participation is compulsory, there is no response rate to report; we see only two response rates in the earlier surveys.
Only two of the ten earlier research articles reported response rates, and those rates were 36% and 47%. These rates are far lower than the response rates in more recent studies, where the average response rate for data collected directly from people is 81%. Even the 32% email response rate in the recent sample is closer to the standards of 20 years ago than to today's standards.
Response rates differ depending on the medium used to survey. An email to CEOs may have a lower response rate than someone standing at a shopping mall with a clipboard. The more connected people feel to the researcher, the more willing they are to participate in the research.
In the articles of the past 20 years there is a tendency away from giving money to participants the way Luzi and Mackenzie did in their study on performance (1982). Money can be a great motivating factor, and this motivation may affect the research. In the Luzi and Mackenzie study, for example, the money given to top performers may have made participants act in a way that they would not at work; in effect, it made the respondents compete. This type of competition could introduce factors that the researchers may not want, making the study meaningless for any situation other than one where incentives are given out. It can also skew the results: some people may work better under the pressure that money and competition bring, while others may be confused by that pressure. In modern research this practice appears to be less common.
Research Themes
Seven of the ten articles from the 1980s dealt with computer usability issues. Jarvenpaa et al. (1988) dealt with how groups interact with computers, Dos Santos and Bariff (1988) dealt with user interface issues, and Montazemi (1988) and DeLone (1988) dealt with user satisfaction. As software development matures, researchers delve into new areas. What was once a new topic, such as user satisfaction, has been replaced by more mature themes such as personalization (Tam & Ho, 2005).
Topics such as information overload have been explored, and solutions such as drill-down menus and customization are now commonplace. In the 1980s these were still topics that needed discussion at the foundational level (Dos Santos & Bariff, 1988). These are all ideas that have evolved as the technology has evolved.
Today new topics in software development are emerging. Research on new software products, such as software that allows people to share knowledge and work together, is still in its infancy (Taylor, 2004), and these issues will evolve as the technology evolves. Collaboration software such as Google Documents, which allows people to work on the same document at the same time, is still in its infancy, and wikis are only a few years old. As the ability for people to work together and share information grows, the need for research that helps people use that technology to reach their goals will also increase.
Four of the recent articles in this review deal with software errors. Bugs have always been a major issue in software development, but researchers now have better tools to help programmers deal with them. The article "A Replicated Survey of IT Software Project Failures" (El Emam & Koru, 2008) presents a meta-analysis of various reports of why software fails. This body of information did not exist a few years ago; now, with the ability to collaborate, researchers have access to a vast, previously unimagined body of knowledge.
Crowston and Scozzi (2008) take this idea one step further in their article "Bug Fixing Practices within Free/Libre Open Source Software Development Teams." The article analyzes 100,000 records from open source projects to examine the way software bugs are dealt with.
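To give a sense of how this kind of large-scale mining works in practice (a minimal sketch; the file name and column names below are invented for illustration and are not Crowston and Scozzi's actual data), a short script can summarize thousands of bug-tracker records:

```python
# Illustrative sketch only: invented bug records, not the dataset used by
# Crowston and Scozzi (2008). Shows how a bug-tracker export can be summarized.
import csv
from collections import Counter
from statistics import median

def summarize_bug_records(path):
    """Count resolution statuses and compute the median time-to-fix."""
    statuses = Counter()
    fix_days = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):       # assumed columns: status, days_to_fix
            statuses[row["status"]] += 1
            if row["status"] == "fixed":
                fix_days.append(float(row["days_to_fix"]))
    return statuses, (median(fix_days) if fix_days else None)

if __name__ == "__main__":
    counts, median_days = summarize_bug_records("bug_records.csv")  # hypothetical file
    print(counts)
    print("median days to fix:", median_days)
```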
Technology has also given us the ability to study practices that did not exist a few years ago. For example, Van Pham (2006) and Gopal et al. (2003) write about outsourcing. This relatively new practice has undergone much research over the past few years, and it is a complicated issue. Offshoring software development may risk handing company secrets to people who do not work for the company (Van Pham, 2006), but in doing so companies can create wealth by reducing the cost of software development. This is a topic that researchers have looked into in great detail over the past few years. As companies face economic difficulties, many of them face the inevitability that they must reduce costs to stay in business. The debate over outsourcing, and what to outsource, is a topic that will plague researchers for years to come. There is no one right answer for every company; in the end, a practitioner must use the available knowledge to make a decision. In research, an answer is either proven or disproven, while for a practitioner, decisions often have many sides to them. The discussion of offshoring is one of those issues.
Another topic in modern research is inter-organizational software development (Robey et al., 2008). Gone are the days of standalone software; today organizations want software to be able to communicate. When software crashes, the help desk needs to know. When potential fraud is detected in one system, the other systems need to be aware of it. The ability to transfer information from one system to another is also crucial in systems such as decision support systems, where information is gathered from various sources within the organization and given to decision makers who can view the larger picture of the organization.
In software development, inter-organizational software can help companies collaborate and use their time more efficiently. Inter-organizational software is developed with common interfaces such as XML or application programming interfaces (APIs). Software packages that companies buy often also provide ways to interface with them programmatically. Interoperability is standard practice for software being developed today.
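As a minimal sketch of what such a common interface can look like (the element and field names here are invented purely for illustration), one system can export a record as XML that another system parses with standard tooling:

```python
# Minimal sketch of XML-based data interchange between two systems.
# Element and field names are invented for illustration only.
import xml.etree.ElementTree as ET

# System A exports an order record as XML.
order = ET.Element("order", id="1001")
ET.SubElement(order, "customer").text = "Acme Corp"
ET.SubElement(order, "total").text = "249.99"
payload = ET.tostring(order, encoding="unicode")

# System B (for example, a decision support system) parses the same payload.
parsed = ET.fromstring(payload)
print(parsed.get("id"), parsed.findtext("customer"), parsed.findtext("total"))
```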
These ways of working with software are becoming widespread. In their article "Theoretical Foundations of Empirical Research on Inter-organizational Systems: Assessing Past Contributions and Guiding Future Directions," Robey et al. (2008) use seminal research to show the foundations of inter-organizational systems. The authors note that it took several years for the idea of inter-organizational research to spread. Developing seminal research is important, but there may be a lag between the research itself and its practical application.
The management of software development has been the subject of a good deal of research. Today's software products are more complicated and expensive, and researchers are looking for ways of minimizing risk and of overcoming factors which might lead a project to failure.
Researchers are also looking at the way software is built. Sugumaran et al.'s article on software timelines (2008) delves into ways that programmers can manage expectations and develop timelines that will help lead to the success of a project. This line of research reflects what is going on in the industry with new project management methodologies such as Last Planner and Scrum.
Personalization is another issue researchers study today. Personalization is an emerging technology, and developing personalization software is a complex task. Tam and Ho (2005), in their article "Web Personalization as a Persuasion Strategy," use seminal research from the field of psychology to argue their case. The easy availability of research from other fields such as psychology helps make new inroads in software research. In their article they discuss how familiar landmarks can help the user process a page more quickly; users pick up more messages from a web page when there is some sort of familiarity. With the advent of technologies that allow for more customized user interfaces, such as hypertext markup, researchers are concerned with how the interface makes people feel. Twenty years ago a button in a program would most likely be the one that came standard with the operating system; today a button on a web page can have gradients, borders, and movement. Research is showing that this type of customization can really affect use of a site.
This type of cross-disciplinary research affects the way people make software products. Business, marketing, and psychology are just some of the disciplines that researchers draw on when they study software development. Today software such as video games can have advertisements built into it. This type of collaboration between businesses was unheard of twenty years ago.
Authors
More recent articles have more authors per article. As technology allows people to work collaboratively, there is greater collaboration among researchers. In the review of older articles there is a 60% collaboration rate; in more recent articles that number has climbed to 93%. Today researchers can send drafts via email and chat using free services. Twenty years ago, the only option for people who did not live in the same town was to use the mail, which would have made collaboration prohibitively difficult.
Subjects
Many of the older articles make greater use of students as subjects, and these students were often compelled to answer questions. Fifty percent of the articles from the 1980s used students in their research, while only 12% of the more recent articles do. Chart 3 shows the difference between the two groups.
The move away from using students could be for a number of reasons. The first is that students are often coerced into doing the surveys, which are frequently a requirement for classes. This type of coercion can produce poor results because students may not truly represent the larger population. In Jarvenpaa et al.'s research, students were used to test the usefulness of group collaboration software. The flaw in this approach is that students are not the typical users of group software and therefore have no context in which to use it. If the researchers had used business people, the participants would have had a real-world context for using the software, which could have drastically changed the outcome of the research.
When there was no coercion, researchers in the past often used incentives to encourage students to attend. These incentives can be just as detrimental: students may behave differently because of them, which can taint the results.
Analysis
Software
development research is going in many directions. Software development is a relatively new
field which is always changing. Research
on software development evolves as software evolves.
The availability of data, and new ways to analyze it, are causing a steep increase in the amount of quantitative research being done. Today's researchers can gather data from various sources, access an enormous number of articles on a subject, and collaborate like never before. Researchers today also have cross-disciplinary information readily available, leading to new ways of looking at software development.
Today, when developing software, people use a broad range of information produced by research. Software developers look at usability and customizability, at the psychology of getting people to use their products, and at the marketability of those products.
Conclusion
The future of software engineering research is bright. There are new devices that require an entirely new approach. Small devices such as cell phones are starting to see a good deal of use. In the future, researchers will examine such devices from all of the same angles they applied to software and web development. They will look at why people use these devices and at ways of building quality software that provides the user with a sound user experience. These devices lend themselves to further research because, for the first time, location plays a part in the equation: new devices are location sensitive and can give users contextual information depending on where they are.
This paper discusses how research has changed over the past 20 years. Although research methods have not fundamentally changed, the ability to gather large amounts of data and then analyze them has. There are also more people willing to participate in research.
Researchers are more willing to use different methods, such as email, to gather data, and there is a trend away from using students in research.
Managing projects has become an issue in software development. With large projects come large costs. Software research is looking into ways of managing risk, as well as ways of determining the time and cost of software projects. There are also more options available to software developers than ever before. Today's programmers can use geographic data from a service, interface with small devices such as phones, and access unprecedented processing power and storage through web services and cloud computing. Today's software developers have access to tools such as knowledge bases and research to aid in their professionalism.
The research of the past has aided our understanding of things such as group interactions and user interactions. The research of today takes those seminal studies and builds upon them to take us toward the future of research.
References
Armstrong, D. J., Nelson, H. J., Nelson, K. M., & Narayanan, V. K. (2008).
Building the IT workforce of the future: the demand for more complex, abstract,
and strategic knowledge. Information Resources Management Journal,
21(2), 63-79. Retrieved March 1, 2009,
from ABI/INFORM Global database.
Banker, R. D., Bardhan, I., & Asdemir, O. (2006).
Understanding the impact of collaboration software on product design and
development. Information Systems Research, 17(4), 352-373,440. Retrieved March 1, 2009, from ABI/INFORM
Global database.
Brockhoff, K. (1984). Forecasting quality and information. Journal of Forecasting,
3(4), 417. Retrieved March 1, 2009, from
ABI/INFORM Global database.
Capra, E., Francalanci, C., & Merlo, F. (2008). An
empirical study on the relationship between software design quality,
development effort and governance in open source projects. IEEE Transactions
on Software Engineering, 34(6), 765-782.
Retrieved March 1, 2009, from ABI/INFORM Global database.
Crowston, K., & Scozzi, B. (2008). Bug fixing practices
within free/libre open source software development teams. Journal of
Database Management, 19(2), 1-30.
Retrieved March 1, 2009, from ABI/INFORM Global database.
DeLone, W. H. (1988). Determinants of success for computer usage in small business. MIS
Quarterly, 12(1), 51. Retrieved
March 1, 2009, from ABI/INFORM Global database.
Dos Santos, B. L., & Bariff, M. L. (1988). A study of user interface aids for model-oriented
decision. Management Science, 34(4), 461. Retrieved February 28, 2009, from ABI/INFORM
Global database.
Doll, W. J., & Torkzadeh, G. (1989). A discrepancy model of end-user
computing involvement. Management Science, 35(10), 1151. Retrieved March 1, 2009, from ABI/INFORM
Global database.
El Emam, K., & Koru, A. (2008). A replicated survey of IT software project failures. IEEE
Software, 25(5), 84-90. Retrieved
February 17, 2009, from ABI/INFORM Global database.
Garrity, E. J., O'Donnell, J. B., Kim, Y. J., & Sanders, G. L. (2007). An extrinsic and intrinsic motivation-based model for
measuring consumer shopping oriented web site success. Journal of Electronic
Commerce in Organizations, 5(4), 18-38.
Retrieved March 1, 2009, from ABI/INFORM Global database.
Gopal, A., Sivaramakrishnan, K., Krishnan, M. S., &
Mukhopadhyay, T. (2003). Contracts in offshore software development: an
empirical analysis. Management Science, 49(12), 1671-1683. Retrieved March 1, 2009, from ABI/INFORM
Global database.
Jarvenpaa, S. L., Rao, V. S., & Huber, G. P. (1988). Computer support for meetings of groups working on unstruct. MIS
Quarterly, 12(4), 645. Retrieved
February 28, 2009, from ABI/INFORM Global database.
Khanfar, K., Elzamly, A., Al-Ahmad, W., El-Qawasmeh, E.,
Alsamara, K., & Abuleil, S. (2008). Managing software project risks with the
chi-square technique. International
Management Review, 4(2), 18-29,77.
Retrieved March 1, 2009, from ABI/INFORM Global database.
Kirs, P. J., Sanders, L. G., Cerveny, R. P., & Robey, D. (1989).
An experimental validation of the Gorry and Scott Morton fr. MIS Quarterly,
13(2), 183. Retrieved February 28, 2009,
from ABI/INFORM Global database.
Luzi, A. D., & Mackenzie,
K. D. (1982). An experimental study of performance information systems. Management
Science (pre-1986), 28(3), 243.
Retrieved March 1, 2009, from ABI/INFORM Global database.
Lucas, H. C. (1981). An
experimental investigation of the use of computer-based graphics in decision
making. Management Science (pre-1986), 27(7), 757. Retrieved March 1, 2009, from ABI/INFORM
Global database.
Montazemi, A. R. (1988). Factors affecting information satisfaction in the context o. MIS
Quarterly, 12(2), 239. Retrieved
February 28, 2009, from ABI/INFORM Global database.
Park, C. W., Im, G., & Keil, M. (2008). Overcoming the mum effect in IT project reporting: impacts of fault responsibility and time urgency. Journal of the Association for Information Systems, 9(7),
409-431. Retrieved March 1, 2009, from
ABI/INFORM Global database.
Pendharkar, P. C.,
& Rodger, J. A. (2007). An empirical study of the impact of team
size on software development effort. Information Technology and Management,
8(4), 253-262. Retrieved March 1, 2009,
from ABI/INFORM Global database.
Robey, D., Im, G.,
& Wareham, J. D. (2008). Theoretical foundations of empirical research on
interorganizational systems: assessing past contributions and guiding future
directions. Journal of the Association for Information Systems, 9(9),
497-518. Retrieved March 1, 2009, from
ABI/INFORM Global database.
Shneiderman, B., Brethauer,
D., Plaisant, C., & Potter, R. (1989). Evaluating three museum
installations of a hypertext system. Journal of the American Society for
Information Science (1986-1998), 40(3), 172. Retrieved March 1, 2009, from ABI/INFORM
Global database.
Stegelin, F. E., & Novak, J. L. (1986). Attitudes of
agribusiness toward microcomputers. Agribusiness (1986-1998), 2(2),
225. Retrieved March 1, 2009, from
ABI/INFORM Global database.
Sugumaran, V., Tanniru, M., & Storey, V. C. (2008). A
knowledge-based framework for extracting components in agile systems
development. Information Technology and Management, 9(1), 37-53. Retrieved March 1, 2009, from ABI/INFORM
Global database.
Tam, K. Y., & Ho, S. Y. (2005). Web personalization as a
persuasion strategy: an elaboration likelihood model perspective. Information
Systems Research, 16(3), 271-291.
Retrieved March 1, 2009, from ABI/INFORM Global database.
Taylor, W. A. (2004).
Computer-mediated knowledge sharing and individual user differences: an
exploratory study. European Journal of Information Systems, 13(1),
52-64. Retrieved March 1, 2009, from
ABI/INFORM Global database.
Tsai, M. T., & Su, W. (2007). The impact of cognitive fit
and consensus on acceptance of collaborative information systems. The
Business Review, Cambridge, 8(2), 184-190.
Retrieved March 1, 2009, from ABI/INFORM Global database.
Van Pham, K. (2006).
Strategic off shoring from a decomposed COO's perspective: a cross-regional
study of four product categories. Journal of American Academy of Business,
Cambridge, 8(2), 59-66. Retrieved
March 1, 2009, from ABI/INFORM Global database.