Wednesday, March 27, 2013

Research Evolution in Software Development

Introduction

Software development research has changed significantly over the past 20 years. While in the late 1980s researchers were concerned with user acceptance of computers in general (Forrest, Stegelin & Novak, 1986), in recent years researchers have been more concerned with why people accept or reject certain software products and not others (Khanfar et al., 2008; Garrity et al., 2007). This literature review will explore the changes in research methods by examining a sampling of research articles from the late 1980s alongside articles from recent years.
This literature review will examine 27 peer-reviewed articles relating to software development. Sixteen of these articles were written in the past five years and eleven were written in the late 1980s. Using these articles, I will examine the changes over time in the software research process.
By reviewing the differences in software development research, I will get a picture of where software development research is going. In this review I will look at factors such as sample size, methodology, the statistical analyses being performed, and the use of students in research. This paper will show the trends in software development research and how they affect researchers.

Literature Review

Research methods
Although the basic way people research has not changed over the past 20 years, the tools available to researchers and the methods they tend to use have. In my study of 27 articles, I found that 54% of the articles written before 1990 were qualitative, while only 6% of the articles written in the past five years used a qualitative method.
Quantitative methods are being used more frequently in recent years. This trend may reflect the ease with which data can now be tabulated by computer, or a preference among researchers for hard data. When Ali Montazemi researched user satisfaction (1988), he chose to interview people. In these seminal interviews he found information that quantitative research could have missed; for example, 20% of participants would have preferred a query system that gave them “what-if” scenarios. Today, this type of question would fall under the separate topic of decision support systems. In contrast, when Garrity et al. researched user satisfaction with websites, they chose a quantitative study. With modern tools, Garrity et al. were able to perform a more extensive statistical analysis using techniques such as average variance extracted, sum of squares, and the F-test (p. 27). This type of analysis speaks to the growing maturity of software development research.
Modern researchers have more tools to analyze data, and quantitative data lends itself more readily to this type of analysis. Tools exist now that did not exist 20 years ago, and they make it possible to analyze far more data. There are also more categories of research than there were 20 years ago: within software development, researchers are looking at project management, testing, security, websites, and usability, to name a few.
Sample Populations
Comparing research between the two periods shows that surveys today can have far larger populations. The samples from twenty years ago, shown in Table 1, are far smaller than the samples from more recent research, shown in Table 2. The average sample size for research from 20 years ago is 113; the average sample size for research today is 7,150. This is primarily due to the availability of data. Most of the data collected 20 years ago was obtained through face-to-face surveys. One exception was a museum experiment, in which museums were equipped with a hypertext display that allowed visitors to review information about the museum. Data collection was limited when one of the two monitors being used broke (Shneiderman et al., p. 49). Monitor prices have come down in recent years, and this problem would most likely be easy to fix in today’s environment.
Today researchers have access to more tools that give them a much greater ability to review data. Crowston et al., for example, were able to collect data on over 100,000 open source projects (2008). This type of data collection would have been unheard of 20 years ago: large-scale systems like the Internet were not in place, and the ability to harvest vast quantities of data was nonexistent.
Researchers also have new techniques for distributing surveys. Phone and mail surveys can be expensive in both time and money; modern surveys can be done more cheaply and efficiently. When one group of researchers wanted a large sample, for example, they sent emails to 10,000 people, and 3,276 responded (Tam et al., p. 280). First-class postage at the time of this writing is 42 cents, so sending the same survey through the mail would cost $4,200 in postage alone. This type of expense is out of the range of many research projects.
When looking only at direct surveys of people, without the help of modern data collection methods, there is virtually no difference between the two periods: studies from the 1980s surveyed 123 people on average, while recent studies averaged 124. Interviewing and surveying people clearly takes longer than more modern methods. While these interviews may be necessary in qualitative research, the amount of data collected can be quite small compared to mining existing information systems or sending email surveys.
Statistical Analysis
Literature from the two periods did not show a change in the way statistics are done; the articles contained a wide variety of methods. Earlier research involved more averages, but this can be attributed to the number of qualitative studies, which lend themselves more readily to averages.
One thing that is apparent in the studies is that modern research employs more statistics.
With the advent of research software, researchers are able to take the same data and quickly run multiple statistical analyses to help determine the best one. Table 3 shows the statistical methods used in research from the 1980s; there, only one test is performed on each set of data. In more recent studies, shown in Table 4, researchers performed more kinds of statistical analysis. Banker et al., for example, performed three tests to analyze their data (2006).
The analysis showed a propensity toward using multiple statistical analyses in more recent research. This may be due to more detailed work on the subject: modern research delves into subjects such as the usability of websites, while research from 20 years ago addressed broader questions such as the acceptance of computers in the workplace.
Software products such as SPSS make it possible to take a dataset and mine it for correlations. Garrity et al. (2007) did this when they looked at three different statistics to come to their conclusion. The trend now is toward more in-depth analysis: researchers can gather vast amounts of data and perform complex statistical analyses on them. Researchers 20 years ago did not have this option; for them, analysis often had to be done by hand, which naturally meant more work. Performing statistical analysis by hand is time consuming: a complex analysis that takes hours today could take days in the past. Analysis without the benefit of statistical software can also be fraught with mathematical mistakes; human error plays a larger role when surveys are hand coded and the data then hand analyzed.
Response Rates
Response rates between the two periods vary. In the 1980s, researchers more often used college students who were required to take the surveys (Jarvenpaa et al., 1988; Dos Santos et al., 1988; Kirs et al., 1989), and they relied more on volunteers (Jarvenpaa et al., 1988). These types of candidates often do not represent the average person in the population, and when a general call for volunteers is made or participation is compulsory, there is no response rate to report. We see only two response rates in the earlier surveys.
Of the 10 earlier research articles, only two reported response rates: 36% and 47%. These rates are far lower than those in more recent studies, where the response rate for data collected directly from people is 81%. Even the email response rate in this review, at 32%, is closer to the standards of 20 years ago than to today’s.
Response rates differ depending on the survey medium. An email to CEOs may have a lower response rate than someone standing in a shopping mall with a clipboard: the more connected people feel to the researcher, the more willing they are to participate.
Articles of the past 20 years show a tendency away from paying participants the way Luzi et al. did in their study on performance (1982). Money can be a great motivating factor, and that motivation may affect the research. In the Luzi et al. study, for example, the money given to top performers may have made respondents act in ways they would not at work, since it made them compete. This type of competition could introduce factors the researchers may not want; the study would be meaningless in any situation other than one where incentives are given out. Such research can produce skewed results: some people work better under the pressure that money and competition bring, while others get confused by it. In modern research this practice appears to be less common.
Research Themes
Seven of the ten articles from the 1980s dealt with computer usability issues. Jarvenpaa et al. dealt with how groups interact with computers (1988), Dos Santos et al. dealt with user interface issues (1988), and Montazemi (1988) and DeLone (1988) dealt with user satisfaction. As software development matures, researchers delve into new areas: what was once a new topic, such as user satisfaction, has been replaced by more mature themes such as personalization (Tam et al., 2005).
Topics such as information overload have been explored, and solutions such as drill-down menus and customization are commonplace. In the 1980s these were still topics that needed discussion at the foundation level (Dos Santos et al., 1988). These are all ideas that have evolved as the technology evolved.
Today new topics in software development are emerging. Issues around new software products, such as software that allows people to share knowledge and work together, are still in their infancy (Taylor, 2004), and they will evolve as the technology evolves. Collaboration software such as Google Docs, which allows people to work on the same document at the same time, is still in its infancy, and wikis are only a few years old. As the ability for people to work together and share information grows, the need for research to help people use that technology to reach their goals will also increase.
Four of the recent articles in this review deal with software errors. Bugs have always been a major issue in software development, but researchers now have better tools to help programmers deal with them. The article A Replicated Survey of IT Software Project Failures (2008) is a meta-analysis of various reports on why software fails. This body of information did not exist a few years ago; with the ability to collaborate, researchers now have access to a vast, hitherto unimagined body of knowledge.
Crowston et al. take this idea one step further in their article Bug Fixing Practices within Free/Libre Open Source Software Development Teams, which takes 100,000 records from open source projects and analyzes the way software bugs are dealt with.
Technology has also given us the ability to do things that did not exist a few years ago. For example, Van Pham and Gopal et al. write about outsourcing, a relatively new practice that has undergone much research in the past few years. It is a complicated issue: offshoring software development may risk handing company secrets to people who do not work for the company (Van Pham, 2006), but it can also create wealth by reducing the cost of software development. As companies face economic difficulties, many confront the inevitability that they must reduce costs to stay in business. The debate over outsourcing, and what to outsource, is a topic that will occupy researchers for years to come. There is no one right answer for every company. In research, a question is either proven or disproved; as a practitioner, decisions often have many sides to them, and one must use the available knowledge to make a decision. Offshoring is one of those issues.
Another topic in modern research is inter-organizational software development (Robey et al., 2008). Gone are the days of standalone software: today organizations want software to be able to communicate. When software crashes, the help desk needs to know; when potential fraud is detected in one system, the others need to be aware of it. The ability to transfer information from one system to another is also crucial in systems such as decision support systems, where information is gathered from various sources within the organization and given to decision makers who can view the larger picture.
In software development, inter-organizational software can help companies collaborate and use their time more efficiently. Inter-organizational software is developed with common interfaces such as XML or application programming interfaces, and software packages that companies buy can also expose ways to interface with them programmatically. Interoperability is standard practice for software being developed today.
These ways of working with software are becoming widespread. In their article Theoretical Foundations of Empirical Research on Inter-organizational Systems: Assessing Past Contributions and Guiding Future Directions, Robey et al. use seminal research to show the foundations of inter-organizational systems (2008). The authors note that it took several years for the idea of inter-organizational research to spread. Developing seminal research is important, but there may be a lag between the research itself and its practical application.
Management of software development has been the subject of a good deal of research. Today’s software products are more complicated and expensive, so researchers are looking for ways to minimize risk and to overcome the things that might lead a project to failure.
Researchers are also looking at the way software is built. Sugumaran et al.’s article on software timelines (2008) delves into ways that programmers can manage expectations and develop timelines that will help lead a project to success. This type of research reflects what is going on in the industry with new project management methodologies such as Last Planner and Scrum.
Personalization is another topic researchers examine today. Personalization is an emerging technology, and building it is a complex task. Tam et al. (2005), in their article Web Personalization as a Persuasion Strategy, use seminal research from the field of psychology to argue their case; the easy availability of research in other fields such as psychology helps make new inroads in software research. In their article they discuss how familiar landmarks can help the user process a page more quickly: users catch more messages from a web page when there is some sort of familiarity. With the advent of technologies that allow for more customized user interfaces, such as hypertext markup, researchers are concerned with how the interface makes people feel. Twenty years ago a button in a program would most likely be the one that came standard with the operating system; today a button on a web page can have gradients, borders, and movement. Research is showing that this type of customization can significantly affect use of the site.
This type of cross-disciplinary research affects the way people make software products. Business, marketing, and psychology are just some of the disciplines researchers draw on when they study software development. Today software such as video games can have advertisements built in; this type of collaboration between businesses was unheard of twenty years ago.
Authors
More recent articles have more authors per article. As technology allows people to work collaboratively, there is greater collaboration among researchers: in the older articles reviewed there is a 60% collaboration rate, while in more recent articles that number has climbed to 93%. Today researchers can send drafts via email and chat on free services. Twenty years ago, the only option for people who did not live in the same town was the mail, which would have made collaboration prohibitively difficult.
Subjects
Many of the older articles made greater use of students as subjects, and these students were often compelled to answer questions. Fifty percent of the articles from the 1980s used students in their research, while only 12% of more recent articles do. Chart 3 shows the difference between the two groups.
The move away from using students could be for a number of reasons. The first is that students are often coerced into doing the surveys, which are often a requirement for classes. This type of coercion can produce poor results because students may not truly represent the larger population. In Jarvenpaa et al.’s research, students were used to test the usefulness of group collaboration software. The flaw in this study is that students are not the typical users of group software and therefore have no context in which to use it. If the researchers had used businesspeople, the participants would have had a real-world context for the software, which could have drastically changed the outcome of the research.
When there was no coercion, researchers in the past often used bribes to encourage students to attend. These bribes can be just as detrimental: students may behave differently because of a bribe, which can taint the results.
Analysis
Software development research is going in many directions.  Software development is a relatively new field which is always changing.  Research on software development evolves as software evolves. 
The availability of data, and new ways to analyze it, is causing a steep increase in the amount of quantitative research being done. Today’s researchers can gather data from various sources, access an enormous number of articles on a subject, and collaborate like never before. They also have cross-disciplinary information readily available, leading to new ways of looking at software development.
Today when developing software, people use a broad range of information brought about by research.  Software developers of today look at usability and customizability.  They look at the psychology of getting people to use their products and the marketability of their products.

Conclusion
The future of software engineering research is bright. There are new devices that require an entirely new approach: small devices like cell phones are starting to get a good deal of use. In the future, researchers will examine these devices with all of the aspects they applied to software and web development. They will look at why people use these devices, and at ways of building quality software that provides a sound user experience. These devices lend themselves to more research because, for the first time, location is part of the equation: new devices are location sensitive, giving users contextual information depending on where they are.
This paper discusses how research has changed over the past 20 years.  Although research methods have not changed the ability to gather large amounts of data and then analyze them has.  There are also more people willing to participate in research.
Researchers are more willing to use different methods to gather data such as email.  There are also trends away from using students in research.
Managing projects has become an issue in software development. With large projects come large costs, and software research is looking into ways of managing risk and of estimating the time and cost of software projects. There are also more options available to software developers than ever before: today’s programmers can use geographic data from a service, interface with small devices such as phones, and access unprecedented processing power and storage through web services and cloud computing. Today’s software developers have access to tools such as knowledge bases and research to aid in their professionalism.
The research of the past aided with things such as group interactions and user interactions. The research of today takes those seminal studies and builds upon them to take us toward the future of research.

References

Armstrong, D. J., Nelson, H. J.,  Nelson, K. M., & Narayanan V. K. (2008). Building the IT workforce of the future: the demand for more complex, abstract, and strategic knowledge. Information Resources Management Journal, 21(2), 63-79.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Banker, R. D., Bardhan, I., & Asdemir, O. (2006). Understanding the impact of collaboration software on product design and development. Information Systems Research, 17(4), 352-373,440.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Brockhoff, K. (1984). Forecasting quality and information. Journal of Forecasting, 3(4), 417.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Capra, E., Francalanci, C., & Merlo, F. (2008). An empirical study on the relationship between software design quality, development effort and governance in open source projects. IEEE Transactions on Software Engineering, 34(6), 765-782.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Crowston, K., & Scozzi, B. (2008). Bug fixing practices within free/libre open source software development teams. Journal of Database Management, 19(2), 1-30.  Retrieved March 1, 2009, from ABI/INFORM Global database.
DeLone, W. H. (1988). Determinants of success for computer usage in small business. MIS Quarterly, 12(1), 51.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Dos Santos, B. L., & Bariff, M. L. (1988). A study of user interface aids for model-oriented decision. Management Science, 34(4), 461.  Retrieved February 28, 2009, from ABI/INFORM Global database.
Doll, W. J., & Torkzadeh, G. (1989). A discrepancy model of end-user computing involvement. Management Science, 35(10), 1151.  Retrieved March 1, 2009, from ABI/INFORM Global database.
El Emam, K., & Koru, A. (2008). A replicated survey of IT software project failures. IEEE Software, 25(5), 84-90.  Retrieved February 17, 2009, from ABI/INFORM Global database.
Garrity, E. J., O'Donnell, J. B., Kim, Y. J., & Sanders, G. L. (2007). An extrinsic and intrinsic motivation-based model for measuring consumer shopping oriented web site success. Journal of Electronic Commerce in Organizations, 5(4), 18-38.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Gopal, A., Sivaramakrishnan, K., Krishnan, M. S., & Mukhopadhyay, T. (2003). Contracts in offshore software development: an empirical analysis. Management Science, 49(12), 1671-1683.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Jarvenpaa, S. L., Rao, V. S., & Huber, G. P. (1988). Computer support for meetings of groups working on unstruct. MIS Quarterly, 12(4), 645.  Retrieved February 28, 2009, from ABI/INFORM Global database.
Khanfar, K., Elzamly, A., Al-Ahmad, W., El-Qawasmeh, E., Alsamara, K., & Abuleil, S. (2008). Managing software project risks with the chi-square technique. International Management Review, 4(2), 18-29,77.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Kirs, P. J., Sanders, L. G., Cerveny, R. P., & Robey, D. (1989). An experimental validation of the Gorry and Scott Morton fr. MIS Quarterly, 13(2), 183.  Retrieved February 28, 2009, from ABI/INFORM Global database.
Luzi, A. D.,  & Mackenzie, K. D. (1982). An experimental study of performance information systems. Management Science (pre-1986), 28(3), 243.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Lucas, H. C. (1981). An experimental investigation of the use of computer-based graphics in decision making. Management Science (pre-1986), 27(7), 757.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Montazemi, A. R. (1988). Factors affecting information satisfaction in the context o. MIS Quarterly, 12(2), 239.  Retrieved February 28, 2009, from ABI/INFORM Global database.
Park, C. W., Im, G., & Keil, M. (2008). Overcoming the mum effect in IT project reporting: impacts of fault responsibility and time urgency. Journal of the Association for Information Systems, 9(7), 409-431.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Pendharkar, P. C.,  & Rodger, J. A. (2007). An empirical study of the impact of team size on software development effort. Information Technology and Management, 8(4), 253-262.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Robey, D.,  Im, G., & Wareham, J. D. (2008). Theoretical foundations of empirical research on interorganizational systems: assessing past contributions and guiding future directions. Journal of the Association for Information Systems, 9(9), 497-518.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Shneiderman, B., Brethauer, D., Plaisant, C., & Potter, R. (1989). Evaluating three museum installations of a hypertext system. Journal of the American Society for Information Science (1986-1998), 40(3), 172.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Stegelin, F. E., & Novak, J. L. (1986). Attitudes of agribusiness toward microcomputers. Agribusiness (1986-1998), 2(2), 225.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Sugumaran, V., Tanniru, M., & Storey, V. C. (2008). A knowledge-based framework for extracting components in agile systems development. Information Technology and Management, 9(1), 37-53.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Tam, K. Y., & Ho, S. Y. (2005). Web personalization as a persuasion strategy: an elaboration likelihood model perspective. Information Systems Research, 16(3), 271-291.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Taylor, W. A.  (2004). Computer-mediated knowledge sharing and individual user differences: an exploratory study. European Journal of Information Systems, 13(1), 52-64.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Tsai, M. T., & Su, W. (2007). The impact of cognitive fit and consensus on acceptance of collaborative information systems. The Business Review, Cambridge, 8(2), 184-190.  Retrieved March 1, 2009, from ABI/INFORM Global database.
Van Pham, K. (2006). Strategic off shoring from a decomposed COO's perspective: a cross-regional study of four product categories. Journal of American Academy of Business, Cambridge, 8(2), 59-66.  Retrieved March 1, 2009, from ABI/INFORM Global database.

Polymorphic Ajax: See How Polymorphism Can Speed Code Writing
In computer science, polymorphism means allowing a single definition to be used with different types of data (specifically, different classes of objects).  Polymorphism, along with modularity and encapsulation, is a building block of Object-Oriented Programming (OOP).  Polymorphism can allow programmers to reuse code, reduce code complexity, and decrease inconsistencies in code design.

When designing JavaScript methods it is often helpful to define standard interfaces. For example, if one object is instantiated with a transparent method and another object is instantiated with its own transparent method, the programmer does not necessarily have to write two separate implementations.  As an example, let's look at a transparent method.

Two Function Two Method Example:

function menu()
{
  this.m_div = document.getElementById("container");
  this.m_div.innerHTML = '<table>'
                       + '<tr>'
                       + '<td>Home</td>'
                       + '<td>Approach</td>'
                       + '</tr>'
                       + '</table>';
}
menu.prototype.transparent = function()
{
  if(this.m_div)
  {
    this.m_div.style.filter = "alpha(opacity=40)"; // IE
    this.m_div.style.opacity = 0.4;                // Safari, Firefox
  }
}

function menu2()
{
  this.m_div = document.getElementById("container");
  this.m_div.innerHTML = '<img src="image/trees.jpg" />';
}
menu2.prototype.transparent = function()
{
  if(this.m_div)
  {
    this.m_div.style.filter = "alpha(opacity=40)"; // IE
    this.m_div.style.opacity = 0.4;                // Safari, Firefox
  }
}

 
We can access these methods in another function.

Calling Function:
function main()
{
  var g_menu = new menu();
  g_menu.transparent();
  var g_menu2 = new menu2();
  g_menu2.transparent();
}
 
If we use polymorphism, we can reuse code, shortening our development time. We keep the original constructor functions but replace the two methods with one function.


Polymorphic Function:
function transparent(obj)
{
  if(obj.m_div)
  {
    obj.m_div.style.filter = "alpha(opacity=40)"; // IE
    obj.m_div.style.opacity = 0.4;                // Safari, Firefox
  }
}

 
To call this, we replace the method calls with calls to the function, passing in the object.

function main()
{
  var g_menu = new menu();
  transparent(g_menu);
  var g_menu2 = new menu2();
  transparent(g_menu2);
}
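
JavaScript's prototype system offers another sketch of the same reuse while keeping the method call syntax: define the function once and assign the same function object to both prototypes.

Shared Prototype Method:
// Define the method once; both constructors share the same function
// object, so g_menu.transparent() and g_menu2.transparent() still work.
menu.prototype.transparent = menu2.prototype.transparent = function()
{
  if(this.m_div)
  {
    this.m_div.style.filter = "alpha(opacity=40)"; // IE
    this.m_div.style.opacity = 0.4;                // Safari, Firefox
  }
};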

 
 
This example is short and does not show significant code savings.  However, as a project scales, the need for reusable code becomes more apparent.  Polymorphism is not for every situation, but given the right circumstances this core OOP technique can be a real boon to a project.
 

New Developments in IT



Cloud computing, a hosted software service model, is allowing companies to gain a competitive advantage over their competition by giving them resources that are less expensive and that cut down development time.  Cloud computing, however, is not a good fit for every organization.  Companies considering cloud computing need to look at the legal and technical ramifications of doing business in the cloud.  This paper will explore the organizational, technical, and legal ramifications of integrating cloud computing into a business.

New Developments in IT

Cloud computing is a technology that is gaining popularity with many companies.  Cloud computing is a hosted computing service offered by cloud providers, whose users are able to offload many traditional IT tasks.  These services can take the place of hardware, software, or data entry staff.  Companies are using cloud computing to gain a strategic advantage over their competition.  Deciding properly whether to use these services requires that IT personnel know the advantages and disadvantages of cloud computing.

Services

There are many different cloud computing services on offer.  Although a website can exist entirely in the cloud, many companies are taking an à la carte approach to cloud computing, picking the services that meet a particular IT need.

Database

SimpleDB by Amazon is the leader in the database cloud arena.  SimpleDB is fast, cost effective, and free when used with other Amazon services or kept below one million requests per month (Amazon, n.d.).  While SimpleDB is low cost and fast, it does have limitations: there is currently a limit of 5,000 rows returned from a query.  SimpleDB automatically performs many of the tasks database administrators do, such as backing up data and optimizing queries.  Companies that use SimpleDB find they can have a data storage solution without hiring a database administrator to manage the data.
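
As a rough sketch of what a SimpleDB call looks like: queries are sent as HTTP requests carrying a SQL-like SelectExpression.  The domain and attribute names below are invented, and the required request signing (AWSAccessKeyId, Timestamp, Signature) is omitted for brevity.

// Illustrative only: builds the query portion of a SimpleDB Select
// request. "products" and "price" are hypothetical names; a real
// request must also be signed with the account's secret key.
var expression = "select * from products where price > '100'";
var url = "https://sdb.amazonaws.com/"
        + "?Action=Select"
        + "&SelectExpression=" + encodeURIComponent(expression)
        + "&Version=2009-04-15";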
Microsoft is beta testing its answer to Amazon's SimpleDB with the Azure platform (Microsoft, 2009).  The Azure platform gives users a virtual operating system in the cloud, and this virtual server has access to a fully relational database.  Many companies that do not want limits on their databases will choose this option.  Microsoft is currently giving this service away for free until February 2010.

File Storage
Amazon's CloudFront has gained notoriety among Internet startups that use Amazon's storage cloud to house large amounts of user-generated content (Kho, 2009).  Data stored there can be shared, like images on a website, or privately stored, such as backups.
Amazon's CloudFront has become a popular choice among Internet startups because it is able to scale instantly. For example, Animoto.com, a small Internet company with 25,000 registered users, was featured on the home page of Slashdot.com (Kho, 2009).  Over a three-day period its registered users increased to 250,000.  If the company had housed all of its IT assets in-house, it could have needed up to 10 times the server capacity to stay operational, and unused excess server capacity is costly.  Amazon's services allow companies to scale processor and network bandwidth usage automatically, and Animoto.com was able to keep its website up and running during the spike in traffic.

Testing
Software testing is an important part of developing software, and there are many different types of software tests.  Many companies test their software programmatically for bugs or perform stress tests on their software.

Stress testing can be particularly difficult for website companies because it often requires a large number of computers hitting the website at the same time.  Many companies do not have the staff and machines necessary to perform a stress test effectively.  Skytap is a cloud-based testing service that can perform stress tests on a website (Babcock, 2009).  Skytap's testing software integrates with Microsoft's Visual Studio, a software development platform, and allows company testers to upload test instructions to the Skytap cloud.  Skytap then uses its large bank of computers to run the stress test.
Security

The most vulnerable part of a company’s IT infrastructure is the Internet (George, 2009).  The Internet has become an integral part of modern business: employees use web-based tools and websites as part of their jobs.  Many companies have chosen to use antivirus and antispyware products to help guard against security threats.  Cloud services are now being offered that secure a company's entire network through enhanced security protocols, allowing companies to eliminate the need to buy antivirus software and reducing the amount of computer updating they need to do.

Computing
Microsoft is offering virtual computers through its Azure platform (Microsoft, 2009).  Companies can upload their entire computing system to the Azure platform, and users can specify the amount of memory and the processor speed of the servers they wish to use.  This allows companies to increase the number of servers they use without spending the resources to purchase their own server farm.  Organizations that require a large amount of processing power, such as scientists, can use Microsoft's platform to scale their applications across many computers.

Human Work

Many websites needing work such as product data reviews have found that Amazon's Mechanical Turk provides a way to get a large number of data entry workers to do simple tasks.  Companies like the Mechanical Turk solution because it scales rapidly: hundreds of man-hours worth of work can be done in a single day.  Companies using Mechanical Turk do not have to hire permanent employees or supply office space and computers to temporary ones, and they can walk away when the project is completed.

Use Implications
There are many different kinds of services offered by cloud computing vendors.  This new way of doing business can help companies reduce costs and speed their products' time to market.  The challenge for modern business owners is to know which of these services can help their company and which could leave them vulnerable.  Although cloud computing offers many advantages over traditional IT infrastructures, it also requires a different set of skills: IT professionals must understand the ramifications of working in the cloud.

Cloud computing is not a good fit for every organization.  Companies need to look at their needs and compare them with the services that are being offered.  Companies also need to look at the legal ramifications of using services in the cloud.

Project Management
Project managers need to understand the implications of working with cloud computing.  Cloud computing can lock companies into service level agreements, software architectures, and vendor limitations.  Project managers who succeed at developing projects in cloud computing environments understand how to maximize the benefits of cloud computing and minimize the risks.

Legal

Companies dealing with private information must be able to ensure that their service provider is in legal compliance (Bean, 2009).  Internal auditors may not be able to properly review the information as mandated by law, and different laws apply to data hosted in the cloud.  If the police are looking for information hosted in the cloud, they can obtain it without a warrant (Bean, 2009), whereas information stored on a company's private servers is protected.  This lack of control can be costly for organizations that wish to keep their information private.
Contracts
In 2007 Carbonite Inc., a cloud service provider, lost the data it stored for 7,500 clients (Zielinski, 2009), and Amazon's popular S3 once went down for eight hours.  Contracts help companies recoup losses when disaster strikes; Carbonite, for example, paid customers for their outage.

Legal contracts also help companies deal with a provider's legal standing.  A company using a cloud whose servers are housed in a different country may not have legal standing in the country where the client resides.  If the client needs to sue, it may have to take the matter up with the courts of the country where the servers are housed (Zielinski, 2009).

Many cloud providers have servers in multiple countries.  This helps them scale on a global basis, and placing servers in several countries mitigates the risk of natural disasters bringing down their services: when a disaster strikes in one area and those servers are brought offline, a provider that operates in different countries can continue to run seamlessly.  But by having its data sent to several countries simultaneously, cloud computing can expose companies to increased legal risk in each of those countries.

Updates
Updates of the cloud servers are taken care of by the service provider (Smith, 2009).  This can reduce the burden of server maintenance on the purchaser of the cloud service.  Many companies find they are able to reduce or eliminate the need for IT staff because of the ease of operating within the cloud.
Software updates issued by the cloud provider are a different situation.  Many cloud providers issue new updates to the software they provide, which can be problematic when dealing with hundreds of customers: a software update may break existing customers' systems.  Many cloud computing providers have dealt with this by allowing the consumers of their product to tell them which version they are using.  Versioning of cloud services lets consumers target a particular service version.  Updates may add new features, but upgrading to the new version may require additional development and testing resources.  By allowing companies to target versions, cloud providers make it possible for consumers of cloud services to upgrade on their own timeline.

External factors such as new versions from cloud providers may not be within project managers' budgets or within their SDLC plans, and they could throw project plans off track.  For example, if a company developing a product dependent on cloud computing saw a new version come out in the middle of development, it might need to start some of its software development plans from the beginning.  Versioning allows the company to target one stable implementation of the cloud computing service.
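As a sketch of how a project can protect itself, assuming a hypothetical provider whose endpoints are versioned by URL: the version the project was built and tested against is pinned in one constant, so upgrading becomes a deliberate change rather than a surprise.

// Hypothetical endpoint for illustration.  Every call targets the
// pinned version; moving to v2 is an explicit, tested change.
var SERVICE_ROOT = "https://cloud.example.com/v1/";
function serviceUrl(action)
{
  return SERVICE_ROOT + encodeURIComponent(action);
}
// e.g. serviceUrl("listOrders") -> "https://cloud.example.com/v1/listOrders"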

Development
Cloud computing requires that developers have a different mindset than traditional software developers.  Software developers who use cloud computing must take a modular approach to developing software.  They must also consider the different types of resources they will be using as they develop software that depends on cloud computing resources.
Software as a Service
The term software as a service (SaaS) describes a subset of cloud computing.  Software as a service entails offering a program interface for a service that is hosted in the cloud (Gold, Knight, Mohan, & Munro, 2004).  Companies such as Google have offered their mapping SaaS for years, and users of SaaS can integrate such on-demand software products into their own systems (Williams, 2009).

Modular Programming
Companies that consume cloud computing products are forced to develop their software around the constraints of cloud computing (Luthria & Rabhi, 2009).  Cloud computing forces software engineers to use a modularized approach to building software: the software must be able to integrate cloud services into the larger system.  This may not be the ideal solution for every company.  Making too many requests to a cloud, for example, could result in a slow-performing application, so software developers may need to architect their software in a way that limits the impact the cloud will have on their systems.  Many companies find they can achieve a faster build time by purchasing software components that are not cloud based; these help developers build products faster and limit the impact of network latency.
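
One simple sketch of that kind of architectural guard, assuming a generic fetchFromCloud function supplied by the caller, is to put a cache in front of the cloud so repeated requests never cross the network:

// In-memory cache: identical requests hit the cloud only once.
var g_cache = {};
function cachedFetch(key, fetchFromCloud)
{
  if (!(key in g_cache))
  {
    g_cache[key] = fetchFromCloud(key); // only cache misses go to the cloud
  }
  return g_cache[key];
}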

SaaS’s use of the Simple Object Access Protocol (SOAP), an XML-based cross-platform language, allows it to fit in with many companies' service-oriented architecture approach to developing systems.  In a service-oriented architecture, parts of the whole application are broken up into components that communicate via SOAP, which allows many programs to use the same components.  Because SOAP can communicate with many languages on many different operating systems, cloud computing allows companies to develop software on different operating systems and platforms that use the same cloud.
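
To make the idea concrete, a SOAP 1.1 call is just XML posted over HTTP.  The service URL and the GetQuote operation below are invented for illustration; only the envelope structure is standard.

// Hypothetical stock quote service; the envelope is standard SOAP 1.1.
var envelope =
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
  + '<soap:Body>'
  + '<GetQuote xmlns="http://example.com/stock">'
  + '<Symbol>MSFT</Symbol>'
  + '</GetQuote>'
  + '</soap:Body>'
  + '</soap:Envelope>';
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://example.com/stockservice", true);
xhr.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
xhr.send(envelope);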
Latency
Sending information from a client to the cloud involves SOAP, and the use of XML requires a larger amount of data to be sent across the network (Conry-Murray, 2008).  This can slow systems down if they rely on a heavy amount of data being sent.  Sending data to remote servers halfway across the world can also reduce the speed of the software being built.  Software with latency issues can appear to freeze while it waits for information to come across the network, and websites with latency issues can appear to load very slowly.
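
One common way to keep an application responsive despite that latency, sketched here with a hypothetical endpoint, is to make the request asynchronously and update the page only when the response arrives:

// Asynchronous request: the browser stays responsive while the
// (possibly distant) cloud server prepares its answer.
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://cloud.example.com/report", true); // true = async
xhr.onreadystatechange = function()
{
  if (xhr.readyState === 4 && xhr.status === 200)
  {
    document.getElementById("report").innerHTML = xhr.responseText;
  }
};
xhr.send();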

When developing programs that use clouds as part of their architecture, developers need to be aware of the amount of traffic they are sending and the limitations of their internal network.  Products developed on the company's intranet that are bandwidth heavy can have an impact on other systems that use the intranet.

Management
Unlike many modern advancements in technology, cloud computing did not spring from one person's idea; it has evolved slowly to meet the needs of businesses.  Cloud computing is a product that has been able to meet the needs of business, management, and developers (Snyder, 2008).  In today's competitive business environment, IT managers have embraced cloud computing as a way of cutting costs, reducing time to market, and using a platform that encourages growth.  Companies are using cloud computing to reduce costs and free up capital.  For example, SAP Aktiengesellschaft, the large SAP software company, is reducing the traditional licensing fees associated with SAP by giving its clients the choice to use a cloud service from the company rather than buy SAP outright (McGrath & MacMillan, 2009).
Cost
Capital expenditures make it difficult to start new companies or new projects, and many companies require lengthy budget approval processes to buy new software.  Cloud computing provides a solution to this problem: systems housed in the cloud require only operational expenditures because of their pay-as-you-go payment plans.  This makes it possible for management to skip the budget approval process and make the IT decisions that are important to their business.

Many cloud computing companies, such as Amazon, have a minimum threshold of requests before they start charging.  Amazon's SimpleDB does not charge until the application uses 25 machine hours or reaches a gigabyte of either storage or data transfer (Amazon, n.d.).  This allows startups to develop their websites without incurring any costs, and it allows many small projects and hobbyists to build systems at no cost at all.

Many IT managers like the fact that there are many providers for the same types of systems.  If, for example, a company finds a lower-cost provider for the same service, it can switch.  Because virtually all cloud services use SOAP, switching between providers is not difficult.

Organizational Strategy
Many CEOs like the utility that cloud computing gives them (Luthria & Rabhi, 2009).  Cloud computing's modular, structured approach helps IT management form an organizational strategy based on modular assets they already possess.  The modularity of cloud computing allows companies to grow more rapidly because they do not have to reinvent their software and architecture for every project.

Cloud computing also allows companies to maintain a smaller IT staff.  This can change the primary role of IT in many organizations from maintenance to development, which in turn can help companies grow faster.  Companies with less IT staff are able to be more flexible with their systems, and companies that use cloud computing find they do not have to wait for budget approval before implementing new systems that require servers.

Companies that wish to use cloud computing successfully must be able to live within the confines of the services being offered.  For companies that wish to grow a product's features, control of the development process may be a large factor: cloud computing provides only a limited resource.  The business stakeholders do not own the cloud service and have no real ability to make the service provider add features (Antonopoulos, 2009).

Organizational Structure
Companies that use cloud computing often do not require large maintenance staffs (Luthria & Rabhi, 2009).  These companies can use the savings to hire a larger development team and produce solutions even sooner.  This shift in the types of IT people a company needs often means products can be delivered faster, as personnel in the IT department move from maintenance to development.
Organizations that use cloud computing need to hire people who understand the legal implications of using it.  They may need lawyers to review service level agreements and other documents pertaining to the services provided, project managers who understand the limitations that cloud computing puts on the development team, and developers who program in a language supported by typical cloud providers.

Future Growth
A survey published in December 2008 found that 90% of companies are planning to grow their use of SaaS (O’Sullivan, 2009).  Service-oriented architecture is becoming the de facto software development methodology for many companies, and SaaS's standardized SOAP communication protocol makes it a good fit for future growth.

Many companies are using SOAP’s low overhead and cross-platform ability to develop systems that are compatible with newer devices such as smart phones and home gaming consoles.  In the future there could be many different types of systems, with varying levels of capacity, all needing to use similar software.  More than any system available today, cloud computing allows companies to plan for growth.
Cloud providers and clients are also finding ways of communicating other than SOAP.  Some cloud companies provide JSON support, which allows web browsers to communicate directly with the cloud (Udell, 2006).  Companies are also making software that allows people to use the cloud without SOAP at all: Bucket Explorer, for example, is a software product that allows anyone to put files on Amazon's S3 through a simple user interface (Charmbal, n.d.).
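As a small sketch of why browsers favor it, a JSON payload carries the same data as a SOAP envelope in a fraction of the bytes and can be used directly in JavaScript (the payload below is invented):

// Hypothetical response body from a JSON-speaking cloud service.
var payload = '{"files":[{"name":"trees.jpg","size":48230}]}';
var data = JSON.parse(payload); // JSON.parse is native in modern browsers
alert(data.files[0].name);      // -> "trees.jpg"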
Scalability
Cloud computing's on-demand model allows companies to use only the amount of services they need (O’Sullivan, 2009).  Many cloud computing providers offer a free account for development purposes, so a company can use a very inexpensive or free portion of a server while in development and then scale up as needed.
Scalability can be a large problem for organizations.  Increasing the number of servers may require new electrical lines, new ventilation and air-conditioning systems, additional rented space to house the equipment, additional server licenses, and the cost of the equipment itself.  Adding servers to a project can be a very costly and time-consuming task.  The cloud computing pay-as-you-go system requires no upfront costs; in most cases the cloud automatically increases the number of servers when it detects additional load, and the customer is then billed for what they use.
Internet-based companies find that they have highly fluctuating server loads.  These companies can easily receive 10 times their normal traffic when they are featured on Internet news sites.  For a company to accommodate such spikes without cloud computing, it would need 10 times the capacity it uses on a typical day, and during a typical day most of those servers would sit unused.  Large quantities of unused servers are a cost burden for many Internet companies.
Maintenance
With cloud computing, server maintenance is part of the service (O’Sullivan, 2009).  Companies that use cloud computing do not have to worry about upgrades or obsolete computers and operating systems: cloud providers free their subscribers from having to support their servers.
Since most cloud services support older versions of their products, companies that do not want to upgrade their existing software have no reason to.  There is no need to perform software maintenance unless a company wishes to upgrade its existing system, which allows companies to make changes on their own schedule.
Business Perspective
From a business perspective, cloud computing makes a good deal of sense.  It allows companies to quickly develop a system that is secure and can be easily integrated with systems both internal and external to the company.
Fast to Market
Cloud computing's componentized, modular approach allows companies to develop software faster, which can give them a competitive advantage.  Companies that can deliver new software and upgrades quickly are better able to meet the needs of their customers.  The ability to deliver new products to market faster has been especially desirable to website companies, where several firms usually compete with each other for users.

Mash-up
Products that are available in the cloud can be shared with business partners.  A service that has been combined with another to create an additional product is called a mash-up (Fichter & Wisniewski, 2009), and business people are realizing the advantage of mash-up technology.  Google, for example, offers its mapping software for free on the Internet.  CrimeReports.com gets data from Google Maps and from publicly available crime information to create a report for a given area.
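A minimal mash-up along those lines, using the Google Maps JavaScript API with an invented crime record, might look like this (it assumes the Maps API script is already loaded on the page):

// The coordinates and incident description are made up for illustration.
function showCrimeMap()
{
  var spot = new google.maps.LatLng(38.89, -77.03);
  var map = new google.maps.Map(document.getElementById("map"),
  {
    center: spot,
    zoom: 13
  });
  new google.maps.Marker(
  {
    position: spot,
    map: map,
    title: "Reported incident (sample data)"
  });
}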
Companies are able to provide part of their web software as a service.  Other companies can use these software products but usually are required to have a link back to the company that provided the service.  This has allowed many ecommerce companies to sell their products on other websites. 

Security
Security is a major concern for many companies, and cloud computing can present a risk by making data readily available through the Internet without firewalls.  Data is usually locked behind a username and an access key.  Within a company, if a person who has access to the company's databases leaves, the company can simply remove that person's access.  If, on the other hand, the company wishes to restrict a person's access to the cloud, it would have to change the username and password for the cloud, which would mean every system accessing the cloud could no longer communicate with it.  Software developers therefore need a way of storing cloud passwords that allows them to be changed across all their systems at a moment's notice.
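A minimal sketch of that idea: every system reads the cloud credentials from one shared configuration object at call time, so rotating a key is a single change rather than a hunt through every code base (the values below are placeholders):

// One place to rotate: no system hard-codes the key at its call sites.
var cloudCredentials =
{
  accessKeyId: "EXAMPLE_KEY_ID",     // placeholder
  secretAccessKey: "EXAMPLE_SECRET"  // placeholder
};
function currentAccessKey()
{
  return cloudCredentials.accessKeyId;
}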

Platform Agnostic
In a traditional software development environment, software is built for one specific operating system and software package.  For example, if software is built using SQL Server and the F# programming language, it would be available only to companies using the Microsoft Windows operating system.  Cloud computing, however, with its SOAP data transfer protocols, allows software developed on one operating system to be used by others.  Cloud service providers often have easy ways for developers to port their service to systems such as smart phones, and many provide software plug-ins for popular programming languages.  Companies that can use a popular programming language are able to develop software even faster with cloud computing.

Conclusion
Cloud computing is finding its way into many businesses, and it provides a strategic advantage over traditional software development methods.  From a developer's point of view, cloud computing provides a quick way to develop systems that are powerful, scalable, and already tested to some degree.  From a project management point of view, cloud computing aids the design process because it presents an easily quantifiable set of services, which helps the project manager break the project into small tasks.  Having a cloud system that has been used by many people also allows companies to reduce the number of testers needed for a project.  IT managers like cloud computing because it reduces the cost of software development and the staff needed to buy and maintain servers.  Business people like cloud computing because it is readily scalable: with its on-demand services it can grow with the company and meet the company's needs during spikes in traffic, reducing the threat of downtime at critical peak times.

Cloud computing is not without its risks, however.  It can increase the risk of data being leaked, and it can be network intensive: a company that relies on cloud computing must make sure its networks can handle the traffic.

Companies using cloud computing should understand the legal implications when choosing a service provider.  Unlike with internal software, if there is a problem with a cloud computing service, the company must rely on the cloud provider to fix the issue.  The company's contract determines what the service provider owes it, and the company must also be willing to sue the service provider if it does not live up to the contract.  Companies need to understand the risk of downtime at a service provider.
Cloud computing is not a panacea, and it is not the right solution for every company.  Companies that use cloud services cannot control the development of the services offered by cloud providers; if the user of a cloud service needs new features, it must wait for the cloud provider to develop them (Babcock, 2009).  When looking at cloud computing, a company must weigh the pros and cons of using each service.  Cloud computing provides strategic advantages over traditional software development for many companies, and while it is a great solution for industries such as the web industry, it may be a poor one for companies that need a more secure environment.  When deciding on cloud computing, IT professionals should compare the risks and rewards of developing with it.

References
Apicella, M. (2006). The new NAS: fast cheap & scalable. InfoWorld, 28(5), 31-34. Retrieved November 11, 2009, from ABI/INFORM Global.
Conry-Murray, A. (2008). Startup city. InformationWeek, 1212, 12. Retrieved November 9, 2009 from ABI/INFORM Global.
George, R. (2009). Cloudy, with chance of pain. InformationWeek, 1244, 31-32. Retrieved November 11, 2009 from ABI/INFORM Global.
Kho, N. (2009). Content in the Cloud. EContent, 32(2), 26-30. Retrieved November 11, 2009 from ABI/INFORM Global.
Udell, J. (2006, September). Amazon.com’s rent-a-grid. InfoWorld, 28(36), 38. Retrieved November 24, 2009 from ABI/INFORM Global.
Williams, S. (2009). Web-based technology. Professional Safety, 58, 8. Retrieved November 11, 2009 from ABI/INFORM Global.