Tuesday, July 30, 2013

Cloud Workers: How Cloud Computing Affects Testers


By Dugsong, via Wikimedia Commons
Cloud computing can present a unique set of challenges to testers.  Not every system is built the same.  Some are more stable and contain fewer bugs than others.  Some systems are similar to traditional software development while others represent a completely new way of doing things.


Some systems, such as Infrastructure as a Service (IaaS), are similar to traditional software development systems.  In these familiar environments testers do not have to change much about their methods of testing.


Other systems, such as Platform as a Service (PaaS), can represent entirely new paradigms.  Multitenant systems and databases, for example, exist across multiple machines.  Traditional test automation can involve inserting and then manipulating data.  If the testing system does not pin itself to one database, replication latency may cause unexpected errors.  PaaS offerings often do not have the same testing hooks that many mature software languages do.
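Test code written against an eventually consistent cloud store can hedge against this latency by polling instead of asserting immediately after an insert.  A minimal sketch in Python; the `db.find` call in the trailing comment is hypothetical:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Useful in test automation against multitenant, eventually consistent
    stores, where a freshly inserted record may not be visible right away.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Hypothetical usage: `db.find(record_id)` stands in for a query against
# the cloud data store; the record may lag behind the insert.
# record = wait_until(lambda: db.find(record_id))
```

The timeout keeps a genuinely broken system from hanging the test suite forever, while the polling loop absorbs normal replication lag.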

Software as a Service (SaaS) can suffer from high loads.  The cloud provider may also not give developers a test environment to work with.  When testers do find bugs, it may be up to the cloud provider to fix them, and the provider may not be willing to fix the service before the system is due to go into production.

Many cloud systems change versions outside of the SDLC of the current project.  Systems that are tested and working may break without notice.  Systems with governors (provider-imposed resource limits) may work at the time of release but break under strain.


In short, testers bear a great burden when working with cloud computing.  They need to find different ways of testing.  In the past, testing mainly happened before deployment into production, and the system was expected to keep working the same way until the next release.  Cloud computing changes that.  Systems can easily break after deployment, changing the way testers need to test.

Monday, July 29, 2013

Cloud Workers: How Cloud Computing Affects Developers


By Nicolas Goldberg,via Wikimedia Commons     
For developers, cloud computing represents a new set of tools that can make development faster by giving them ready-made tools and components.  Not all components are made the same.  Some platforms, such as Microsoft's Azure and Amazon's AWS, give developers a set of tools that most would not otherwise have access to.  These tools can come with caveats that can frustrate developers and delay projects.


Developing in the cloud gives developers access to more tools than ever before.  Software developers can take advantage of elastic computing to scale their applications.  They can globally distribute their assets to increase the speed of their websites.  Cloud computing shortens time to market for many applications.


Cloud computing does come with limitations.  Some of these limitations are due to inherent factors, such as the latency of transmitting information over the internet, while others are imposed directly by the cloud provider to prevent one user from taking down the system for everyone.  Developers need to make systems "cloud ready".  A cloud-ready system is one that is tolerant of the limitations of cloud computing.

Over the past 20 years the software development industry has evolved better systems and methods.  Design patterns such as the abstract factory pattern help developers build complex systems that meet specific needs, and development methodologies such as test-driven development have helped companies cut down development time.

Building in the Cloud

Systems that are not cloud ready can break under strain.  The latency of the internet can cause systems to become prohibitively slow.  In traditional development the software is tested and then put into production with a limited smoke test to make sure it works.  With cloud computing, systems can fail long after they are deployed.  Software developers need to think critically about the limitations of the cloud they are using and adjust their development methods in a way that will work with it.
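One common "cloud ready" adjustment is to treat transient failures (latency spikes, throttling by provider governors) as normal and retry with exponential backoff rather than failing outright.  A minimal sketch; real cloud SDKs raise their own exception types, so `ConnectionError` here is a stand-in:

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky cloud call with exponential backoff and jitter.

    `operation` is any zero-argument callable that makes the cloud call;
    `ConnectionError` stands in for whatever transient error the
    provider's SDK actually raises.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff (0.5s, 1s, 2s, ...) plus random jitter
            # so many clients do not retry in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

The jitter matters: without it, a brief outage can cause every client to retry at the same instant and re-trigger the provider's governors.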

Software developers need a way of monitoring their cloud-based software to make sure that events such as strain or system upgrades do not break it.  With traditional software development, the system is largely forgotten after deployment until the next release.  Cloud systems need monitoring in production.  Simply passing tests in the development environment will not suffice in the cloud world.
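A production monitor can start as simply as a scheduled probe of a health endpoint.  A minimal sketch using only the Python standard library; the `/health` path is a hypothetical convention, and real services vary in what they expose:

```python
import urllib.request

def check_health(url, timeout=5):
    """Return True if the service's health endpoint answers with HTTP 200.

    Any network failure or non-2xx response counts as unhealthy; a real
    monitor would also record latency and alert on repeated failures.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, timeouts, refused connections
        return False
```

Run on a schedule (cron, or the provider's own monitoring service), this catches the failures that happen long after deployment, which a one-time smoke test never will.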

Tuesday, July 16, 2013

Cloud Workers: How Cloud Computing Affects Project Managers

Project Manager

By Deborah Erhart, via Wikimedia Commons     
Cloud computing is a new paradigm for IT professionals.  For project managers it brings new tools, and with them new possibilities.


Cloud computing services come loaded with a predefined set of features, giving access to functionality for no additional development time.  Many cloud computing providers, such as Azure and Amazon's AWS, have templated or pre-defined systems that reduce the time to develop projects.  Others, such as Salesforce, have an app store that allows the easy addition of add-ons for low or no cost.

Cloud computing can be empowering to less technical workers such as project managers by giving them the ability to use and administer software without always involving other IT workers.  Some cloud systems allow non-technical workers to add on extra components the same way they would install an app on a phone.


While cloud computing has great benefits for organizations, there are some pitfalls that project managers need to know about to help them plan and reduce risk in their projects.  Software as a Service (SaaS) from many cloud providers can have updates that affect the project.  Cloud services that work one day can break the next.  Someone within the organization needs to be responsible for keeping track of the emails from the cloud provider about scheduled outages, updates, and bug fixes.

When using cloud software, half of the system is controlled by someone else.  What may be an emergency to an internal IT organization may not be to the cloud provider.  Many cloud providers have a 72-hour turnaround time for their help desk, and fixes or essential updates may take months to happen, if they happen at all.

Cloud computing is also a relatively fixed asset.  With traditional software projects, development teams can build highly flexible software.  Cloud computing offers pre-defined resources that are inflexible.  Project managers need to learn the limits of the system they are purchasing before deciding to go with it.


Cloud computing can help organizations succeed, but it requires PMs to plan for delays caused by the cloud provider.  Project managers also need to find the limitations of the cloud system before deciding to go with it.

Sunday, June 30, 2013

Cloud Workers: How Cloud Computing Affects IT Managers

What businesses need to know before getting into the cloud.

In recent years cloud computing has received a lot of hype.  Many IT organizations are succeeding with cloud computing while others flounder.  While the cloud provides some great advantages for many organizations, it also has its pitfalls, such as availability, latency, and rigid components.  This article talks about the pros and cons of cloud computing from the perspectives of different people within the organization.

IT Manager

By Greg Vojtko, via Wikimedia Commons     


IT managers look to cloud computing for many reasons. It can reduce the burden on IT to be experts on all of their systems. It can offset the need to purchase new hardware. It can help them focus on the IT assets that are important to the company rather than spend time dealing with peripheral systems such as email. Many small to medium sized companies do not have the financial resources to hire a person who is an expert on each of their various systems. Rather, they hire IT people who are generalists. They may also not have the resources to pay for a secure data center with guards and security audits. Some cloud providers offer geographic redundancy to alleviate the impact of a natural disaster. Some cloud providers automatically scale their services to meet the needs of the client. For IT managers to do this without cloud computing, they would have to keep excess servers on hand to meet the variable need.


For IT managers the risk in using cloud computing is in the responsiveness to IT issues. SLAs in cloud computing favor the cloud provider. IT teams are often called upon to handle emergencies, but with cloud computing the provider determines what constitutes an emergency. An issue that once kept the company's IT team up all night may now wait 72 hours just for the problem email to be answered.

Hero or Villain

IT people are often called upon to be the heroes of the organization. They will often stay late or work weekends to make sure that the company's IT systems are working. Many business people expect a certain level of service from their IT people. Cloud computing is essentially outsourcing a part of IT. The level of service is determined by the cloud provider and the SLA. In cloud computing, individual companies generally do not negotiate their SLAs with the cloud provider. The cloud provider writes an SLA that keeps its systems inexpensive and limits its liability for damages.

If expectations are not set with the business, the IT team can seem like the villain. Problems with cloud systems are submitted to the cloud provider, and often the IT worker can do no more until they receive a response. If the business expects IT workers to stay late and work weekends to fix a problem, and instead IT submits a request to the cloud provider and waits 72 hours, there may be hurt feelings. Custom systems also provide greater flexibility and capability in what IT can do. Cloud systems are often rigid and fixed in what they can do. This often forces IT to say no to business requests or take a long time to develop a workaround.

IT workers need to be versed in numerous systems. They need to be knowledgeable in such dynamic fields as computer repair, networking, email systems, database systems, websites, and all of the software the company has bought, such as SharePoint and MSCRM. Cloud computing can free IT workers from maintaining systems that are not core to the business. But not all cloud systems are maintenance free. Some are complex and require administrators who are versed in them, creating an additional burden on IT workers.

The key to understanding the impact of cloud computing on IT is reading the documentation before starting a cloud project. Businesses should be aware of the limitations that cloud computing can create. They should also be aware that they are outsourcing part of their IT and must live with the customer service stated in the SLA. Finally, a cloud system should be evaluated to see whether it frees IT up or creates additional burdens on IT.

Friday, May 31, 2013

Cloud Workers: How Cloud Computing Affects Business Stakeholders

In recent years cloud computing has received a lot of hype.  Large established companies such as Microsoft are going “all in” with the cloud.  New cloud companies such as Salesforce are seeing tremendous growth.  Cloud listing websites such as cloudbook.net have a continuous increase in new providers.
Cloud providers list the many benefits of cloud services.  Cloud services can reduce capital expenses such as servers and data centers by providing a pay-as-you-go model, shifting those costs to operational expenses.  Cloud computing is elastic; it can scale with demand.  Companies with variable demand do not have to purchase extra servers that go unused.

While the cloud provides some great advantages for many organizations, it also has its pitfalls, such as availability, latency, and rigid components.  This article talks about the pros and cons of cloud computing from the perspectives of different people within the organization.

Stakeholder

By Lane Hartwell, via Wikimedia Commons
Business people look to cloud computing for systems that are lower cost and faster to produce. But the cloud can also be limiting for an organization. The question business people need to ask is "is my company cloud ready?"


Cloud computing gives business people quick access to software. Many cloud software offerings can be tried before they are bought, reducing the need for a selection process. Cloud computing relies on a subscription or pay-as-you-go model. Traditional software requires purchases of hardware and software before the project is started. These capital expenses can be quite large for the business. Cloud computing instead uses operational expenses. Using operational costs allows companies to start new projects without having to allocate large amounts of capital. This can help companies move more quickly.


The fundamental difference for business people is that cloud computing software is often rigid. Custom-built software is extremely flexible. Software built in house can be changed to meet changing business needs or changes in strategy. Software bugs can be easily fixed and issues addressed. Cloud computing companies can handle thousands of clients, and they are often not responsive to the needs of individual subscribers.

Finding the Balance

Cloud computing can allow businesses to have more IT assets to help grow their company faster. It can also limit the growth of the company by locking it into a rigid system that does not meet its needs. When deciding if the cloud is right for the business, one must first decide if the business can live with set IT assets or needs custom software to grow. Some businesses follow a typical strategic route. For them the cloud offers tools that are able to meet their needs. Other companies have a strategy that is not typical. A company with an out-of-the-box strategy may not be suited to an in-the-box software solution such as cloud computing.

Monday, April 1, 2013

Software as a Service

Advancements in software development techniques have led to companies being able to sell or give away programmatic interfaces that allow developers to use their services.  Application programming interfaces (APIs) have created a World Wide Web that is programmable.  This programmable Web allows other companies access to a wide variety of tools.
Companies that are able to use Web services and APIs are seeing a reduction in cost, quicker time-to-market, and a competitive advantage over their competition.
Not every company gains a competitive advantage from using APIs.  IT managers looking at APIs must consider many factors to determine whether or not these types of services will benefit their company.

Before the programmable Web revolution, companies had to develop software, often from basic building blocks, into the finalized project.  This software was often tied down to the operating system it was built on.  If a company wanted to make a geographic information system they would need to start with the basic building blocks and work up from there.  These tasks could take months or years.  If they wanted to correlate their system with crime statistics they would have to make sure that their geographic information system was compatible with crime data provided by the government.  If the crime data was stored in an Oracle database and the geographic information system used a MySQL database, then the two databases often would have a great deal of trouble communicating with each other.  Today, with readily available APIs from companies such as Google and from the government, this type of data is no longer tied to a specific system.  Websites such as crimereports.com can be built in weeks, not years, at very low cost.

With the release of Windows 7, Microsoft has moved many of the services that were traditionally on a computer to the Internet (“Clash of the clouds; Cloud computing“, 2009).  Services such as e-mail and social networking are provided via an API that the operating system can use.  This API, however, is not limited to the Windows 7 platform.  The same information accessible to a desktop computer can be accessed through a smart phone or a video game console.  Any device that has Web access can potentially use these services.  This frees users from the computer and allows them to use programs on any number of devices.  They can even log into other computers and have their information available to them.

Microsoft is launching a new platform for companies to develop APIs (Microsoft, n.d.).  Microsoft's new platform, called Azure, allows companies to run virtualized servers.  In the same way that the Windows 7 operating system allows people to access their systems remotely, Azure allows companies to host their programs remotely in a way that is accessible to a large number of people.

Amazon has become a leader in the sale of APIs (Amazon, n.d.).  Companies can now build entire websites in which everything is hosted on the Amazon platform.  Amazon's SimpleDB allows companies to store their data remotely on Amazon's servers.  Amazon's Simple Storage Service (S3) allows companies to store large amounts of data.  Amazon provides low cost or free API services to many companies.  The reduction in cost coupled with the speed of delivery and low maintenance makes the Amazon API solutions desirable for many new companies.

Google has developed Google App Engine, a hosting service for API systems (Arnold, 2009).  That, combined with the dozens of API systems that Google gives away for free, makes Google a leader in API delivery systems.

Many websites have integrated Google Maps’ freely available API.  Websites called mash-ups combine information from several APIs.  Google Maps has been a popular API because it allows people to visualize data.  For example, Redfin.com combines Google Maps with real estate data to create a map of available homes for sale.

Google also offers a search service that allows companies to make a custom Google search on their website.  Google’s search API allows companies to customize the look and feel of the results.
Google offers programmatic access to its Google Docs program.  This allows companies to programmatically create web forms and spreadsheets easily.  Google is using this API to allow people access to its database API, Google Base.  Through Google Base people are able to manage other Google products such as Google’s image storage system Picasa.

The Google Merchant API gives companies the ability to submit items for Google Shopping.  Through the API merchants can post products that they want represented in the Google product search.

The World Wide Web became popular in the 1990s.  It is only in the past 10 years that we have seen companies offering programmatic interfaces.  Unlike many internet-based products, APIs are not the idea of one organization.  They have been developed slowly by many organizations that found a need they could fill.  Literature on the subject of APIs usually focuses on cost-benefit versus risk analysis.  Companies that use APIs typically have a competitive advantage over traditional companies.  APIs, however, are not for every company.  Organizational structure, privacy concerns, and disaster planning are things to consider when thinking about using APIs.

Using APIs has become a cost-effective means for many startup Internet companies.  Paid APIs are billed on a pay-as-you-go system, which means companies are able to start up at very low cost and grow as the revenue comes in.  A traditional Internet company would need servers, a server facility, licenses, and support staff just to get the website up and running.  This capital expense, without any revenue coming in, can be a barrier to entry for many entrepreneurs (O’Sullivan, 2009).
Using APIs, companies can host their websites remotely, use a service such as SimpleDB to house their database, and use pre-existing services that allow them to build their websites faster.  Products such as Amazon’s S3 allow companies to store large amounts of files without having to worry about bandwidth limitations and server storage space (Smith, 2009).

Cost of Maintenance
Companies that house their products on the Web can do so without having to hire the support staff that traditional companies do (Smith, 2009).  For example, database administrators are responsible for backing up data, indexing data for faster delivery, replicating data to other servers, maintaining the hardware, and many other tasks.  With Amazon's SimpleDB this is handled by Amazon itself.  System backups, indexing, and other tasks that database administrators typically do are done automatically by Amazon's database.

In 2007 Carbonite, a storage provider, lost the data it housed for 7,500 customers.  Failures like these, while rare, still happen.  When a customer is looking at a potential API service provider they need to look at the level of service they are paying for.  The document that specifies what the service provider is responsible for is called the service level agreement (SLA) (Zielinski, 2009).  An API provider should have an SLA that outlines its responsibilities.  When choosing an API provider, business people need to look at the factors they would care about in their own organization (Zielinski, 2009).

Service level agreements should spell out exactly what the API provides in terms of security, auditing, and disaster recovery.  Should a service provider fail to meet its responsibilities as outlined in the contract, it should be liable for losses (Ria & Chukwua, 2009).
Developing software and web pages that use APIs requires developers to architect their software in a way that is compatible with APIs.  This usually means breaking their software into logical parts and then componentizing their development efforts.
There are two ways that API service providers generally allow others to interface with their systems.  The first is an XML-based, cross-platform Web communication standard called Simple Object Access Protocol (SOAP).  This allows developers working on different operating systems, in different programming languages, to share components easily.  The second is for service providers that cater to web pages using a technology called Ajax, a client-side, browser-based technology.  These API providers use the Web standard JavaScript Object Notation (JSON).  SOAP is used primarily to transfer data from server to server, while JSON is used to transfer data from server to webpage.
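The difference in weight between the two formats is easy to see by building the same request both ways.  A sketch in Python; the operation and field names (`getWeather`, `city`) are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

def as_json(city):
    """Compact JSON body, as a browser-facing (Ajax) API might accept."""
    return json.dumps({"getWeather": {"city": city}})

def as_soap(city):
    """SOAP 1.1 envelope carrying the equivalent server-to-server call."""
    envelope = ET.Element(
        "soap:Envelope",
        {"xmlns:soap": "http://schemas.xmlsoap.org/soap/envelope/"},
    )
    body = ET.SubElement(envelope, "soap:Body")
    call = ET.SubElement(body, "getWeather")
    ET.SubElement(call, "city").text = city
    return ET.tostring(envelope, encoding="unicode")
```

For the same one-field request, the SOAP envelope is several times the size of the JSON body, which is exactly the XML verbosity and latency concern discussed below.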

IT Architecture
Companies that use APIs are forcing themselves to take a modular approach to software development (Luthria & Rabhi, 2009).  While this can get their products to market quicker, it can also lead to other issues.  Service oriented architecture, for example, does not allow programmers to integrate their software as a whole.  Each functional part of the program must be broken down into components that can then communicate with each other via SOAP.  SOA transactions can happen both within the organization and externally with an API provider.  SOA uses XML protocols to send data back and forth between the different functional parts of the program.  XML can be very verbose in its transmission of data.  This extra information passing through the company’s internal and external network can cause problems with network latency.  Latency can lead to software freeze-ups and web pages that take too long to load.  Working with APIs, developers need to be mindful of the amount of data transferred between systems and the capacity of the network.

Support staff at the company providing APIs perform all the maintenance and updates necessary to keep their systems secure and running at peak performance (O’Sullivan, 2009).  In a traditional company this would be handled by the IT staff.  Offloading this task can reduce the number of IT staff that a company needs.
Versioning of APIs can be a difficult issue for API providers.  Over the years an API provider can have several different versions of its service available.  Issuing updates can break existing clients’ software.  Many API providers force software developers to choose a version that they wish to use.  This prevents version conflicts and allows projects to target one static set of services offered by an API provider.  Because new versions are not forced upon API clients, when a new version comes out it is up to the client to choose whether or not they wish to upgrade.  New versions of an API may contain bug fixes and new features, but they could also destabilize a company’s current software product or webpage.  Because the client does not control when new versions come out, there may be no development budget to implement and test them.  Because providers and clients are often out of sync in versioning and development efforts, clients need to be flexible enough to make changes whenever the need arises.
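In practice, pinning a version means stating it explicitly in every request so that upgrades happen only when the client chooses to test them.  A minimal sketch; the base URL and version scheme here are hypothetical (real providers variously put the version in the URL, a header, or a query parameter):

```python
# Pinned in one place so an upgrade is a deliberate, testable change,
# not something the provider decides for us.
API_BASE = "https://api.example.com"   # hypothetical provider
API_VERSION = "v2"                     # upgrade only after regression testing

def endpoint(resource):
    """Build a request URL against the pinned API version."""
    return "%s/%s/%s" % (API_BASE, API_VERSION, resource)
```

With this in place, moving to a new API version is a one-line change that can go through the normal development and test cycle.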

Traditional software is constrained to an operating system and a set of hardware (O’Sullivan, 2009).  APIs, however, often defy these constraints so they can gain more clients.  Many APIs use the SOAP protocol, which means they work not only on websites and PC software but also on other devices such as smart phones and game consoles.

Whether a company uses an API that is free or a paid subscription, support of the product should be considered.  Many companies offering free APIs, such as Flickr, do not offer prompt support.  Rather, they offer a wiki and a problem reporting form (O’Sullivan, 2009).

Implications for IT
Using APIs can be a strategic advantage for many companies.  Using APIs requires that the entire IT staff, from management to testers, learn a different way of doing business.  APIs rely on outside parties to do part of the work.  IT teams must learn how to form a relationship with API providers.

Leadership among companies that use APIs must keep a portfolio of APIs that helps the company gain a strategic advantage over the competition.  Leaders at these organizations need to know what makes a strong API portfolio.  Factors such as cost, latency, security, and contracts are all important.

Using APIs can have a large cost advantage over traditional software development efforts (Truitt, 2009).  Paid APIs are generally built on a pay-as-you-go basis.  This type of system can be a great benefit for companies that are starting out.  Traditionally, a company that is starting out would have to purchase servers, server space, and user licenses, hire IT staff to maintain the servers, and pay for a number of other capital intensive products.
Using APIs allows companies to pay for their systems as money comes in.  By paying for only what they need, companies are able to better evaluate the cost effectiveness of the API.  Many API companies offer a no-cost trial of their product.  This allows companies to develop software risk and cost free.  By eliminating the large amount of capital needed to start a project, companies are better able to implement projects and pay for them with operational costs.

API providers typically have a server farm that allows them to scale their products rapidly.  For many companies there is a cost barrier to implementing a server farm. 

Internet companies especially can have large fluctuations in traffic (Waxer, 2009).  For these companies to remain operational during spikes in traffic they traditionally needed excess server capacity.  It is not uncommon for websites to receive 10 times their normal traffic due to a front page story on a site like digg.com.  Companies that do not use APIs would need 10 times the capacity that they use on a daily basis.  This may mean 10 times the number of servers and 10 times the server space.  This can be incredibly cost ineffective for these types of companies.  APIs provide a solution to this problem.  APIs automatically scale as demand spikes, and the client is only charged for what they use.  This allows companies to seamlessly handle 10 times their typical traffic without incurring performance issues and cost concerns.
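The arithmetic behind this can be sketched directly.  The demand numbers and price below are invented for illustration, not taken from any provider's pricing:

```python
def fixed_capacity_cost(hourly_demand, cost_per_server_hour):
    """Traditional model: provision enough servers for the peak hour
    and pay for them every hour, used or not."""
    peak = max(hourly_demand)
    return peak * cost_per_server_hour * len(hourly_demand)

def elastic_cost(hourly_demand, cost_per_server_hour):
    """Elastic model: the provider scales server count to each hour's
    demand, and the client pays only for what actually ran."""
    return sum(hourly_demand) * cost_per_server_hour

# Illustrative day: a quiet site that spikes to 10x its baseline
# for three hours after a front-page story.
demand = [2] * 21 + [20, 20, 20]   # servers needed per hour
# fixed_capacity_cost(demand, 1.0) -> 480.0
# elastic_cost(demand, 1.0)        -> 102.0
```

Under these invented numbers the fixed-capacity approach costs nearly five times as much, because it pays for peak capacity around the clock.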

One important benefit that APIs provide to businesses is the ability to integrate their product offerings with other websites.  E-commerce websites are able to post their products on Google Shopping and Amazon through APIs (McCreary, 2009).  Companies are finding that developing collaborative relationships with API providers can help them increase their revenues.

Legal Ramifications
While using APIs can give companies a strategic advantage, APIs are not suitable for every organization.  When using an API that is involved with records management, companies must look at regulatory laws that govern the control of information.  The health and financial industries are fraught with regulatory laws that may make it impractical to use APIs.
Companies looking into using APIs for data storage should make sure that the service provider has a plan for backup, replication, and disaster recovery.  Audit controls are another important factor in choosing a company to store data.  Internal theft happens at many companies, and audit controls can help companies track potential embezzlement.

FRCP Related to Discovery
The US Federal Rules of Civil Procedure (FRCP) state that companies in civil trials must be able to provide all electronically stored information (Gatewood, 2009).  Companies that store their data in the cloud with services such as Amazon's SimpleDB must look at whether or not they are able to quickly get a copy of all of their data.  Amazon's SimpleDB does allow querying of its database but has a row limit of 5000 results (Amazon, n.d.).  This could create a large burden for a company trying to retrieve all the data from its databases stored in SimpleDB.  A company that expects it may face civil trials may wish to think about storing data in a more traditional database.
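Draining a paginated store like this typically means following a next-token chain, page after page.  A sketch of that loop; `query_fn` here is a stand-in for a SimpleDB-style Select call, not the real AWS SDK interface:

```python
def fetch_all(query_fn, expression):
    """Drain a paginated query by following its next-token chain.

    `query_fn(expression, next_token)` stands in for a SimpleDB-style
    Select call: it returns at most one page of rows plus a token for
    the next page (None when the result set is exhausted).
    """
    items, token = [], None
    while True:
        page, token = query_fn(expression, token)
        items.extend(page)
        if token is None:
            return items
```

For a discovery request covering millions of rows, this loop turns into thousands of sequential round trips, which is the retrieval burden described above.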

Regulatory Requirements
Personally Identifiable Information (PII) is subject to regulations on how it is stored and delivered to people.  These regulations vary by country.  The place that hosts the server and where the data is stored is usually subject to the laws within that country (Gatewood, 2009).  API distributors often have several data centers around the world.  These data centers help protect against regional disasters and can also be used to serve increased demand within a certain region.  Companies that send PII to API servers are subject to the laws of the countries where those servers are housed.

The Sarbanes-Oxley Act of 2002 made transparency in accounting mandatory for many organizations.  These companies must provide a way of locking shareholders out of financial information during certain times (Gatewood, 2009).  Many APIs rely on a single user name and password to access the data.  For companies that need to lock down access during certain times, this may not be possible with API providers.  Sarbanes-Oxley also requires that companies audit people's access to data.  Companies also need to be able to remove access for people who no longer work at the company.  With its single username and password combination, data housed in the cloud may not have this capability.

Research Opportunities
Current research in the area of APIs has been limited to cost-benefit analysis and risk-reward analysis.  Most of these analyses are done through qualitative methodologies.  There are research opportunities in this field for quantitative analysis.  Researchers could, for example, analyze the cost savings of companies that have switched from traditional software development to cloud computing.  Researchers could also compare the overall development time it takes companies to complete comparable projects with and without APIs.

Using APIs as part of a software development project comes with risks and benefits.  Understanding both is key to making API decisions.  APIs come with the promise of faster software development and decreased development cost and time.  APIs can also help companies integrate with partners more easily; in today's business world, partnerships and synergy can make or break a company.  APIs also allow a company to easily port its software to many other platforms such as smartphones and game consoles. 

There are also risks associated with using APIs.  Companies that wish to store PII or financial data through an API may be at risk of violating laws and running an insecure website.  Companies using APIs must be able to quantify the risks and keep an API portfolio.  A company should understand the contracts for APIs when choosing an API provider.  These contracts should carry the same requirements that the business would impose internally if it were developing the project itself.  Backups and disaster recovery must be considered before deciding on an API provider.

Companies must also look at their own internal architecture to see if having an external API would cause slowdowns on their network.  APIs require that companies send data across the Internet.  If the company does not have a fast internal network this may cause an unacceptable level of delay when running applications. 
Amazon (n.d.). Amazon SimpleDB. Retrieved November 13, 2009, from http://aws.amazon.com/simpledb/
Arnold, S. (2009, July). Google’s App Engine: getting serious about the enterprise market. KM World, 26.
Clash of the clouds; Cloud computing (2009, October 7). The Economist, 393(8653), 80.
Luthria, H., & Rabhi, F. (2009). Service oriented computing in practice – An agenda for research into the factors influencing the organizational adoption of service oriented architectures. Journal of Theoretical and Applied Electronic Commerce Research, 4(1), 39-56.
McCreary, B. (2009). Web collaboration - How it is impacting business. American Journal of Business, 24(9), 7-9.
Microsoft (n.d.). Windows Azure platform. Retrieved November 5, 2009, from http://www.microsoft.com/windowsazure/
O’Sullivan, D. (2009). The Internet cloud with a silver lining. The British Journal of Administrative Management, 20-21.
Ria, S., & Chukwua, P. (2009). Security in a Cloud. Internal Auditor, 66(4), 21-23.
Smith, R. (2009). Computing in the cloud. Research Technology Management, 52(5), 65-68.
Truitt, M. (2009). Editorial: Computing in the “Cloud”. Information Technology and Libraries, 28(3), 107-108.
Waxer, C. (2009, February). Supercomputers for hire. Fortune Small Business, 19(37), 37.

Architecting with SaaS

“Is it hard? Not if you have the right attitudes. It's having the right attitudes that's hard.” 
— Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry into Values

The problem with SaaS
Software as a Service (SaaS) is the development paradigm of cloud computing. Research shows that working with traditional application programming interfaces (APIs) causes a great deal of problems for software developers (Robillard, 2009). APIs can be hard to learn, have incomplete documentation, and use different development paradigms. SaaS can additionally have predefined limits, network latency issues, and represent a different programming paradigm (Microsoft Exchange Online, 2012; Salesforce, 2012).

Software development teams are finding that working with SaaS can be difficult. Project managers need to know the contractual obligations of the SaaS provider, plan for upgrades of the provider's system, and deal with the additional risk a changing, unknown factor adds to their project (Karakostas, 2009). Software developers face the challenges of learning new sets of APIs, dealing with limitations, and working with different languages and software development paradigms (Lawton, 2008; Salesforce, 2012). Software testers must learn to deal with systems that can be inconsistent, such as cloud-based systems built on an eventually consistent data model.
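The flakiness an eventually consistent store causes in test automation can be absorbed by polling instead of asserting immediately after a write. A sketch, with a toy store standing in for a real cloud database — the class and helper names are assumptions, not any provider's API:

```python
# Sketch: a test helper that tolerates eventual consistency by polling
# until the written value becomes visible (or a deadline passes).
import time

class EventuallyConsistentStore:
    """Toy store whose writes become visible only after a short delay."""
    def __init__(self, delay=0.05):
        self.delay = delay
        self.pending = {}

    def put(self, key, value):
        self.pending[key] = (value, time.monotonic() + self.delay)

    def get(self, key):
        value, visible_at = self.pending.get(key, (None, 0))
        return value if time.monotonic() >= visible_at else None

def assert_eventually(fn, expected, timeout=1.0, interval=0.01):
    """Poll fn() until it returns expected, failing only after the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fn() == expected:
            return True
        time.sleep(interval)
    raise AssertionError("value never became consistent")

store = EventuallyConsistentStore()
store.put("order-1", "shipped")
# A naive assert right here could fail; polling absorbs the replication lag.
print(assert_eventually(lambda: store.get("order-1"), "shipped"))  # True
```

Traditional insert-then-verify test automation assumes the first read sees the write; against a multi-server store, that assumption is exactly what breaks.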

Architecting SaaS
Quality attributes to consider:

Design Qualities
- Maintainability: the ability to efficiently maintain the system.
- Reusability: reusable code reduces waste and promotes faster development times.

Runtime Qualities
- Responsiveness: how responsive the application is to changes.
- Resource usage: the amount of resources a system uses.
- Scalability: the ability of a system to handle large amounts of requests.
- Usability: ease of use and learnability.
- Security: how secure the system is.

Cost Effectiveness
- Return on investment: the monetary return on the cost of the system.
- Interoperability: the ability of the system to interact with other systems within the organization.
Architecting software solutions lets developers achieve some measure of quality. There are different schools of thought on software architecture. Microsoft, in its architecture guidelines, primarily recommends 3-tier/n-tier architecture for most applications and client/server architecture for systems with high performance needs, such as websites or applications that deal with large amounts of data. Using design patterns is another school of thought. The concept is that design patterns (conceptual programming abstractions) allow programmers to rise above the details of implementation and discuss applications in broader terms. Another school of thought is using different programming languages to achieve efficiency. A software developer might use a functional language such as F# for scientific applications, an object-oriented language such as C# for business applications, and a set-based language such as SQL to handle data sets. There is also service-oriented architecture (SOA) and web services, which are web based and platform agnostic.
Architecture choices are generally based on perceived quality. One organization that values interoperability of its systems may choose SOA, while another that favors high-performing websites may choose client/server architecture. An organization that values extensibility and security may favor n-tier architecture. Software developers who want an easily testable application may choose a simple 3-tier design over a design-pattern-heavy one.

Architecture may also be part of an overall business or IT strategy. Companies that wish to collaborate with other companies may choose to use a web service architecture. Companies wishing to build in their systems the ability to reuse systems in other systems may choose SOA.

Cloud providers also have needs that can be addressed by architecture. The first is to follow a familiar SOA type of system. SOA systems generally do not deal with bulk data well; they can be slow over an intranet, and over the Internet they perform even worse. Second, a cloud provider may need to account for a very large amount of traffic. It may choose a distributed database system that is not always consistent across all servers. Finally, a service provider may need to protect itself from overuse by customers.
Some providers such as Salesforce (2012) give users of their APIs a choice between a traditional API that supports one-by-one transactions and a bulk API that supports bulk transactions. Traditional APIs let developers build software with architectures such as design patterns, n-tier, and object-oriented programming. Bulk APIs are better suited to architectures where data is already represented in sets, such as set-based programming and client/server programming.
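The trade-off between the two API styles is easiest to see as round-trip counts. A sketch with hypothetical `insert_one`/`insert_bulk` functions — not Salesforce's actual API — counting network calls:

```python
# Sketch contrasting the two API styles: a record-at-a-time call suited to
# n-tier/object-oriented code, and a bulk call suited to set-based code.
# Both functions are stand-ins; only the round-trip count is the point.

network_round_trips = 0

def insert_one(record):
    global network_round_trips
    network_round_trips += 1      # one round trip per record

def insert_bulk(records):
    global network_round_trips
    network_round_trips += 1      # the whole set ships in a single round trip

rows = [{"id": i} for i in range(500)]

for row in rows:                  # traditional API: 500 round trips
    insert_one(row)
one_by_one = network_round_trips

network_round_trips = 0
insert_bulk(rows)                 # bulk API: 1 round trip
print(one_by_one, network_round_trips)   # 500 1
```

Over an intranet the difference may be tolerable; over the Internet, where each round trip carries real latency, it dominates.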

For the cloud provider, the choice of what type of API to offer may be a function of business needs. A provider may need all of its systems to be consistent before allowing another transaction. A bulk API would allow clients to send data over and wait until it has been fully processed before being allowed to proceed.
When working with SaaS, it helps to know the architecture of the provider. Does it have performance governors? Performance governors can be set by the cloud provider or simply be a byproduct of using the Internet. Does the architecture of the cloud provider play well with the architecture of the systems currently in place? A company using a workflow system is using a procedural programming architecture; this type of system is good for individual transactions but not bulk transactions. In the end, the right architecture is one that works well with the needs of the company and the SaaS API.
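A client can also respect a governor proactively by tracking its own call rate and backing off before the provider rejects it. A sketch of a rolling-window limiter; the 3-calls-per-second quota is an invented figure for illustration, not any provider's real limit:

```python
# Sketch: respecting a provider-side governor from the client with a
# rolling time window. Names and quota figures are hypothetical.
import collections, time

class GovernedClient:
    def __init__(self, max_calls=3, per_seconds=1.0):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls = collections.deque()   # timestamps of recent calls

    def request(self):
        now = time.monotonic()
        while self.calls and now - self.calls[0] > self.per_seconds:
            self.calls.popleft()           # drop timestamps outside the window
        if len(self.calls) >= self.max_calls:
            return "throttled"             # back off instead of tripping the governor
        self.calls.append(now)
        return "ok"

client = GovernedClient(max_calls=3)
print([client.request() for _ in range(5)])
# ['ok', 'ok', 'ok', 'throttled', 'throttled']
```

A system tested only at release-time traffic levels can sail under such a limit for months and then break under strain, which is exactly the governor failure mode described earlier.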

Karakostas, B. (2009). Restructuring the IS curriculum around the theme of service orientation. IT Professional Magazine, 11, 59-63.  Retrieved from ABI/INFORM Global database.

Microsoft Exchange Online. (2012) Retrieved May 11, 2012 from http://download.microsoft.com/download/0/9/6/096C9441-8089-4655-ABB3-DC0ABA01A98D/Microsoft%20Exchange%20Online%20for%20Enterprises%20Service%20Description.docx

Robillard, M. P. (2009). What makes APIs hard to learn? Answers from developers. IEEE Software, 26(6), 27. Retrieved from ProQuest/UMI Database.
Salesforce.com (February, 2012). Salesforce limits quick reference guide. Retrieved from https://login.salesf

Wednesday, March 27, 2013

Research Evolution in Software Development



Software development research has changed significantly over the past 20 years.  While in the late 1980s researchers were concerned with user acceptance of computers in general (Forrest, Stegelin & Novak, 1986), in the past few years researchers have been more concerned with why people accept or reject certain software products and not others (Khanfar et al., 2008; Garrity et al., 2007).  This literature review will explore the changes in research methods by taking a sampling of research articles from the late 1980s and from recent years. 
This literature review will examine 27 peer-reviewed articles relating to software development.  Sixteen of these articles were written in the past 5 years and eleven were written in the late 1980s.  Using these articles, I will examine the changes over time in the software research process.
By reviewing the differences in software development research I will get a picture of where software development research is going.  In this review I will look at information such as sample size, methodology, statistical analysis performed, and other factors such as the use of students in research.  This paper will show the trends in software development research and how they affect researchers.

Literature Review

Research methods
Although the basic way people research has not changed over the past 20 years, the tools available to researchers and the methods they tend to use have.  In my study of 27 articles I found that 54% of articles written before 1990 were qualitative, while only 6% of articles written in the past five years used a qualitative method. 
Quantitative methods are being used more frequently in recent years.  This trend may reflect the ease with which data can be computer tabulated, or a preference among researchers for hard data.  When Ali Montazemi researched user satisfaction (1988), he chose to interview people.  In these seminal interviews he found information that quantitative research could have missed; Montazemi found, for example, that 20% of people would have preferred a query system that gave them “what-if” scenarios.  Today this type of research would fall under the separate topic of decision support systems.  In contrast, when Garrity et al. (2007) researched user satisfaction with websites, they chose a quantitative study.  With modern tools Garrity et al. were able to do a more extensive statistical analysis using techniques such as average variance extracted, sum of squares, and the f-test (p. 27).  This type of analysis speaks to the growing maturity of software development research. 
Modern researchers have more tools to analyze data, and quantitative data lends itself more readily to this type of analysis.  Modern tools make it possible to analyze more data; there are tools now that did not exist 20 years ago.  There are also more categories of research than there were 20 years ago.  Within software development, researchers are looking at project management, testing, security, websites, and usability, to name a few. 
Sample Populations
Comparing research between the two periods shows that surveys today can have far larger populations.  The sample sizes from twenty years ago, shown in Table 1, are far smaller than those from more recent times, shown in Table 2.  The average sample size for research from 20 years ago is 113; the average sample size for recent research is 7,150.  This is primarily due to the availability of data.  Most of the data collected 20 years ago was obtained through face-to-face surveys.  One exception was a museum experiment in which museums were equipped with a hypertext display that allowed visitors to review information about the museum.  Data collection was limited when one of the two monitors being used broke (Shneiderman et al., p. 49).  The price of monitors has come down in recent years and this problem would most likely be fixable in today's environment.
Today researchers have access to tools that give them a much greater ability to review data. Crowston et al., for example, were able to collect data on over 100,000 open source projects (2008).  This type of data collection would have been unheard of 20 years ago: large-scale systems like the Internet were not in place, and the ability to harvest vast quantities of data was nonexistent.
Researchers now have new techniques for sending surveys.  Phone and mail surveys can be expensive in both time and money; modern surveys can be done more cheaply and efficiently.  When one group of researchers wanted a large sample size, for example, they sent emails to 10,000 people, and 3,276 responded (Tam et al., p. 280).  First-class postage at the time of writing is 42 cents, so sending this survey through the mail would cost $4,200.  This type of expense is out of the range of many research projects.
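The postage arithmetic above can be checked in a couple of lines (figures taken from the text):

```python
# Mail cost vs. the free email survey described above.
POSTAGE = 0.42          # first-class stamp price cited in the text
recipients = 10_000
responses = 3_276

print(f"mail cost: ${POSTAGE * recipients:,.0f}")      # mail cost: $4,200
print(f"response rate: {responses / recipients:.1%}")  # response rate: 32.8%
```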
When looking only at direct surveys of people, without the help of modern data collection methods, there is virtually no difference between the two time periods.  The data collected in the 1980s surveyed, on average, 123 people, while the data collected in the past few years averaged 124.  Interviewing and surveying people clearly takes longer than more modern methods.  While these interviews may be necessary in qualitative research, the amount of data collected can be quite small compared to mining existing information systems or emailing surveys.
Statistical Analysis
Literature from the two periods did not show a change in the way statistics are done; the articles contained a wide variety of methods.  Earlier research involved more averages.  This can be attributed, however, to the number of qualitative studies, which lend themselves more readily to averages. 
One thing that is apparent in the studies is that modern research employs more statistics.
With the advent of research software, researchers are able to take the same data and quickly run multiple statistical analyses to help determine the best one.  Table 3 shows the statistical methods used in research in the 1980s; there, only one test is performed on each set of data.  In more recent studies, as seen in Table 4, researchers performed more kinds of statistical analysis.  In the case of Banker et al., the group performed three tests to analyze their data (2006).
The analysis showed a propensity toward multiple statistical analyses in more recent research.  This may be due to more detailed work on the subject: modern research delves into subjects such as the usability of websites, while research from 20 years ago addressed broader questions such as the acceptance of computers in workplaces.
Software products such as SPSS make it possible to take a dataset and mine it for correlations.  Garrity et al. (2007) did this when they looked at three different statistics to come to their conclusion. The trend now is toward more in-depth analysis.  Researchers can gather vast amounts of data and perform complex statistical analyses on them.  Researchers 20 years ago did not have this option; for them, research often had to be done by hand, which naturally meant more work.  Performing statistical analysis by hand is time consuming: a complex analysis that takes hours today could take days in the past.  Statistical analysis without the benefit of statistical software can also be fraught with mathematical mistakes.  Human error plays a larger role when surveys are hand coded and the data then analyzed by hand.  
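The "run several analyses on the same dataset" workflow described above might look like the following, using only the standard library and invented sample scores: the same two groups yield means, variances, and a Welch t statistic in a few lines.

```python
# Sketch: several analyses on one dataset. The scores are made up for
# illustration; a real study would use SPSS, R, or similar.
import math
import statistics

group_a = [72, 85, 78, 90, 66, 81, 75, 88]   # e.g. task scores with tool A
group_b = [60, 70, 65, 74, 58, 69, 63, 71]   # e.g. task scores with tool B

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Welch's t statistic: difference of means over the pooled standard error.
t = (mean_a - mean_b) / math.sqrt(var_a / len(group_a) + var_b / len(group_b))

print(round(mean_a - mean_b, 2), round(t, 2))
```

Done by hand, each of these figures is a separate error-prone calculation; with software, adding a third or fourth test to the same data costs almost nothing, which is the shift the literature shows.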
Response Rates
Response rates between the two time periods vary.  In the 1980s, researchers more often used college students who were required to take the surveys (Jarvenpaa et al., 1988; Dos Santos et al., 1988; Kirs et al., 1989). They also relied more often on volunteers (Jarvenpaa et al., 1988).  These types of candidates often do not represent the average person in the population.  When a general call for volunteers is made, or participation is compulsory, there is no response rate.  We see only two response rates in the earlier surveys.
There were two response rates among our 10 earlier research articles: 36% and 47%.  These rates are far lower than the response rates in more recent studies, where the response rate for data collected from people is 81%.  Even the email response rate in our survey, 32%, is closer to the standards of 20 years ago than to today's.
Response rates differ depending on the medium used to survey.  An email to CEOs may have a lower response rate than someone standing at a shopping mall with a clipboard.  The more people feel connected to the researcher, the more willing they are to participate in the research.
Articles of the past 20 years show a tendency away from giving money to participants the way Luzi et al. did in their study on performance (1984).  Giving money to people can be a great motivating factor, and this motivation may affect research.  In the Luzi et al. article, for example, money given to top performers may make them act in ways they would not at work; by making respondents compete, the researchers introduced factors they may not have wanted, and the study becomes meaningless in any situation other than one where incentives are given out.  This type of design can skew results: some people may work better under the pressure that money and competition bring, while others may be confused by it.  In modern research this practice appears less common. 
Research Themes
Seven out of the ten articles from the 1980s dealt with computer usability issues.  Jarvenpaa et al. dealt with how groups interact with computers (1988), Dos Santos et al. dealt with user interface issues (1988), and Montazemi (1988), DeLone (1988), and Williams dealt with user satisfaction.  As software development matures, researchers delve into new areas.  What was once a new topic, such as user satisfaction, has been replaced by more mature themes such as personalization (Tam et al., 2005). 
Topics such as information overload have been explored, and solutions such as drill-down menus and customization are now commonplace.  In the 1980s these were still topics that needed discussion at the foundational level (Dos Santos et al., 1988). These are all ideas that have evolved as the technology evolved. 
Today new topics in software development are emerging.  Issues around new software products, such as software that allows people to share knowledge and work together, are still in their infancy (Taylor, 2004), and will evolve as technology evolves.  Collaboration software such as Google Documents, which allows people to work on the same document at the same time, is still in its infancy, and wikis are only a few years old.  As the ability for people to work together and share information grows, the need for research to help people use that technology to reach their goals will also increase.
Four of the recent articles in this review deal with software errors.  Bugs have always been a major issue in software development, but now researchers have better tools to help programmers deal with them.  The article A Replicated Survey of IT Software Project Failures (2008) is a meta-analysis of various reports of why software fails.  This body of information did not exist a few years ago; with the ability to collaborate, researchers now have access to a vast, hitherto unimagined body of knowledge. 
Crowston et al. take this idea one step further in their article Bug Fixing Practices within Free/Libre Open Source Software Development Teams, which takes 100,000 records from open source projects and analyzes the way software bugs are dealt with. 
Technology has also given us the ability to study practices that did not exist a few years ago.  Van Pham and Gopal et al., for example, write about outsourcing.  This relatively new practice has undergone much research over the past few years, and it is a complicated issue.  Offshoring software development may risk handing company secrets to people who do not work for the company (Van Pham, 2006), but in doing so companies can create wealth by reducing the cost of software development.  As companies face economic difficulties, many face the inevitability that they must reduce costs to stay in business.  The debate over outsourcing, and what to outsource, is a topic that will occupy researchers for years to come.  There is no one right answer for every company.  In research, an answer is either proven or disproven; as a practitioner, decisions often have many sides to them, and offshoring is one of those issues.
Another topic in modern research is inter-organizational software (Robey et al., 2008).  Gone are the days of standalone software; today organizations want software to be able to communicate.  When software crashes, the help desk needs to know.  When potential fraud is detected in one system, the others need to be aware of it.  The ability to transfer information from one system to another is also crucial in systems such as decision support systems, where information is gathered from various sources within the organization and given to decision makers who can view the larger picture of the organization.
In software development, inter-organizational software can help companies collaborate and use their time more efficiently.  Inter-organizational software is developed with common interfaces such as XML or application programming interfaces, and software packages that companies buy can also offer ways to interface with them programmatically.  Interoperability is standard practice for software being developed today.  
These ways of working with software are becoming widespread.  In their article Theoretical Foundations of Empirical Research on Inter-organizational Systems: Assessing Past Contributions and Guiding Future Directions, Robey et al. use seminal research to show the foundations of inter-organizational systems (2008).  The authors note that it took several years for the idea of inter-organizational research to spread. Developing seminal research is important, but there may be a lag between the research itself and its practical application.
Management of software development has been the subject of a good deal of research. Today's software products are more complicated and expensive, and researchers are looking for ways to minimize risk and to overcome the things that might lead a project to failure.
Researchers are also looking at the way software is built. Sugumaran et al.'s article on software timelines (2008) delves into ways programmers can manage expectations and develop timelines that will help lead a project to success.  This research reflects what is going on in the industry with new project management methodologies such as Last Planner and Scrum.
Personalization is another issue researchers study today. Personalization is an emerging technology, and developing it is a complex task.  Tam et al. (2005), in their article Web Personalization as a Persuasion Strategy, use seminal research in psychology to argue their case.  The easy availability of research in other fields such as psychology has helped make new inroads in software research.  They describe how familiar landmarks can help the user process a page more quickly: users catch more messages from a web page when there is some familiarity.  With the advent of technologies that allow for more customized user interfaces, researchers are now concerned with how the interface makes people feel.  Twenty years ago a button in a program would most likely be the one that came standard with the operating system; today, on the web, a button can have gradients, borders, and movement. Research shows that this type of customization can really affect use of a site. 
This type of cross-disciplinary research affects the way people build software products.  Business, marketing, and psychology are just some of the disciplines researchers draw on when they study software development.  Today software like video games can have advertisements built in; this type of collaboration between businesses was unheard of twenty years ago. 
More recent articles also have more authors.  As technology allows people to work collaboratively, there is greater collaboration among researchers: in the review of older articles there is a 60% collaboration rate, while in more recent articles that number has climbed to 93%.  Today researchers can send drafts via email and chat on a free service.  Twenty years ago, the only option for people who did not live in the same town was the mail, which would have made collaboration prohibitively difficult.   
The older articles also make greater use of students as subjects, who are often compelled to answer questions.  Fifty percent of the articles from the 1980s used students in their research, while only 12% of more recent articles do.  Chart 3 shows the difference between the two groups.
The declining use of students could be for a number of reasons.  The first is that students are often coerced into doing the surveys, which are often a requirement for classes. This type of coercion can produce poor results because students may not truly represent the larger population.  In Jarvenpaa et al.'s research, students were used to test the usefulness of group collaboration software.  The flaw in this study is that these students are not typical users of group software and thereby have no context in which to use it.  If the researchers had used business people, subjects would have had a real-world context for using the software, which could have drastically changed the outcome of the research.
When there is no coercion, researchers in the past often used incentives to encourage students to attend. These incentives can be as detrimental: students may behave differently because of them, which can taint the results.
Software development research is going in many directions.  Software development is a relatively new field that is always changing, and research on software development evolves as software evolves. 
The availability of data, and new ways to analyze it, is causing a steep increase in the amount of quantitative research being done.  Today's researchers can gather data from various sources, access enormous numbers of articles on a subject, and collaborate like never before.  They also have cross-disciplinary information readily available, leading to new ways of looking at software development.
Today when developing software, people use a broad range of information brought about by research.  Software developers of today look at usability and customizability.  They look at the psychology of getting people to use their products and the marketability of their products.

The future of software engineering research is bright.  There are new devices that require an entirely new approach.  Small devices like cell phones are starting to get a good deal of use.  In the future researchers will look at such devices and examine all of the aspects they did with software and web development: why people use these devices, and ways of building quality software that provides a sound user experience.  These devices lend themselves to more research because, for the first time, location plays a part in the equation; new devices are location sensitive, giving users contextual information depending on where they are.
This paper discusses how research has changed over the past 20 years.  Although research methods have not changed, the ability to gather large amounts of data and then analyze it has.  There are also more people willing to participate in research.
Researchers are more willing to use different methods to gather data, such as email, and there is a trend away from using students in research.
Managing projects has become a central issue in software development.  With large projects come large costs, and software research is looking into ways of managing risk and of estimating the time and cost of software projects.  There are also more options available to software developers than ever before.  Today's programmers can use geographic data from a service, interface with small devices such as phones, and access unprecedented processing power and storage through web services and cloud computing.  Today's software developers have access to tools such as knowledge bases and research to aid in their professionalism. 
The research of the past laid the groundwork in areas such as group interaction and user interaction.  The research of today takes those seminal studies and builds upon them to take us toward the future of research.

