What is Distributed Computing?
Distributed computing allows applications to run across a network of processors, harnessing the combined power of those processors along with their associated resources.
The concept of "distributed computing" has been familiar to the data management community for quite some time. Until recently, however, very few companies had actually embarked on the migration to this powerful new architecture.
Now that's all about to change. As enterprise information needs have grown in both volume and complexity, companies recognize the need for a solution that reduces both technology and processing costs while giving them the ability to move data quickly and efficiently across highly diverse platforms, machines, and programming languages.
Distributed computing built on a tiered architecture is now becoming commonplace. Few other approaches deliver the same level of integration, flexibility, and openness in meeting the need for reliable computing.
To understand and prepare for the coming shift to distributed computing, we need to examine the business and technological forces driving this fundamental change.
With true distributed computing, companies can reclaim CPU cycles by vastly reducing the sort/merge, transfer, and processing steps needed to update transactional data between mainframe and UNIX machines.
Distributed computing also offers significant savings in software maintenance costs, because logic-driven applications are centralized and managed in the server layer of an N-tier client/server architecture.
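The maintenance saving comes from keeping business rules in one place. The following is a minimal in-process sketch of that N-tier idea; the tiers, class names, and the order-discount rule are all hypothetical illustrations, not part of any specific product:

```python
class DataTier:
    """Data tier: owns storage; knows nothing about business rules."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, amount):
        self._orders[order_id] = amount


class LogicTier:
    """Server (logic) tier: the single place business rules live.
    Changing a rule here updates every client at once, which is
    where the maintenance saving comes from."""
    def __init__(self, data):
        self._data = data

    def place_order(self, order_id, amount):
        # Hypothetical rule: 10% discount on orders over 100.
        if amount > 100:
            amount *= 0.9
        self._data.save(order_id, amount)
        return amount


class ClientTier:
    """Presentation tier: any number of thin clients call the same logic."""
    def __init__(self, logic):
        self._logic = logic

    def submit(self, order_id, amount):
        return self._logic.place_order(order_id, amount)


data = DataTier()
logic = LogicTier(data)
# Two different front ends (say, a mainframe screen and a UNIX app)
# share one copy of the rules instead of each embedding its own.
mainframe_client = ClientTier(logic)
unix_client = ClientTier(logic)

print(mainframe_client.submit("A1", 200))  # 180.0 (discount applied once, in the logic tier)
print(unix_client.submit("B2", 50))        # 50 (under the threshold)
```

In a real deployment the logic tier would sit behind a network interface rather than a direct method call, but the payoff is the same: one server-side change propagates to every client.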
Those are the promises of true distributed computing.
Making the transition to an enterprise-wide distributed computing solution takes commitment, a reasonable investment in new technologies, and in many cases the guidance of experienced distributed computing specialists.
By understanding the challenges and rewards of this new architecture, information managers can prepare their companies to take full advantage of this emerging standard.