Technologies for Distributed Computing

Content Outline of the Article:

1. What is Distributed Computing?
2. How Distributed Computing works
3. Benefits of Distributed Computing
4. Disadvantages of Distributed Computing
5. Major Distributed Computing Technologies
6. Conclusion

What is Distributed Computing?

Distributed computing is a model in which components of a software system are shared among multiple computers or nodes. Even though the software components may be spread out across multiple computers in multiple locations, they're run as one system. This is done to improve efficiency and performance. The systems on different networked computers communicate and coordinate by sending messages back and forth to achieve a defined task.


How Distributed Computing works

Distributed computing networks can be connected as local networks or, if the machines are in different geographic locations, through a wide area network. Processors in distributed computing systems typically run in parallel.

In enterprise settings, distributed computing generally places the various steps of a business process at the most efficient points in a computer network. For example, a typical distributed deployment uses a three-tier model that organizes applications into the presentation tier (or user interface), the application tier and the data tier. These tiers function as follows:

  1. User interface processing occurs on the PC at the user's location
  2. Application processing takes place on a remote computer
  3. Database access and processing algorithms happen on another computer that provides centralized access for many business processes
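The three tiers above can be sketched in Python. This is a minimal illustration, not a real deployment: the order data and function names are hypothetical, and the direct function calls stand in for the network calls that would connect tiers running on separate machines.

```python
# Data tier: centralized storage and processing (hypothetical sample record).
DATABASE = {"order-1001": {"item": "keyboard", "qty": 2, "unit_price": 25.0}}

def data_tier(order_id):
    """Database access: look up the raw order record."""
    return DATABASE[order_id]

def application_tier(order_id):
    """Business logic on a 'remote' server: compute the order total."""
    order = data_tier(order_id)
    return {"order_id": order_id, "total": order["qty"] * order["unit_price"]}

def presentation_tier(order_id):
    """User interface processing on the client: format for display."""
    result = application_tier(order_id)
    return f"Order {result['order_id']}: ${result['total']:.2f}"

print(presentation_tier("order-1001"))  # → Order order-1001: $50.00
```

Each tier only talks to the tier directly below it, which is what lets the tiers be placed on different machines in a real system.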

In addition to the three-tier model, other types of distributed computing include client-server, n-tier and peer-to-peer:

  • Client-server architectures. These use smart clients that contact a server for data, then format and display that data to the user.
  • N-tier system architectures. Typically used in application servers, these architectures use web applications to forward requests to other enterprise services.
  • Peer-to-peer architectures. These divide all responsibilities among all peer computers, which can serve as clients or servers.
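The client-server pattern from the list above can be sketched with Python's standard socket and threading modules. The payload "42" and the idea of an "active user count" are illustrative assumptions; the OS picks a free port, and a daemon thread plays the role of the remote server.

```python
import socket
import threading

def serve_once(srv):
    """Server side: accept one connection and send raw data."""
    conn, _ = srv.accept()
    conn.sendall(b"42")  # raw data held by the server (hypothetical)
    conn.close()

def fetch_user_count():
    """Smart client: contact the server for data, then format it for the user."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

    cli = socket.create_connection(("127.0.0.1", port))
    raw = cli.recv(1024).decode()
    cli.close()
    srv.close()
    return f"Server reported {raw} active users"

print(fetch_user_count())  # → Server reported 42 active users
```

Note the division of labor: the server holds the data, while the client handles formatting and display, exactly as in the client-server bullet above.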


Benefits of Distributed Computing

Distributed computing includes the following benefits:

  • Performance: Distributed computing can improve performance by having each computer in a cluster handle a different part of a task simultaneously.
  • Scalability: Distributed computing clusters scale by adding new hardware when needed.
  • Resilience and redundancy: Multiple computers can provide the same services, so if one machine is unavailable, others can fill in. Likewise, if two machines that perform the same service sit in different data centers and one data center goes down, the organization can still operate.
  • Cost-effectiveness: Distributed computing can use low-cost, off-the-shelf hardware.
  • Efficiency: Complex requests can be broken into smaller pieces and distributed among different systems. The request is then worked on as a form of parallel computing, reducing the time needed to compute it.
  • Distributed applications: Unlike traditional applications that run on a single system, distributed applications run on multiple systems simultaneously.
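The "efficiency" benefit above can be sketched in Python: a large request (summing squares over a big range) is split into chunks that workers handle simultaneously. Threads here stand in for the separate machines of a real cluster, and the worker count of 4 is an arbitrary choice.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    """One worker's share: sum n*n over the half-open range [start, stop)."""
    start, stop = chunk
    return sum(n * n for n in range(start, stop))

def distributed_sum(n, workers=4):
    """Split [0, n) into chunks, fan them out, and combine the partial results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers - 1)]
    chunks.append(((workers - 1) * step, n))  # last chunk absorbs the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

assert distributed_sum(1001) == sum(n * n for n in range(1001))
```

The final merge step is cheap, so the bulk of the work runs in parallel; in a real distributed system each chunk would travel over the network to a different node.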

Disadvantages of Distributed Computing

Complexity

Distributed computing systems are more difficult to deploy, maintain and debug than their centralized counterparts. The increased complexity is not limited to the hardware: distributed systems also need software capable of handling security and communication.

Higher Initial Cost

The deployment cost of a distributed system is higher than that of a single system. The processing overhead from additional computation and exchange of information also adds to the overall cost.

Security Concerns

Data access can be controlled fairly easily in a centralized computing system, but managing the security of a distributed system is much harder: not only must the network itself be secured, but access to data replicated across multiple locations must also be controlled.

Major Distributed Computing Technologies

There are three major distributed computing technologies, described below:

Mainframes:

Mainframes were the first example of large computing facilities that leverage multiple processing units. They are powerful, highly reliable computers specialized for large data movement and heavy I/O operations. Mainframes are mostly used by large organizations for bulk data processing such as online transactions, enterprise resource planning and other big-data operations. They are not considered distributed systems; however, thanks to their multiple processors, they can perform large-scale data processing with high computational power.

One of the most attractive features of mainframes was their reliability: they were always on, could tolerate failures transparently, and allowed components to be replaced without shutting the system down. Batch processing is the most important application of mainframes. Their popularity has declined in recent years.


Clusters:

Clusters started as a low-cost alternative to mainframes and supercomputers. As technology advanced, commodity machines became cheap; connected by high-bandwidth networks and controlled by software tools that manage the messaging system, they could deliver comparable computational power. Since the 1980s clusters have become the standard technology for parallel and high-performance computing, and because of their low investment cost they are now used by research institutions, companies and universities alike.

This technology contributed to the evolution of tools and frameworks for distributed computing such as Condor, PVM and MPI. One of the attractive features of clusters is that cheap machines collectively provide high computational power to solve a problem, and clusters are scalable. An example is an Amazon EC2 cluster processing data with Hadoop: it has multiple nodes (machines), organized as master nodes and data nodes, and it can be scaled out when the data volume grows.
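The master/worker split mentioned above can be illustrated with a toy MapReduce-style word count, in the spirit of Hadoop on a cluster but not using Hadoop itself. Threads stand in for cluster nodes, and the sample documents are invented; each "node" counts words in its shard and the master merges the partial counts.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_count(shard):
    """Map phase, run on one 'node': count the words in this node's shard."""
    return Counter(shard.split())

def cluster_word_count(documents, nodes=3):
    """Fan shards out to worker 'nodes', then reduce on the master."""
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = list(pool.map(map_count, documents))  # map phase
    total = Counter()
    for partial in partials:                             # reduce phase (master)
        total += partial
    return total

docs = ["big data big volume", "data nodes and master nodes"]
counts = cluster_word_count(docs)
print(counts["data"])  # → 2
```

Adding more documents only requires adding more worker nodes, which is the scalability property the cluster example above relies on.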


Grids:

Grids appeared in the early 1990s as an evolution of cluster computing. Grid computing is analogous to the electric power grid: it is an approach for delivering high computational power, storage services and a variety of other services, which users consume the way they consume utilities such as power, gas and water. Grids initially developed as aggregations of geographically dispersed clusters connected through the internet; the clusters belonged to different organizations, which made arrangements to share computational power among themselves. A grid is a dynamic aggregation of heterogeneous computing nodes that can span a nation or the world.

Several technological developments made the diffusion of computing grids possible:

  • clusters became common, widely available resources
  • clusters were frequently underutilized
  • some problems required more computational power than a single cluster could provide
  • high-bandwidth networks enabled long-distance connectivity


Conclusion

Distributed computing helps improve performance of large-scale projects by combining the power of multiple machines. It’s much more scalable and allows users to add computers according to growing workload demands. Although distributed computing has its own disadvantages, it offers unmatched scalability, better overall performance and more reliability, which makes it a better solution for businesses dealing with high workloads and big data.

Distributed computing technology has led to the development of cloud computing.


This is all about my article on Technologies for Distributed Computing.

Thank you.

