This post is mainly about the architectures employed by distributed systems (and I don't mean the bricks-and-mortar kind of architecture).
There are two main types of architecture that we shall be looking at:
- Tightly coupled - several machines that are highly integrated, and which may look and work like a single computer.
Example - parallel computers, where machines with separate CPUs are connected via a fast network and given the illusion of shared memory through Distributed Shared Memory (DSM), without the need for message passing.
- Loosely coupled - such architectures share nothing, so the machines must communicate with each other explicitly.
Example - P2P systems, where processes on different machines must communicate by exchanging messages.
Distributed Shared Memory (DSM)
As mentioned above, DSM essentially provides the illusion of shared memory across a network connecting several machines.
Sharing memory brings many problems of its own; the main challenge is to minimise traffic across the network. Since the memory is shared, pages must be moved in and out over the network so that the data a process needs is available on the computer whose CPU is executing it, and this takes time! (To learn more about paging within operating systems, see the earlier post on the topic.)
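To make the paging point concrete, here is a toy simulation, not a real DSM implementation: each "machine" caches pages locally, and reading an address it doesn't hold triggers a whole-page transfer over the (simulated) network. The class and names are purely illustrative; the transfer counter shows why minimising network traffic is the central DSM challenge.

```python
PAGE_SIZE = 4

class DSMNode:
    """A machine participating in a (simulated) DSM system."""
    network_transfers = 0                 # demo counter: page fetches over the "network"

    def __init__(self, backing):
        self.backing = backing            # the "remote" memory held by another machine
        self.local_pages = {}             # pages cached locally after a fetch

    def read(self, addr):
        page = addr // PAGE_SIZE
        if page not in self.local_pages:  # page fault: fetch the page over the network
            DSMNode.network_transfers += 1
            start = page * PAGE_SIZE
            self.local_pages[page] = self.backing[start:start + PAGE_SIZE]
        return self.local_pages[page][addr % PAGE_SIZE]

memory = list(range(16))                  # memory owned by a remote machine
node = DSMNode(memory)
values = [node.read(a) for a in range(8)] # 8 reads touch only pages 0 and 1...
print(DSMNode.network_transfers)          # ...so only 2 page transfers: prints 2
```

Eight individual reads cost only two network transfers because whole pages are cached, which is exactly the trade-off real DSM systems tune.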
Architectural Styles
- Layered architectures imitate something like the OSI reference model used in major computer networks such as the internet, where the "lower layers provide some service to higher layers".
- Object-based architectures are much more direct: the different components can interact with one another through direct method calls.
- Event-based architectures are based on a publish-and-subscribe system, so there are no component-to-component calls; instead, all the components subscribed to the service communicate through an 'event bus'.
- Shared data space architectures support different components (or objects) by providing a persistent storage space for them (such as a MySQL database). All data relating to the components is available in that data space, unlike event-based architectures, where events are only received by components that have already subscribed.
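The event-based style above can be sketched in a few lines of Python. This is a minimal, single-process illustration (the class and topic names are made up for the example, not from any library): components never call each other directly, they only publish to or subscribe on the bus.

```python
from collections import defaultdict

class EventBus:
    """A tiny publish/subscribe bus: no component-to-component calls."""
    def __init__(self):
        self._subs = defaultdict(list)    # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]: # only subscribers ever see the event
            handler(payload)

bus = EventBus()
log = []
bus.subscribe("order.created", lambda o: log.append(f"bill {o}"))
bus.subscribe("order.created", lambda o: log.append(f"ship {o}"))
bus.publish("order.created", "#17")
print(log)   # ['bill #17', 'ship #17']
```

Note that the publisher never learns who (if anyone) handled the event, which is what decouples the components.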
Client-Server vs. P2P
To find out more about what exactly client-server and P2P architectures are, visit some of the previous blog posts on the networks material.
- Client-Server - These architectures are widely used because they provide functional specialisation: the client makes requests and the server processes them. However, such architectures are centralised, meaning the server is a single point of failure; if it goes down, the whole system may suffer.
- Pure P2P - These architectures are symmetrical: all the computers in the distributed system have the same rights, unlike in the client-server architecture. This allows resources to be shared amongst a large number of participants, but it can also make finding a particular resource very challenging.
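The client-server split described above can be sketched with Python's standard socket library. This is a minimal single-machine demo (one request, loopback address, OS-chosen port, all chosen just for the example): the server is the specialised component that does the processing, which is exactly why it is also the single point of failure.

```python
import socket
import threading

# Server side: bind first so the client can't race ahead of the listener.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)        # the client's request...
        conn.sendall(data.upper())    # ...processed by the server

t = threading.Thread(target=serve_once)
t.start()

# Client side: requests, but does none of the processing itself.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    print(cli.recv(1024))             # b'HELLO'

t.join()
srv.close()
```

If the server thread dies, the client has nowhere else to send its request; in a pure P2P design any peer could have served it.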
Multiple Servers
In architectures such as client-server, and in the many P2P systems (such as Spotify) that make use of servers and hubs, we need to consider using multiple servers to increase the robustness and performance of the system. You may think that adding servers will reduce the time taken to finish an individual job...this is not the case: a job's completion depends on its time slices, which are fixed, and on the work it contains, which is also fixed, so extra servers do not finish any single job faster. What multiple servers do give us is redundancy: if one server fails, the other servers can tolerate the fault.
Proxies and Caches
Proxy servers speed up the resolution of requests by caching results for a short amount of time (see the networks posts), and they also help to anonymise information. We must also take load distribution into account: when more than one server can process a request, we should distribute requests amongst all of the servers so that they each carry a similar load.
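The caching idea above can be sketched as follows. This is a toy in-memory cache with a time-to-live (TTL), not any real proxy's API; the origin function and the TTL value are assumptions for the example. A result is served from the cache while it is still fresh, and only re-fetched from the origin server once it has expired.

```python
import time

class CachingProxy:
    """Caches origin results for `ttl` seconds, like a simple proxy cache."""
    def __init__(self, origin, ttl=2.0):
        self.origin = origin          # the real (slow, remote) request handler
        self.ttl = ttl                # how long a cached result stays valid
        self.cache = {}               # url -> (result, time stored)
        self.origin_hits = 0          # how many requests reached the origin

    def get(self, url):
        entry = self.cache.get(url)
        if entry is not None and time.time() - entry[1] < self.ttl:
            return entry[0]           # fresh: answer straight from the cache
        self.origin_hits += 1
        result = self.origin(url)     # stale or missing: ask the origin server
        self.cache[url] = (result, time.time())
        return result

proxy = CachingProxy(origin=lambda url: f"page for {url}")
proxy.get("/home"); proxy.get("/home"); proxy.get("/home")
print(proxy.origin_hits)   # 1 -- only the first request reached the origin
```

The TTL is the trade-off knob: a longer TTL means less network traffic but staler results, which is why real proxies cache only "for a small amount of time".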
Factors affecting process interaction:
Managing concurrency and failure is the main problem in distributed computing. Communication dictates everything in a distributed system, and communication is prone to both concurrency and failure issues. No message is sent and received instantaneously; there is always a small lag between the sending and receiving of messages, data, or information. Process execution speeds can also vary, and only some architectures have known lower and upper bounds on them.