Definition of a Distributed System
A distributed system is one in which independent, self-sufficient (often autonomous and heterogeneous), spatially separated components must use a common interconnect to exchange information and coordinate their actions, so that the whole appears to the user as a single coherent system.
Protocols
In order for separate and independent processes to make sense of the communication they are engaging in, they must follow a set of rules of engagement and exchange: a protocol.
A protocol is a set of rules for interprocess communication (IPC). It stipulates the precise sequence of events that the communicating processes must enact in order to engage in communication and exchange information.
Some communication events are unicast, that is, from one single process to another single process: a form of one-to-one communication. Other events are multicast, from one single process to many other processes: a form of one-to-many communication.
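To make the one-to-one versus one-to-many distinction concrete, here is a minimal sketch using UDP datagrams in Python; the peer address, port numbers and multicast group are made up purely for illustration.

    import socket

    # Hypothetical addresses, chosen only for this illustration.
    UNICAST_PEER = ("127.0.0.1", 5006)
    MULTICAST_GROUP = ("224.1.1.1", 5007)

    # Unicast: one-to-one -- the datagram is addressed to a single receiving process.
    uni = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    uni.sendto(b"hello, one receiver", UNICAST_PEER)
    uni.close()

    # Multicast: one-to-many -- any process that has joined the group may receive it.
    multi = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    multi.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep it on the local network
    multi.sendto(b"hello, everyone in the group", MULTICAST_GROUP)
    multi.close()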
API Calling
A protocol uses the IPC API calls that were discussed in the Syncing and Blocking SEND and RECEIVE Processes page. These API calls were:
- SEND – its arguments are the receiver process and the data to be sent.
- RECEIVE – the argument for this operation is the location where the data is to be placed, and possibly the sender process.
- ACCEPT – a receiver process calls this to signal its readiness to engage in communication.
- CONNECT – the sender process calls this to initiate the engagement.
- DISCONNECT – either process calls this operation to cleanly disengage from communication.
In this case, the Web Server is the SENDer and the Web Browser is the RECEIVEr.
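As a rough illustration only (this is not the exact API from that page, just one way to mimic its five operations), Python's multiprocessing.connection module offers primitives with similar semantics; the address below is made up for the example.

    from multiprocessing.connection import Listener, Client
    import threading

    ADDRESS = ("localhost", 6000)          # hypothetical address for this illustration

    listener = Listener(ADDRESS)           # ACCEPT: the receiver declares its readiness

    def receiver():
        conn = listener.accept()           # wait until a sender CONNECTs
        print("received:", conn.recv())    # RECEIVE: block until the data has arrived
        conn.close()                       # DISCONNECT

    t = threading.Thread(target=receiver)
    t.start()

    sender = Client(ADDRESS)               # CONNECT: the sender initiates the engagement
    sender.send("hello")                   # SEND(receiver process, data to be sent)
    sender.close()                         # DISCONNECT

    t.join()
    listener.close()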
Synchronization
REMEMBER, THERE IS NO GLOBAL CLOCK.
Say, for example, the Web Server records a timestamp t and the Web Browser records a timestamp t', with t' < t. Because each machine keeps its own clock, these values say nothing reliable about real-time order: the event stamped t may in reality have occurred earlier in real time than the event stamped t', even though t' < t.
If this happens, the Web Browser's SEND will have no RECEIVE counterpart in the Web Server, and because of this, data may not be exchanged between the Web Server and the Web Browser.
There is, however, one way to try to avoid this problem:
- The Web Browser issues a synchronous SEND, i.e., from the page mentioned above, a SEND that waits for acknowledgement that the required data has been placed into the buffer of the receiver process.
- The Web Server keeps looping around RECEIVE until a SEND has been caught from the Web Browser. Remember, a RECEIVE is always synchronous. (A sketch of this pattern follows below.)
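The following is a minimal sketch of that pattern, assuming Python sockets and an explicit application-level acknowledgement to stand in for the synchronous SEND; the port number is made up.

    import socket
    import threading

    ADDRESS = ("localhost", 8080)                  # hypothetical port for this sketch

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(ADDRESS)
    server.listen(1)

    def web_server():
        conn, _ = server.accept()
        while True:                                # loop around RECEIVE...
            data = conn.recv(1024)                 # recv() blocks, i.e. it is synchronous
            if data:                               # ...until a SEND has been caught
                conn.sendall(b"ACK")               # acknowledge: the data is in the receiver's buffer
                break
        conn.close()
        server.close()

    threading.Thread(target=web_server).start()

    browser = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    browser.connect(ADDRESS)
    browser.sendall(b"GET / HTTP/1.0\r\n\r\n")     # the Web Browser's SEND...
    browser.recv(1024)                             # ...made synchronous: wait for the acknowledgement
    browser.close()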
Sockets
Sockets are a programming abstraction used to implement low-level IPC. Two processes exchange information by each having a socket of its own; they can then read from and write to their sockets.
Sockets are created and, in a client-server approach, prepared for sending and receiving messages. Sending data is then simply a matter of writing to one's socket, and receiving data a matter of reading from it.
Client-Side Sockets
A client-side socket is best understood as one endpoint of a conversation. As such, client-side sockets are short-lived: in a web browser, a socket is created to send a request and receive the response, and once that is done, the socket is destroyed (discarded).
To set up a client-side socket, we do as follows (the whole flow is sketched in the example after the message-exchange steps below):
- Create a socket for a given transport (TCP) and IP version (IPv4).
- Connect to a server on the port corresponding to the desired protocol (80 for HTTP).
- When the connect returns without error, the client-side set-up is complete :)
Once the socket is up, the message-exchange stage can take place:
- The client sends a request for a specific HTML page (let's say http://computersciencesource.wordpress.com ) :P through its socket.
- Then, through the same socket, it waits for the response to come, and then processes it.
Once this is done, the socket is then discarded.
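Putting the set-up and message-exchange stages together, a minimal client-side sketch in Python (using the standard socket module; the request line and the site above are just for illustration, and a real browser would parse the response properly) could look like this:

    import socket

    HOST = "computersciencesource.wordpress.com"   # the example site mentioned above
    PORT = 80                                      # well-known port for HTTP

    # Set-up stage: create a TCP/IPv4 socket and connect to the server.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))

    # Message-exchange stage: send the request through the socket...
    request = "GET / HTTP/1.1\r\nHost: " + HOST + "\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    # ...then wait on the same socket for the response and process it.
    response = b""
    while True:
        chunk = sock.recv(4096)                    # blocks until data arrives or the server closes
        if not chunk:
            break
        response += chunk
    print(response[:200])

    # Once this is done, the socket is discarded.
    sock.close()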
Server-Side Sockets
A server socket behaves more like a dispatcher: it does not normally send or receive any data itself. Typically, it simply listens for connections on the host and port that it is bound to.
When a server socket gets a connection, it also gets a new socket in response to that event. It then spawns a handler process, or thread, which is what actually uses the new socket to exchange messages with the client-connected socket.
After this, the server socket simply goes back to listening for more connections.
To set up a server socket, we follow these steps (the whole server is sketched in the example after the connection-handling loop below):
- Create the server socket for a given transport (TCP) and IP version (IPv4).
- Set the server socket options, such as: is this socket address reusable?
- Bind the server socket to a host address (localhost) and a port (80).
- Start listening for connections on the server socket, and set the maximum number of connections that may be left waiting.
The connection-handling stage is a loop in which:
- The server socket first blocks, waiting to accept connections.
- When a connection is accepted, two pieces of information result: the new client-connected socket that has been created, and the host and port of the connecting client.
- The server process then handles the connection; for example, it could spawn a handler, pass it the socket to be used, and start it.
- The server process then continues the loop.
- (The client-connected socket must be shut down and closed when the handler is done and returns to the parent.)
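A minimal sketch of both stages in Python (port 8080 is used instead of 80 so the example runs without special privileges, and the handler simply returns a fixed reply) might look as follows:

    import socket
    import threading

    HOST, PORT = "localhost", 8080

    # Set-up stage.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)        # TCP over IPv4
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)      # option: make the address reusable
    server.bind((HOST, PORT))                                         # bind to host address and port
    server.listen(5)                                                  # listen; at most 5 connections waiting

    def handler(conn, addr):
        """Spawned per connection: uses the new socket to exchange messages with the client."""
        conn.recv(4096)                                               # read the client's request
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello\n")
        conn.shutdown(socket.SHUT_RDWR)                               # shut down and close the
        conn.close()                                                  # client-connected socket when done

    # Connection-handling loop.
    while True:
        conn, addr = server.accept()                                  # blocks; yields the new socket
                                                                      # and the client's host and port
        threading.Thread(target=handler, args=(conn, addr)).start()   # dispatch, then loop back to listen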