An Ethernet switch uses memory buffering to store frames before forwarding them toward the destination. The switch buffers a frame when the destination port is busy because of congestion, holding the frame until it can be transmitted. Without an effective memory buffering scheme, frames are likely to be dropped whenever traffic oversubscription or congestion occurs.
The memory buffer is the area where the switch stores frames during congestion. There are two methods of buffering:
Port-based Memory Buffering
In port-based memory buffering, frames are stored in queues that are linked to specific incoming ports. Each Ethernet port is given an amount of high-speed memory to buffer frames until they can be transmitted. A drawback of port-based buffering is that frames are dropped when a port runs out of buffers. It is also possible for a single frame to delay the transmission of all the frames in memory because of a busy destination port. This delay occurs even when the other frames could be transmitted to open destination ports.
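The two drawbacks above, dropping when a single port's buffer fills and one blocked frame delaying frames behind it, can be illustrated with a toy model. This is a simplified sketch, not real switch firmware; the class name, the per-port capacity of 4 frames, and the frame labels are all invented for illustration (real switches size buffers in kilobytes, not frame counts).

```python
from collections import deque

PORT_BUFFER_FRAMES = 4  # assumed per-port capacity, chosen for illustration

class PortBufferedSwitch:
    """Toy model of port-based buffering: a fixed, private queue per port."""

    def __init__(self, num_ports):
        self.queues = {p: deque() for p in range(num_ports)}

    def receive(self, in_port, frame, dst_port):
        # A full per-port buffer drops the frame, even if every
        # other port's buffer is empty.
        if len(self.queues[in_port]) >= PORT_BUFFER_FRAMES:
            return "dropped"
        self.queues[in_port].append((frame, dst_port))
        return "queued"

    def forward(self, busy_ports):
        """Transmit the head frame of each queue unless its destination is busy.

        Frames behind a blocked head frame must wait (head-of-line
        blocking), even when their own destination ports are free.
        """
        sent = []
        for port, q in self.queues.items():
            if q and q[0][1] not in busy_ports:
                sent.append(q.popleft()[0])
        return sent

sw = PortBufferedSwitch(4)
sw.receive(0, "f1", dst_port=2)  # destination port 2 is busy
sw.receive(0, "f2", dst_port=3)  # port 3 is free, but f2 sits behind f1
print(sw.forward(busy_ports={2}))  # nothing leaves: f1 blocks f2
```

Running `forward` with port 2 busy transmits nothing, showing how one frame bound for a congested port stalls the frames queued behind it.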
Shared Memory Buffering
Some of the earliest Cisco switches used a shared memory design for port buffering. Shared memory buffering deposits all frames into a common memory buffer that all the ports on the switch share. The amount of buffer memory a port needs is allocated dynamically, and the frames in the buffer are dynamically linked to the destination port. This allows a packet to be received on one port and then transmitted on another port without moving it to a different queue.
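The contrast with the port-based model can be sketched the same way. Again this is a hypothetical illustration: the class, the 16-frame pool size, and the port numbers are invented. The key differences modeled are that buffer space comes from one common pool and that each destination has its own queue, so a busy port no longer blocks traffic headed elsewhere.

```python
from collections import deque

SHARED_POOL_FRAMES = 16  # assumed total pool shared by all ports

class SharedBufferSwitch:
    """Toy model of shared memory buffering: one pool, per-destination queues."""

    def __init__(self):
        self.used = 0
        self.out_queues = {}  # dst_port -> deque of waiting frames

    def receive(self, frame, dst_port):
        # Space is drawn from the common pool, so a congested port can
        # temporarily use more than an equal share of the memory.
        if self.used >= SHARED_POOL_FRAMES:
            return "dropped"
        self.used += 1
        self.out_queues.setdefault(dst_port, deque()).append(frame)
        return "queued"

    def forward(self, busy_ports):
        # A blocked destination does not delay other ports' traffic:
        # each destination drains its own queue from the shared pool.
        sent = []
        for dst, q in self.out_queues.items():
            if q and dst not in busy_ports:
                sent.append(q.popleft())
                self.used -= 1
        return sent

sw = SharedBufferSwitch()
sw.receive("f1", dst_port=2)  # destination port 2 is busy
sw.receive("f2", dst_port=3)  # port 3 is free
print(sw.forward(busy_ports={2}))  # f2 goes out; f1 waits without blocking it
```

Unlike the port-based sketch, the frame for the free port is transmitted immediately even while another frame waits for a congested port, which is the behavior the shared design buys at the cost of managing one common pool.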