Virtual Output Queues in Distributed Networking Systems: An Overview

In a distributed networking system, data transmission between nodes can become congested due to limited bandwidth and high traffic volume. To manage this congestion and ensure efficient data transmission, virtual output queues (VOQs) are utilized. In this blog, we will explain what virtual output queues are and how they work in distributed networking systems.

What are Virtual Output Queues?

Virtual output queues are a queuing technique used in switches to manage congestion. Instead of a single queue at each input, every input port maintains a separate virtual queue for each output port, and incoming data packets are placed in the queue that corresponds to their destination output. Packets wait there until the scheduler can transmit them, so a packet stuck behind a busy output no longer stalls packets headed to other outputs (the classic head-of-line blocking problem). This spreads the load across the output ports and prevents congestion at a single port from backing up the whole switch.

Why are Virtual Output Queues used in Distributed Networking Systems?

In a distributed networking system, there are multiple nodes that communicate with each other. When many nodes are transmitting data at the same time, the network can become congested, and data transmission can become slow or even come to a halt. VOQs are used in these systems to help manage the congestion and ensure efficient data transmission.

How do Virtual Output Queues work in Distributed Networking Systems?

The following steps illustrate how VOQs work in a distributed networking system (a minimal code sketch follows the list):

  1. Data packets are received at the switch: When data packets are received at the switch, they are stored in the input buffer.
  2. Data packets are assigned to an output queue: The switch uses a scheduling algorithm to determine which output queue the data packets should be sent to. The algorithm considers factors such as the available bandwidth and the amount of data in each queue.
  3. Data packets are transmitted: Once the data packets are assigned to an output queue, they are transmitted from the switch to the appropriate node. Within each queue, packets are transmitted in the order in which they were received (first in, first out).
  4. Feedback from the nodes: The nodes provide feedback to the switch, indicating the amount of available bandwidth and the amount of data in each queue. This information is used by the scheduling algorithm to determine how to distribute the data packets in the future.
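
To make this concrete, here is a minimal sketch of a VOQ table with a round-robin scheduler in C. It is only an illustration under simplifying assumptions (a fixed, hypothetical port count, a small fixed-depth ring buffer per virtual queue, and packets represented by integer IDs); real switches implement VOQs in hardware with far more elaborate scheduling and feedback.

#include <stdio.h>

#define NUM_PORTS 4      /* hypothetical number of switch ports */
#define QUEUE_DEPTH 8    /* hypothetical capacity of each virtual queue */

/* One bounded FIFO holding packet IDs for a single (input port, output port) pair. */
typedef struct {
    int pkt[QUEUE_DEPTH];
    int head, tail, count;
} voq_t;

/* Each input port keeps one virtual queue per output port. */
static voq_t voq[NUM_PORTS][NUM_PORTS];

/* Enqueue a packet at an input port, keyed by its destination output port. */
static int voq_enqueue(int in_port, int out_port, int pkt_id) {
    voq_t *q = &voq[in_port][out_port];
    if (q->count == QUEUE_DEPTH)
        return -1;                       /* queue full: drop or apply back-pressure */
    q->pkt[q->tail] = pkt_id;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return 0;
}

/* Round-robin scheduler for one output port: pick the next non-empty virtual
 * queue among the inputs and dequeue a single packet (FIFO within each queue). */
static int voq_schedule(int out_port) {
    static int next_in[NUM_PORTS];       /* per-output round-robin pointer */
    for (int i = 0; i < NUM_PORTS; i++) {
        int in_port = (next_in[out_port] + i) % NUM_PORTS;
        voq_t *q = &voq[in_port][out_port];
        if (q->count > 0) {
            int pkt_id = q->pkt[q->head];
            q->head = (q->head + 1) % QUEUE_DEPTH;
            q->count--;
            next_in[out_port] = (in_port + 1) % NUM_PORTS;
            return pkt_id;
        }
    }
    return -1;                           /* nothing queued for this output */
}

int main(void) {
    voq_enqueue(0, 2, 101);   /* packet 101 arrives at input 0, destined for output 2 */
    voq_enqueue(1, 2, 102);   /* packet 102 arrives at input 1, destined for output 2 */
    printf("output 2 transmits packet %d\n", voq_schedule(2));
    printf("output 2 transmits packet %d\n", voq_schedule(2));
    return 0;
}

In this sketch, traffic bound for different outputs sits in separate queues, so a full queue for one output never blocks packets headed to another output.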

Benefits of Using Virtual Output Queues in Distributed Networking Systems

  1. Improved bandwidth utilization: VOQs help to distribute the load evenly across multiple output ports, improving the utilization of available bandwidth.
  2. Reduced congestion: By distributing the load evenly, VOQs help to reduce congestion at a single port, ensuring efficient data transmission.
  3. Improved fairness: VOQs help to ensure fair distribution of available bandwidth among the nodes, preventing a single node from monopolizing the bandwidth.

In conclusion, virtual output queues are an important method used in distributed networking systems to manage congestion and ensure efficient data transmission. By distributing incoming data packets across multiple output queues, VOQs help to improve bandwidth utilization, reduce congestion, and ensure fair distribution of available bandwidth.

Categories: Programming

Network data pipeline of Broadcom Jericho

Introduction:

Network data pipelines play a crucial role in ensuring the efficient and seamless transfer of data over a network. Broadcom Jericho-based switches are a popular choice in modern data center networks due to their advanced data pipeline technology. In this blog post, we will take a closer look at the network data pipeline of a Broadcom Jericho-based switch and how it operates.

The Broadcom Jericho Data Pipeline:

The Broadcom Jericho data pipeline is a multi-stage process that consists of several components working together to transfer data over a network. The following list describes the main components of the Broadcom Jericho data pipeline (a simplified software sketch follows the list):

  1. Input Ports: The input ports are responsible for receiving incoming data from other network devices. They have the capability to detect the incoming data speed and adjust accordingly.
  2. MAC Layer: The MAC (Media Access Control) layer is responsible for controlling access to the shared medium. It handles framing and error checking (via the frame check sequence) of incoming data and ensures that it is passed on in a format that the other components in the pipeline can understand.
  3. Switch Fabric: The switch fabric is responsible for forwarding the data from one port to another within the switch. It operates in parallel to ensure that data is transmitted as quickly as possible.
  4. Output Ports: The output ports are responsible for transmitting the data to the other network devices. They are capable of adjusting the transmission speed to match that of the incoming data.
  5. Table Lookup: The table lookup component is responsible for determining the destination of the incoming data. It uses a forwarding table to determine the next hop for the data based on the destination address.
  6. Quality of Service (QoS): The QoS component is responsible for prioritizing the different types of data based on their importance. This ensures that critical data is transmitted first, reducing the likelihood of congestion and delay.
  7. Security: The security component is responsible for implementing various security measures to protect the data being transmitted. This can include firewalls, intrusion detection, and encryption.
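
As a rough mental model only, the sketch below walks a packet descriptor through a simplified version of these stages in C. Every type, table, and function name here is hypothetical and invented for illustration; the actual Jericho pipeline is implemented in hardware and is far more involved.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified packet descriptor with just enough fields to
 * illustrate the stages described above. */
typedef struct {
    uint64_t dst_mac;    /* destination MAC address */
    int      priority;   /* QoS priority class carried by the frame */
    int      in_port;    /* ingress port */
    int      out_port;   /* filled in by the table lookup stage */
} pkt_desc_t;

#define TABLE_SIZE 4

/* Toy forwarding table: maps a destination MAC address to an output port. */
static const struct { uint64_t mac; int port; } fwd_table[TABLE_SIZE] = {
    { 0x0000aa0000000001ULL, 1 },
    { 0x0000aa0000000002ULL, 2 },
    { 0x0000aa0000000003ULL, 3 },
    { 0x0000aa0000000004ULL, 0 },
};

/* Table lookup stage: determine the output port from the destination address. */
static int lookup_stage(pkt_desc_t *p) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (fwd_table[i].mac == p->dst_mac) {
            p->out_port = fwd_table[i].port;
            return 0;
        }
    }
    return -1;   /* unknown destination: a real switch would flood or drop */
}

/* QoS stage: map the priority class onto one of two egress queues. */
static int qos_stage(const pkt_desc_t *p) {
    return p->priority >= 4 ? 0 : 1;   /* queue 0 = high priority, queue 1 = best effort */
}

/* Fabric + output port stage: hand the packet to the chosen output queue. */
static void egress_stage(const pkt_desc_t *p, int queue) {
    printf("port %d -> port %d via egress queue %d\n", p->in_port, p->out_port, queue);
}

int main(void) {
    pkt_desc_t p = { .dst_mac = 0x0000aa0000000002ULL, .priority = 5, .in_port = 0 };
    if (lookup_stage(&p) == 0)
        egress_stage(&p, qos_stage(&p));
    return 0;
}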

Conclusion:

The Broadcom Jericho data pipeline is a complex but highly efficient system that enables fast and secure data transfer over a network. Its multi-stage architecture and advanced components ensure that data is transmitted quickly and accurately, helping to keep modern data centers running smoothly. Whether you are an IT professional or just curious about network technology, understanding the Broadcom Jericho data pipeline is an important step towards a better understanding of how data is transmitted over a network.

Categories: Programming

HackerRank – Array Manipulation

Problem

Starting with a 1-indexed array of zeros and a list of operations, for each operation add a value to each array element between two given indices, inclusive. Once all operations have been performed, return the maximum value in the array.

Example

Queries are interpreted as follows:

    a b k
    1 5 3
    4 8 7
    6 9 1

Add the value k to each element between the indices a and b inclusive:

index->	 1 2 3  4  5 6 7 8 9 10
	[0,0,0, 0, 0,0,0,0,0, 0]
	[3,3,3, 3, 3,0,0,0,0, 0]
	[3,3,3,10,10,7,7,7,0, 0]
	[3,3,3,10,10,8,8,8,1, 0]

The largest value is 10 after all operations are performed.

Function Description

Complete the function arrayManipulation in the editor below.

arrayManipulation has the following parameters:

  • int n – the number of elements in the array
  • int queries[q][3] – a two dimensional array of queries where each queries[i] contains three integers, a, b, and k.

Returns

  • long – the maximum value in the resultant array

Input Format

The first line contains two space-separated integers n and m, the size of the array and the number of operations.
Each of the next m lines contains three space-separated integers a, b and k: the left index, right index and summand.

Constraints

Sample Input

5 3
1 2 100
2 5 100
3 4 100

Sample Output

200

Explanation

After the first update the list is 100 100 0 0 0.
After the second update list is 100 200 100 100 100.
After the third update list is 100 200 200 200 100.

The maximum value is 200.

Solution

After contemplating the popular approach for solving this, here is how I wrapped my head around it.

For every input line a b k, you are given the range (a to b) over which the values increase by k. So instead of keeping track of the actual values increasing, just keep track of the rate of change (i.e. a slope): where the increase starts and where it stops. This is done by adding k at position "a" of your array and adding -k at position "b+1" for every input line a b k, and that's it. Position "b+1" is used because the increase still applies at "b".

The maximum final value is equal to the maximum accumulated "slope" starting from the first position, because that is the spot which was incremented more than all other places. Accumulating the "slope" means adding the change at position 1 to position 2, then adding that running total to position 3, and so forth (a running prefix sum), while looking for the point where the running total is greatest. This was suggested by richardpvogt. A short worked trace of the sample input follows.
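
For the sample input above (n = 5 with operations 1 2 100, 2 5 100 and 3 4 100), the slope/difference array and its running prefix sum work out as follows:

index->        1   2   3   4    5
difference:  100 100   0   0 -100
prefix sum:  100 200 200 200  100

The largest running total is 200, which matches the expected output.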

Code

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char* readline();
char* ltrim(char* str);
char* rtrim(char* str);
char** split_string(char* str);
int parse_int(char* str);

long arrayManipulation(int n, int queries_rows, int queries_columns, int** queries) {

    long from = 0, to = 0, val = 0, max = 0, new_val = 0;

    /* 1-indexed difference array: mat[a] += k and mat[b+1] -= k per query. */
    long *mat = (long *) malloc((n + 1) * sizeof(long));
    for (int i = 0; i <= n; i++) {
        *(mat + i) = 0;
    }
    
    for (int i = 0; i < queries_rows; i++) {
        from = queries[i][0];
        to = queries[i][1];
        val = queries[i][2];

        printf("from-%ld, to-%ld, val-%ld\n", from, to, val);

        /* Record where the increase starts... */
        *(mat + from) += val;
        printf("*(mat+from) - %ld\n", *(mat + from));

        /* ...and where it stops (one past b), if that position exists. */
        if (to + 1 <= n) {
            *(mat + to + 1) -= val;
            printf("*(mat+to+1) - %ld\n", *(mat + to + 1));
        }
    }
    
    /* Running prefix sum of the difference array reproduces the actual values;
       track the largest one seen. */
    for (int i = 1; i <= n; i++) {
        new_val += *(mat + i);
        printf("%ld\n", new_val);
        if (max <= new_val) {
            max = new_val;
        }
    }

    free(mat);
    
    return max;

}

int main()
{
    FILE* fptr = fopen(getenv("OUTPUT_PATH"), "w");

    char** first_multiple_input = split_string(rtrim(readline()));

    int n = parse_int(*(first_multiple_input + 0));

    int m = parse_int(*(first_multiple_input + 1));

    int** queries = malloc(m * sizeof(int*));

    for (int i = 0; i < m; i++) {
        *(queries + i) = malloc(3 * (sizeof(int)));

        char** queries_item_temp = split_string(rtrim(readline()));

        for (int j = 0; j < 3; j++) {
            int queries_item = parse_int(*(queries_item_temp + j));

            *(*(queries + i) + j) = queries_item;
        }
    }

    long result = arrayManipulation(n, m, 3, queries);

    fprintf(fptr, "%ld\n", result);

    fclose(fptr);

    return 0;
}

char* readline() {
    size_t alloc_length = 1024;
    size_t data_length = 0;

    char* data = malloc(alloc_length);

    while (true) {
        char* cursor = data + data_length;
        char* line = fgets(cursor, alloc_length - data_length, stdin);

        if (!line) {
            break;
        }

        data_length += strlen(cursor);

        if (data_length < alloc_length - 1 || data[data_length - 1] == '\n') {
            break;
        }

        alloc_length <<= 1;

        data = realloc(data, alloc_length);

        if (!data) {
            data = '\0';

            break;
        }
    }

    if (data[data_length - 1] == '\n') {
        data[data_length - 1] = '\0';

        data = realloc(data, data_length);

        if (!data) {
            data = '\0';
        }
    } else {
        data = realloc(data, data_length + 1);

        if (!data) {
            data = '\0';
        } else {
            data[data_length] = '\0';
        }
    }

    return data;
}

char* ltrim(char* str) {
    if (!str) {
        return '\0';
    }

    if (!*str) {
        return str;
    }

    while (*str != '\0' && isspace(*str)) {
        str++;
    }

    return str;
}

char* rtrim(char* str) {
    if (!str) {
        return '\0';
    }

    if (!*str) {
        return str;
    }

    char* end = str + strlen(str) - 1;

    while (end >= str && isspace(*end)) {
        end--;
    }

    *(end + 1) = '\0';

    return str;
}

char** split_string(char* str) {
    char** splits = NULL;
    char* token = strtok(str, " ");

    int spaces = 0;

    while (token) {
        splits = realloc(splits, sizeof(char*) * ++spaces);

        if (!splits) {
            return splits;
        }

        splits[spaces - 1] = token;

        token = strtok(NULL, " ");
    }

    return splits;
}

int parse_int(char* str) {
    char* endptr;
    int value = strtol(str, &endptr, 10);

    if (endptr == str || *endptr != '\0') {
        exit(EXIT_FAILURE);
    }

    return value;
}

Categories: Programming

Logistic Regression Code – Telecom Churn Example


Let's explore logistic regression code written in Python today. We have a dataset for a sample telecom provider, with data on customers who may or may not churn.

We have to make a prediction on the data set as accurately as possible.

Let's see how we can do that!


Simple Linear Regression – Python code


Here is sample code for Simple Linear Regression that you can easily follow!

Github link : https://github.com/jeswinaugustine/machine_learning_code/blob/master/Linear%20regression/Simple_Linear_Regression.ipynb


Networking Interview Question – Switching

Top questions on switching!

  1. What is switching?
  2. Explain Ethernet Header?
  3. What is ARP?
  4. What are Proxy ARP, Gratuitous ARP, and Reverse ARP?
  5. What are the types of ARP packets?
  6. Explain ARP Header?
  7. Explain the end-to-end flow of ARP?
  8. Explain the ARP table?
  9. What are the important codes of ICMP?
  10. Hubs belong to one collision domain. Why?
  11. How are collision and broadcast domains defined in an L2 switch?
  12. What is the standard range of VLANs? What is an extended range of VLANs? What are the reserved VLANs? When to use standard and extended VLANs?
  13. Why is the maximum number of VLANs 4096?
  14. What is the use of Management VLAN?
  15. What is a Native VLAN and why is it used? Do we need Native VLAN in new networks and how to disable it if not needed?
  16. How to configure interfaces in VLANs?
  17. How does the switch store all VLAN information?
  18. What is an access port?
  19. What is a trunk port?
  20. Types of frame tagging protocols?
  21. Explain the difference between ISL and 802.1Q?
  22. Explain 802.1q header?
  23. Explain ISL header?
  24. How to configure access port and a trunk port?
  25. When troubleshooting a trunk, what parameters should match on both sides of the trunk?
  26. What is VTP? Difference between VTP version 1,2,3?
  27. What are VTP different modes?
  28. What is a revision number and how do you change the revision number when adding an old switch to a new network topology?
  29. What is VTP pruning?
  30. What are the advantages of link groups?
  31. What is the maximum number of links supported in a link group when manually configured?
  32. What is the maximum number of link groups supported on a switch?
  33. What are the match criteria for EtherChannel ports to be active?
  34. Explain load balancing on EtherChannel?
  35. Explain load balancing in EtherChannel when the source IP is used and when the destination IP is used?
  36. Why do we use a dynamic link aggregation protocol when manual configuration is possible?
  37. Which IP address and MAC address are used for a link group?
  38. What is LACP?
  39. What is the advantage of LACP? 8 ports as active and 8 ports as standby.
  40. How does LACP work? How do you decide which ports are active and which are standby?
  41. How to configure VRRP?
  42. Explain Packet of VRRP advertisement?
  43. How to load balance using VRRP?
  44. How pre-emption works in VRRP?
  45. What MAC address is used for VRRP's virtual IP address?
  46. How do VRRP advertisements work?
  47. Explain the states of VRRP?
  48. How to implement switch port security?
  49. What is the use of private VLAN?
  50. What are the types of secondary VLANs?
  51. How to configure Private VLANs?
  52. What is DHCP spoofing? How is it implemented?
  53. What is Dynamic ARP Inspection and how is it implemented?
  54. Explain ways of inter-VLAN communication?
  55. What is used by STP to build its topology?
  56. How often are BPDUs sent out? To which address? On which interfaces?
  57. Explain the STP convergence process?
  58. Does STP support a backup root bridge?
  59. What is root secondary?
  60. Explain STP convergence with 3-switch, 4-switch, and 5-switch diagrams?
  61. Explain the STP port states?
  62. Explain STP timers? Explain why they are used and what is the default value of each?
  63. Explain BPDU packet format?
  64. Explain BPDU message types and when they are used?
  65. BPDU Flags?
  66. Explain how STP converges: 1) on a direct topology change, 2) on an indirect topology change, 3) on an insignificant topology change. Explain with diagrams of three switches and five switches. What happens if a root port goes down? What happens if the designated port goes down?
  67. How can STP convergence times be improved? 1) PortFast (goes from blocking to forwarding directly, and no TCN BPDU is generated), 2) UplinkFast (the forwarding delay is eliminated by keeping a standby port), 3) BackboneFast (the max age wait is eliminated).
  68. How will you protect against unexpected loss of BPDU?
  69. Explain RSTP port roles?
  70. Explain RSTP port states?
  71. Explain RSTP types of Ports ?
  72. Explain RSTP convergence process?
  73. Explain how RSTP handles Topology change?
  74. Difference between RSTP and STP?
  75. Explain MSTP.
  76. What is a multi-VLAN port?
  77. What is QinQ (802.1Q tunneling)?
  78. Explain Dynamic Trunking Protocol (DTP).
  79. What is VLAN hopping?
  80. Explain Unidirectional Link Detection (UDLD).

Watch this space, as this is a growing thread!

Categories: Networking, Programming

Euler Problem 4: Largest palindrome product : C Programming Solution

Problem 4: 

A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.

Find the largest palindrome made from the product of two 3-digit numbers.

Solution Approach:

Brute force method:

  1. Create a function that checks whether a number is a palindrome.
  2. Keep multiplying pairs of numbers starting from 999 × 999, decrementing one factor at a time.
  3. Track the highest palindromic product found and print it (a minimal sketch of this approach follows the list).
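
Here is a minimal brute-force sketch in C along the lines of the approach above; it is an illustration of the idea rather than the full write-up's code.

#include <stdio.h>

/* Return 1 if n reads the same forwards and backwards, 0 otherwise. */
static int is_palindrome(int n) {
    int reversed = 0, original = n;
    while (n > 0) {
        reversed = reversed * 10 + n % 10;
        n /= 10;
    }
    return reversed == original;
}

int main(void) {
    int largest = 0;

    /* Try every product of two 3-digit numbers, keeping the largest palindrome. */
    for (int a = 999; a >= 100; a--) {
        for (int b = a; b >= 100; b--) {
            int product = a * b;
            if (product <= largest)
                break;               /* products only shrink as b decreases */
            if (is_palindrome(product))
                largest = product;
        }
    }

    printf("Largest palindrome made from two 3-digit numbers: %d\n", largest);
    return 0;
}

The inner loop stops early once the product can no longer beat the best palindrome found so far, which keeps the brute force quick.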

Read More…

Categories: Programming