Exploring Java Code Samples: Understanding Time Complexity and Outputs

Sumeet Panchal
4 min read · Feb 4, 2024

In software development, understanding the performance characteristics of code is essential for writing efficient and scalable applications. Time complexity analysis helps developers evaluate the efficiency of algorithms and identify potential bottlenecks. In this blog post, we’ll explore four Java code samples, analyze their time complexities, and provide explanations for their outputs.

Code Sample 1: Nested Loops

public int func(int n) {
    int a = 0;
    for (int i = 0; i < n; i++) {
        for (int j = n; j > i; j--) {
            a = a + i + j;
        }
    }
    return a;
}

Explanation:

This code sample consists of nested loops. For each value of i, the inner loop runs n − i times, so the total number of iterations is n + (n−1) + … + 1 = n(n+1)/2, which gives an O(n²) time complexity. The return value depends on the input n: each iteration adds i + j to the accumulator a. For example, when n=3, the output is calculated as follows:

  • i=0, j=3: a=0+0+3=3
  • i=0, j=2: a=3+0+2=5
  • i=0, j=1: a=5+0+1=6
  • i=1, j=3: a=6+1+3=10
  • i=1, j=2: a=10+1+2=13
  • i=2, j=3: a=13+2+3=18

So, the final output is 18.
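
To double-check the trace, here is a minimal, hypothetical driver (the class name Demo, the main method, and the iteration counter are illustrative additions, not part of the original sample) that calls the function for n=3 and counts how many times the loop body runs:

public class Demo {
    static int iterations = 0; // added purely to count loop-body executions

    public static int func(int n) {
        int a = 0;
        for (int i = 0; i < n; i++) {
            for (int j = n; j > i; j--) {
                a = a + i + j;
                iterations++;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(func(3));    // 18, matching the trace above
        System.out.println(iterations); // 6 = 3 + 2 + 1 = n(n+1)/2 for n = 3
    }
}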

Code Sample 2: While Loop

void func(int n) {
    int i = 1;
    int s = 1;
    while (s <= n) {
        i++;
        s = s + i;
        System.out.println("Hello Students");
    }
}

Explanation:

A while loop prints “Hello Students” until the condition s <= n fails. Because s accumulates the sum 1 + 2 + 3 + … + i, which grows roughly as i²/2, the loop terminates after about √(2n) iterations, so the time complexity is O(√n). To trace the output, note that each pass increments i, adds it to s, and prints once. For example, when n=10, the execution proceeds as follows:

  • i=2, s=3: prints “Hello Students”
  • i=3, s=6: prints “Hello Students”
  • i=4, s=10: prints “Hello Students”
  • i=5, s=15: prints “Hello Students” (the condition was checked while s was still 10, so the body runs once more)

So, “Hello Students” is printed 4 times.
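
As a sanity check, here is a small sketch (the class name WhileLoopDemo, the countPrints helper, and the main method are illustrative additions) that counts the prints and compares the count against √(2n):

public class WhileLoopDemo {
    // Returns how many times the loop body (and hence the print) executes for a given n.
    static int countPrints(int n) {
        int i = 1, s = 1, count = 0;
        while (s <= n) {
            i++;
            s = s + i;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 10_000}) {
            System.out.println("n=" + n + " prints=" + countPrints(n)
                    + " sqrt(2n)=" + Math.sqrt(2.0 * n));
        }
        // n=10 yields 4 prints, consistent with the trace above,
        // and the counts stay close to sqrt(2n) as n grows.
    }
}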

Code Sample 3: Conditional Execution

void func(int n) {
    if (n == 1) {
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                System.out.println("Find my TC");
                break;
            }
        }
    }
}

Explanation: The nested loops run only when n equals 1; for any other value of n, nothing happens, so the time complexity is O(1). When n=1, the outer and inner loops each execute a single iteration (and the break exits the inner loop immediately), so “Find my TC” is printed exactly once. For example, when n=5, no output is produced.
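
A quick sketch of a driver (the class name ConditionalDemo and the embedded copy of the method are purely for illustration) that exercises both cases:

public class ConditionalDemo {
    static void func(int n) {
        if (n == 1) {
            for (int i = 1; i <= n; i++) {
                for (int j = 1; j <= n; j++) {
                    System.out.println("Find my TC");
                    break; // exits the inner loop after the first print
                }
            }
        }
    }

    public static void main(String[] args) {
        func(1); // prints "Find my TC" exactly once
        func(5); // prints nothing
    }
}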

Code Sample 4: Nested Loops with a Doubling Inner Loop

void func(int n) {
    for (int i = n / 2; i < n; i++) {
        for (int j = 0; j + n / 2 < n; j++) {
            for (int k = 1; k < n; k = k * 2) {
                System.out.println("Nested time complexity");
            }
        }
    }
}

Explanation: This code sample contains three nested loops. The outer loop runs from n/2 to n−1 (about n/2 iterations), the middle loop runs while j + n/2 < n (also about n/2 iterations), and the innermost loop doubles k from 1 until it reaches n (about log₂ n iterations). The total work is therefore roughly (n/2) × (n/2) × log₂ n, giving a time complexity of O(n² log n). The statement “Nested time complexity” is printed once per innermost iteration, so the total number of prints depends on n. For example, when n=5, the outer loop runs 3 times (i = 2, 3, 4), the middle loop runs 3 times (j = 0, 1, 2), and the inner loop runs 3 times (k = 1, 2, 4), so the line is printed 3 × 3 × 3 = 27 times.
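
The iteration count can be verified with a minimal counting sketch (the class name NestedDemo, the countPrints helper, and the main method are illustrative additions):

public class NestedDemo {
    // Counts how many times the innermost statement would execute for a given n.
    static long countPrints(int n) {
        long count = 0;
        for (int i = n / 2; i < n; i++) {
            for (int j = 0; j + n / 2 < n; j++) {
                for (int k = 1; k < n; k = k * 2) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countPrints(5));  // 27 = 3 * 3 * 3
        System.out.println(countPrints(16)); // 256 = 8 * 8 * 4
    }
}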

Conclusion: Understanding the time complexity and outputs of code samples is crucial for writing efficient software solutions. By analyzing code snippets and their performance characteristics, developers can spot bottlenecks early and choose approaches that scale with input size.

Types:

Time complexity is a measure used in computer science to describe the amount of time an algorithm takes to complete as a function of the length of its input. Several types of time complexities are commonly discussed (a few illustrative Java snippets follow the list):

  1. Constant Time (O(1)): The algorithm’s runtime does not depend on the size of the input. No matter how large the input is, the algorithm takes the same amount of time to complete.
  2. Logarithmic Time (O(log n)): The algorithm’s runtime grows logarithmically as the size of the input increases. Algorithms with logarithmic time complexity typically halve the search space in each step.
  3. Linear Time (O(n)): The algorithm’s runtime increases linearly with the size of the input. For each additional element in the input, the algorithm requires a proportional amount of time to process it.
  4. Linearithmic Time (O(n log n)): The algorithm’s runtime grows in proportion to the size of the input multiplied by the logarithm of the input size. Many efficient sorting algorithms, like Merge Sort and Quick Sort, have linearithmic time complexity.
  5. Quadratic Time (O(n²)): The algorithm’s runtime grows quadratically with the size of the input. For each element in the input, the algorithm may need to perform operations on all other elements.
  6. Cubic Time (O(n³)): The algorithm’s runtime grows cubically with the size of the input. Algorithms with cubic time complexity often involve three nested loops, each iterating over the input size.
  7. Exponential Time (O(2^n)): The algorithm’s runtime doubles with each additional element in the input. Exponential time complexity is often seen in brute-force algorithms that consider all possible combinations.
  8. Factorial Time (O(n!)): The algorithm’s runtime grows factorially with the size of the input. Factorial time complexity is typically seen in algorithms that generate all permutations or combinations of a set of elements.
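
To make these categories concrete, here is a small sketch of representative Java methods (the method names, and the assumption of a pre-sorted array for binary search, are illustrative choices rather than part of the discussion above):

import java.util.Arrays;

public class ComplexityExamples {
    // O(1): constant time, independent of the array length.
    static int first(int[] arr) {
        return arr[0];
    }

    // O(log n): binary search halves the remaining range at each step (the array must be sorted).
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    // O(n): a single pass over the input.
    static int sum(int[] arr) {
        int total = 0;
        for (int x : arr) total += x;
        return total;
    }

    // O(n log n) on average: sorting via the standard library.
    static void sortInPlace(int[] arr) {
        Arrays.sort(arr);
    }

    // O(n^2): every element is compared against every later element.
    static boolean hasDuplicate(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[i] == arr[j]) return true;
            }
        }
        return false;
    }
}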

Algorithms with higher time complexities, such as quadratic time (O(n²)), cubic time (O(n³)), exponential time (O(2^n)), or factorial time (O(n!)), should be avoided whenever possible, especially for large input sizes. While they may be suitable for small datasets or specialized tasks, their runtime grows rapidly as the input size increases, making them impractical for many real-world applications.

In summary, algorithms with time complexity types like constant, logarithmic, linear, and linearithmic are generally considered the best choices for efficient and scalable solutions. However, the choice of the best time complexity type depends on the specific requirements and constraints of the problem at hand.
