SE250:lab-1:sshi080

Introduction

First I had to remember how to write a program in C, since I hadn't done so in months. After a bit of fiddling around I started to remember the things you need to do in C.

Then I wrote a for loop that adds 1 to i a set number of times (the full code is in the Code section below).

The lab exercise sheet mentioned the clock() function. I initially had no idea how this function works, so I had to search on Google for its input/output characteristics.
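For anyone else who had to look it up: clock() takes no arguments and returns the processor time the program has used so far, measured in clock ticks. Here is a minimal sketch of just that call (the printed numbers depend on the machine, since CLOCKS_PER_SEC varies between systems):

#include <stdio.h>
#include <time.h>

int main() {
	/* clock() takes no arguments and returns the CPU time used so far, in ticks */
	clock_t ticks = clock();

	/* the number of ticks in one second is given by CLOCKS_PER_SEC */
	printf("Ticks used so far: %ld (at %ld ticks per second)\n",
		(long)ticks, (long)CLOCKS_PER_SEC);

	return 0;
}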

After some searching I wrote some code that records the time immediately before the for loop starts and again immediately after it finishes. I then calculated the difference between those two readings to get how long the additions in the for loop took.

I realised the time difference wasn't given in seconds; it was given in clock ticks. I found that dividing the ticks by CLOCKS_PER_SEC gives the time in seconds.
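For example, assuming CLOCKS_PER_SEC is 1,000,000 (a common value, though it can differ between systems), a measured difference of 6,320,000 ticks works out to 6,320,000 / 1,000,000 = 6.32 seconds.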

Code

#include <stdio.h>
#include <time.h>

int main() {

	int i;
	clock_t time_now;
	clock_t time_after;
	double time;

	/* record the tick count just before the loop starts */
	time_now = clock();

	/* repeatedly add 1 to i until the limit is reached */
	for(i = 0; i < 2000000000; i++) {
		i = i+1;
	}

	/* record the tick count after the loop and convert the difference to seconds */
	time_after = clock();
	time = (time_after - time_now) / (double)CLOCKS_PER_SEC;

	printf("Processing %d additions: %.10lf seconds\n", i, time);

	return 0;
}

Results

I then tested the following values of i:

  • i = 10,000,000: 0.03 seconds
  • i = 50,000,000: 0.18 seconds
  • i = 100,000,000: 0.34 seconds
  • i = 200,000,000: 0.63 seconds
  • i = 1,000,000,000: 3.3 seconds
  • i = 2,000,000,000: 6.32 seconds

Since I had spare time, I compiled and ran the code on the CS Linux server. The results were very different:

  • Processing 100,000,000 additions: 1.4800000000 seconds
  • Processing 500,000,000 additions: 7.4000000000 seconds
  • Processing 1,000,000,000 additions: 14.7300000000 seconds

The calculations took much longer: the 1 billion additions took 3.3 seconds on the local machine but 14.73 seconds on the Linux server. It's safe to say that the Linux server is much slower.
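Dividing the two times, 14.73 / 3.3 ≈ 4.5, so the same 1-billion-addition run took roughly four and a half times as long on the server. This is only a rough comparison, since the hardware and whatever else was running on the server at the time are different from my local machine.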