SE250:lab-1:klod004

The C program measured the time taken for the addition operation using a while loop. A counter was declared and initialized to 0, and the while loop repeatedly added 1 to it until it reached one million. Another variable was initialized to the value of the clock() function just before the loop. After the loop completed, the elapsed time was calculated as time = clock() - time; that is, the current time minus the time recorded before the loop was executed. This difference was then displayed on the console window. The result was 2 milliseconds, so the C program takes 2 milliseconds to perform the addition operation a million times, meaning one addition takes 2 nanoseconds. (A sketch of this timing code is given at the end of this report.)

Some problems were encountered along the way: remembering how to use Visual Studio took some effort, creating a project produced errors which were eventually resolved, and the clock() function was a bit difficult to use until the lab supervisor helped. Other than those, no problems were noticed.

Another build of the program performed the addition a billion times. This yielded a result of 2154 milliseconds, which is approximately 2.154 nanoseconds per addition, consistent with the previous result.

Results per addition, by type:
* int: 2 nanoseconds
* long: 2.235 nanoseconds
* short: the maximum value of a short is too small to count to one million, so the time could not be measured this way
* float: 8 nanoseconds
* double: 8.6045 nanoseconds

These results show that it takes the computer longer to add float and double numbers than long or int. The likely cause is not the amount of memory allocated to store these types, but the extra work a floating-point addition requires, such as aligning exponents and normalising the result.
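
The report describes the timing loop in prose only, so the following is a minimal sketch of that approach, assuming a while loop counting to one million and the clock() function from <time.h>. The volatile qualifier and the variable names are assumptions not present in the original (without volatile, an optimising compiler may remove the loop entirely); the second loop illustrates how the float/double figures above could be reproduced by changing the counter's type.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

#define ITERATIONS 1000000L

int main(void) {
    /* volatile discourages the compiler from deleting the loop;
       this is an addition not mentioned in the original report. */
    volatile long n = 0;

    clock_t start = clock();
    while (n < ITERATIONS) {
        n = n + 1;               /* the integer addition being timed */
    }
    clock_t ticks = clock() - start;

    double ms = 1000.0 * ticks / CLOCKS_PER_SEC;
    printf("long: %.0f ms total, %.3f ns per addition\n",
           ms, ms * 1e6 / ITERATIONS);

    /* Repeating the loop with a floating-point counter gives the
       float/double timings quoted above. */
    volatile double d = 0.0;
    start = clock();
    while (d < (double)ITERATIONS) {
        d = d + 1.0;             /* the floating-point addition being timed */
    }
    ticks = clock() - start;

    ms = 1000.0 * ticks / CLOCKS_PER_SEC;
    printf("double: %.0f ms total, %.3f ns per addition\n",
           ms, ms * 1e6 / ITERATIONS);

    return 0;
}
</syntaxhighlight>

Since one millisecond is a million nanoseconds and the loop runs a million times, the per-addition cost in nanoseconds is numerically equal to the total time in milliseconds, which is why 2 ms for a million additions works out to 2 ns each.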