Speed in scientific computing is measured in megaflops, gigaflops, teraflops, and petaflops. A megaflop is 10^6 floating point arithmetic operations (+, -, *, /) in one second. What are a gigaflop, a teraflop, and a petaflop? To determine the real megaflop rate of a given algorithm on a given computer, you must first determine theoretically the total number of floating point arithmetic operations the algorithm takes, and then divide that by 10^6 times the total time taken to run the algorithm. Determine the speed of your matrix inverse function by timing how long it takes to invert a random matrix (you should use the function given in lecture to generate a random square matrix) of size N, where N takes the integer values:
i. 2 < N < 50 (~50 values)
ii. N = {55, 60, 65, ..., 200} (~30 values in increments of 5)
iii. N = {225, 250, 275, ..., 1000} (~30 values in increments of 25)
iv. N = {1200, 1400, ..., 2000} (~10 values in increments of 200)
Plot megaflops vs. log2(N).
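The benchmark above can be sketched as follows. This is a minimal sketch, not the required solution: it uses NumPy's `np.random.rand` as a stand-in for the random-matrix generator given in lecture, and `np.linalg.inv` as a stand-in for your own matrix inverse function. The flop model `2*n**3` is an assumed operation count for inverting an n-by-n matrix; substitute the count you derive theoretically for your algorithm.

```python
import time
import numpy as np

def megaflops_for_inverse(n):
    """Time one n-by-n matrix inversion and return the megaflop rate."""
    a = np.random.rand(n, n)          # stand-in for the lecture's random-matrix generator
    t0 = time.perf_counter()
    np.linalg.inv(a)                  # stand-in for your matrix inverse function
    elapsed = time.perf_counter() - t0
    flops = 2 * n**3                  # ASSUMED flop count; replace with your derived count
    return flops / (1e6 * elapsed)    # megaflops = flops / (10^6 * seconds)

# The four ranges of N from the problem statement:
sizes = (list(range(3, 50))               # i.   2 < N < 50
         + list(range(55, 201, 5))        # ii.  55, 60, ..., 200
         + list(range(225, 1001, 25))     # iii. 225, 250, ..., 1000
         + list(range(1200, 2001, 200)))  # iv.  1200, 1400, ..., 2000

# Example: measure one size; loop over all of `sizes` for the full experiment,
# then plot the rates against np.log2(sizes).
rate = megaflops_for_inverse(100)
```

Timings for small N can be noisy because a single inversion finishes in microseconds; repeating each measurement several times and keeping the best (or mean) run gives a steadier curve.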