Finally, the last week has come: week 12! And with it the last test, Test #3, on Big-Oh and Big-Omega. I'm extremely glad that floating-point numbers weren't on this test, since, as I mentioned in my previous post, they still confuse me! Overall, I believe I did fairly well, though I feel a bit uneasy about my conclusions, since in some cases I wasn't entirely sure whether they fully followed from the introductions of each proof. Nevertheless, I can't complain, since the test as a whole wasn't terrible, mainly because I worked hard to study and to complete Assignment #3.
On the last day we talked about condition numbers and how to derive them, and to our surprise, the process of finding the formula for the condition number leads us to the derivative of the function. The derivative is multiplied by |x / f(x)|, a factor that falls out during the derivation. Putting this knowledge into action, the condition number of x^5 is... well, as said before, take the derivative (5x^4) and multiply it by |x / x^5|, which gives 5. I'm not exactly sure, but I believe that since the condition number is a ratio of relative errors, the smaller it is, the more stability we should have. However, in the case of another function produced by this same process (e.g. |x·tan x|, which is what the recipe gives for cos x), I'm quite confused...
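To convince myself the recipe works, here's a quick Python sketch (not from the lecture; the central-difference step size h and the test points are my own choices). It estimates f'(x) numerically and forms κ(x) = |x · f'(x) / f(x)|:

```python
import math

def condition_number(f, x, h=1e-6):
    # Relative condition number kappa(x) = |x * f'(x) / f(x)|,
    # with f'(x) estimated by a central difference (an approximation).
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

# f(x) = x**5 has kappa(x) = |x * 5x^4 / x^5| = 5 for every x != 0
print(condition_number(lambda t: t**5, 2.0))   # ~5.0

# f(x) = cos(x) has kappa(x) = |x * tan(x)|, which blows up near pi/2
print(condition_number(math.cos, 1.5))         # ~|1.5 * tan(1.5)|
```

The x^5 case confirms the constant 5 from class; the cos case shows a condition number that depends on x and gets huge near pi/2, where tiny input errors are amplified enormously.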
The next important point about condition numbers is that:
"A good condition number is necessary for stability, but it is not sufficient!"
Now, the question is: what does stability mean in this sense? Well, one example of an unstable computation is 100 + 0.1 + 0.1, which in limited-precision arithmetic rounds down to 100 unless the 0.1s are added to each other first. Another is 11.1156 - 11.1264, where subtracting two nearly equal numbers wipes out almost all the significant digits [catastrophic cancellation!]. The quadratic formula was discussed next, and then a method for approximating functions. The approximation of functions led us to the Taylor polynomial/series, and to an island where all we had was the sand and a stick to calculate e^x.
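The slide examples assumed a machine with only a few significant digits; ordinary Python doubles show the same two effects if the numbers are picked at the right scale (1e16 and 1 + 1e-15 below are my own stand-ins for the lecture's values):

```python
# Order of operations matters: a lone 1 is smaller than half an ulp of
# 1e16 (the spacing between doubles there is 2), so it is rounded away
# when added on its own.
big = 1e16
print((big + 1) + 1 == big)    # True: each 1 is absorbed separately
print(big + (1 + 1) == big)    # False: the 2 survives the addition

# Catastrophic cancellation: subtracting nearly equal numbers wipes out
# the leading digits, leaving mostly rounding noise.
a, b = 1 + 1e-15, 1.0
print(a - b)                   # close to, but not exactly, 1e-15
```

The subtraction result differs from the true answer 1e-15 because 1 + 1e-15 was already rounded when it was stored, and the subtraction then promotes that tiny rounding error into the leading digits.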
In the last slide, we saw that we can break the total error into two parts using the triangle inequality: |g(x') - f(x)| ≤ |g(x') - f(x')| + |f(x') - f(x)|, where g is the function approximating f and x' is an approximate value of x. The first term accounts for the error in the approximation itself, and the second for the error in the input.
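Putting the last two ideas together, here's a small sketch (my own numbers, not the lecture's) where g is a degree-10 Taylor polynomial for e^x and x' is a slightly perturbed input, so the two terms of the triangle inequality separate truncation error from input error:

```python
import math

def exp_taylor(x, n=10):
    # Degree-n Taylor polynomial of e^x around 0: sum of x^k / k!
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x_true = 1.0
x_meas = 1.0001   # a slightly-off input x', standing in for input error

total       = abs(exp_taylor(x_meas) - math.exp(x_true))
trunc_part  = abs(exp_taylor(x_meas) - math.exp(x_meas))  # |g(x') - f(x')|
input_part  = abs(math.exp(x_meas)   - math.exp(x_true))  # |f(x') - f(x)|

# Triangle inequality: |g(x') - f(x)| <= |g(x') - f(x')| + |f(x') - f(x)|
print(total <= trunc_part + input_part)   # True
```

With these numbers the input error (about e · 10^-4) dwarfs the truncation error of the degree-10 polynomial, which is a nice illustration that a better approximation g can't save you from a badly measured x.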
This about wraps up CSC165... and I have to say, I never expected this course to be so amazingly interesting, especially since when I first heard of it, it was described as a proof- and logic-filled course. Surprisingly, I think I found this course more interesting than CSC148, though my final verdict is yet to be decided.
Nevertheless, I had an excellent time in CSC165. Thanks, Danny! I hope to have you as a professor for another course in later years; you're an awesome prof!
The End.