Monday, June 3, 2019

Comparing Convergence Of False Position And Bisection Methods Engineering Essay

Explain, with an example, why the rate of convergence of the false position method is faster than that of the bisection method.

Introduction

False position method

In numerical analysis, the false position method, or regula falsi method, is a root-finding algorithm that combines features from the bisection method and the secant method.

The method

The first two iterations of the false position method: the red curve shows the function f and the blue lines are the secants.

Like the bisection method, the false position method starts with two points a0 and b0 such that f(a0) and f(b0) are of opposite signs, which implies by the intermediate value theorem that the function f has a root in the interval [a0, b0], assuming continuity of the function f. The method proceeds by producing a sequence of shrinking intervals [ak, bk] that all contain a root of f. At iteration number k, the number

    ck = (f(bk) ak - f(ak) bk) / (f(bk) - f(ak))

is computed. As explained below, ck is the root of the secant line through (ak, f(ak)) and (bk, f(bk)). If f(ak) and f(ck) have the same sign, then we set ak+1 = ck and bk+1 = bk; otherwise we set ak+1 = ak and bk+1 = ck. This process is repeated until the root is approximated sufficiently well.

The above formula is also used in the secant method, but the secant method always retains the last two computed points, while the false position method retains two points which certainly bracket a root. On the other hand, the only difference between the false position method and the bisection method is that the latter uses ck = (ak + bk) / 2.

Bisection method

In mathematics, the bisection method is a root-finding algorithm which repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow.

The method is applicable when we wish to solve the equation f(x) = 0 for the scalar variable x, where f is a continuous function.

The bisection method requires two initial points a and b such that f(a) and f(b) have opposite signs. This is called a bracket of a root, for by the intermediate value theorem the continuous function f must have at least one root in the interval (a, b). The method now divides the interval in two by computing the midpoint c = (a + b) / 2 of the interval. Unless c is itself a root (which is very unlikely, but possible), there are now two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. We select the subinterval that is a bracket, and apply the same bisection step to it. In this way the interval that might contain a zero of f is reduced in width by 50% at each step. We continue until we have a bracket sufficiently small for our purposes. This is similar to binary search in computer science, where the range of possible answers is halved at each iteration. Explicitly, if f(a) f(c) < 0, the method keeps [a, c] by setting c as the new b; otherwise it keeps [c, b] by setting c as the new a.

Advantages and drawbacks of the bisection method

Advantages of the bisection method

The bisection method is always convergent. Since the method brackets the root, the method is guaranteed to converge.

As iterations are conducted, the interval gets halved. So one can guarantee the decrease in the error in the solution of the equation.
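Because the bracket is halved at every step, the worst-case error after n iterations is (b - a) / 2^n, so the number of iterations needed to reach a given tolerance can be bounded in advance. The short C sketch below is an editorial illustration of this bound; it is not part of the original essay, and the interval and tolerance are simply the ones used in Example 1 further down.

#include <stdio.h>
#include <math.h>

/* Illustration only: upper bound on the number of bisection steps needed
   before the bracket width (worst-case error) drops below tol. */
int main(void)
{
    double a = 1.0, b = 2.0, tol = 0.01;          /* example values */
    int n = (int)ceil(log2((b - a) / tol));       /* smallest n with (b - a)/2^n <= tol */
    printf("at most %d iterations needed\n", n);  /* prints 7 for [1, 2], tol = 0.01 */
    return 0;
}

For the interval [1, 2] and tolerance 0.01 this gives at most 7 iterations, which matches the seven bisection iterations reported in Example 1 below.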
Drawbacks of the bisection method

The convergence of the bisection method is slow, as it is simply based on halving the interval.

Even if one of the initial guesses is closer to the root, the method still takes a large number of iterations to reach the root.

If a function just touches the x-axis (Figure 3.8), the method will be unable to find a lower guess and an upper guess at which the function takes opposite signs.

For functions with a singularity where the sign reverses at the singularity, the bisection method may converge on the singularity (Figure 3.9). In such an example the two initial guesses are valid in the sense that the function values at them have opposite signs; however, the function is not continuous, so the theorem that a root exists between them is not applicable.

Figure 3.8. A function that just touches the x-axis has a single root that cannot be bracketed.

Figure 3.9. A function that has no root but changes sign across a singularity.

Source code for the false position method

The following C code was written for clarity instead of efficiency. It was designed to solve the same problem as the Newton's method and secant method code: to find the positive number x where cos(x) = x3. This problem is transformed into a root-finding problem of the form f(x) = cos(x) - x3 = 0.

#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x*x*x;
}

/* s, t : endpoints of the bracketing interval
   e    : tolerance on the bracket size
   m    : maximum number of iterations */
double FalsiMethod(double s, double t, double e, int m)
{
    int n, side = 0;
    double r, fr, fs = f(s), ft = f(t);

    for (n = 1; n <= m; n++) {
        r = (fs*t - ft*s) / (fs - ft);     /* root of the secant line */
        if (fabs(t - s) < e*fabs(t + s))
            break;
        fr = f(r);

        if (fr * ft > 0) {                 /* fr and ft have the same sign */
            t = r; ft = fr;
            if (side == -1) fs /= 2;       /* down-weight the stagnant endpoint */
            side = -1;
        } else if (fs * fr > 0) {          /* fr and fs have the same sign */
            s = r; fs = fr;
            if (side == +1) ft /= 2;
            side = +1;
        } else {                           /* fr is (numerically) zero */
            break;
        }
    }
    return r;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(0, 1, 5E-15, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614.

Example 1

Consider finding the root of f(x) = x2 - 3. Let step = 0.01, abs = 0.01 and start with the interval [1, 2].

Table 1. False-position method applied to f(x) = x2 - 3.

a        b      f(a)     f(b)   c        f(c)      Update   Step Size
1.0      2.0    -2.00    1.00   1.6667   -0.2221   a = c    0.6667
1.6667   2.0    -0.2221  1.0    1.7273   -0.0164   a = c    0.0606
1.7273   2.0    -0.0164  1.0    1.7317   -0.0012   a = c    0.0044

Thus, with the third iteration, we note that the last step, from 1.7273 to 1.7317, is less than 0.01, and |f(1.7317)| is also less than 0.01, so we take 1.7317 as our approximation of the root.

Note that after three iterations of the false-position method we have an acceptable answer (1.7317, where f(1.7317) = -0.0012), whereas with the bisection method it took seven iterations to find a (notably less accurate) acceptable answer (1.7344, where f(1.7344) = 0.0081).
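As an editorial aid (not part of the original essay), the following short C sketch reproduces the iterations of Table 1: it applies the false-position update to f(x) = x^2 - 3 on [1, 2] and stops when the step size falls below 0.01, printing one line per row of the table.

#include <stdio.h>
#include <math.h>

/* Mirrors Example 1: false position on f(x) = x^2 - 3 over [1, 2],
   stopping when the step size drops below 0.01 (illustrative sketch). */
static double f(double x) { return x*x - 3.0; }

int main(void)
{
    double a = 1.0, b = 2.0, c_old = a, c;
    for (int k = 1; k <= 20; k++) {
        c = (f(b)*a - f(a)*b) / (f(b) - f(a));   /* root of the secant line */
        printf("%d: a=%.4f b=%.4f c=%.4f f(c)=%.4f\n", k, a, b, c, f(c));
        if (fabs(c - c_old) < 0.01)              /* step-size test */
            break;
        if (f(a)*f(c) > 0) a = c; else b = c;    /* keep the bracket */
        c_old = c;
    }
    return 0;
}

Running it prints the same three iterates as Table 1 (1.6667, 1.7273, 1.7317) before the step-size test stops the loop.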
Example 2

Consider finding the root of f(x) = e-x (3.2 sin(x) - 0.5 cos(x)) on the interval [3, 4], this time with step = 0.001, abs = 0.001.

Table 2. False-position method applied to f(x) = e-x (3.2 sin(x) - 0.5 cos(x)).

a     b        f(a)      f(b)          c        f(c)          Update   Step Size
3.0   4.0      0.047127  -0.038372     3.5513   -0.023411     b = c    0.4487
3.0   3.5513   0.047127  -0.023411     3.3683   -0.0079940    b = c    0.1830
3.0   3.3683   0.047127  -0.0079940    3.3149   -0.0021548    b = c    0.0534
3.0   3.3149   0.047127  -0.0021548    3.3010   -0.00052616   b = c    0.0139
3.0   3.3010   0.047127  -0.00052616   3.2978   -0.00014453   b = c    0.0032
3.0   3.2978   0.047127  -0.00014453   3.2969   -0.000036998  b = c    0.0009

Thus, after the sixth iteration, we note that the final step, from 3.2978 to 3.2969, has a size less than 0.001, and |f(3.2969)| is also less than 0.001, so we take 3.2969 as our approximation of the root.

In this case, the solution we found was not as good as the solution we found using the bisection method (f(3.2963) = 0.000034799); however, we only used six instead of eleven iterations.

Source code for the bisection method

The following C program applies the bisection method to f(x) = x3 - 15 = 0, reading the two initial guesses from the user and iterating until the relative change in the bracket falls below epsilon.

#include <stdio.h>
#include <math.h>
#define epsilon 1e-6

int main(void)
{
    double g1, g2, g, v, v1, v2, dx;
    int found = 0, converged = 0, i;

    printf("enter the first guess\n");
    scanf("%lf", &g1);
    v1 = g1*g1*g1 - 15;                 /* f(x) = x^3 - 15 */
    printf("value 1 is %lf\n", v1);

    while (found == 0) {
        printf("enter the second guess\n");
        scanf("%lf", &g2);
        v2 = g2*g2*g2 - 15;
        printf("value 2 is %lf\n", v2);
        if (v1*v2 > 0)                  /* same sign: not a bracket, ask again */
            found = 0;
        else {
            found = 1;
            printf("right guess\n");
        }
    }

    i = 1;
    while (converged == 0) {
        printf("\niteration = %d\n", i);
        g = (g1 + g2) / 2;              /* midpoint of the bracket */
        printf("new guess is %lf\n", g);
        v = g*g*g - 15;
        printf("new value is %lf\n", v);
        if (v*v1 > 0) {
            g1 = g;                     /* root lies in [g, g2] */
            printf("the next guess is %lf\n", g);
            dx = (g1 - g2) / g1;
        } else {
            g2 = g;                     /* root lies in [g1, g] */
            printf("the next guess is %lf\n", g);
            dx = (g1 - g2) / g1;
        }
        if (fabs(dx) < epsilon)
            converged = 1;
        i = i + 1;
    }
    printf("\nthe calculated value is %lf\n", v);
    return 0;
}

Example 1

Consider finding the root of f(x) = x2 - 3. Let step = 0.01, abs = 0.01 and start with the interval [1, 2].

Table 1. Bisection method applied to f(x) = x2 - 3.

a       b       f(a)     f(b)    c = (a+b)/2  f(c)     Update   new b - a
1.0     2.0     -2.0     1.0     1.5          -0.75    a = c    0.5
1.5     2.0     -0.75    1.0     1.75         0.0625   b = c    0.25
1.5     1.75    -0.75    0.0625  1.625        -0.3594  a = c    0.125
1.625   1.75    -0.3594  0.0625  1.6875       -0.1523  a = c    0.0625
1.6875  1.75    -0.1523  0.0625  1.7188       -0.0457  a = c    0.0313
1.7188  1.75    -0.0457  0.0625  1.7344       0.0081   b = c    0.0156
1.7188  1.7344  -0.0457  0.0081  1.7266       -0.0189  a = c    0.0078

Thus, with the seventh iteration, we note that the final interval, [1.7266, 1.7344], has a width less than 0.01, and |f(1.7344)| is less than 0.01, so we take 1.7344 as our approximation of the root.

Example 2

Consider finding the root of f(x) = e-x (3.2 sin(x) - 0.5 cos(x)) on the interval [3, 4], this time with step = 0.001, abs = 0.001.

Table 2. Bisection method applied to f(x) = e-x (3.2 sin(x) - 0.5 cos(x)).

a       b       f(a)         f(b)          c = (a+b)/2  f(c)          Update   new b - a
3.0     4.0     0.047127     -0.038372     3.5          -0.019757     b = c    0.5
3.0     3.5     0.047127     -0.019757     3.25         0.0058479     a = c    0.25
3.25    3.5     0.0058479    -0.019757     3.375        -0.0086808    b = c    0.125
3.25    3.375   0.0058479    -0.0086808    3.3125       -0.0018773    b = c    0.0625
3.25    3.3125  0.0058479    -0.0018773    3.2812       0.0018739     a = c    0.0313
3.2812  3.3125  0.0018739    -0.0018773    3.2968       -0.000024791  b = c    0.0156
3.2812  3.2968  0.0018739    -0.000024791  3.289        0.00091736    a = c    0.0078
3.289   3.2968  0.00091736   -0.000024791  3.2929       0.00044352    a = c    0.0039
3.2929  3.2968  0.00044352   -0.000024791  3.2948       0.00021466    a = c    0.002
3.2948  3.2968  0.00021466   -0.000024791  3.2958       0.000094077   a = c    0.001
3.2958  3.2968  0.000094077  -0.000024791  3.2963       0.000034799   a = c    0.0005

Thus, after the 11th iteration, we note that the final interval, [3.2958, 3.2968], has a width less than 0.001, and |f(3.2968)| is also less than 0.001.

Convergence rate

Why don't we always use the false position method? Because there are times when it converges very, very slowly.

Example

What other methods can we use?

Comparison of the rate of convergence for the bisection and false-position methods
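The comparison material itself appears to be missing from the original page. As a rough, editor-added stand-in, the C sketch below counts the iterations each bracketing method needs on the same problem, f(x) = x^2 - 3 on [1, 2]. It uses a simple |f(c)| < tolerance stopping rule, which differs slightly from the step-size and interval-width criteria used in the examples above, but it shows the same trend.

#include <stdio.h>
#include <math.h>

/* Editorial illustration: count how many iterations bisection and false
   position each need to bring |f(c)| below a common tolerance. */
static double f(double x) { return x*x - 3.0; }

static void solve(double a, double b, double tol, int use_bisection)
{
    int n = 0;
    double c, fc;
    do {
        c = use_bisection ? (a + b) / 2.0
                          : (f(b)*a - f(a)*b) / (f(b) - f(a));
        fc = f(c);
        if (f(a)*fc > 0) a = c; else b = c;   /* keep a sign change in [a, b] */
        n++;
    } while (fabs(fc) > tol && n < 100);
    printf("%s: %d iterations, root ~ %.6f\n",
           use_bisection ? "bisection" : "false position", n, c);
}

int main(void)
{
    solve(1.0, 2.0, 0.01, 1);   /* bisection */
    solve(1.0, 2.0, 0.01, 0);   /* false position */
    return 0;
}

On this problem false position reaches the tolerance in fewer iterations than bisection. As noted above, however, this faster convergence is not guaranteed in general: there are functions for which false position converges very slowly because one endpoint of the bracket never moves.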
