https://dl.dropboxusercontent.com/u/23984607/SJUTresults/SJUTresults.xls

This software is in active development, so please download the latest copy immediately before processing results. For example, this version has course codes only up through last semester, so it may be missing renamed or new courses.

This version can merge multiple sets of results, whether they are in multiple files, on multiple sheets in one file, or in multiple sections on a single sheet.

Note: While this software has been subjected to testing and is considered reliable, the user should scan through the final results file to check that items such as the course code, faculty, etc. are correct. The most important check is that the marks are OUT OF 100. *The software defaults to “out of 100”, so be sure to select the “out of 50” option before processing, if necessary.*

Please direct any comments, questions or suggestions to either Mr. Salaman or Dr. Ham. An easy way to do that is by replying to this post on this website.



**Remember:** These are simply typical exam questions; they are **NOT** the questions that will appear on the exam. Therefore, do not cram on them.

*Relax* and let what you learned during the semester come out on its own.

- By hand (not recommended)
- With a spreadsheet (recommended)
  - Remember that in both MS Excel and OpenOffice/LibreOffice Calc there are tools for doing ANOVA
    - Excel: Data > Data Analysis
    - Calc: Data > Statistics

- Sometimes the ANOVA function in Calc gives an error. If so, then in the cell where “Total” is calculated, replace the column-by-column absolute referencing with a single block that covers all the data. For example, if the function is **=DEVSQ($B$2:$B$6,$C$2:$C$6,$D$2:$D$6,$E$2:$E$6)**, replace it with **=DEVSQ(B2:E6)**.

- Using an online ANOVA calculator. These all work:
  - http://vassarstats.net/ – Select **ANOVA** along the left-hand side, then the **One-way ANOVA** calculator. This one is nice because it also does the Tukey HSD test.
  - http://easycalculation.com/statistics/one-way-anova.php
  - http://onlinestatbook.com/stat_analysis/index.html – This is by the author of our book, and it has built-in data sets, including Smiles & Leniency, but it is a little harder to enter the data than the Vassar calculator.
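The DEVSQ substitution mentioned above works because DEVSQ, given several ranges, pools every value into a single data set before summing squared deviations about the common mean, which is exactly what the single block does. A minimal pure-Python sketch of that equivalence (the numbers here are made up for illustration):

```python
def devsq(*ranges):
    """Sum of squared deviations from the mean of ALL values,
    mimicking spreadsheet DEVSQ over one or more ranges."""
    values = [v for rng in ranges for v in rng]
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

# Columns B..E of a small made-up table
colB, colC, colD, colE = [1, 2], [3, 4], [5, 6], [7, 8]
block = colB + colC + colD + colE  # the single block, like B2:E3

# DEVSQ over separate columns equals DEVSQ over the block,
# since both pool the same values into one data set
print(devsq(colB, colC, colD, colE))  # 42.0
print(devsq(block))                   # 42.0
```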

You should be able to take this from data to ANOVA to Tukey HSD (if ANOVA gives you the green light).

Here is the data:

```
575 565 600 725
542 593 651 700
530 590 610 715
539 579 637 685
570 610 629 710
```

The columns represent different levels of the factor Power, and the rows are the samples for each power level.
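As a check on whichever tool you use, the one-way ANOVA for this table can also be computed by hand. The sketch below does so in pure Python (standard library only; the variable names and layout are my own, not part of the assignment):

```python
import math

# Columns = levels of the factor Power, rows = samples (data from above)
groups = [
    [575, 542, 530, 539, 570],
    [565, 593, 590, 579, 610],
    [600, 651, 610, 637, 629],
    [725, 700, 715, 685, 710],
]

k = len(groups)                      # number of groups (power levels)
N = sum(len(g) for g in groups)      # total number of observations
grand_mean = sum(sum(g) for g in groups) / N

# Between-groups and within-groups sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between, df_within = k - 1, N - k
F = (ss_between / df_between) / (ss_within / df_within)

print(f"SS_between = {ss_between:.2f}, SS_within = {ss_within:.2f}")
print(f"F({df_between}, {df_within}) = {F:.2f}")   # F(3, 16) ≈ 66.80
```

The F statistic lands near 66.8 with (3, 16) degrees of freedom, far beyond the tabled 5% critical value (about 3.24), so the ANOVA gives the green light to follow up with Tukey HSD.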

The purpose of the assignment was to demonstrate:

- That there are a variety of curves that could pass through *n* points, and that it is important to have an *objective* in choosing the appropriate curve. In this case there are two objectives:
  - Smooth curves, so the robot is not subjected to jerking motion
  - Shortest curve, so the robot can pass through the points most efficiently

- That working with splines requires handling a piecewise function. In this case the integration must be done piecewise.
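The piecewise point can be illustrated with a toy example: to integrate along a spline, you integrate each polynomial on its own subinterval and sum the pieces. A minimal sketch (a made-up two-piece path, not the assignment's spline):

```python
import math

# Toy piecewise path: y = x on [0, 3], then y = 3 on [3, 5]
# (continuous at x = 3, like adjacent spline segments sharing a knot)
pieces = [
    (0.0, 3.0, lambda x: x),
    (3.0, 5.0, lambda x: 3.0),
]

def piece_length(f, a, b, n=1000, h=1e-6):
    """Trapezoidal integral of sqrt(1 + f'(x)^2) on [a, b],
    with a central-difference estimate of f'."""
    def g(x):
        dfdx = (f(x + h) - f(x - h)) / (2 * h)
        return math.sqrt(1 + dfdx ** 2)
    dx = (b - a) / n
    return dx * (0.5 * (g(a) + g(b)) + sum(g(a + i * dx) for i in range(1, n)))

# Integrate piece by piece and sum -- the same pattern as a spline arc length
total = sum(piece_length(f, a, b) for a, b, f in pieces)
print(f"total path length ≈ {total:.4f}")   # 3*sqrt(2) + 2 ≈ 6.2426
```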

This problem involves interpolation and calculus, so a good tool is Maxima, which has the ability to do both Lagrangian Interpolation and Cubic Splines plus the ability to find the derivatives and do the necessary integration.

Maxima is available for installation on a Windows PC as wxMaxima (download here), but we will use the online version at http://www.maxima-online.org. Click on that link to bring up the online Maxima calculator in a new window, then cut and paste the code below into the **Instructions to Maxima** box. Clicking the button to evaluate will run the script and produce the path lengths and the plot.

To complete the assignment, repeat the calculation with the following (x,y) data, then send an e-mail to math@sjut.org containing:

- Name & Registration Number
- (x,y) data
- Brief comparison of the two path lengths and the shape of the curves – *remembering the objective*.
- The plot of the paths (either “Copy Image” and paste into the email, or “Save Image As” and attach to the email).

**(x,y) data**

```
[2,4],[4,5],[5,6],[6,3],[8,2],[10.6,5]
```

**Maxima script to calculate the lengths:**

```
/* Enter the data */
A:matrix([2,7.2],[4.5,7.1],[5.25,6],[7.81,5],[9.2,3.5],[10.6,5]);
kmax:length(A)-1$
x0:lmin(list_matrix_entries(submatrix(A,2)))$
x1:lmax(list_matrix_entries(submatrix(A,2)))$
/* Nth order polynomial using Lagrangian interpolation */
load(interpol)$
f_nth: lagrange(A)$
df_nth: sqrt(1+(diff(f_nth,x)^2))$
L_nth:romberg(df_nth,x,x0,x1);
/* Cubic spline interpolation */
/* Calculate the splines */
f_cs:cspline(A)$
/* Pull out the cubic polynomial for each subinterval */
h1[i,j]:=1$
L_splines:0$
f_splines: matrix([0])$
for k:1 thru kmax do
( kill(h2,h3),
h2[i,j]:=A[k,1]+(i-1)/3*(A[k+1,1]-A[k,1]),
h3[i,j]:=ev(f_cs,x=C2[i,1]),
C1:genmatrix(h1,4,1), C2:genmatrix(h2,4,1), C3:C2^2, C4:C2^3,
C:addcol(C1,C2,C3,C4),
D:genmatrix(h3,4,1),
aCS:invert(C).D,
f_spline:matrix([1,x,x^2,x^3]).aCS,
f_splines:addrow(f_splines,[f_spline]),
df_spline:sqrt(1+(diff(f_spline,x))^2),
L_splines:L_splines+romberg(df_spline,x,A[k,1],A[k+1,1])
)$
/* The result */
f_splines:submatrix(1,f_splines);
L_splines;
L_nth;
L_diff:(L_nth-L_splines)/L_nth * 100;
plot2d ([f_nth, f_cs], [x, x0, x1], [legend, "Nth Order Polynomial", "Cubic Splines"])$
```
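For readers without Maxima handy, the Lagrange half of the calculation can be sketched in plain Python. This is not the assignment's method, just a cross-check: it evaluates the Lagrange interpolating polynomial directly and integrates sqrt(1 + f'(x)^2) with a trapezoidal rule and a central-difference derivative, instead of Maxima's symbolic `diff` and `romberg`:

```python
import math

# Same data matrix A as in the Maxima script
A = [(2, 7.2), (4.5, 7.1), (5.25, 6), (7.81, 5), (9.2, 3.5), (10.6, 5)]

def lagrange(pts):
    """Return the Lagrange interpolating polynomial through pts as a callable."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(pts):
            term = yi
            for j, (xj, _) in enumerate(pts):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

f = lagrange(A)

def arc_length(f, a, b, n=20000, h=1e-6):
    """Trapezoidal integral of sqrt(1 + f'(x)^2) with a central-difference f'."""
    def integrand(x):
        dfdx = (f(x + h) - f(x - h)) / (2 * h)
        return math.sqrt(1 + dfdx ** 2)
    dx = (b - a) / n
    s = 0.5 * (integrand(a) + integrand(b)) + sum(integrand(a + i * dx) for i in range(1, n))
    return s * dx

L_nth = arc_length(f, A[0][0], A[-1][0])
print(f"Lagrange path length ≈ {L_nth:.3f}")
```

The printed length should agree with Maxima's `L_nth` to a few decimal places; the spline length needs the separate piecewise treatment shown in the script above. Note that any curve through the points must be at least as long as the straight-line polyline connecting them, which is a handy sanity check on the result.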

Set 2: General Problems


General Exercise

Matrix Exercise

ODE Solving Exercise

Euler: Lecture Slide for viewing (direct download here)

Runge-Kutta: Lecture Slides for viewing (direct download here)

Finite Difference: Lecture Slide for viewing (direct download here)

http://www.stat.yale.edu/Courses/1997-98/101/confint.htm

That final note is also below:

In this coverage of confidence intervals, there are two different means and three different standard deviations. The means are that of the population, [latex]\mu=\sum x/N[/latex], and that of the sample, [latex]\overline{x}[/latex] or [latex]M=\sum x/N[/latex]. The definitions are the same for both; the difference is that [latex]N[/latex] is the size of the population in the first instance and of the sample in the second. The standard deviations have different definitions, and one has a different meaning.

The first two are the standard deviation of the population

[latex]\sigma=\sqrt{\dfrac{\sum\left(x-\mu\right)^{2}}{N}}[/latex]

and the standard deviation of the sample

[latex]s=\sqrt{\dfrac{\sum\left(x-M\right)^{2}}{N-1}}[/latex].

The standard deviation of the sample is a direct estimate of [latex]\sigma[/latex]. The last one is the standard deviation of the sample mean, [latex]\sigma_{M}=\sigma/\sqrt{N}[/latex], also known as the standard error of the mean, which is essentially a measure of how much the sample mean will vary from the population mean due to the fact that it comes from a sample and not the whole population. Clearly [latex]\sigma_{M}[/latex] goes to zero as the sample size approaches the population size. Given that [latex]s[/latex] is an estimate of [latex]\sigma[/latex], then [latex]\sigma_{M}\approx s/\sqrt{N}[/latex]. That is why the 95% confidence interval is either [latex]M\pm1.96\sigma/\sqrt{N}[/latex] or [latex]M\pm1.96\sigma_{M}[/latex] if [latex]\sigma[/latex] is known, and [latex]M\pm t_{c}s/\sqrt{N}[/latex] or [latex]M\pm t_{c}\sigma_{M}[/latex], where [latex]t_{c}[/latex] is from the [latex]t[/latex] tables, if [latex]\sigma[/latex] is unknown and being estimated by [latex]s[/latex].
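As a concrete check of these formulas, here is a short pure-Python example. The sample data are made up for illustration, and [latex]t_{c}=2.776[/latex] is the tabled two-sided 95% value for [latex]N-1=4[/latex] degrees of freedom:

```python
import math

x = [12, 15, 11, 14, 13]          # made-up sample
N = len(x)
M = sum(x) / N                    # sample mean
s = math.sqrt(sum((xi - M) ** 2 for xi in x) / (N - 1))  # sample sd (N - 1 divisor)
se = s / math.sqrt(N)             # estimate of sigma_M, the sd of the sample mean

# sigma is unknown, so use t_c from the t tables (df = N - 1 = 4, 95% two-sided)
t_c = 2.776
ci = (M - t_c * se, M + t_c * se)

print(f"M = {M}, s = {s:.4f}, sigma_M ≈ {se:.4f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")   # (11.037, 14.963)
```

With [latex]\sigma[/latex] known one would instead use 1.96 in place of [latex]t_{c}[/latex], giving the narrower interval described above.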