
A comprehensive regression analysis of the London case study has been carried out to test four assumptions of regression:

1. Variables are normally distributed

2. Linear relationship between the independent and dependent variables

3. Homoscedasticity

4. Variables are measured without error

A preliminary analysis was carried out; there was no missing data and no ambiguous entries. Descriptive statistics were generated for the variables wfood, totexp, income, age and nk. They confirmed that no data were missing and that the mean and trimmed mean differed only slightly for each variable. The standard deviation and variance are low for wfood and nk, relatively high for age, and highest for totexp and income. All five variables have relatively high coefficients of variation, indicating dispersed data.
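
The kind of descriptive summary described above can be sketched in Python. The data below are synthetic stand-ins (the original survey data are not reproduced here); only the variable names follow the write-up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1519  # sample size quoted later in the write-up
# Synthetic stand-ins for three of the variables named above.
data = {
    "wfood":  rng.beta(2, 5, n),           # budget share, bounded in [0, 1]
    "totexp": rng.lognormal(5.0, 0.6, n),  # strictly positive and skewed
    "age":    rng.uniform(18, 80, n),
}

for name, x in data.items():
    mean    = x.mean()
    trimmed = stats.trim_mean(x, 0.05)     # 5% trimmed mean
    sd      = x.std(ddof=1)
    cv      = sd / mean                    # coefficient of variation
    print(f"{name}: mean={mean:.3f} trimmed={trimmed:.3f} sd={sd:.3f} cv={cv:.3f}")
```

A small gap between the mean and the trimmed mean, as reported above, suggests that extreme values are not dominating the averages, while the coefficient of variation puts variables on very different scales onto a comparable footing.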

The box plots visually identify the quartiles, minimum and maximum values, skewness and outliers for each variable, although the plot for nk is not informative because the data take only the values 1 or 2. All box plots are positively skewed apart from that of nk. There are 5 outliers for wfood, 47 for totexp, 58 for income, 27 for age and none for nk.
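
The outlier counts quoted above come from box plots; the Tukey whisker rule a box plot uses can be sketched as follows (again on synthetic data, not the original survey):

```python
import numpy as np

def box_plot_outliers(x):
    """Flag points outside Tukey's whiskers (1.5 * IQR beyond the
    quartiles), which is how a box plot marks its outliers."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)

rng = np.random.default_rng(1)
income = rng.lognormal(6.0, 0.5, 1519)  # positively skewed stand-in
print(box_plot_outliers(income).sum(), "points flagged as outliers")
```

Positively skewed variables like this tend to have outliers only on the upper side, consistent with the positive skew reported for wfood, totexp, income and age.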

Pearson's correlation was computed to assess the linear relationship between each pair of variables. Multicollinearity is indicated when Pearson's correlation exceeds 0.9, which is the case for the following pairs:

• wfood and nk (highly correlated)
• totexp and income (highly correlated)
• totexp and age (highly correlated)
• income and age (highly correlated)
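
A correlation screen of this kind can be sketched as follows; the data are synthetic, with totexp constructed to be nearly collinear with income purely to illustrate how the 0.9 threshold flags a pair:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1519
income = rng.lognormal(6.0, 0.5, n)
totexp = 0.9 * income + rng.normal(0, 0.1 * income.std(), n)  # near-collinear by construction
age = rng.uniform(20, 70, n)                                  # unrelated to the other two

names = ["totexp", "income", "age"]
X = np.column_stack([totexp, income, age])
R = np.corrcoef(X, rowvar=False)  # pairwise Pearson correlations
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        flag = "  <- possible multicollinearity" if abs(R[i, j]) > 0.9 else ""
        print(f"r({names[i]}, {names[j]}) = {R[i, j]:+.3f}{flag}")
```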

In the multiple regression equation, wfood is the dependent (Y) variable, while totexp, income, age and nk are the independent (X) variables. The standard error of each coefficient is small relative to the coefficient itself, indicating that the estimate does not differ vastly from its true value. The overall goodness-of-fit test supports the multiple regression model: the hypothesis test concludes that at least one slope is not equal to zero. The individual components were then tested, with the result that totexp and income do not fit the model, while the constant, age and nk do, as their t statistics exceed the critical t value.
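
The coefficient-level t tests described above can be sketched with a hand-rolled OLS fit. The data and coefficients below are invented for illustration and are not the case-study estimates:

```python
import numpy as np
from scipy import stats

def ols(y, X):
    """OLS coefficients, standard errors, t statistics and residuals."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = (resid @ resid) / (n - k)           # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se, beta / se, resid

rng = np.random.default_rng(3)
n = 1519
totexp = rng.lognormal(5.0, 0.5, n)
age = rng.uniform(20, 70, n)
nk = rng.integers(1, 3, n).astype(float)
# Invented coefficients; wfood here is a simulated budget share.
wfood = 0.45 - 2e-4 * totexp + 1e-3 * age + 0.02 * nk + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), totexp, age, nk])
beta, se, t, resid = ols(wfood, X)
t_crit = stats.t.ppf(0.975, n - X.shape[1])      # two-sided 5% critical value
for name, b, s, tv in zip(["const", "totexp", "age", "nk"], beta, se, t):
    verdict = "significant" if abs(tv) > t_crit else "not significant"
    print(f"{name}: b={b:+.5f} se={s:.5f} t={tv:+.2f} ({verdict})")
```

A regressor "fits the model" in the sense used above when its |t| exceeds this critical value, i.e. its coefficient is significantly different from zero.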

The VIF detects multicollinearity between the variables, and in this case it suggests that wfood is not strongly correlated with the other independent variables. R-squared is relatively low, indicating that the linear relationship between the Y and X variables explains relatively little of the variation in wfood (Y). The adjusted R-squared, which penalises additional regressors, is a more accurate measure of the goodness of fit of the model and is always lower than the R-squared.
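
The VIF can be computed by regressing each regressor on the others; a minimal sketch, again on synthetic data with income and totexp made collinear on purpose:

```python
import numpy as np

def vif(X, names):
    """VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing
    column j of X on the remaining columns plus an intercept."""
    n, k = X.shape
    out = {}
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[names[j]] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(4)
n = 1519
income = rng.lognormal(6.0, 0.5, n)
totexp = 0.9 * income + rng.normal(0, 0.1 * income.std(), n)  # collinear pair
age = rng.uniform(20, 70, n)                                  # unrelated

print(vif(np.column_stack([totexp, income, age]), ["totexp", "income", "age"]))
```

A common rule of thumb treats VIF above 10 as serious multicollinearity. The adjusted R-squared mentioned above is 1 − (1 − R²)(n − 1)/(n − p − 1), which is why it cannot exceed the plain R-squared.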

The Durbin-Watson statistic reveals whether autocorrelation exists; as it is 1.98307, very close to 2, there is no first-order autocorrelation.
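
The Durbin-Watson statistic is simple enough to compute directly; a sketch on simulated residuals:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences over the sum of squares.
    Values near 2 indicate no first-order autocorrelation; values near 0
    or 4 indicate positive or negative autocorrelation respectively."""
    d = np.diff(resid)
    return (d @ d) / (resid @ resid)

rng = np.random.default_rng(5)
white = rng.normal(size=1519)  # independent errors -> DW near 2
walk = np.cumsum(white)        # strongly positively autocorrelated -> DW near 0
print(round(durbin_watson(white), 3), round(durbin_watson(walk), 3))
```

On this scale the reported value of 1.98307 sits almost exactly at 2, consistent with the conclusion of no first-order autocorrelation.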

The Anderson-Darling, Ryan-Joiner and Kolmogorov-Smirnov tests indicate that the random errors are not normally distributed; however, the normal probability plot lies close to a straight line, which lends support to the normality assumption and the linearity of the model.
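
Two of the named tests are available in SciPy; a sketch on simulated residuals (the Ryan-Joiner test has no direct SciPy equivalent, so it is omitted here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
resid = rng.normal(0.0, 1.0, 1519)  # stand-in regression residuals

ad = stats.anderson(resid, dist="norm")          # Anderson-Darling
# Note: applying KS to standardized residuals with an estimated mean and
# sd is only approximate (the Lilliefors correction would be exact).
ks = stats.kstest(stats.zscore(resid), "norm")   # Kolmogorov-Smirnov

print(f"Anderson-Darling: stat={ad.statistic:.3f}, 5% critical={ad.critical_values[2]:.3f}")
print(f"Kolmogorov-Smirnov: stat={ks.statistic:.3f}, p={ks.pvalue:.3f}")
```

Normality is rejected when the Anderson-Darling statistic exceeds its critical value, or when the KS p-value falls below the chosen significance level.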

The histograms show the skewness, kurtosis and distribution of the data for each variable graphically. All histograms are positively skewed apart from that of nk, which is negatively skewed. Kurtosis measures the flatness of the distribution: wfood and nk are relatively flat, while income, totexp and age are relatively peaked compared with the normal distribution.

The Lagrange Multiplier, White's general, Glejser and Park tests show that there is heteroscedasticity in the model, whereas the Breusch-Pagan test shows there is none. Regarding autocorrelation, with a large sample of 1519 observations it is difficult to judge from the time series plots whether autocorrelation exists, but the conclusion is that none is present. Applying weighted least squares as a remedial measure still leaves heteroscedasticity in the model.
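
The Breusch-Pagan test mentioned above can be written out directly in its LM form: regress the squared residuals on the regressors and compare n·R² with a chi-square distribution. A sketch with deliberately heteroscedastic synthetic data:

```python
import numpy as np
from scipy import stats

def breusch_pagan(resid, X):
    """LM form of the Breusch-Pagan test. Under H0 (homoscedasticity),
    n * R² from regressing resid² on X is chi-square with k-1 df,
    where k counts the columns of X including the intercept."""
    n, k = X.shape
    u2 = resid ** 2
    beta, *_ = np.linalg.lstsq(X, u2, rcond=None)
    fit = X @ beta
    r2 = 1.0 - ((u2 - fit) ** 2).sum() / ((u2 - u2.mean()) ** 2).sum()
    lm = n * r2
    return lm, stats.chi2.sf(lm, k - 1)

rng = np.random.default_rng(7)
n = 1519
x = rng.uniform(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])
resid_het = rng.normal(size=n) * x  # error variance grows with x
lm, p = breusch_pagan(resid_het, X)
print(f"LM={lm:.1f}, p={p:.2g}")
```

The weighted least squares remedy mentioned above divides each observation by an estimate of its error standard deviation before refitting, so that the transformed errors are (ideally) homoscedastic.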

The cross-correlation for RESI1 suggests possible negative autocorrelation; however, the autocorrelation function (ACF) and partial autocorrelation function (PACF) for RESI1 show no autocorrelation. The LBQ and LM tests also indicate that there is no autocorrelation.
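
The LBQ (Ljung-Box Q) statistic can be written out directly from the sample autocorrelations; a sketch comparing white noise with an AR(1) series (both synthetic, standing in for residual series like RESI1):

```python
import numpy as np
from scipy import stats

def ljung_box(x, lags=10):
    """Ljung-Box Q = n(n+2) * sum_{k=1..lags} rho_k^2 / (n-k); under H0
    (no autocorrelation) Q is chi-square with `lags` degrees of freedom."""
    n = len(x)
    xc = x - x.mean()
    denom = xc @ xc
    q = sum((xc[k:] @ xc[:-k] / denom) ** 2 / (n - k) for k in range(1, lags + 1))
    return n * (n + 2) * q, stats.chi2.sf(n * (n + 2) * q, lags)

rng = np.random.default_rng(8)
n = 1519
white = rng.normal(size=n)
ar1 = np.empty(n)              # AR(1) with coefficient 0.6
ar1[0] = white[0]
for t in range(1, n):
    ar1[t] = 0.6 * ar1[t - 1] + white[t]

print("white noise:", ljung_box(white))
print("AR(1):     ", ljung_box(ar1))
```

A small p-value rejects the hypothesis of no autocorrelation up to the chosen number of lags; for genuine white noise the statistic stays near the chi-square mean.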

The model was then revised to see whether the assumptions of regression could be met. Seventeen clearly visible outliers, identified from the time series plots and scatter plots, were removed, and the variable income was dropped because the best-subsets results and the F (Wald) test indicated that it was not an influential variable and that the model would be a better revision without it.
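
The decision to drop income can be framed as a partial F test, comparing the residual sums of squares of the full and reduced models. A sketch on invented data in which income genuinely adds nothing once totexp is included:

```python
import numpy as np
from scipy import stats

def rss(y, X):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def partial_f(y, X_full, X_reduced):
    """F test for the restriction that the dropped columns have zero
    coefficients: F = ((RSS_r - RSS_f)/q) / (RSS_f/(n - k_full))."""
    n, k = X_full.shape
    q = k - X_reduced.shape[1]
    rss_f, rss_r = rss(y, X_full), rss(y, X_reduced)
    f = ((rss_r - rss_f) / q) / (rss_f / (n - k))
    return f, stats.f.sf(f, q, n - k)

rng = np.random.default_rng(9)
n = 1519
totexp = rng.lognormal(5.0, 0.5, n)
income = 1.1 * totexp + rng.normal(0, 5.0, n)          # collinear with totexp
wfood = 0.45 - 2e-4 * totexp + rng.normal(0, 0.05, n)  # income plays no real role

X_full = np.column_stack([np.ones(n), totexp, income])
X_red  = np.column_stack([np.ones(n), totexp])
f, p = partial_f(wfood, X_full, X_red)
print(f"F={f:.3f}, p={p:.3f}")
```

When the reduced model fits nearly as well as the full one, the F statistic is small and the restriction (dropping income) is not rejected.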

The R-squared increased slightly, which is an improvement, though not a large one; the majority of the tests still indicate heteroscedasticity, but no autocorrelation.

