Psychology Textbook Unit 10 Simple Linear Regression in R


Simple Linear Regression in R

Summary. In this unit, we will explain how to conduct a simple linear regression analysis with R, and how to read and interpret the output that R provides.

Prerequisite Units: Statistics with R (Introduction and Descriptive Statistics), Introduction to Statistical Significance, and Simple Linear Regression.

Simple linear regression analysis in R

Whenever we ask statistical software to perform a linear regression, it uses the equations that we described above to find the best-fit line, and then shows us the parameter estimates it obtained. Let's see how to conduct a simple linear regression analysis with R. We will use data from 50 participants with different amounts of experience (from 1 to 16 weeks) in performing a computer task, together with their accuracy (from 0 to 100% correct) on this task (the same data we used in an earlier unit). Thus, Experience is the predictor variable and Accuracy is the outcome variable. In the script below, the first line imports the data file with the read.csv() function and assigns it to an object called data:

# read data file (the file name here is a placeholder; use your own file)
data <- read.csv("data.csv", header = TRUE, sep = ",")
# fit linear model
model <- lm(Accuracy ~ Experience, data = data)
summary(model)
# see the residuals plot
plot(model)

To conduct the linear regression in which we model Accuracy as a function of Experience, we use the lm() function in R, as you can see in the second line of code in the script. As usual in R, we assign our linear regression model to an object, which we will call model. Remember that R works with objects and that this name is arbitrary (you could give the object any name you like), although it is convenient to use a name that is relatively simple and relevant to the task at hand. Within the parentheses, you first include your outcome variable, Accuracy in this case, and then your predictor variable, Experience in this case, connected by the ~ symbol. This reads as "Accuracy as a function of Experience." Then you indicate where your data are: here you include the object that you created for the data file you are working with, data. In the next line, you ask R to show the output of the linear regression analysis by calling summary() on the model.
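The original data file is not reproduced here, so the following sketch simulates a comparable data set (the intercept 40, slope 3.5, and noise level 5 are made-up values, not the textbook's) and then fits the same model, in case you want to try the commands end to end:

# simulate 50 participants with 1 to 16 weeks of experience (hypothetical data)
set.seed(123)
Experience <- sample(1:16, 50, replace = TRUE)
# accuracy as a noisy linear function of experience; coefficients are invented for illustration
Accuracy <- 40 + 3.5 * Experience + rnorm(50, mean = 0, sd = 5)
Accuracy <- pmin(pmax(Accuracy, 0), 100)   # keep accuracy within the 0-100% range
data <- data.frame(Experience, Accuracy)
# fit and summarize the model exactly as in the script above
model <- lm(Accuracy ~ Experience, data = data)
summary(model)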

And this is what you will obtain (with the numerical values of your analysis in place of the dots):

Call:
lm(formula = Accuracy ~ Experience, data = data)

Residuals:
    Min      1Q  Median      3Q     Max
    ...     ...     ...     ...     ...

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)      ...        ...     ...      ...
Experience       ...        ...     ...      ...
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: ... on 48 degrees of freedom
Multiple R-squared: ...,  Adjusted R-squared: ...
F-statistic: ... on 1 and 48 DF,  p-value: ...

Call: The first item shown in the output is the formula used to fit the data (that is, the formula that you typed to request the regression analysis).

Residuals: Here, the errors or residuals are summarized. The smaller the residuals, the smaller the difference between the data that you obtained and the predictions of the model.
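As a small optional sketch (assuming the model object from the script above), you can also pull the residuals out of the fitted model and inspect them directly:

res <- residuals(model)   # one residual per participant (observed minus predicted Accuracy)
summary(res)              # minimum, quartiles, median, and maximum (plus the mean) of the residuals
hist(res)                 # quick visual check that the residuals are roughly symmetrical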

The Min and Max values tell you the largest negative (below the regression line) and positive (above the regression line) errors. Given that our accuracy scale runs from 0 to 100, the largest positive and largest negative errors do not seem terribly large. More importantly, we can see that the median is very close to 0, and the first and third quartiles (1Q and 3Q) are approximately the same distance from the center. Therefore, the distribution of residuals seems to be fairly symmetrical.

Coefficients: This is the critical section of the output. For the intercept and for the predictor variable, you get an estimate that comes along with a standard error, a t value, and the significance level. The Estimate for the intercept, or b0, is the analysis's estimate of the value of Y when X is 0; that is, the predicted accuracy level of someone who has 0 weeks of experience with the task. Remember that you should only interpret the intercept if zero is within or very close to the range of values of your predictor variable, and if talking about a zero value of your predictor variable makes sense. This may be a bit tricky in our example. The minimum amount of experience with the computer task among our participants is 1 week, so the zero value seems close enough. However, realize that someone with zero experience with the task may not know what to do and may not be able to perform the task at all. Thus, the value of the intercept may not make sense. This is something that you have to evaluate when you know your methods and procedures well. The Estimate for our predictor variable, Experience, appears below the intercept. This is b1, the estimated slope coefficient or, simply, the slope of the linear regression. Remember that the slope tells us the expected change in Y for each one-unit change in X. Here, we would say that for each additional week of experience with the task, accuracy is expected to improve by the number of points given by the slope estimate.
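To make b0 and b1 concrete, here is a brief sketch of how the fitted coefficients can be extracted and used for prediction (the value of 10 weeks is an arbitrary example, not a figure from the textbook):

coef(model)                      # b0 (Intercept) and b1 (slope for Experience)
b0 <- coef(model)["(Intercept)"]
b1 <- coef(model)["Experience"]
b0 + b1 * 10                     # predicted accuracy after 10 weeks of experience (hypothetical value)
predict(model, newdata = data.frame(Experience = 10))   # the same prediction via predict()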

Pay attention to the sign of the estimate for the predictor. If it is positive, it represents the expected increase in the outcome variable for each one-unit increase in the predictor variable. If it is negative, it represents the expected decrease in the outcome variable for each one-unit increase in the predictor variable. The Std. Error (standard error) tells you how precisely each estimate was measured; ideally, we want it to be low relative to its coefficient. The standard error is also used to calculate the t value: the t value is obtained by taking the estimate for the coefficient and dividing it by its standard error (for example, dividing the Experience estimate by its standard error gives the t value shown in the Experience line). The t value is then used to test whether or not the estimate for the coefficient is significantly different from zero. If the coefficient is not significantly different from zero, it means that the slope of the regression line is close to flat, so changes in the predictor variable are not related to changes in the outcome variable. Pr(>|t|) is the significance level (p value) of the t test. Remember that the criterion for a hypothesis test to be statistically significant is p < .05, so if the p value is less than .05, the result is statistically significant. Here, the p value is very small; R uses scientific notation for very small quantities, which is why you see an "e-" in the number. You just need to know that this is a tiny value. For larger p values, the exact number appears. The asterisks next to the p value indicate its magnitude (*** for p < .001, ** for p < .01, and * for p < .05, as described in the Signif. codes line).
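If you want to verify the relationship between the estimates, standard errors, and t values yourself, the coefficient table can be extracted from the summary; a quick sketch using the model object from above:

coefs <- summary(model)$coefficients          # columns: Estimate, Std. Error, t value, Pr(>|t|)
coefs
coefs[, "Estimate"] / coefs[, "Std. Error"]   # reproduces the t value column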

At the bottom of the output, you have some additional measures that also help to evaluate the linear regression model. We have not yet seen what many of those elements mean, so, for the time being, just note that R-squared is included here, indicating the amount of variance in Accuracy that is explained by Experience. R-squared always lies between 0 and 1. An R-squared of 0 means that the predictor variable provides no information about the outcome variable, whereas an R-squared of 1 means that the predictor variable allows perfect prediction of the outcome variable, with every point of the scatterplot lying exactly on the regression line. Anything in between represents different degrees of closeness of the scattered points to the regression line. Here, we obtained an R-squared of .88, so you could say that Experience explains 88% of the variance in Accuracy. The difference between the multiple and adjusted R-squared is negligible in this case, given that we only have one predictor variable. Adjusted R-squared takes into account the number of predictor variables and is more useful in multiple regression analyses. See the accompanying video for a simple linear regression analysis being conducted in R.
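As a final sketch, both versions of R-squared can also be pulled directly out of the summary object:

summary(model)$r.squared       # multiple R-squared
summary(model)$adj.r.squared   # adjusted R-squared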