Yes, statistically significant.
The 95% confidence interval for the effect of tosca_guilt on word count does not include zero, so we can be 95% confident that tosca_guilt has a nonzero effect.
According to the tally() function, 89 narratives scored 1 on empathy.
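A minimal sketch of producing that count in R, assuming the mosaic package and a data frame named dat (both are assumptions, not shown above):

library(mosaic)
tally(~ empathy, data = dat)   # counts of narratives at each empathy score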
tWC = b0 + b1(tosca_guilt) + b2(empathy) + error
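A minimal sketch of fitting this model in R, assuming the data live in a data frame named dat:

tosca_empathy_model <- lm(tWC ~ tosca_guilt + empathy, data = dat)
summary(tosca_empathy_model)   # estimates for b0, b1, b2 and their p-values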
Both p-values are smaller than the alpha level of 0.05, so both predictors are statistically significant.
b0: 3.631, the predicted tWC when both the tosca_guilt score and the empathy score are 0.
b1: 0.916, the increment added to the prediction for each one-unit increase in tosca_guilt, holding empathy constant.
b2: 0.086, the increment added to the prediction for each one-unit increase in empathy, holding tosca_guilt constant.
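For example, a narrative from someone with a tosca_guilt score of 4 and an empathy score of 1 would have a predicted tWC of 3.631 + 0.916(4) + 0.086(1) = 7.381.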
The model does explain more variation than the empty model, because its sum of squares error is smaller than the empty model's. According to the PRE, the model reduces the error from the empty model by a proportion of 0.088.
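A minimal sketch of where those numbers come from, assuming the supernova package is available:

library(supernova)
supernova(tosca_empathy_model)   # table comparing SS for the model vs. the empty model, plus PRE, F, and p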
The tosca_empathy_model significantly predicts tWC, the word count of the narratives people wrote. In other words, someone's score on the guilt scale and the empathy shown in their narrative both appear to be related to their word count. Together, these two variables explain about 8.8% of the variation in word count.
apology = b0 + b1(tosca_guilt) + error
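A minimal sketch of fitting this as a logistic regression, assuming apology is coded 0/1 in dat (the object name apology_model is just for illustration):

apology_model <- glm(apology ~ tosca_guilt, data = dat, family = binomial)
summary(apology_model)   # coefficient for tosca_guilt on the log-odds scale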
As a predictor, tosca_guilt is statistically significant: its p-value is less than 0.05.
apology is a categorical variable coded 1 or 0. glm() with a binomial family models it as a binary outcome, whereas lm() would treat the 0/1 codes as numeric and produce predictions along a continuous range, not just 0 or 1 (and potentially even outside the 0 - 1 interval).
In lm(), the tosca_guilt coefficient would be the increment added to the intercept for every one-unit increase in tosca_guilt to get a predicted value of apology on the raw 0 - 1 scale.
In glm(), the coefficient represents the change in the log odds of an apology we would expect for every one-unit increase in tosca_guilt. Because the coefficient is positive, the higher someone's tosca_guilt score, the more likely their narrative is to include an apology.
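To make that coefficient easier to read, the log-odds estimate can be exponentiated into an odds ratio (a sketch, reusing the hypothetical apology_model from above):

exp(coef(apology_model))   # multiplicative change in the odds of an apology per one-unit increase in tosca_guilt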
The items that load most strongly onto factor 1 are dat.gasp_4, dat.gasp_7, dat.gasp_8, and dat.gasp_12; for factor 2, they are dat.gasp_3, dat.gasp_5, dat.gasp_10, and dat.gasp_13.
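A minimal sketch of how loadings like these can be produced, assuming the psych package and that the 16 GASP items are stored in columns gasp_1 through gasp_16 of dat (the column names and the two-factor varimax solution are assumptions):

library(psych)
gasp_items <- dat[, paste0("gasp_", 1:16)]
fa_out <- fa(gasp_items, nfactors = 2, rotate = "varimax")
print(fa_out$loadings, cutoff = 0.3)   # show only loadings above 0.30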