no subject
Date: 2014-06-29 06:09 pm (UTC)

I am rather surprised that the University of California and Cornell thought it was appropriate, and that PNAS published it. The paper mentions the data collection methods, but it says nothing about whether the design passed an IRB. Then again, this isn't the first time I've seen researchers take the attitude of 'it doesn't matter, it's only the internet'.
The paper doesn't even show what they say it shows. Since they didn't actually measure mood, only the use of positive and negative words, I think they're exaggerating their conclusions. The effect they've demonstrated isn't necessarily emotional contagion; it could just be the way the emotional environment affects what people think is appropriate to say out loud. If people make more positive or negative posts, it could be because their mood has actually been altered, but it could also be because they feel more or less comfortable expressing those feelings. I didn't see any attempt to differentiate between, or control for, those possibilities.
Also, the actual effects they report are minuscule. The biggest effect they saw was that reducing the number of positive posts people were shown decreased the proportion of positive words in their own posts by about 0.1% relative to the expected value. Big whoop. The effects are only statistically significant because of the huge volume of data they could get, as the toy calculation below illustrates.
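Here's a minimal sketch of that point (not the paper's actual analysis, and the group sizes and word rates are illustrative assumptions, not the study's numbers): with millions of posts per condition, a shift of roughly 0.1% in positive-word rate produces a vanishingly small p-value even though the standardised effect size is trivial.

```python
# Illustrative only: a ~0.1 percentage-point difference in positive-word rate
# becomes "significant" purely because of sample size. All numbers assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 3_000_000  # posts per condition; order-of-magnitude assumption

# Fraction of positive words per post: control ~5.25%, treated ~5.15%
control = rng.normal(loc=0.0525, scale=0.05, size=n)
treated = rng.normal(loc=0.0515, scale=0.05, size=n)

t, p = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p-value:   {p:.2e}")       # astronomically small => "significant"
print(f"Cohen's d: {cohens_d:.4f}") # on the order of 0.02 => practically nothing
```

The point is just that with samples this large, statistical significance tells you almost nothing about whether the effect matters.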
Not that the feeble results make the experimental protocol any less wrong, of course.
(Link to the full article.)