The use of correlation here is absurd, as this statistical test proves nothing about causality. The article could have employed a Fisher exact test on a 2×2 contingency table to determine if there is a significant difference. For the observed data in the 2×2 table (aa=10, ab=14, ba=4, bb=12), p=0.329, so this is definitely NOT significant.
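For anyone who wants to check the number above themselves, here is a minimal sketch of the two-sided Fisher exact test in pure Python (the function name `fisher_exact_2x2` is just my own label; the same result comes from `scipy.stats.fisher_exact` or R's `fisher.test`):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one.
    """
    r1, r2 = a + b, c + d          # row totals
    c1 = a + c                     # first-column total
    n = r1 + r2
    total = comb(n, c1)

    def prob(x):                   # P(cell_11 = x) given fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / total

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # small relative tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# The table from the comment: aa=10, ab=14, ba=4, bb=12
print(round(fisher_exact_2x2(10, 14, 4, 12), 3))  # → 0.329
```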
Are you aware of the definition of correlation? Correlation says nothing about causality; it just measures the strength of the relationship between two variables. Also, your p-value for Fisher's test is completely wrong (how did you get that?) – R gives the p-value for this test as 1. Finally, the phi correlation (look it up – it's calculated exactly the same way as the Pearson correlation that was employed here) is entirely appropriate for this data set, even if it's an odd choice simply because it's so uncommon these days. I don't feel like registering here to post with a name, but I've also mentioned this in the "debunking" of the stats in this post over at Slashdot.
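The identity this comment invokes (phi on a 2×2 table equals Pearson's r on the underlying 0/1 variables) is easy to demonstrate; here is a quick pure-Python sketch, with function names of my own choosing:

```python
from math import sqrt

def phi_coefficient(a, b, c, d):
    """Phi coefficient for the 2x2 table [[a, b], [c, d]]."""
    num = a * d - b * c
    den = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

def pearson_on_binary(a, b, c, d):
    """Pearson r computed directly on the expanded 0/1 data."""
    xs = [0] * (a + b) + [1] * (c + d)          # row indicator
    ys = [0] * a + [1] * b + [0] * c + [1] * d  # column indicator
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Same table as above: both give the identical (weak) correlation
print(round(phi_coefficient(10, 14, 4, 12), 3))    # → 0.171
print(round(pearson_on_binary(10, 14, 4, 12), 3))  # → 0.171
```

So whatever one thinks of the significance question, phi and Pearson are the same statistic here, not two different choices.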