Sunday, October 11, 2015

Changes in the Algorithms of Research and Facebook


The clip above from the Daily Show (the first on this blog with new host Trevor Noah) discusses how traditional journalism is being supplanted by computer algorithms that can create legible news articles from multiple sources on the web.  In my field I have recently become acquainted with new methods such as natural language processing, agent-based models, and R Markdown files.

Natural language processing is a method that works like an internet search engine: thousands (possibly millions) of text files are searched for topics or word patterns relevant to a given research question.  Agent-based models are a simulation method for social interactions in a large organization such as a hospital.  R Markdown files are scripts for the statistical package R that synthesize the results of an analysis and render them into a coherent document in a format such as HTML, Microsoft Word, or PDF.
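As a toy illustration of the keyword-search idea behind natural language processing, here is a minimal Python sketch; the file names, document texts, and search terms are all made up for the example:

```python
from collections import Counter

# Hypothetical corpus: in practice this would be thousands of text files.
documents = {
    "note1.txt": "patients in the hospital emergency department waited hours",
    "note2.txt": "the hospital billing system flagged several claims",
    "note3.txt": "weather was mild with light rain in the afternoon",
}

# Topics of interest for the (made-up) research question.
keywords = {"hospital", "patients", "claims"}

def match_documents(docs, terms):
    """Return, per document, how many of the search terms appear in it."""
    hits = {}
    for name, text in docs.items():
        words = set(text.lower().split())
        count = len(words & terms)
        if count:
            hits[name] = count
    return hits

print(match_documents(documents, keywords))
```

A real pipeline would add tokenization, stemming, and relevance scoring, but the core loop is the same: scan every file and keep the ones that match the query.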

Cutting edge methods like these are automating processes that used to be carried out by humans to produce faster and, hopefully, more accurate results.  To keep competitive one needs to understand and adapt to these algorithms.  One cannot understand them when the algorithms are proprietary but one can use experiments to get an idea of how they work.  

I have written before about how Facebook and other social media sites use algorithms to give viewers what they want to see in their news feeds to keep them engaged with the site.  Mat Honan of Wired magazine ran an experiment to see what would happen if he clicked Like on every post for 48 hours.  He says that very soon there were almost no posts from friends on his timeline; they were nearly all political.  By the next day the posts had moved far to the right.

Elan Morgan at medium.com tried the opposite for two weeks (not liking any posts, though she did comment on posts she liked) and said that her timeline improved.  Describing her feed before the experiment, she writes:


You would think that liking certain updates on Facebook would teach the algorithm to give you more of what you want to see, but Facebook’s algorithm is not human. The algorithm does not understand the psychological nuances of why you might like one thing and not another even though they have comparatively similar keywords and reach similar audiences, so when I liked several videos and images of heartwarming animal stories, Facebook’s algorithm gave me more animal stories, but many of them were not heartwarming. They depicted inhumane treatment. Apparently, Facebook’s algorithm mistook my love for animals as a desire to see images of elephants being brutalized.

Afterward she saw more posts from friends and had better interactions with them.  She kept no statistics, so it was a qualitative study.  I'll increase the number of subjects in her experiment to two: I will compile statistics on my timeline, but first I need a way of classifying the posts I see.  Here is a sample of the first 10 posts in my news feed.


| Post Type       | Freq |
|-----------------|------|
| Friend/family   | 2    |
| Friend/political| 2    |
| Friend/share    | 2    |
| Post from group | 1    |
| Post from page  | 2    |
| Promoted post   | 1    |



In two weeks I will check whether these frequencies have changed, i.e., whether the mix of post types on my timeline shifts over time.
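The planned comparison can be sketched in a few lines of Python.  The week-one tallies below come from the sample table above; the week-two tallies are hypothetical placeholders to be replaced by the real counts when they are collected:

```python
from collections import Counter

# Week-one tallies from the post's sample of 10 news-feed posts.
week_one = Counter({
    "friend/family": 2, "friend/political": 2, "friend/share": 2,
    "post from group": 1, "post from page": 2, "promoted post": 1,
})

# Placeholder week-two tallies (hypothetical, not real data).
week_two = Counter({
    "friend/family": 3, "friend/political": 1, "friend/share": 2,
    "post from group": 1, "post from page": 2, "promoted post": 1,
})

def changes(before, after):
    """Difference in frequency for every post type whose count changed."""
    types = set(before) | set(after)
    return {t: after[t] - before[t] for t in types if after[t] != before[t]}

print(changes(week_one, week_two))
```

With only 10 posts per sample this is descriptive rather than inferential, but it makes the week-to-week shifts easy to see at a glance.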

**Related Posts** 

Twitter Primary: Following and Presidential Candidate Support

 

It's All About The Likes

 

The Ethics of Social Media Manipulation

Facebook Primary 2016, August Update, Does it Predict Support?