Social media is a permanent fixture in online and offline communication. Whether it is looking for a new job, talking to old friends, or sharing stories on a blog, the versatility of social media allows it to be widely used by different demographics all around the world. The international nature of social media appeals to people who want to explore the world without leaving their houses but is a deterrent to the governments that want to regulate it. These large multinational companies have made themselves a permanent fixture within the communication industry and are using that position as leverage. This has created issues for governments that attempt to regulate the content these companies host. With the rampant spread of fake news and disinformation, governments in recent years have been trying to limit the powers and capabilities of these giants. The unregulated nature of social media has given tech companies unfettered access to the ideas and minds of the people who use their websites. In recent years this has led to an onslaught of fake news, the growth of radical ideas, and media surveillance. This paper will discuss the growth of “big tech”; the effect of fake news and the interactions between people, the government, and social media companies; and, finally, the use of social media as a surveillance tool and its effect on people’s trust in the government.
Modern social media companies are made up of a twisted web of algorithms, knowledge sharing, false accountability, and money, lots of money. This was not always the case; the dawn of social media came in the 1970s with the creation and use of media compression algorithms. These algorithms allowed users to send messages, photos, and audio, and in some cases stream video, to other users through email and other dial-up compatible software. Throughout the 1980s and early 90s this software was reconfigured and updated to let users share more and to adapt to new additions like JPEG images and Bulletin Board Systems (BBS). BBS is often considered the first widely used social media platform. The program centered around online bulletin boards that acted as forums where people could chat with one another and play games. Running off dial-up internet meant that forums were local and often small in size. Due to this localized nature, it is hard to say how many BBSes were running and how many users each had. Internet speculation and BBS archive sites claim that there were about 17 million users worldwide and about 66,000 forums.[1] The success of BBS allowed other forum- and chat-room-based systems to arise and see general success. Platforms like America Online (AOL) and SixDegrees.com became widely used and essentially marked 1997 as the beginning of modern social media. As the internet became more accessible, more websites and chatrooms arose. 1999-2003 saw the creation of foundational sites like Myspace (2003), Skype (2003), LinkedIn (2002), Wikipedia (2001), MSN Messenger (1999), and LiveJournal (1999). Many of these social media websites saw great success riding the dot-com bubble. Others found success in younger demographics and ad revenue. 2004 marked the emergence of Facebook, as well as the beginning of the acquisition of social media platforms by larger corporations.[2] In 2005 Myspace was acquired by News Corporation for 580 million USD.
The same year Skype was bought out by eBay for 2.5 billion USD. 2006 saw Condé Nast Publications’ acquisition of Reddit for 10 million USD. This marked the beginning of social media consolidation and the rise of “big tech.”
“Big tech” refers to the largest and most dominant information technology companies. Google (Alphabet), Amazon, Facebook, and Apple form the “big four” (GAFA); Microsoft is often grouped in as the fifth member (GAFAM) and is sometimes replaced by Netflix (FAANG). The Internet Health Report 2020 found that, of the top 18 most used social media platforms, Facebook owned four.[3] The top 18 also include media giants like YouTube (Google), Twitter, and the Chinese up-and-comer TikTok (ByteDance).[4] All 18 of these platforms have over 300 million monthly users, with Facebook at number one with just over 2.7 billion.[5] This means that the vast majority of people who use social media are using software developed by one of these companies. Additionally, these companies act as communication channels for about half of all internet users worldwide.[6] The scale of social media is unparalleled by anything we have seen before, making it a permanent fixture in society today. A company like YouTube, whose parent company Google has been sued by the United States Department of Justice for breaking antitrust regulations,[7] has a colossal say in what people see and do on the internet. With about 2 billion monthly active users across 91 countries and 80 different languages, YouTube’s users consume over a billion hours of video content daily.[8] The numbers grow even larger when considering Facebook and its 2.1 billion daily users across all its platforms.[9]
Due to the wide reach and diverse audiences of these social media companies, issues arise when it comes to regulating both the content put on the websites and the companies themselves. In a 2018 study conducted by the Pew Research Center, 27% of Canadian respondents reported that they use social media sites (Facebook and Twitter) multiple times a day to get news; 15% reported that they use social media once a day to get their news.[10] Results in the US reflect a similar trend, at 28% and 11% respectively.[11] Social media sites have the capability to act as primary sources and information sites for people around the world. These websites offer a diverse range of information from almost every corner of the world. This gives users a sense of choice, as they are able to choose what they see and when they see it. It becomes problematic when the information delivered is inaccurate or false. The information available through social media is a mixed bag when it comes to quality and accuracy, so this paper will first establish the difference between misinformation, disinformation, and fake news. Merriam-Webster defines misinformation as “incorrect or misleading information,” regardless of the intent to mislead.[12] Disinformation, on the other hand, is defined as “false information deliberately and often covertly spread to influence public opinion or obscure the truth.”[13] Lastly, fake news is defined in the book Fake News: Understanding Media and Misinformation in the Digital Age as “purposefully crafted, sensational, emotionally charged, misleading or fabricated information that mimics the form of mainstream news.”[14] This definition excludes websites that focus on parody and satire (The Onion, The Beaverton), as their stories do not claim to be real or factual.
Fake news is produced by individuals who are not concerned with gathering and reporting information to the world, but rather with generating profit through the social media circulation of false information mimicking the style of contemporary news.[15] In recent years the label “fake news” has also been used to discredit and attack credible institutions, people, and journalists. Likewise, “fake news” can be used to categorize factual news or information that runs contrary to one’s dispositions and ideals. This is where the scale of these companies becomes an issue, as content moderation is not an easy task.
Generally, content moderation is conducted through artificial intelligence (AI) software. This is not always the best way of moderating, as AI programs are not adept at seeing the grey areas of online interaction. Context and nuance often go ignored by AI, resulting in “false positives” (flagging and taking down innocent posts) and “false negatives” (missing violent and other undesirable posts).[16] This has led companies to hire teams of human content moderators whose job is to review posts and determine whether they abide by the website’s Terms of Service or Community Guidelines. These jobs are less than glamorous and have in recent years been described as the “worst job in history.”[17] Employees who deal with extreme and violent content have been known to develop symptoms of post-traumatic stress disorder.[18] Additionally, some content moderators, after repeated exposure to certain materials, began to “embrace the fringe viewpoints of the videos and memes that they are supposed to moderate.”[19] A different challenge social media websites face is determining how things get categorized and what gets flagged and taken down. There is an inherent degree of subjectivity when it comes to posting things on the internet. As Susan Etlinger states in her article Models for Platform Governance, “one person’s protest is another person’s riot.”[20] This encapsulates the problem social media companies and content moderators face in determining the “safeness” of content.
This becomes an issue when dealing with the spread of misinformation, disinformation, and fake news. A 2020 study of Americans’ perceptions of fake news noted that liberals associated the term “fake news” with politics (particularly with then-President Donald Trump), whereas conservatives associated it with the media.[21] The report also points out that media favouring opposing political ideologies is often targeted as fake news: 75% of conservatives viewed ‘liberal’ media establishments (CNN) as fake news, compared with the 59% of liberals who said ‘conservative’ media establishments (Fox) are fake news.[22] Although there is a degree of objectivity in determining what is fake and what is not, it is hard for social media companies to regulate and monitor content without alienating a subsection of their users. Between October 2019 and June 2020, YouTube hosted 8,105 videos containing disinformation about COVID-19 (less than 1% of COVID-19-related videos).[23] Researchers found that on average it took YouTube 41 days to remove the videos. Before their removal, they collectively gained more than 20 million views and 71 million other reactions (likes/comments) on other websites like Facebook, Twitter, and Reddit (more views than YouTube’s top five English-language news sources combined). The report also found that Facebook only flagged 55 of the videos as containing false information.[24]
Removing content is only half the issue, as the purposeful spread of misinformation is never-ending. People who interact with disinformation and fake news on social media sites are more likely to get recommended similar posts that also contain disinformation or fake news. When fake news and disinformation are geared towards a particular ideology, this can create a rabbit hole that people find themselves trapped in. In some cases, social media algorithms have acted as an alt-right pipeline. This usually happens in three stages. The first is normalization: a website like Facebook may continuously recommend posts to its users that initially seem harmless.[25] These may come in the form of pictures, memes, and jokes that entice users through “edgy humor.” They may also come in the form of online personalities like PewDiePie, a YouTuber with over 100 million subscribers who has been known to spew alt-right claims and beliefs. The second stage is acclimation.[26] After something becomes normalized, people want more information to affirm their newfound beliefs. At this stage, people want to listen to and see others who share the same values. This has sparked the idea of “red pill culture,” in reference to the Matrix movies and the characters taking a red pill that allows them to see the realities of the world. This idea is supported through personalities like Steven Crowder, Alex Jones, and Milo Yiannopoulos. These individuals are known for spouting contrarian ideas that often get them in hot water with mainstream media. On the other hand, many more extreme people view them as “alt-light,” or watered-down depictions of far-right conservatism. This balance makes them approachable to individuals who are newly normalized and acclimating to the alt-right.
The third and final stage is dehumanization, a so-called prerequisite for violence and other abhorrent behaviour.[27] People in this stage are usually in an echo chamber of thought and rarely see varying opinions. There is also a degree of moral superiority among these individuals, as they see anyone who does not fit their image as inferior. This clears the way for genocidal behaviour and other white supremacist ideals. Every stage of this alt-right pipeline is perpetuated through social media and the algorithms that run it. Deleting one post is not going to stop the perpetuation of fake news and disinformation.
This idea was magnified in 2017 with the rise of QAnon in the US and internationally. QAnon was able to target people with no predisposition to alt-right ideas and win them over. This was achieved through marketing campaigns and in-house social media algorithms. QAnon theories were endorsed by the government (then-President Donald Trump), promoted by the Russians,[28] and hosted on social media platforms. Fake news and disinformation are a common occurrence on social media websites, and they are starting to seep offline, deteriorating people’s trust in government institutions. Social media is creating a paradox in its effect on democracy. In some countries, like Malaysia, social media and other digital tools have allowed activists to publish stories critical of the authoritarian regime. Similar experiences can be seen in countries like Venezuela and Nigeria, where ordinary people are able to hold their governments to some degree of accountability by chronicling their abuses and publishing them for the rest of the world.[29] Social media has been used as an organizational tool during protests and as a way for people to take a stand for or against something. Additionally, social media has allowed people across the world and the socioeconomic spectrum to unite and share their demands about the direction they want their societies to take.
At the same time, social media has also dealt a blow to democracy and the democratic process. As previously stated, social media has created echo chambers for alt-right and anti-democratic views. Websites like Facebook have been known for perpetuating obscure beliefs that people eventually latch onto. Social media sites have also begun affirming people’s confirmation biases in the information they see. Issues like these are only magnified when the people in power support and allow them to happen. This was seen in the rise of Donald Trump and the claim that, as of 2015, 25% of US national elections were determined by Google’s search engine. Furthermore, it was determined that Google could sway 20% of undecided voters into voting for a particular candidate.[30] Social media has also been used as a tool to affirm autocratic regimes. In countries like Cuba, “legal” internet usage is hard to come by. Internet provided by the state is confined to set “Wi-Fi hotspots” scattered around Havana, Santa Clara, and Santiago de Cuba, and is paid for by the minute. The websites accessible through the state-run internet are heavily censored, and usage is closely monitored. This is reflected in countries like China, where the government has created software called “the Great Cannon,” tasked with monitoring people’s internet usage and digital footprint.[31] The state then determines whether a website is “safe” for its citizens and will often get “big tech” to create China-only versions of websites. If the government sees activity it does not like, it removes the website and bans its usage. Social media can also be used as a divisive tool that authoritarian or populist leaders use to divide and alienate a portion of their citizens. This can be seen in India and the mass discrimination against Muslim people perpetuated by Prime Minister Narendra Modi, the BJP government, and non-Muslim citizens.
Social media companies are stuck in a tricky position when it comes to moderating and flagging content. The looming question of what is and is not censorship is something social media companies and internet users are still grappling with. Social media claims to be for the people, with mission statements like “to give people the power to build community and bring the world closer together” (Facebook) or “to give everyone a voice and show them the world” (YouTube).[32] These companies set out to give power to the people, and they seem unprepared to take it back. Things only get more complicated, as the exact way to regulate social media companies remains the billion-dollar question. The worldwide nature of social media means that it spans multiple countries, each with different languages, cultures, and laws. There is also a lack of digital norms about what is allowed and what is not. It is nearly impossible to make software that meets the social expectations of every country without defaulting to either the lowest common denominator (which will not be enough in some countries) or the highest common denominator (which may be overboard in others). This in turn requires companies to become flexible and adaptable to changes in variables that could deliver undesirable outcomes. Governments around the world have begun drafting legislation that outlines accountable parties when it comes to the spread of misinformation. Germany, the UK, and the US all have legislation pending or approved that will place accountability on the tech companies themselves.[33] Italy has legislation pending that will make website administrators, internet service providers, schools, and individuals accountable for the spread of, creation of, and education about fake news.[34] Canada currently has legislation in force that places accountability on mass media as a whole.[35]
Although there seems to be a consensus on the need for regulation, some scholars believe that individual state governments alone are not enough to combat disinformation and fake news, and that the technology that creates fake news must also be regulated. With technology evolving daily, it is hard for governments to stay on top of new cyber trends. One such trend is “deep fakes,” the creation of false videos. A deep fake is “a video that has been edited using an algorithm to replace the person in the original video with someone else (especially a public figure) in a way that makes the video look authentic.”[36] These videos can range from harmless to defamatory and dangerous. A 2018 paper on the subject pointed out that deep fakes can affect people’s privacy, a country’s democracy, and national security. The authors noted that a manipulated video of a public official taking a bribe, or a fake video of officials declaring a missile strike, could be detrimental to national security and the safety of nations.[37]
Additionally, other scholars believe that the regulation of social media companies must focus on more than just content, as these companies embody more than the content they host. In his article on a digital global governance framework, Robert Fay claims that social media regulation must be a team effort, with international coordination and engagement from stakeholders.[38] He draws parallels to the Great Recession (2008) and the need for an international regulatory framework around large “too big to fail” institutions. Fay also says that these companies are running off “light-touch” regulations and are relying on people’s trust. Economic regulations also come into play when talking about intellectual property and the monetary value that comes out of it. Recently the Australian government passed a law that requires Facebook, Google, and other social media sites that display news to pay the publishers. The law sets out to “address the power imbalances between digital platform services and Australian news businesses.”[39] The News Media and Digital Platforms Mandatory Bargaining Code gives publishers leverage when negotiating with social media companies. Before the amendments, Facebook and Google could take published news from news sites and display it on their own websites; none of the money generated from displaying the news went back to the companies that originally produced it, resulting in lost revenue for the news companies.
Another area of concern when it comes to social media is privacy. With social media accessible at the tap of a few buttons, there are bound to be concerns about privacy and the sharing of information. There are two ways privacy can be breached. The first is through user error:[40] someone posts an incriminating or compromising post on a social media website and it spreads, or something meant as a private message gets posted on a timeline. These are cases where an individual’s actions harm only themselves. While these problems sometimes have serious ramifications, they are not the fault of the tech companies. The second category, however, is often referred to as “the big data problem.”[41] In these cases, the individual being harmed is not involved in the posting and sharing of the information. This usually takes the form of individuals posting incriminating content with the intent to harm others. Privacy breaches are committed not only by users but also by the companies themselves. The sharing and selling of metadata and personal information is a common practice for social media companies, usually done for advertising purposes and for optimizing advertisers’ reach through demographic targeting. This was seen in the Cambridge Analytica scandal and the collection of about 87 million Facebook users’ personal data.[42]
Another way privacy is being breached is through social media surveillance. In recent years some governments have been buying software they can use to spy on their citizens. Authorities in Iran have a 42,000-member volunteer team tasked with monitoring online speech.[43] Similar situations can be found in China, where individuals monitor content and report problematic content or individuals to the authorities. Advances in technology have only made this easier, and AI is getting to the point where it can monitor people independently and infer things like their political beliefs, religious beliefs, sexual orientation, and other social connections. In a study conducted by Freedom House, 40 of the 64 countries covered had advanced social media surveillance programs.[44] This adds up to about 89% of the internet users covered by the study, or roughly 3 billion people. This mentality is not reserved for autocratic, anti-democratic regimes. Under the guise of counterterrorism, US data-mining companies have been known to receive money from the Central Intelligence Agency in return for information about people.[45] This is justified as a precautionary measure, but officials have been known to use this information to uncover people’s political views, track students’ behavior, and monitor activists and protesters. In some cases, this has created a “chilling effect” among journalists and activists, who fear the government may act against them. As a means of self-censorship, journalists in Canada have been known to limit their speech on particular topics because of surveillance under Bill C-51 (the Anti-terrorism Act, 2015).[46]
Social media is not going anywhere anytime soon, and it will only become more prevalent as technology develops and expands into people’s lives. This means that governments, tech companies, and consumers must be aware of its limitations and detriments. Regulators today and in the future will have to continuously keep up with new trends and technologies if they want to make meaningful and practical rules, laws, and regulations. The same can be said for the governments that want to limit the spread of fake news and disinformation. In conclusion, this paper does not set out to deter people from using social media but rather to outline some of the faults in the systems that run it.
[1] Frank Robbins, “FidoNet History Timeline,” FidoNet History Timeline (The FidoNet Showcase Project (FNSP), November 26, 2001), https://elsmar.com/pdf_files/fidonet-info.txt.
[2] “Web History Timeline.” Pew Research Center: Internet, Science & Tech. Pew Research Center, March 11, 2014. https://www.pewresearch.org/internet/2014/03/11/world-wide-web-timeline/.
[3] Solana Larsen, ed., “Internet Health Vitals: Facts and Figures – The Internet Health Report 2020,” Internet Health Report 2020 (Mozilla Foundation, January 2021), https://2020.internethealthreport.org/slideshow-internet-health/#5.
[4] Ibid
[5] Ibid
[6] Etlinger, Susan, and Centre for International Governance Innovation. Models for Platform Governance. Report. Centre for International Governance Innovation, 2019. 20-26. doi:10.2307/resrep26127.6.
[7] “Justice Department Sues Monopolist Google For Violating Antitrust Laws,” The United States Department of Justice, October 21, 2020, https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws.
[8] Ibid
[9] Ibid
[10] Amy Mitchell et al., “Publics Globally Want Unbiased News Coverage, but Are Divided on Whether Their News Media Deliver,” Pew Research Center’s Global Attitudes Project (Pew Research Center, December 30, 2019).
[11] Ibid
[12] “Misinformation,” Merriam-Webster (Merriam-Webster), https://www.merriam-webster.com/dictionary/Misinformation.
[13] “Disinformation,” Merriam-Webster (Merriam-Webster), https://www.merriam-webster.com/dictionary/disinformation.
[14] Melissa Zimdars, “Introduction,” in Fake News: Understanding Media and Misinformation in the Digital Age (MIT Press, 2020), pp. 1-12. (pp. 2)
[15] Melissa Zimdars, “Introduction,” in Fake News: Understanding Media and Misinformation in the Digital Age (MIT Press, 2020), pp. 1-12. (pp. 2)
[16] Etlinger, Susan, and Centre for International Governance Innovation. Models for Platform Governance. Report. Centre for International Governance Innovation, 2019. 20-26. doi:10.2307/resrep26127.6.
[17] Ibid
[18] Casey Newton, “The Trauma Floor: The Secret Lives of Facebook Moderators in America,” The Verge (The Verge, February 25, 2019), https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona.
[19] Ibid
[20] Etlinger, Susan, and Centre for International Governance Innovation. Models for Platform Governance. Report. Centre for International Governance Innovation, 2019. 20-26. doi:10.2307/resrep26127.6.
[21] Linden, Sander van der, Costas Panagopoulos, and Jon Roozenbeek. “You Are Fake News: Political Bias in Perceptions of Fake News.” Media, Culture & Society 42, no. 3 (April 2020): 460–70. https://doi.org/10.1177/0163443720906992.
[22] Ibid
[23] Aleksi Knuutila et al., “ComProp: Covid-Related Misinformation on YouTube: The Spread of Misinformation Videos on Social Media and the Effectiveness of Platform Policies,” DemTech (Oxford Internet Institute, September 18, 2020), https://demtech.oii.ox.ac.uk/research/posts/youtube-platform-policies/#continue.
[24] Aleksi Knuutila et al., “ComProp: Covid-Related Misinformation on YouTube: The Spread of Misinformation Videos on Social Media and the Effectiveness of Platform Policies,” DemTech (Oxford Internet Institute, September 18, 2020), https://demtech.oii.ox.ac.uk/research/posts/youtube-platform-policies/#continue.
[25] Munn, Luke. 2019. “Alt-Right Pipeline: Individual Journeys to Extremism Online”. First Monday 24 (6). https://doi.org/10.5210/fm.v24i6.10108.
[26] Ibid
[27] Munn, Luke. 2019. “Alt-Right Pipeline: Individual Journeys to Extremism Online”. First Monday 24 (6). https://doi.org/10.5210/fm.v24i6.10108.
[28] Robert S. Mueller, Rosalind S. Helderman, and Matt Zapotosky, The Mueller Report (New York: Scribner, an imprint of Simon & Schuster, 2019). (pp. 62, 545)
[29] Yascha Mounk, “Social Media,” in The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It (Cambridge, MA: Harvard University Press, 2018).
[30] Janna Anderson and Lee Rainie, “Concerns about Democracy in the Digital Age,” Pew Research Center: Internet, Science & Tech (Pew Research Center, February 21, 2020), https://www.pewresearch.org/internet/2020/02/21/concerns-about-democracy-in-the-digital-age/.
[31] Danny O’Brien, “China’s Global Reach: Surveillance and Censorship Beyond the Great Firewall,” Electronic Frontier Foundation, December 29, 2019, https://www.eff.org/deeplinks/2019/10/chinas-global-reach-surveillance-and-censorship-beyond-great-firewall.
[32] Etlinger, Susan, and Centre for International Governance Innovation. Models for Platform Governance. Report. Centre for International Governance Innovation, 2019. 20-26. doi:10.2307/resrep26127.6
[33] Haciyakupoglu, Gulizar, Jennifer Yang Hui, V. S. Suguna, Dymples Leong, and Muhammad Faizal Bin Abdul Rahman. COUNTERING FAKE NEWS: A SURVEY OF RECENT GLOBAL INITIATIVES. Report. S. Rajaratnam School of International Studies, 2018. 5-13. Accessed March 25, 2021. http://www.jstor.org/stable/resrep17646.5.
[34] Ibid
[35] Ibid
[36] “What Is a Deepfake?,” Merriam-Webster (Merriam-Webster), https://www.merriam-webster.com/words-at-play/deepfake-slang-definition-examples.
[37] Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” SSRN Electronic Journal, July 21, 2018, https://doi.org/10.2139/ssrn.3213954.
[38] Fay, Robert, and Centre for International Governance Innovation. Models for Platform Governance. Report. Centre for International Governance Innovation, 2019. 27-31. Accessed March 25, 2021. doi:10.2307/resrep26127.7.
[39] “Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2021,” Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2021 (PARLIAMENT AUSTRALIA, n.d.), https://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;query=Id%3A%22legislation%2Fems%2Fr6652_ems_2fe103c0-0f60-480b-b878-1c8e96cf51d2%22;rec=0.
[40] Matthew Smith et al., “Big Data Privacy Issues in Public Social Media,” 2012 6th IEEE International Conference on Digital Ecosystems and Technologies (DEST), 2012, https://doi.org/10.1109/dest.2012.6227909.
[41] Ibid
[42] Hanna Kozlowska, “The Cambridge Analytica Scandal Affected Nearly 40 Million More People than We Thought,” Quartz (Quartz, April 24, 2018), https://qz.com/1245049/the-cambridge-analytica-scandal-affected-87-million-people-facebook-says/.
[43] Shahbaz, Adrian, and Allie Funk. “Social Media Surveillance.” Freedom House, 2019. https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/social-media-surveillance.
[44] Ibid
[45] Ibid
[46] “Canada: Freedom on the Net 2019 Country Report,” Freedom House, https://freedomhouse.org/country/canada/freedom-net/2019#B
