The Tech Industry: Decisions and Impacts | Teen Ink


May 11, 2022
By sarah-f BRONZE, Ellicott City, Maryland

Have you heard the name Frances Haugen in recent news? Do you know who she is? If not, she is the whistleblower behind the current allegations about Facebook's privacy practices. Beyond these privacy issues, there are several more problems with Facebook and many other tech companies. One controversial topic is their algorithms, which can influence user opinions. Twitter, Facebook, and many other social media companies use algorithms that track user activity. Beyond social media, other tech companies make decisions and take actions that affect users. Google is developing a plan to get rid of cookies. Parler's lax moderation of its own site eventually led to its collapse in the aftermath of the Capitol attack. In the political sphere, the rampant spread of disinformation also influences society and the decisions people make. Social media has enormous impacts on its users. Its lack of the conventional gatekeeping that other news providers, such as newspaper editors and TV producers, rely on has enabled far more information to be shared (Bali 14). However, these decisions have positive effects as well. Tech companies can improve society by facilitating communication among people, groups, and organizations from all over the world. Across all these aspects, the decisions of the tech industry, namely Facebook and other social media platforms, have innumerable social impacts on the people who use their services.

Tech Companies' Decisions: Context and Process

Companies and businesses within the technology industry make life-altering decisions every day. Some of the most influential platforms are Facebook, Twitter, Parler, and Google. At Facebook, recent controversies such as Frances Haugen's disclosures and public concern over its data-tracking algorithms have strained its relationship with users. One of Facebook's algorithms tracks what a user clicks on and searches for in order to promote similar items to them. In some cases, this can push users toward more extremist views: the more users see of a certain position on an issue, the more likely they are to agree with and support it. Tracking user data in order to place correspondingly biased information in front of users is a decision that Facebook makes. With this algorithm, Facebook uses personal data to promote posts that reinforce a user's existing beliefs instead of showing unbiased posts and ads from several perspectives. A more specific tracing method is tracking users' use of reaction emojis. Depending on which reactions they use, in particular the anger reaction, users can be shown more "emotional and provocative content – including content likely to make them angry" (Merrill and Oremus). Facebook created this system to prompt more user engagement, which was "the key to Facebook's business" (Merrill and Oremus). Facebook's main goal in this decision was to increase traffic on its sites and thereby gain more profit. After many users complained that Facebook was infringing on their privacy, Facebook relented and conducted an experiment to track user engagement and reactions. In 2019, it confirmed that posts with more angry reactions were much more likely to include misinformation, toxicity, and low-quality news (Merrill and Oremus). Thus, Facebook's decisions show how they can have many negative impacts on users.
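The reaction-weighted ranking described by Merrill and Oremus can be pictured with a small sketch. The weights and field names below are illustrative assumptions, not Facebook's actual values; the point is simply that weighting some reactions more heavily than others changes which posts rise to the top of a feed.

```python
# Hypothetical sketch of an engagement-weighted feed ranker.
# REACTION_WEIGHTS values are invented for illustration; reporting
# indicated anger reactions were at one point weighted above likes.

REACTION_WEIGHTS = {"like": 1, "love": 5, "anger": 5, "comment": 15, "share": 30}

def engagement_score(post):
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(r, 0) * count
               for r, count in post["reactions"].items())

def rank_feed(posts):
    """Order posts by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "a", "reactions": {"like": 100}},                 # score 100
    {"id": "b", "reactions": {"anger": 40, "comment": 2}},   # score 230
]
print([p["id"] for p in rank_feed(posts)])  # ['b', 'a'] – the angrier post wins
```

Even in this toy version, the post that provokes anger outranks one with more than twice as many interactions, which mirrors the dynamic the essay describes.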

Even Mark Zuckerberg, Facebook's founder and CEO, shared his thoughts on the topic in a public post (January 18, 2018). He argued that since users were complaining that public content was "crowding out the personal moments that lead us to connect more with each other… the balance of what's in News Feed has shifted away from the most important thing Facebook can do -- help us connect with each other." Zuckerberg then discusses research Facebook's teams have conducted showing that connecting with friends and family on social media improves personal well-being. As a result, he is "changing the goal give[n] to [the] product teams from focusing on helping you [the user] find relevant content to helping you have more meaningful social interactions." Specifically, Zuckerberg states that "you'll see less public content like posts from businesses, brands, and media." This decision affects how much news users see on their News Feeds, as Facebook's algorithm would have to limit what and how much information appears there.

Another social media platform, Parler, was widely used among far-right conservatives in 2020 and early 2021. Parler was created as an alternative platform that allows people, especially those banned from Twitter, to express their views. In May 2020, it decided to promote itself to users who were unhappy with Twitter, which was flagging Donald Trump's tweets as potentially inaccurate and misleading (Donnelly). As it gained popularity in the following months, Parler began promoting right-wing views and acquiring endorsements from well-known right-wing figures and conspiracy theorists, claiming this increased user engagement (Donnelly). In the days leading up to the US Capitol attack, communication and planning on the site reached a high level, and Parler did not moderate the content.

Tech Companies' Decisions: Social Impacts

Tech companies beyond social media also make decisions that influence society, and individuals exploit their platforms as well, most notably through disinformation campaigns. The difference between disinformation and misinformation is that disinformation is the purposeful spread of false information, while misinformation is its inadvertent spread. According to Unger, disinformation follows a four-stage supply-chain framework, described below.

The first stage is the raw materials. These "raw materials" are the pieces of information that disinformation can be based on; they can be distorted to create false impressions. In one instance, Donald Trump's 2020 presidential campaign spread disinformation by assembling videos edited to make it appear that President Biden's mental state was declining. The second stage is production. Actors spread disinformation by creating content and purchasing advertisements on social media or other tech platforms. The content can be completely fabricated or manipulated, contain false information or imagery, imply a false connection, or present false context. These decisions to spread disinformation affect the people who view it. The third stage is distribution. Disinformers distribute their information through a variety of methods, including traditional channels of communication (newspapers, cable news, etc.); however, they usually rely on only one. For example, the Sinclair Broadcast Group, an American telecommunications conglomerate, required local news anchors to read statements live on air that amplified the group's views on certain topics (Fortin and Bromwich). Sinclair's decision affected the viewers listening to its news stations. Other examples include the 2008 ad that tied Senator Barack Obama to Weather Underground radical Bill Ayers (Hancock) and the claim by Sean Hannity, a political commentator on Fox News, that Donald Trump sent his personal plane to transport 200 stranded Marines in 1991; this claim was false and was made so that voters would view Trump in a better light (Ritchie). In every stage of Unger's four-stage framework, people make decisions that affect others through technology.

Tech companies may also base their decisions on factors external to their business. In particular, the actions of foreign governments and regulators can have a significant impact, such as those taken by the European Union (EU) Parliament and regulatory bodies. For example, after the EU passed legislation forcing major tech companies to regulate the content on their platforms more aggressively, companies in the US responded by improving in some of the areas the EU addressed, such as targeted advertising. There have also been more congressional hearings and proposals on internet safety and competition in the tech industry in the US Congress, although no legislation has been passed yet (Stein et al.). It is clear, however, that the EU's decisions involving the tech industry heavily induced the US legislature to begin technology reforms as well.

Google is one of several large tech companies whose decisions have widespread effects. In recent months, Google's leadership has begun reconsidering how advertising on the Internet works. In response to anger and concern from privacy advocates and government competition regulators, Google is forming a plan to get rid of third-party cookies. The new system, called Topics, would track what users do in Google Chrome and, based on the websites they visit, assign them a set of advertising categories. When a user later visits a website, that set of categories is shared with advertisers on the site, and relevant ads are then shown to the user (De Vynck). Google is taking a step in the right direction to address privacy concerns and help users feel more secure on the web.
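The idea behind a Topics-style system can be sketched in a few lines. This is a simplified illustration, not Google's implementation: the site-to-topic table, category names, and function names are all invented here. The key property is that only a handful of coarse interest categories, derived from browsing history, would be shared with advertisers rather than the history itself.

```python
# Illustrative sketch of a Topics-style system: browsing history is mapped
# to coarse interest categories, and only the top few categories (not the
# raw history) are exposed. All names and data here are made up.

from collections import Counter

SITE_TOPICS = {
    "recipes.example": "Food & Drink",
    "hiking.example": "Outdoors",
    "gadgets.example": "Consumer Electronics",
}

def topics_for_user(visited_sites, k=3):
    """Return the user's k most frequent interest categories."""
    counts = Counter(SITE_TOPICS[s] for s in visited_sites if s in SITE_TOPICS)
    return [topic for topic, _ in counts.most_common(k)]

history = ["recipes.example", "recipes.example", "hiking.example", "unknown.example"]
print(topics_for_user(history))  # ['Food & Drink', 'Outdoors']
```

Note that the unrecognized site contributes nothing, and an advertiser would see only the short category list, which is the privacy trade-off the essay goes on to weigh.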

Social Impacts: Positives

The decisions that tech companies make can have severe and lasting social impacts. While many of these effects are negative, there are several positive ones as well that benefit users and the larger society. 

Tech companies can bring people together through their platforms. On social media, people can easily connect with both friends and strangers, share their own lives, talk about whatever topics they want, and much more. Platforms such as Facebook have publicized the reasoning behind some of their decisions and conducted experiments to demonstrate social media's positive effects. Researchers have hypothesized that "Facebook could cause relationships to become closer" after viewing results from experiments "involving boosting some people more frequently into the feed of some of their randomly chosen friends – and then, once the experiment ended, examining whether the pair of friends continued communication" (Merrill and Oremus). More specifically, social media can provide a way for older adults to connect with others socially and emotionally despite physical distance and limitations (Zhang 888). Although a majority of older adults prefer face-to-face communication in more important situations, social media is widely used to help them maintain social connectedness. Other research-backed benefits include positive effects on the brain's reward system. When people "us[e] online technology for social purposes, such as a directed communication, [there] may [be] increase[d] feelings of social relationship satisfaction and reduce loneliness (Hutto et al., 2015; Szabo et al., 2019; Teo et al., 2019)" (Zhang 889).

The decisions tech companies make can also facilitate communication between people, groups, and organizations. Social media sites offer wide access to information, which plays a huge role in helping users formulate their own views, and users can discuss that information with a more diverse group of people (Bali 14). One direct decision that influences users is Twitter CEO Jack Dorsey's decision to "tweak his platform's algorithm to expose people to more diverse views because he believes that would increase moderation" (Bali 15). Decisions by other countries shape US internet policies as well. For instance, the European Union's new legislation on data protection would require major tech companies to moderate content on their platforms more aggressively and place more restrictions on advertising (Stein et al.). This means that tech companies in the US could be forced to moderate the content on their platforms, especially content that is false or misleading. Ultimately, these policies help ensure that people view truthful information. Even false information such as rumors can sometimes benefit people: "[B]y discussing them together with others, users are said to be able to identify, challenge, and eventually correct false rumors and other forms of misinformation through an ongoing process of collective sense-making typically referred to as self-correction" (Eismann 1300). The decisions that tech companies make can alter users' perspectives and viewpoints, leaving lasting impacts.

Social Impacts: Negatives

There are many drawbacks of tech company decisions as well. These drawbacks can have a lasting impact on users and larger society. Some of the larger harms include the significance of Facebook’s algorithm, Parler’s moderating strategies, and congressional laws and regulations.

Due to the privacy-invading nature of Facebook's algorithm, there has been considerable public outrage and dispute over the issue. When users click on and view more controversial content online, it could "open the door to more spam/abuse/clickbait inadvertently" (Merrill and Oremus). As a result, legislators from both parties in Congress have come together to create a bill called the Filter Bubble Transparency Act, which seeks to allow users to turn off the data-driven algorithms that social media companies, especially Facebook, implement. The bill would force social media and other technology companies to offer users a chronological News Feed (Gold).
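The chronological-feed option the bill contemplates amounts to a simple switch between two sort orders. The sketch below is hypothetical (field names are assumptions), but it shows how the same posts produce very different feeds depending on whether they are ordered by an engagement-based score or strictly by recency.

```python
# Minimal sketch of an opt-out between an engagement-ranked feed and a
# purely chronological one. "timestamp" and "engagement" are assumed fields.

def order_feed(posts, chronological=False):
    """Sort posts by recency when chronological, else by engagement score."""
    if chronological:
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

posts = [
    {"id": "old_viral", "timestamp": 1, "engagement": 900},
    {"id": "new_quiet", "timestamp": 2, "engagement": 10},
]
print([p["id"] for p in order_feed(posts)])                      # ['old_viral', 'new_quiet']
print([p["id"] for p in order_feed(posts, chronological=True)])  # ['new_quiet', 'old_viral']
```

In the ranked feed an older high-engagement post dominates; in the chronological feed the newest post comes first regardless of how provocative it is, which is exactly the user choice the bill seeks to guarantee.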

Another regulation is Section 230 of the federal Communications Decency Act, which states that "online platforms–a category that includes enormously rich and powerful tech companies such as Facebook and Google, as well as smaller and less influential blog networks, forums, and social media start-ups–are not considered 'publishers.'" This means that you can sue the person who created a post, tweet, or video, but not the company that hosts it on its platform. As a result, the company does not face the consequences it arguably should for allowing the content to appear, while the user suffers from the content either way. Although the law was meant to protect Internet companies, it allows hate speech, harassment, and misinformation to spread. Now, these Internet companies are enacting their own rules to flag or ban "what they feel are objectionable content generated by users" ("Laws to stop malicious use can backfire without wider input"). While there are immediate negative effects, tech companies are trying to rectify these issues and help their users.

Besides the issues surrounding well-known social media companies, other tech companies have faced backlash for their decisions. Parler played a significant role in enabling the January 2021 Capitol attack because of its passive approach to moderating its site. Its lax moderation allowed the attackers to communicate and coordinate the riot, even though it was obvious from their communications that violence would occur. While there were lasting negative effects on the Capitol building and on what it represents for democratic society, some positives did emerge. After the event, influential companies such as Apple, Amazon, and Alphabet, the parent company of Google, removed the application from their offerings. The smaller companies Parler partnered with then severed their relations as well, and Parler was left to its own devices. While this had negative consequences for the company itself, in the long run the events had a positive impact on society, as users could no longer communicate through Parler.

Tech companies other than social media ones also make decisions that negatively affect users. Many of these companies maintain poor data privacy for their users, which contributes to the spread of misinformation and democratic erosion. Campaigns that promote fabrication of fact lead to less trust in democracy, endangerment of public health and safety, and damage to democratic institutions (Unger 323). This differs tremendously from the policies and legislation implemented by the EU. Stein et al. describe the EU's legislation as the "most aggressive attempt yet to regulate big tech companies as the industry comes under greater international scrutiny. It could serve as a model for lawmakers in the United States who say they, too, want to rein in the businesses' digital practices." The difference between US and EU legislation is drastic, and Stein's assessment underscores the shortcomings of US policy.

While many may argue that Google's new privacy system benefits users, harms remain. If Google's new advertising system is implemented, user privacy is still not protected, because Google is "essentially using one website's data to help other websites advertise more accurately" (De Vynck). The new system would give Google more control, and other advertisers less control, over advertising on the web. As a result, for user privacy on the web, the new system may carry more significant cons than pros.

Overall, the spread of rumors can cause serious problems in several social and economic settings. In a social setting, people may rely on unverified information to make decisions and act during critical situations. In a business setting, "online rumors and firestorms that are not addressed adequately can have severe negative consequences for companies, including the loss of trust between management, staff, and shareholders, and sustained personal and corporate reputational damage" (Eismann 1300). In any public sphere, there are consequences to the decisions executives make.

As seen with Frances Haugen and the whistleblower crisis, the decisions that tech companies make can have serious impacts. In Frances Haugen's situation, Facebook's reputation could dramatically diminish and the company could gain much more notoriety. In Parler's case, the lack of decision making on important issues caused the downfall of the company itself. Disinformation campaigns are another influential method that can severely impact society; when tech companies and well-known figures spread misinformation, consumers are more prone to making poor choices. The decisions companies make can significantly influence both the company itself and its users. However, there are positive impacts of these decisions as well. Social media is an essential means of bringing people together, especially for older adults, and it can help users feel less disconnected from the world and the people around them. Tech companies today have such an enormous influence on the world that they are almost always the center of attention. Users should continue to keep these companies in check.


The author's comments:

I'm a senior in high school and I wrote this paper for my Intern-Mentor class. My mentor is an economics professor at a local university, and I worked with her throughout the school year to do research into the tech industry field. I would love to get my paper published as a final product that shows all the work I've done this school year! 

