Acton Institute Powerblog

Facebook is a symptom of a much deeper Big Tech problem

(Image credit: Associated Press)

Facebook changing its name to Meta will not change the fact that all social media platforms make promises they can’t keep.


At this point, most have heard about Frances Haugen, the whistleblower who leaked documents to the Wall Street Journal this fall detailing how Facebook knew about many of the downsides of its platform, yet chose to prioritize engagement. The documents outline, among other things, how Facebook introduced new reactions in addition to the Like button and then ranked content receiving extreme reactions, such as anger, more highly. Polarizing content then took precedence over posts created by family and friends. The response to these revelations has been intense media coverage, calls from politicians for greater control, and a great deal of buzz around the downsides of Facebook.

But is any of this truly revelatory? The fact that social media, especially Instagram (owned by Facebook), is bad for teens’ mental health is not new. Neither are claims around extremism or crime. The largest revelation is concrete proof that Facebook knew about the harm. But unless Facebook executives have been living under a rock, that itself should be no surprise either. The downsides of social media have been endlessly highlighted and debated since its inception. The revelations regarding Facebook, while generating a good deal of hype, are ultimately a limited picture of a broader issue. These are critiques of degree rather than of category. We’re told that Facebook should do more to fight crime, more to fight disinformation, more to protect kids. Yet this tells us nothing about what constitutes a sufficient response to prevent adverse outcomes in the first place. One could argue that Facebook could always be doing more. In contrast, a categorical critique would tell us something about the underlying technology or business model. It would reveal a deeper way to view technological changes in order to make judgments that go beyond pure reactivity.

Wait, did we say Facebook? We meant Meta. In the midst of the heat generated by the whistleblower, Facebook announced it would change its name. The rebrand signals a claimed shift in business focus, though it is also a convenient marketing strategy. The name Meta reflects plans to move into the metaverse, a fully virtual online world where we will “work, play and live.” In many ways, the issues the whistleblower raised, such as mental health, violence, and polarization, are five, maybe 10 years old. Technology has moved on. This is not to say that the critiques are unimportant, only that they miss a broader understanding of the real issue. Before we can fully grasp the implications of past changes in technology, a new technology arises. We lack a framework to weigh the benefits and downsides, even to comprehend the impacts, of new technology.

What do these institutions and organizations promise us? Technological innovation has always had the allure of “possibility and progress,” an almost unbounded hope that whatever you can dream up you can accomplish. Because of this undercurrent, many people believe that technology is neutral, simply a tool like any other that can be used in both good and bad ways. But this obscures the very clear and unavoidable point that there are always tradeoffs with every innovation. To paraphrase French philosopher Paul Virilio, when you invent the ship, you also invent the shipwreck (likewise the train, the car, the plane, the rocket, electricity, and so on). In other words, there are always negative effects created along with positives. It’s never either/or. It’s both/and.

This echoes Amara’s Law, named for American futurist Roy Amara, who claimed, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Seen here with the example of Facebook, it is painfully clear that what many thought was simply a platform to stay in communication with friends and post “what’s on your mind” actually, in the long run, divided families, communities, and even nations. As mentioned above, decision-makers within Facebook knew there were obvious negative effects to their platform but chose to minimize or even deny them to the broader public. The effect social media has had on fundamental institutions has not gone unnoticed, but it is virtually impossible to stay on top of all of the implications of emerging technological innovations. The Everyman is left impotent by the overwhelming pace of our technology, and thus of our culture as a whole.

Hartmut Rosa, a German sociologist, argues in his provocative little book The Uncontrollability of the World that “for late modern human beings, the world has simply become a point of aggression. Everything that appears to us must be known, mastered, conquered, made useful.” For Rosa, this desire to control the world lies at the heart of modernity, and it is intensified by our experience of social acceleration. We all have what Richard Weaver called a “metaphysical dream of the world,” and Rosa argues that ours is dominated by the drive to make all things controllable. But rather than creating hope and advancing human flourishing, “this escalatory perspective has gradually turned from a promise into a threat. Growth, acceleration, and innovation no longer seem to assure us that life will always get better; they have come instead to be seen as an apocalyptic, claustrophobic menace.”

This framework allows us to begin to form a categorical critique of technology in general and Facebook in particular. We must soberly observe what happened at Facebook and consider the future implications of Meta. Rosa’s work gives context to the phenomenon of social change, as evinced by the rapid series of changes at Facebook. We need to be reminded, as Sherry Turkle has observed, that “technology gives us the illusion of companionship without the demands of friendship.” What is Facebook if not an attempt to define, quantify, even codify friendship? The technology behind social media encourages us to seek further control of the world. Facebook launched 17 years ago on the assumption that it would increase interpersonal connection and draw people closer; in practice it has had the opposite effect. Creating a platform that allows someone the ability both to control incoming information via a customizable (controlled) “feed” and to mold a perfectly curated image to present to the world proved disastrous. This control seeking, in turn, decreases social cohesion and solidarity. Social media in particular has an uncanny ability to perform a cultural bait-and-switch. We are promised more control, access, and information, but instead of increased flourishing, these only make us more anxious, alienated, and angry.

Different people could look at this framework and propose different solutions to the problems. But given the rate of change, the very idea of a solution obscures the fact that the underlying problems continue to change. One such solution is increased regulation of “Big Tech.” That solution is important in the sense that “rules of the game” do need to be established for tech companies. But, in line with the framework we present, legislation will lag behind technology to an even greater degree than popular perceptions of benefits and harms. Regulation usually represents a too-little-too-late response. For instance, Microsoft faced antitrust litigation in the 1990s over the bundling of its browser with its operating system, but by the time the litigation resolved in the early 2000s, the browser wars it addressed had largely been decided. Legislation is a helpful but limited tool in the fight. Because of the nature of the phenomenon and the rate of social change, we will never be able to legislate ourselves out of this problem.

Perhaps a better route would be to address the issue at the level of communities and families. Within these groups, it is possible to slow some, though not all, of the effects of social acceleration. At the very least, the adoption of new technologies at the individual level should be met with some skepticism until one understands more about the trade-offs within their design. While this will not erase the problem of social change, it can ameliorate some of the harms.

The kind of social acceleration represented by Big Tech innovation is obviously a contributing factor to the decline of trust in bedrock institutions like the family, religious organizations, and political groups that has featured prominently in recent news cycles. Technocratic culture is simply moving too fast. While the negative consequences of Big Tech seem as if they are only now coming to light, they are and always have been baked into the technology itself. If the popular narrative fails to grasp that fact and continues to focus only on the positives, then we should expect exposés like those of Frances Haugen to continue like clockwork.

Dan Churchwell

Dan currently serves as the Director of Program Outreach for the Acton Institute, where he manages external relationships with foundations, higher education institutions, businesses, and NGOs. During his tenure, his team has designed and executed over 50 educational conferences, ranging from small collegiate seminars to Acton’s flagship annual conference, Acton University. Prior to his work at Acton, Dan spent ten years in higher education as a lecturer and administrator in the Pacific Northwest. He is interdisciplinary in his interests and has taught and lectured widely on issues at the intersection of philosophy, theology, and economics. His current research interests include media ecology, technological ethics, and the future of work. Dan also has experience at a Fortune 100 logistics company and a commercial real estate investment firm, and has served as executive director of an international medical non-profit and on multiple non-profit boards.

Noah Gould

Noah is a Programs Associate at the Acton Institute, where he regularly contributes to the blog and to Religion and Liberty. He is a graduate of Grove City College, where he studied economics.