
Three historical stages of digital infrastructure
A paper from a course on media and politics

Our digital infrastructure has seen three main phases of development, each with considerably different implications for the politics and collective action it makes possible. In the 1990s, when the internet was still in its infancy, the digital sphere empowered individuals who had previously been excluded from a public discourse traditionally dominated by states, economic elites, and the media. In A Declaration of the Independence of Cyberspace, Barlow espouses the spirit in which the cyber world was created: a spirit of freedom of the mind, and a dream of a civilization of the Mind aimed at achieving collective governance and transcending traditional hierarchical structures and systems of power. Individuals were empowered outside the confines of state and industry, and interactions between citizens became a liberating form of communication mediated by nothing other than the constraints of the platforms they used, a medium that would not become disruptive until the profit models of social media platforms came to dominate later versions of the internet. Through this rising tide of decentralized power, authority came to be characterized by the horizontal communication of individuals placed on the same world stage, rather than by the traditional vertical hierarchies that derive authority from the legitimacy of political or financial elites. This digital realm, as Barlow describes it, “grows itself through our collective actions” (2), aspiring to free us from the traditional constraints of political and corporate incentives as well as from historical power dynamics among domestic and international institutions.

In Turner’s How Digital Technology Found Utopian Ideology (2006), the conception of digital infrastructure and its promise of “egalitarian forms of political organization” effectively leveled the playing field for global citizens, who now stood on an equal footing of human experience and deservedness of rights (1). This dramatic elevation of the potential for collective action provided an important tool for materializing the idealism that followed World War II in documents such as the UN’s Universal Declaration of Human Rights, which propounded values of universal suffrage and equality. It is the kind of idealism in which Barlow describes cyberspace as “arrayed like a standing wave in the web of our communications,” aimed at a collective governance emerging from “ethics, enlightened self-interest and the common weal” (3). While the content of these communications and the ideologies underscoring them contribute to the shape our digital Anthropocene takes, it is important to note that the interests of traditional power structures and capitalist proclivities would not evaporate as driving forces in this digital sphere. Lessig’s Code Is Law examines the claim that cyberspace is unavoidably free: that none can rule it, and that the direction the infrastructure takes is self-ordering, built from the bottom up. Yet even if none can rule, this does not mean the living digital net is impervious to the influence and circumvention of state actors. While members of states and corporations were reduced to mere participants in the collective human digital experience, their organized interests can still shape the flow of our cyber globe, and the corporations that develop the platforms on which individuals congregate can define the constraints and obstacles that direct the flow of human motion on the internet.

The next stage of the web’s evolution was the social web of the early 2000s, which saw the rise of social media and aggressive pushback from established powers. Social media afforded actors a convenient medium for organizing collectively, outside any defined geographical location. This technology was especially disruptive for states, which derive their Weberian legitimacy from a monopoly over the legitimate use of force within a definable territory. Since collective assembly now took place on social media, no longer bound by borders or physical space, states’ claim to legitimacy was threatened by the political power accrued through assembly on the web. As meeting places, social media platforms such as Facebook allowed citizens to circumvent the watchful eyes on the street of Egypt’s dictatorial Mubarak regime during the 2011 uprising. Citizens could find strength in unity before gaining the momentum needed to take to the streets, an effort that would otherwise have been dispersed and muted. Leiner’s A Brief History of the Internet emphasizes that as the design of cyberspace evolved, the new social media platforms enabled new forms of collective action. Their scalability and accessibility, their ready access to relevant audiences congregated by geographical or network proximity and, later, by ideology, and their resilience to cyber threats from state powers owing to their multi-server structure all laid the foundation for greater collective power. These features re-imagined what Castells, in Communication, Power and Counter-power in the Network Society, calls counter-power: the capacity of social actors to resist and challenge institutionalized power relations (239). Users were now equipped with the newfound power of digital networks to enact social change through horizontal communication networks rather than through traditional hierarchical structures.
Even so, this rise in power did not go unchecked by governments seeking to maintain their foothold in societal power structures and to protect against national threats that linger on the web. The struggle, as Zuckerman argues in his Cute Cat Theory, lies between the empowerment of individual actors who were previously shut out of the avenues of political influence, and hierarchical bureaucratic organizations, such as governments and corporations, that would otherwise hold uncontested power. The platforms that empowered laypeople to exercise their rights to free speech and thought, civil rights activists to organize, and the digital economy to flourish also sheltered violent and subversive actors such as terrorists and criminal hackers. To effectively target these needles in the digital hayfields of anonymity the internet grants, states would have to undermine the ordinary users who otherwise benefit from collective power, and yes, their cat videos. State involvement in the digital sphere thus becomes a matter of national security and, in later instances, a delegation of authority invited by the collective masses, who are even more reluctant to delegate that authority to the corporations increasingly forced to legislate over their own platforms.

By the late 2000s, the web had evolved into its third stage, and the internet became dominated by platform companies defined by corporate interests and control. These platforms act as filter points through which the internet funnels users, siphoning their information as a commodity to sell to the highest bidder. Ordinary profiles and content accounts with large followings have begun to be weaponized to satisfy the interests of foreign states or organizations, as in Russia’s influence on the 2016 US elections. These breaches of the privacy and sanctity of cyberspace, deliberately manipulating the flows of human activity, have threatened the cohesive collective governance Barlow dreamed of for cyberspace, a unity of aspirations contingent on mutual trust in the fabric of the digital world that mediates between collectives and individuals. What was dreamed of as the civilization of the Mind was now being hijacked by profit schemes and political motives, one mind at a time.

When digital platforms were conceived, they set out with the idyllic intent of making the internet universally accessible and thus free of charge, a motivation that would ironically lead to the loss of a different kind of freedom: that of individuals’ minds and their access to the reliable information which underpins responsible political participation. In The Age of Surveillance Capitalism, Shoshana Zuboff describes a new front in the interjection of capitalism into our quotidian lives: siphoning and instrumentalizing user information for profit, and getting into the business of not only attempting to “know our behavior but also shape our behavior at scale” (4). Large organizations such as Google and Facebook mine individuals’ profiles like commodities for sale, and scale their operations and revenues by modifying users’ behavior through platform designs that trap attention and guide it through increasingly sophisticated models of influence. On the surface, this profit model, based on competitive advertising and a newfound data economy, thrives on the lack of reliable information in circulation about the architecture of the platform and its capitalist incentives. But the relationship between these financial models and access to reliable information runs deeper than that.


To keep individuals on these platforms for an optimal amount of time and raise the possibility of profiting off their activities, platforms often tailor the content users receive to their pre-existing interests in products, services, hobbies, and political inclinations. Platform algorithms sort users into echo chambers that further entrench ideological schisms and the political polarization of democracies, which thrive on public discourse and trust in a common societal fabric. By enclosing users in a bubble of “distinct and insulated media system,” as Marwick and Lewis describe (40), in order to tailor digital experiences to the herd whose attention is mined for informational resources, this model undermines the cross-examination and economy of ideas on which healthy democracies depend. One might argue that this erodes the reliability of information by eroding its objectivity, but Western news outlets have already transitioned from their traditional vows of objectivity to an open admission of subjectively gathered and interpreted information. When users face such a schismatic array of information between left-leaning and right-leaning media outlets, it is no mystery that a dangerous form of public distrust in media organizations festers. At the birth of functional democracies, the media was recognized as the ‘fourth estate’ check on governmental power, a source of reliable information promoting the cohesion and interests of the population outside the purview of the state. Today, however, it has become inseparable from the interests of political parties. Media outlets have been normalized as extensions of political interests rather than as their examiners and instruments of accountability.
If, for example, Palestinians living in Gaza are empowered by social media to raise awareness of the injustice they endure, this information will only reach users with a history of political inclination to support Palestinians. Because left- and right-leaning political agents are served vastly different information on their feeds, they lack access to information that might lead them to question their ideologies and discuss the occupation without antagonizing either side.

Moreover, Vaidhyanathan argues in “The Attention Machine” in Antisocial Media that if our information ecosystem becomes saturated with attempts to cage our attention through advertisements, this will build up our resistance to each effort and empty the reserves of attention we can spare for the news (80-81). If we give in to the force of these financial interests and develop a habit of brief and intermittent attention, of scrolling to trigger the spikes of pleasure embedded in these applications’ reward systems and cues, we will further starve our capacity for focused attention. We will end up relying on informal sources of news that package information in a digestible form, which often loses the nuance, credibility, and reliability that traditional journalistic institutions espouse in their longer articles. Furthermore, platform algorithms compete in the attention market by developing increasingly sophisticated ways of capturing and sustaining our attention. Nothing does that more effectively than hyper-polarized statements, such as alt-right ideologies. In one study of the recommendations such algorithms generate for users, the results were a barrage of anti-Semitic content. The platforms amplify the opinions that lie on the periphery of acceptability or informed judgment, the opinions with the shock factor needed to draw users in more effectively, and in doing so create a peculiar environment in which news is demanded. Misinformation and disinformation spread like wildfire not only because they are more effective at capturing attention, but also because, without proper regulation, algorithms amplify the number of users receiving what seems like a shocking article.

Marwick and Lewis argue that in such an environment of regularly fallible information, “People who do not trust the media are less likely to access accurate information” (45). They go on to posit that after the fall of traditional media in the wake of the industry’s digitalization, only the bigger news agencies were capable of adapting and morphing into the “new media barons,” as investors consolidated ownership and prioritized “short term profits over quality civic journalism” (41). Not only were smaller regional papers forced to shut down because they could not digitalize, but the surviving larger outlets also had to focus on eye-catching national news rather than the milder headlines of regional affairs. Journalists were pressed by profit-oriented organizations to produce work that paid off in the short term, often inviting feeble journalistic integrity and appeals to viewers’ subjective relationship with the news in order to elicit stronger sensationalist reactions, rather than upholding the traditional standard of objectivity. Moreover, because the algorithms of online platforms are designed to surface similar sensationalist content, which elicits strong emotional reactions and holds users’ attention for more advertising exposure, we observe an intimate, circular relationship between the deterioration of the reliable information the media produces and the demand for emotion-provoking information by masses steered by the digital machine and its capitalist interests.

Citations:
Barlow, John P. “A Declaration of the Independence of Cyberspace.” Darknet, 16 Feb. 2018, pp. 145–151, doi:10.1002/9781119425502.app1.
Castells, Manuel. “Communication, Power and Counter-power in the Network Society.” International Journal of Communication, Jan. 2007, pp. 238–266.
Leiner, Barry M. “A Brief History of the Internet.” Internet Society Journal, Dec. 2003.
Lessig, Lawrence. “Code Is Law.” Stanford Law Review, vol. 52, no. 5, 2000, p. 987, doi:10.2307/1229508.
Marwick, Alice, and Rebecca Lewis. “Media Manipulation and Disinformation Online.” Data & Society Research Institute, 15 May 2017.
Turner, Fred. “How Digital Technology Found Utopian Ideology.” Critical Cyberculture Studies: Current Terrains, Future Directions, Jan. 2006.
Vaidhyanathan, Siva. “The Attention Machine.” Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, Oxford University Press, 2018.
Zuboff, Shoshana, et al. “Surveillance Capitalism: An Interview with Shoshana Zuboff.” Surveillance & Society, vol. 17, no. 1/2, 2019, pp. 257–266, doi:10.24908/ss.v17i1/2.13238.
Zuckerman, Ethan. “Cute Cats to the Rescue? Participatory Media and Political Expression.” Youth, New Media and Political Participation, Apr. 2013.

 
