Deep dive: How do you solve a problem like social media?

In this issue of 47, we survey the lay of the digital land, asking: what’s at stake in social media moderation and regulation, and how can communications professionals pursue best practice?

The conflict between governments and digital platforms has boiled over in recent months, with efforts to regulate companies like Meta, Alphabet and Twitter devolving into a bitter battle with extremely high stakes.

To date, tech companies have enforced their own rules in line with partisan commercial interests, but their responses to user-generated misinformation, harassment and violence are generally viewed as a case of too little, too late.

While legislation has struggled to keep pace with technology, governments are increasingly attempting to regulate digital and social media, and communications professionals must navigate ever more complex tensions between policy and personal practice to deliver strong results for their clients.

Clearly, social media is here to stay. Most clients will have a presence on major platforms, and a considered social media strategy is now essential to campaigns of all kinds. But while social media opened a world of new possibilities for widespread and cost-effective campaigning, it has also unleashed a Pandora’s box of risk that can be a challenge to manage.

New media barons – a class of their own

The democratising nature of social media stands in stark contrast to the massive influence and communicative clout of the companies and individuals that have driven the media marketing transformation of recent decades.

This year’s Forbes rich list underlines the enormous wealth of the individuals who control companies like Meta (Facebook, WhatsApp, Instagram), Alphabet (Google, YouTube) and Twitter.

Indeed, the world’s richest person, Elon Musk, recently moved to buy Twitter, claiming the takeover would unlock the platform’s ‘enormous potential’. Musk has since backed away from the deal, citing Twitter’s failure to hand over data on fake and spam accounts, which he claims violates their agreement. In response, Twitter has sued Musk to force completion of the merger.

Twitter’s lawsuit states: “Musk apparently believes that he — unlike every other party subject to Delaware contract law — is free to change his mind, trash the company, disrupt its operations, destroy stockholder value, and walk away.” Musk has since filed a countersuit.

Reputational troubles have also rocked Meta, founded by Mark Zuckerberg, as the company locks horns with governments around the globe; and Alphabet, whose era-defining products Google and YouTube have become central to how information moves through our everyday lives.

Efforts are underway in the US legal system to hold Meta CEO Mark Zuckerberg personally responsible for his alleged role in the infamous Cambridge Analytica scandal, which saw millions of users’ personal information harvested during the 2016 US election cycle.

Tech companies and their owners exert extraordinary power over the users of their platforms. They shape our lives, all while issues continue to abound in the digital space.

How companies have addressed issues… or not

Social media companies have made efforts to address user harassment, misinformation and disinformation, and other moderation issues through changes to their platforms and policies, often in response to moves toward increased legislation.

In 2021, the US Congress held an inquiry aptly titled Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation. The CEOs of Twitter and Google appeared at the hearing, as did Mark Zuckerberg, who boasted of Facebook’s automated moderation, claiming that artificial intelligence now takes down more than 95 per cent of content violations.

Responses of this nature have accelerated in recent years, prompted in part by the Christchurch terror attack of 2019, which saw live footage of the massacre re-posted millions of times across social media platforms at lightning speed.

Rather than submit to far-reaching legislation, the companies banded together: Meta, Alphabet, Twitter and Microsoft now operate the Global Internet Forum to Counter Terrorism (GIFCT), which maintains a secretive ‘hash database’ of terrorist content and its digital fingerprints.
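The forum does not publish how its systems work, but the basic idea of hash matching is easy to illustrate. The sketch below is a simplified, hypothetical example: the names known_hashes, fingerprint and should_block are invented for illustration, and it uses an exact cryptographic hash for clarity, whereas the shared database is understood to rely on perceptual hashes that survive re-encoding and cropping.

```python
import hashlib
from pathlib import Path

# Illustrative only: a stand-in for a shared database of digital
# fingerprints of known terrorist content. Real hash-sharing schemes are
# understood to use perceptual hashes, which tolerate resizing and
# re-compression, rather than exact cryptographic digests.
known_hashes: set[str] = set()


def fingerprint(path: Path) -> str:
    """Compute a SHA-256 digest of an uploaded file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block(path: Path) -> bool:
    """Block an upload whose fingerprint matches known harmful content."""
    return fingerprint(path) in known_hashes
```

Exact matching of this kind fails the moment a video is trimmed or re-compressed, which is why perceptual hashing and machine-learning classifiers carry most of the load in practice.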

Social media companies laud the improved speed and efficiency of machine-learning algorithms that flag and remove violent, extremist and terrorist content. In fairness, removing content quickly and at scale is a genuinely hard problem, and the companies’ algorithms are flagging more problematic content, faster, than they once did.

The war in Ukraine provides a case in point. Social media giants have been removing high volumes of graphic content quickly using new content moderation technology.

Despite this apparent improvement, social media companies are now being criticised for removing material that could later serve as evidence of war crimes. To some extent, you can see the companies’ frustration: they are criticised when graphic content stays up, and criticised again when it comes down.

Even in day-to-day contexts, similar problems arise. For all its advantages in speed, scale and economy, artificial intelligence struggles to properly process the diversity of social, political and cultural contexts across the globe, and the nuance and complexity of our human lives.

As diversity and inclusion gain greater currency in mainstream communications campaigns, public relations professionals are increasingly required to evaluate and balance the risks posed by any unethical, biased or discriminatory practices pursued by various platforms. Of course, this is complicated by the lack of transparency around how algorithms and corporate strategies function, and potentially outright deceit, as alleged by various whistleblowers.

Anti-trolling and online safety

Algorithmic bias is not the only challenge: the anonymity offered by many online spaces also presents problems for communications professionals.

It’s a familiar nightmare: a campaign or piece of content goes viral and gets people talking. Great. Then the comments section goes bad. Attention turns from carefully crafted communications to a narrative controlled by a cabal of commenters intent on tearing it down.

Commenters may be genuine or malicious, but the results are often the same: loss of control of the narrative and the potential to destroy a campaign or message. Of course, the same type of experience can play out for individual social media users as well, with cyberbullying resulting in tragic consequences.

Prompted by the deaths of teenagers subjected to online bullying, the former Morrison government passed the Online Safety Act 2021, a key outcome of which was the establishment of a new eSafety Commissioner.

The commissioner is equipped with powers to impose the act’s Basic Online Safety Expectations on platforms, and to compel them to remove seriously harmful content within 24 hours of receiving a formal notice.

A report from the House of Representatives Select Committee on Social Media and Online Safety stated that ‘for too long social media platforms have been able to “set the rules”, enabling the proliferation of online abuse.’

The committee recommended, among other things, a comprehensive review of digital platforms and services, with a view to expanding the eSafety Commissioner’s remit to include powers to mandate platform transparency on the use of content algorithms and their relationship to online harm. These aspects of the information ecology have effectively functioned as trade secrets up to this point.

Another piece of the legislative puzzle pursued by the former federal government was the Social Media (Anti-Trolling) Bill. It sought to empower individuals to ‘unmask’ the originator of anonymous defamatory content but did not become law.

The anti-trolling bill arose in the context of well-publicised legal cases with far-reaching implications for the internet, its users and the communications industry.

This year, as part of a defamation action brought by far-right political figure Avi Yemini, Twitter was ordered to reveal the identity of the anonymous account PRGuy17. Yemini accused the pro-Labor account of publishing defamatory material, and successfully sought a court order requiring Twitter to hand over all basic subscriber information and IP addresses associated with the account. In response, the owner of PRGuy17 publicly stepped forward to reveal his identity.

The willingness with which the Australian court upended online anonymity in this case has caused concern in some quarters. Digital Rights Watch program lead Samantha Floreani said: “It’s really important to remember that for many people the ability to be anonymous online is a vital safety mechanism and that it also plays a key role in protecting our right to privacy, freedom of speech, and people’s ability to participate in democracy.”

Another consequential clash on this front occurred late last year, when Dylan Voller, a former detainee in the Northern Territory’s Don Dale Youth Detention Centre, brought proceedings against three media companies, including Fairfax, over defamatory Facebook comments posted by third parties in response to articles the companies had posted on their Facebook pages.

On a preliminary question in the case, the High Court found that media companies could be liable for defamatory comments made by third parties, upholding Justice Rothman’s judgement in the first instance that Fairfax was the publisher of third party material posted on their channels, because they “maintained Facebook pages and encouraged and facilitated the making of comments by third parties which when posted on the page were made available to Facebook users generally”.

The substantive case was later settled in the lower courts without the possible defences to the claims being tested, but these legal developments have alerted organisations to the risks involved in social media activity and operations. Any organisation that permits public comments on its online pages must now carefully moderate every online interaction. A failure to stem a string of belligerent exchanges, or to fact-check a wild claim, could result in liability.

Social media campaigns cannot rely only on carefully crafted content and collateral; they must also manage the mess that will inevitably arise in social media forums. We must now be cleaners as well as creators.

Every social media strategy must be accompanied by an airtight risk management framework. An errant commenter could be the seed of an uncontrollable, unnavigable kudzu of negative online commentary and potential damages.
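What such a framework looks like will differ between organisations, but one practical layer is automated pre-moderation that holds risky comments for human review before they remain publicly visible. The sketch below is purely hypothetical: the RISK_TERMS list, the thresholds and the Comment type are placeholders for illustration, not any platform’s actual moderation API.

```python
from dataclasses import dataclass

# Hypothetical triage rules; a real framework would be tuned with legal
# advice and the organisation's own risk appetite in mind.
RISK_TERMS = ("scam", "fraud", "corrupt", "liar")


@dataclass
class Comment:
    author: str
    text: str


def triage(comment: Comment) -> str:
    """Return 'publish', 'review' or 'hide' for an incoming comment."""
    lowered = comment.text.lower()
    hits = sum(term in lowered for term in RISK_TERMS)
    if hits >= 2:
        return "hide"    # likely defamatory or abusive; escalate to a human
    if hits == 1:
        return "review"  # hold for moderator sign-off before it goes live
    return "publish"     # low risk; publish and keep monitoring


# Example: a comment alleging fraud is held for review rather than published.
print(triage(Comment(author="anon42", text="This whole campaign is a fraud")))
```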

A global issue

In the social media space, meaningful legislation is still catching up.

While Australia has been at the forefront of the push for reform and regulation, this is a global issue, with many countries and regulators stepping forward to fill the gaps.

In the two years leading to July 2021, United Nations research revealed 40 new social media laws had been adopted globally and another 30 were then under consideration. This raft of regulation has continued to grow.

The legacy digital platforms are almost all headquartered in the US, and there, too, efforts are underway to regulate them. A coalition of lawmakers is pursuing legislation that could check the market power of the dominant digital platforms, aiming to limit the power of the tech giants by reforming rules around data, mergers and anti-competitive behaviour.

Europe’s Digital Services Act (DSA) also looks likely to put social media companies on a collision course with European regulators. Experts say the act goes beyond clear-cut issues like hate speech, terrorism and child sexual abuse to include requirements for an annual assessment of how much a product contributes to the spread of divisive material.

The evidence for such restrictions continues to mount. Former Facebook employee Frances Haugen alleged that the company knew its products were damaging teenagers’ mental health, fomenting ethnic violence in countries like Ethiopia, and failing to intervene and remove misinformation before last year’s Capitol riots, yet did nothing to address the situation.

When enacted, the DSA would set a new standard for internet user protection and safety, holding tech companies accountable for illegal and harmful content and mandating transparency around their internal processes, content moderation and algorithms. The legislation provides for substantial fines and could see repeat offenders banned from the market.

The success of reforms in Europe and the US could determine the future for our digital era. Democracy and its institutions may yet succeed in limiting the economic and social power of the tech titans. If they fail, the companies will continue to shape and surveil our social and political realities free from the transparency and accountability we have come to expect.

It’s a double-edged sword for marketing and communications. While we seek to ensure ethical practices, it’s also true the industry is increasingly dependent on harvesting deep data to capture and codify the metrics of consumer behaviour and trends.

By working hand in hand with the social media data collection machine, communications agencies increasingly risk exposing themselves and their clients to reputational damage and consumer backlash.

Careful consideration of the ethics of these potent new insights and tools is already essential; as regulation seeks to increase transparency and disclosure around how data is used, consumers may discover the extent to which their data is already mined, and all indications are this could contribute to substantial reputational risk.

It’s a wild, wild digital world

The mostly unregulated digital space is a minefield for the risk-averse, but it can be a boon for those able to execute bold, innovative strategies with the agility needed to produce sticky collateral.

The looseness of existing regulation has helped drive astronomical profits and left downstream industries as well as citizens exposed to substantial risk and uncertainty, along with undeniable rewards.

Viral content and vibrant comment sections can propel a campaign to acclaim in ways that were once achievable only with a big budget, but individual commenters and uncontrolled content can also expose an organisation to risks outside its control.

At present, communications professionals have limited say in how these platforms choose to present and use the content they put online.

Content couched in unrelated automated advertising can compromise an organisation’s reputation. Misunderstanding a platform’s content moderation rules can throw content out of alignment with organisational and brand objectives.

In February 2021, Meta’s response to Australia’s News Media Bargaining Code, which requires platforms including Meta and Alphabet to pay for news content, reminded us how heavily the balance of power is currently skewed towards social media companies.

Meta removed the posts of any page that covered Australian news and blocked users from sharing posts from Australian news sites. The incident, which occurred during the height of the COVID-19 pandemic, compromised the accounts of hundreds of organisations that were not news outlets, including providers of essential services in areas like homelessness, health, government and emergency services, punctuating the final stages of a protracted conflict over the code.

From the commercial perspective of the tech giants, it’s unsurprising that Australia’s regulatory push provoked a strong response. Meta claimed the proposed legislation would set an ‘unworkable’ precedent. The platform presented its decision to pull hundreds of pages in retaliation as unavoidable, and the impact on health and government services as an unintended consequence.

Whistleblowers allege the tech giant intentionally shut down Australian government and community service pages — not accidentally, as claimed at the time, but in a deliberate tactic designed to put pressure on regulators.

The eleventh-hour stand-off between the tech companies and the federal government pushed the government back into negotiation, and a compromise was reached. Meta and Alphabet subsequently negotiated payment deals with media companies like News Corp, Sky News and Seven West Media.

It was a world first, and widely seen as a win that would bring greater fairness, transparency and accountability to how big tech operates within Australia. But there’s little doubt the showdown twisted the government’s arm at a crucial point in talks with the platform. While it forced a result, it also contributed to social media companies’ growing reputation for unethical conduct and helped make the case for more regulation.

The saga also underscored that communications professionals can never fully rely on third-party platforms, which can disappear or change on a whim, to reach their audiences.

What can we do?

Against the backdrop of a continually shifting online world, it is important for communications professionals to stay informed about the changes in the social media spaces we participate in. This means staying up to date about not only the latest trends and opportunities, but also the legal and moral responsibilities and risks that come with using social media.

While communications professionals don’t control social media companies or how they operate, these platforms are, after all, social media: to a large extent, we get out of them what we put in.

As we take advantage of what social media offers – larger audience reach and increased interconnectedness – we need to ensure the content we produce is as ethical as possible, both in how it’s made and in its potential impact on communities. We also need to make risk mitigation a key part of our social media strategies.

Social media is here to stay, so it’s up to all of us to make sure its benefits outweigh its costs.