Australia is set to introduce new legislation aimed at curbing the spread of misinformation on social media platforms, with fines of up to 5% of global revenue for companies that fail to comply. The bill, which will be presented to parliament on Thursday, is part of a broader regulatory crackdown on tech giants that have been criticized for undermining national sovereignty and failing to address harmful content effectively.
Under the proposed law, internet platforms will be required to develop codes of conduct to prevent the dissemination of dangerous falsehoods. These codes must be approved by a designated regulator, which will impose fines on platforms that fail to adhere to the approved standards. The legislation specifically targets misinformation that undermines election integrity or public health, incites hatred or violence, or threatens critical infrastructure and emergency services.
Communications Minister Michelle Rowland emphasized the urgency of the bill, stating, “Misinformation and disinformation pose a serious threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy. Doing nothing and allowing this problem to fester is not an option.”
The legislation responds to growing concerns that foreign-domiciled tech platforms are bypassing Australian laws and regulations. It follows criticism of platforms such as Facebook, whose parent company Meta has threatened to block professional news content if forced to pay royalties, and X (formerly Twitter), which has significantly reduced its content moderation since its acquisition by Elon Musk in 2022.
The bill addresses concerns raised about an earlier version of the legislation, which was criticized for granting the Australian Communications and Media Authority (ACMA) excessive power to determine what constitutes misinformation. The revised bill clarifies that the ACMA will not have the authority to remove individual pieces of content or user accounts. It also protects professional news, artistic, and religious content, though these protections do not extend to government-authorized content.
The move reflects a growing global trend to regulate tech companies more tightly and address the challenges posed by the rapid spread of misinformation on digital platforms.