Netsafety Week 2025: Resources on technology-facilitated violence and online safety

30 July 2025

Netsafety Week 2025

Netsafe are running their annual Netsafety Week from 28 July to 1 August 2025. This year’s theme, “Power in Partnerships”, shines a spotlight on Netsafe’s collaborative projects that help shape a safer online world.

This news story summarises some of the ways in which violence is perpetrated through the use of technology platforms and services. It highlights resources and strategies that have been implemented both in Aotearoa and internationally to counter harm arising from technology-facilitated violence.

New resource launch

Netsafety Week 2025 opened on Monday 28 July with the release of Netsafe’s new resource on technology-facilitated coercive control (TFCC). Created in collaboration with Women’s Refuge, Shine, and the Light Project, it will help people recognise, understand, and seek support for TFCC and is aimed at those experiencing abuse and their support people.

Webinars in the week’s lineup include:

  • Monday 28 July, 12pm: AI and Online Safety run by Netsafe and the AI Asia Pacific Institute, which will share the latest research, emerging risks and opportunities, and digital resilience strategies in an evolving tech landscape.
  • Tuesday 29 July, 7pm: What Matters Online — A Rangatahi Perspective supported by the Spark Foundation. It offers rangatahi a chance to share their perspectives on social media and online life, to help shape the future of online safety support and initiatives.
  • Wednesday 30 July, 11am: Online Safety in Fiji hosted by Netsafe Pacific. Fiji’s Online Safety Commissioner, Filipe Batiwale, will be sharing insights into Fiji’s current online safety challenges and solutions.
  • Wednesday 30 July, 1pm: Reflections on International Indigenous Health Collaborations where Netsafe CEO Brent Carey and Netsafe Poutaki Mātauranga Māori Amokura Panaho reflect on the recent INIHKD RIEL global Indigenous health conference.

You can find more information about the webinars, including how to register, on Netsafe's website.

Technology-facilitated violence

Violence against women online

Deepfakes

Deepfakes are AI-generated images, video, or audio designed to digitally mimic the visual or vocal appearance of real people, both living and dead. Academics from Te Herenga Waka | Victoria University of Wellington and Columbia University in the US suggested to The Conversation that one solution to the threat of unregulated deepfakes would be giving individuals the ability to enforce intellectual property rights to their own image and voice.

Australia recently grappled with a landmark criminal case against a man who created and posted hundreds of violent and sexually explicit deepfake images of a woman he knew. For further discussion of international legislative approaches to address deepfakes, see RNZ’s interview with Alex Sims, Associate Professor in the Department of Commercial Law at the University of Auckland Business School and expert on blockchain technology, copyright law and consumer law.

Sextortion

Sextortion is a form of blackmail where someone is pressured or tricked into complying with demands (for money, further explicit material etc.) at risk of having their sexually explicit images or videos released. This explicit material may be real, or it may be digitally faked.

Since 2019, Netsafe has experienced an 88% increase in sextortion reports.

Netsafe’s Chief Online Safety Officer Sean Lyons spoke to 1News in response to cyber security company Norton reporting that dating scams had surged 60% in the past year. He commented:

“Scammers will make better and better profiles, better and better attempts, more and more realistic stories to try and entice us in to try and convince us that what's in front of us is real and not a scam.”

For a general scoping review of sextortion as a type of cyber scam, see this article published by academics from the Royal Melbourne Institute of Technology.

A report from UN Women released in 2023, Technology-facilitated violence against women: taking stock of evidence and data collection, discusses image-based sexual abuse in further detail.

Doxing/Doxxing

Doxing, the malicious publishing of someone’s personal information, is a common stalking and intimidation technique often levelled against women in the public eye. It is intended to scare and discourage participation in public spaces and often leads to threats or acts of in-person violence.

The US Institute of Justice published an evidence brief on countering technology-facilitated abuse including sextortion and doxing.

Vine covered the Crimes Legislation (Stalking and Harassment) Amendment Bill earlier this year and the subsequent broadening of the definition of stalking to include doxing.

Technology-facilitated intimate partner violence

Technology-facilitated intimate partner violence/family violence is when someone close to a person uses technology to monitor, harass, control, intimidate, stalk, or coerce them. In addition to their newly-launched resource, Netsafe provides support and safety measures for recognising the signs of technology-facilitated violence in relationships.

Te Mana Whakaatu | the Classification Office has also published research on technology-facilitated intimate partner violence.

Other resources on technology and coercive control include a report from the Australian eSafety Commissioner on attitudes that normalise tech-based coercive control and an article from Henry et al. discussing image-based abuse as a means of coercive control.

Sexual violence on dating apps

In February 2025, the Guardian published the Dating Apps Reporting Project, an 18-month investigation which revealed that popular Match Group dating apps like Tinder were failing to permanently ban users who were reported for sexual assault and rape.

In response to technology changing the way people connect and date online, Netsafe teamed up with Bumble to release their Guide to Safer Online Dating. Bumble is not part of Match Group.

Link between online and offline violence and extremism

For more information on the overlap of online violence with offline violence, particularly against women and girls, see Te Mana Whakaatu | the Classification Office’s report on online misogyny and violent extremism.

A report from the Disinformation Project further examines the link with reference to the sustained and high-volume networked targeting of wāhine Māori in the public eye.

Manatū Wāhine | the Ministry for Women’s Long-term Insights Briefing is currently considering the impacts of online harm on the participation of women in public life, including in leadership roles.

Violence against children online

Netsafe and Save the Children have recently released a report Children and Youth Online Safety in Aotearoa New Zealand. The report shares the findings of an online survey developed by Save the Children New Zealand in partnership with Netsafe to gain insights into children’s engagement in the online environment, and their views and experiences of online safety.

Vine have also recently put out a story about online violence against children, in response to the public consultation run by the Education and Workforce Committee; you can read it on the Vine website.

What have countries already tried?

Social media restrictions

Countries around the world have tried different approaches to combatting violence young people experience online, including age-based social media restrictions.

Safety by design

Australia’s eSafety Commissioner has created a new initiative called Safety by Design, which puts user safety at the centre of the design and development of online products and services.

In New Zealand, the Safer Online Services and Media Platforms work aimed to improve the safety and regulation of online services and media platforms, with a particular focus on minimising content harms for children and young people, including online child exploitation and other forms of online violence. You can read Vine’s submission for the related public consultation and see a recording from a webinar we hosted and related resources.

In May 2024, Minister of Internal Affairs Brooke van Velden announced that Te Tari Taiwhenua | Department of Internal Affairs would not be progressing further work on the Safer Online Services and Media Platforms.

Addressing misinformation and disinformation

Misinformation and disinformation can be a source of social polarisation, hatred, or harm. Netsafe has a guide to recognising and combatting misinformation and disinformation.

The Department of the Prime Minister and Cabinet has been strengthening New Zealand’s resilience to disinformation and coordinating responses to national security implications that arise from its spread.

Finland teaches media literacy to school students as an important part of addressing disinformation, particularly for supporting democratic participation and reducing social polarisation.

Similarly, the Netherlands has introduced legislation that improves the transparency of political advertising to combat misinformation.

Common language

The World Economic Forum’s Global Coalition for Digital Safety has developed a toolkit called Typology of Online Harms. It aims to build a shared understanding of online threats and a common language for addressing them.

Targeted site blocking as a last resort

Te Tari Taiwhenua | Department of Internal Affairs (DIA) blocks websites known to host child sexual abuse material as a last resort for combatting online harm against children. The Digital Child Exploitation Filtering System assists in combatting the trade of illegal material by making it more difficult to access. In March 2025, Internal Affairs Minister Brooke van Velden announced DIA was launching a new database measure designed to combat digital violent extremism by assigning a unique identifier (‘hashing’) to each piece of content. Hashed content can be more easily and quickly identified as illegal, reducing the emotional burden on investigators.
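The hash-matching idea described above can be sketched in a few lines. This is an illustrative sketch only, not DIA's actual system: the database and function names are hypothetical, and operational tools typically use perceptual hashes (such as PhotoDNA) that survive re-encoding, rather than the simple cryptographic hash shown here.

```python
import hashlib

# Hypothetical store of identifiers for content already assessed as illegal.
known_illegal_hashes: set[str] = set()

def content_hash(data: bytes) -> str:
    """Assign a unique identifier to a piece of content."""
    return hashlib.sha256(data).hexdigest()

def register_illegal(data: bytes) -> str:
    """Record a reviewed item so future copies can be matched automatically."""
    h = content_hash(data)
    known_illegal_hashes.add(h)
    return h

def is_known_illegal(data: bytes) -> bool:
    """Match new content against the database without a human viewing it,
    reducing the emotional burden on investigators."""
    return content_hash(data) in known_illegal_hashes
```

Once an item is registered, any byte-identical copy is flagged by hash lookup alone; the trade-off of a cryptographic hash is that even a one-pixel change produces a different identifier, which is why real systems favour perceptual hashing.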

Licensing one’s own image and voice

Denmark announced in June 2025 that it was taking steps towards banning deepfake imagery online. In May 2025, the US passed legislation making it illegal to knowingly publish or threaten to publish intimate images without a person’s consent, including deepfakes. In 2024, South Korea took steps to curb deepfake sexually explicit material, including harsher punishment and increased regulations for social media platforms.

Proposed future solutions

Transparency on algorithms

Social media companies, rather than individuals, should be required to ensure user safety from design through to implementation. Regulation of these systems might look like independent audits, transparency about how content is promoted, and real consequences for platforms that fail to act.

Proposed member’s bill

ACT Party MP Laura McClure recently put forward a Member’s Bill that – if drawn from the members’ ballot – would criminalise the creation and distribution of deepfake sexually explicit images.

Victoria University senior lecturer in AI Andrew Lensen told RNZ that an outright ban might be good in principle, but difficult to enforce:

“There is a pretty high burden of proof to show that someone produced a deepfake, and that gets even more complex when it could be done cross-border.”

He said the government needed to provide more detail of implementation and enforcement to make it a substantive effort to solve the problem.

Increased media literacy

Ben James, editor of AAP FactCheck, a division of the Australian Associated Press, told RNZ that one solution was improving media literacy.

"Media literacy needs to start from an early age… There really needs to be a coordinated effort if we are to have functioning democracies making big decisions based on fact."

Online safety regulators with greater power to hold platforms accountable and apply penalties

In 2023, the Chief Human Rights Commissioner Paul Hunt wrote to NZ Tech chief executive Graeme Muller expressing concern that social media giants were failing to protect Jacinda Ardern from online abuse. The Human Rights Commission was also critical of New Zealand’s 2022 online safety code and claimed it was not fit for purpose. The code is a voluntary set of commitments co-designed with the technology industry, including some social media companies such as Meta and X Corp. Weaknesses of the code include a high threshold for “harm”, an aim to reduce rather than eliminate harm, its failure to comprehensively capture the kinds of harm that occur (e.g. misogynistic hate speech), and its emphasis on the role users play in managing harm rather than platforms.

Greater international cooperation

Denmark’s new law addressing deepfakes is only enforceable within the country’s borders, making the issue difficult to address in a global digital environment. Further discussion of how international cooperation is key to robust legislation addressing digital harm can be found on Newsroom.
