Talking About Deepfakes and Synthetic Media
Deepfakes and AI-generated content can feel confusing, especially for parents who didn’t grow up with this technology either. Read on to find out how you can talk about it with your young person to help support them and keep them safe online.

What to know
Deepfakes and synthetic media are becoming a normal part of young people’s online world.
They might encounter them through social media (TikTok, Instagram, YouTube), group chats, or gaming platforms. Examples include:
- face-swapping videos
- cloned voices
- edited or “nudified” images
- entirely AI-generated people or scenes
Synthetic media is the broad umbrella term that refers to any content (images, video, audio, text) that’s been artificially created or altered using technology, especially AI. That includes things like AI-generated voices, edited photos, virtual influencers, or even tools that generate realistic images from text prompts. Not all synthetic media is harmful; a lot of it is used in creative, educational, or entertainment contexts.
Deepfakes are a specific type of synthetic media. The term usually describes AI-created content that is highly realistic but fake, making it look like a real person said or did something they didn’t. This often involves swapping faces in videos or cloning someone’s voice. Deepfakes tend to come up more in discussions about harm, like misinformation, scams, or reputational damage.
These technologies are becoming more common and harder to detect, which can make it difficult for young people to tell what’s real. At the same time, the tools are becoming easier to access and use, even for young children.
What matters most is not avoiding the topic but helping your child build the confidence and skills to navigate online spaces safely and responsibly.
Top tips for talking about it
Be curious and learn together
Deepfakes and AI-generated content can feel confusing and most adults didn’t grow up with this technology either. It’s okay not to have all the answers, so let your young person know you’re learning too.
Approaching the topic with curiosity (instead of fear) helps keep the conversation open.
You might say:
- “I’ve been hearing about AI-generated images and videos, have you come across that?”
- “Do people your age talk about deepfakes or edited images?”
- “Have you ever seen something online that looked real but wasn’t?”
Curious questions and active listening can help you understand what your child already knows about these topics, or what they may be experiencing online.
Help them understand what’s real (and what might not be)
Deepfakes are images, videos, or audio that are created or changed using AI to look real, even when they’re not. They can be funny or creative, misleading or confusing, or used to embarrass, bully, or scam people.
Because they can look very real, young people need support to pause and question what they see online.
You might say:
- “Not everything online is what it seems anymore, even videos.”
- “If something feels off, it’s okay to question it.”
- “What clues might help you figure out if something is real or fake?”
Bring the conversation back to critical thinking - a skill that supports online safety and wellbeing beyond AI and deepfakes, and that connects to the online and offline world.
Talk about consent, respect and digital boundaries
Just like with real images, consent matters with AI-generated content too. Creating or sharing fake images of someone, especially sexual or embarrassing ones, can cause real harm.
You might say:
- “Using someone’s image without permission isn’t okay, even if it’s edited or fake.”
- “Making or sharing fake sexual images of someone is harmful and can be illegal.”
- “Behind every image is a real person who can be hurt by it.”
Deepfake technology has increasingly been used to create non-consensual or sexualised content, which can lead to serious harm, bullying, and distress. The real-world impact of deepfakes is an important topic to tackle together.
Explore pressure, trends and ‘just joking’ behaviour
Some young people might engage with deepfakes out of curiosity, humour, or peer pressure without fully understanding the impact.
You could ask:
- “Why do you think people make or share edited or fake images?”
- “Do you think people always realise the impact?”
- “What would you do if your friends were sharing something like that?”
Talking about this ahead of time helps them think critically about their choices.
Let them know they can come to you
If something goes wrong (like a fake image being created or shared) young people may feel embarrassed, confused, or scared to speak up. Reassure them that they won’t be in trouble for asking for help.
You might say:
- “If anything like this ever happens, you can always talk to me.”
- “Even if it feels embarrassing, we’ll work through it together.”
- “You’re not alone in this.”
Knowing they won’t be blamed or judged can make it much easier for your young person to ask for help.
Build their confidence to question and pause
One of the most important skills young people can develop is learning to pause and think before reacting or sharing. You can support this by encouraging simple habits like checking where something came from, asking “does this make sense?” or talking to someone they trust before sharing.
You might say:
- “It’s okay to slow down and double-check things online.”
- “You don’t have to respond or share straight away.”
- “If you’re unsure, come talk to me.”
Help your young person understand the role they may play in spreading harm by forwarding or sharing harmful or deepfake content online.
Bonus conversation starters
These questions don’t need to be asked all at once. One small conversation at a time can help build trust and understanding.
- “Have you seen AI-generated images or videos online?”
- “Do people ever use apps to edit or change photos of others?”
- “How can you tell if something online is real or fake?”
- “Why do you think people make deepfakes?”
- “What would you do if someone made a fake image of you or a friend?”
- “What would you do if a group chat was sharing something like that?”
- “Do people talk about what’s okay and not okay with AI images?”
- “What advice would you give a friend in that situation?”
- “Who could you go to if something felt wrong online?”
- “What helps you decide whether to trust something you see online?”
If you’re concerned…
If your child is worried about a video, image or audio clip they’ve seen online, it’s important to take their concern seriously. Synthetic media and deepfakes can look and sound very real, which can be confusing or even distressing. Try to stay calm, listen without judgement, and focus on understanding what they’ve seen and how it’s affected them.
You might say:
- “Thanks for telling me, I can see why that might feel confusing.”
- “Do you want to show me what you saw?”
- “Let’s look at this together and figure out if it’s real or not.”
- “We can work out what to do next.”
Practical next steps might include:
- checking the content together using trusted sources or reverse image searches
- talking about how AI can be used to create realistic but fake content
- encouraging them not to share or react to something if they’re unsure it’s real
- reporting misleading or harmful content on the platform
- saving evidence if the content could cause harm or involve someone they know
Supporting your child to question what they see online helps build their confidence and resilience. If something doesn’t feel right or you’re unsure how to respond, you can reach out to Netsafe for advice and guidance.
You don’t need to be an expert in AI or deepfakes to support your child. Staying curious, calm, and connected is what makes the biggest difference.





