Can We Still Trust Paranormal Evidence in the Age of AI?
"Pics or it didn't happen" is so last century, darling...
First published in 2019 on the Hayley is a Ghost blog
When someone says they’ve seen a ghost or encountered Bigfoot, the response from most people is almost automatic: “Pics or it didn’t happen.” In an age where nearly everyone has a camera in their pocket, the assumption is that if something truly extraordinary occurred, there should be some visual evidence to prove it. But even if someone did pull out their phone and show you a crystal-clear photo of a strange figure in the trees - would you believe them?
Probably not. And that’s the crux of the issue.
The demand for proof has always been something of a ritual in discussions about the paranormal. Skeptics often ask for photos, video, or physical evidence to support extraordinary claims. But here’s the problem: photos and videos can lie. People have been faking paranormal media for as long as they’ve been able to create it - from Victorian-era spirit photography to the playful hoax of the Cottingley Fairies and beyond. While the burden of proof does sit with the person making an extraordinary claim, what counts as proof has never truly been defined - and it has become even more complicated with the modern surge of easily accessible generative AI.
Now, it’s easier than ever to manufacture convincing “evidence.” AI tools can generate photorealistic images and deepfake-style videos with just a few prompts. With minimal effort, anyone can create footage of a ghost in a hallway or a UFO hovering over a city. So if someone did show you a perfect video of Bigfoot walking through the woods, your instinct wouldn’t (and shouldn’t) be to accept it - it would be to ask how it could have been faked.
This creates a paradox: we ask for evidence, but we don’t trust it when we see it. Perhaps we never really did.
A Broader Crisis of Trust
This isn’t just a paranormal research problem. AI-generated misinformation has become a growing concern across many sectors, including journalism, politics, public health, and national security. A 2023 report by the Brookings Institution [1] warned that deepfake technology poses serious risks to democratic discourse, with convincing fake videos of political figures or events being used to manipulate opinion, influence elections, and destabilise societies.
The RAND Corporation has similarly warned that as AI-generated content becomes more sophisticated and harder to detect [2], it opens the door to large-scale disinformation campaigns. These aren’t hypothetical scenarios: we saw it happen during the COVID-19 pandemic [3], when fake videos and images were circulated that contributed directly to vaccine hesitancy and widespread misinformation.
Why We Fall for Fakes
Psychological research shows that humans tend to rely on intuitive judgments (gut feelings) when assessing online content, rather than engaging in careful analysis [4]. This makes people more vulnerable to accepting false but persuasive information, especially when it aligns with their pre-existing beliefs or emotional responses.
This has massive implications for how we evaluate paranormal claims in the digital age. If fake political videos can sway voters and alter real-world behaviour, what does that mean for a ghost photo going viral? It also raises serious concerns about paranormal researchers who fail to update their methodologies to account for this evolving technological landscape.
For those who genuinely believe they’ve had an unexplained experience, it can be disheartening to see bad-faith actors flood the field with fabricated content. Using AI to create fake paranormal evidence undermines trust not just in individual claims, but in the entire process of investigation. It exploits grief, fear, and wonder, and erodes public understanding about anomalous experiences.
This makes the decisions of the paranormal researcher more important than ever. An over-reliance on photo and video evidence has always been a shaky foundation, but in the age of generative AI, it’s a liability. Ethical responsibility falls not just on those who create fake evidence, but also on those who accept and share it without scrutiny. As such, it’s fair to say that a paradigm shift in the field of paranormal research is long overdue.
Developing a Human-Centred Approach
I’ve been investigating paranormal claims for 20+ years and learned early on that I needed to shift away from an over-reliance on demanding physical proof for the claims people make about their experiences. At the core of every paranormal claim is a human experience. Even non-faked photographs and videos don’t act as proof of the paranormal - they simply document something that someone interpreted as being paranormal.
This is a fundamental rule of my approach to the research I carry out: There is always a human in the loop. There is always someone witnessing, interpreting, and framing an experience through their own beliefs and expectations. This should never be forgotten.
Photos and videos presented by eyewitnesses are only part of the puzzle. They can support a story, but they are never the whole story. Sometimes, a photo or video can aid an investigation; equally, it can hinder one. Sometimes, they’re misleading (intentionally or unintentionally). And sometimes, they’re just plain fake.
What’s The Solution?
Evaluating extraordinary claims - whether paranormal or political - demands vigilance, healthy skepticism, and a commitment to developing and maintaining critical thinking skills. The good news is that media literacy works: studies have shown that teaching people to spot manipulation tactics increases their ability to detect misinformation [5] and makes them less likely to share it. Although it might feel as though we are living in a “post-truth” era, there are things we can all do to reduce our chances of falling for or sharing AI hoaxes.
Below are some everyday strategies you can use when encountering supposed “evidence” of the paranormal:
Look for inconsistencies. AI often struggles with fine details - blurry hands, unnatural lighting, distorted backgrounds, or symmetrical patterns that don’t quite add up. Sometimes, things in the background of a photo lack detail (like books with no titles on the spines).
Use reverse image searches. Tools like Google Lens or TinEye can help you trace where an image came from and whether it has appeared elsewhere. This can lead you straight to the source!
Examine videos frame-by-frame. You can use media software to step through a video one frame at a time, which often highlights inconsistencies in individual frames. I use this fbf viewer from Why The Trick. (If you prefer to roll your own, there’s a small code sketch after this list.)
Check for perfection. AI images often lack the fine detail found in real photographs, giving them an ‘airbrushed’ look.
Ask: who is sharing this, and why? If a source has a history of hoaxes, treat the content with appropriate caution. If you’re speaking to a witness directly, ask whether their media aligns with their actual experience - or overshadows it. If it seems too good to be true… well…
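If you’re comfortable with a little scripting, here is a minimal sketch of the frame-by-frame idea in Python using the OpenCV library (installed with pip install opencv-python). The filename is just a placeholder, and a dedicated viewer like the one linked above will do the same job with less fuss.

```python
# Minimal frame-by-frame video stepper using OpenCV.
# Press any key to advance one frame; press 'q' to quit.
import cv2

def step_through(path: str) -> None:
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise FileNotFoundError(f"Could not open video: {path}")
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the clip (or a read error)
            break
        cv2.imshow("frame-by-frame", frame)
        print(f"frame {frame_index}")
        # Wait indefinitely for a key press before advancing to the next frame.
        if cv2.waitKey(0) & 0xFF == ord("q"):
            break
        frame_index += 1
    cap.release()
    cv2.destroyAllWindows()

step_through("suspect_clip.mp4")  # placeholder filename - point it at the clip you want to examine
```

Pausing on each frame like this makes it much easier to spot warped hands, flickering backgrounds, or objects that morph between frames.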
We may never be able to definitively prove or disprove every ghost photo or UFO video. But we can be better at asking the right questions, understanding the limits of evidence, and remembering the very human experiences at the heart of these strange and compelling stories.
1. Deepfakes and International Conflict | Brookings Institution, 2023. Available at: https://www.brookings.edu/wp-content/uploads/2023/01/FP_20230105_deepfakes_international_conflict.pdf
2. Generative Artificial Intelligence Threats to Information Integrity and Potential Policy Responses | RAND, April 2024. Available at: https://www.rand.org/content/dam/rand/pubs/perspectives/PEA3000/PEA3089-1/RAND_PEA3089-1.pdf
3. Deepfakes could supercharge health care’s misinformation problem | Axios, November 2023. Available at: https://www.axios.com/2023/11/14/ai-deepfake-health-misinformation-fake-pictures-videos?
4. Pennycook, G. et al. (2021). The Psychology of Fake News. Trends in Cognitive Sciences, Vol. 25, No. 5, pp. 388-402. Available at: https://doi.org/10.1016/j.tics.2021.02.007
5. El Mokadem, S. (2023). The Effect of Media Literacy on Misinformation and Deep Fake Video Detection. Arab Media & Society, Vol. 35, pp. 115-139. Available at: https://doi.org/10.70090/SM23EMLM




