Fraud, abuse, and harmful content: what every CX leader needs to know about online risks
IntouchCX Team

In 2024, digital fraud didn’t just grow – it exploded. Businesses around the globe lost more than $47.8 billion to online scams, a staggering 15% increase over the previous year. Sophisticated schemes such as synthetic identity fraud ballooned by 31%, illustrating how rapidly cybercriminal tactics evolve. The World Economic Forum predicts that in 2025, cybercrime will drain the global economy of a jaw-dropping $10.5 trillion.
But the cost of fraud goes well beyond the stolen dollars. In the United States, for instance, every dollar retailers and e-commerce platforms lose to fraud translates into an additional $4.61 spent dealing with operational disruptions, repairing damaged reputations, and navigating compliance penalties.
Meanwhile, online spaces are becoming increasingly hostile: in the United States, nearly one in three internet users (29%) reported experiencing harmful online content such as harassment, graphic imagery, or abuse in 2023. Alarmingly, more than half of American adults — 52% — have faced online hate or harassment at some point in their lives, with a sharp rise noted over the past year. Notably, these risks are not limited to major social media platforms; smaller sites and communities also see significant levels of harmful content, underscoring that digital threats permeate every corner of the internet.
For customer experience (CX) leaders, these numbers are an urgent call to action. Fraudulent activities and toxic online environments erode consumer trust, degrade loyalty, and disrupt seamless operations. With businesses leaning ever more heavily on digital engagement, tackling these challenges is no longer optional: it’s a strategic necessity and critical to your long-term growth.
What are the types of online risks?
The spectrum of online risks facing businesses today is diverse and growing more complex and sophisticated with each passing day. At one end are financial fraud schemes, such as phishing attacks where cybercriminals deceive users into revealing sensitive information. Account takeover fraud involves unauthorized access to user accounts, often resulting in illicit transactions and data breaches. Transaction fraud, often targeting online purchases, can cause substantial losses and significantly damage consumer trust.
Online abuse poses a serious risk and includes harassment, hate speech, bullying, and threats. Such abuse not only emotionally impacts customers but also damages the integrity and reputation of platforms, often leading to significant user churn.
Harmful content—such as graphic violence, explicit material, misinformation, and self-harm content—further compounds these issues. Exposure to such material severely deteriorates user experience, eroding trust and prompting customers to abandon affected platforms. The rise in harmful content underscores the necessity for stringent moderation practices to ensure user safety and preserve brand reputation.
For nearly 20 years, WebPurify, an IntouchCX company, has been a leader in content moderation, helping protect companies of all types against various forms of fraud, abuse, and harmful content. By updating its AI models weekly to adapt to evolving risks and regularly training and quality-checking its moderation teams, WebPurify continuously enhances its capabilities, providing businesses with strong defenses against these pervasive online threats.
Why online risks should matter to CX leaders
CX leaders are directly responsible for shaping positive customer experiences with their brand. Bad actors directly challenge that responsibility by damaging customer trust, negatively influencing brand perception, and harming overall satisfaction scores. The reputational harm stemming from unmanaged risks can be severe and immediate, as negative customer experiences can rapidly escalate into public crises via social media. Trust is notoriously difficult to regain once lost.
What’s more, new regulatory pressures from legislation such as the EU’s Digital Services Act and the UK’s Online Safety Act, or the long-established Children’s Online Privacy Protection Act (COPPA) in the US, require stringent standards, making compliance another strategic consideration in your CX playbook.
“Typically, WebPurify recommends a harms-based approach where you’re focusing on highest harm, or the most direct harm content,” says Alexandra Popken, VP of Trust & Safety at WebPurify.
Another important consideration is global parity: while platforms strive to enforce policies consistently worldwide, practical constraints such as limited resources and language capabilities mean it’s often necessary to prioritize specific markets. Internally acknowledging these limitations and clearly aligning on them with executive teams is critical. Additionally, establishing clear guiding principles and criteria – such as confirming the content is false or misleading, assessing the potential for harm, and evaluating virality – helps teams determine precisely what’s within the scope of moderation efforts and ensures consistency in enforcement decisions.
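To make those criteria concrete, here is a minimal Python sketch of how a harms-based triage step might encode them. The thresholds, field names, and queue labels are hypothetical illustrations, not a description of WebPurify’s actual system.

```python
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    """A piece of user content surfaced for possible moderation review."""
    confirmed_misleading: bool  # criterion 1: confirmed false or misleading
    harm_score: float           # criterion 2: estimated potential for harm, 0.0 to 1.0
    views_per_hour: int         # criterion 3: a simple proxy for virality

# Hypothetical thresholds; in practice these come from policy and data.
HARM_THRESHOLD = 0.6
VIRALITY_THRESHOLD = 1_000

def triage(item: FlaggedItem) -> str:
    """Harms-based ordering: the most direct harm is reviewed first."""
    if item.confirmed_misleading and item.harm_score >= HARM_THRESHOLD:
        return "urgent_review"      # confirmed false and high harm
    if item.harm_score >= HARM_THRESHOLD or item.views_per_hour >= VIRALITY_THRESHOLD:
        return "priority_review"    # harmful or spreading quickly
    if item.confirmed_misleading:
        return "standard_review"    # false, but low harm and low reach
    return "out_of_scope"           # none of the criteria met

print(triage(FlaggedItem(True, 0.8, 5_000)))  # urgent_review
print(triage(FlaggedItem(False, 0.2, 50)))    # out_of_scope
```

The value of writing the criteria down this explicitly, whether in code or in a policy document, is that every enforcement decision can be traced back to a named criterion, which is what keeps decisions consistent across markets and teams.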
Furthermore, the consequences extend beyond customer trust and satisfaction; they encompass significant operational disruptions and increased costs. Handling online risks reactively can lead to escalating customer support costs, complicated compliance issues, and potential legal liabilities, all of which drain resources and detract from core business activities.
From a strategic standpoint, proactively addressing online risks presents CX leaders with an opportunity to positively differentiate their brands. Businesses known for robust safety standards and transparent risk management practices enjoy enhanced reputations, attracting and retaining customers who prioritize secure, respectful online experiences. By effectively managing online risks, CX leaders not only safeguard their customer base but also position their brands as trustworthy, responsible, and customer-centric.
Navigating online abuse: how to best protect your customers
The growing presence of harmful content – ranging from violent imagery to dangerous misinformation and disinformation – significantly damages user retention and undermines brand credibility. Meanwhile, any online abuse can deeply affect your users’ emotional wellbeing, sense of safety, and willingness to continue engaging with a brand. Persistent harassment, targeted hate speech, or bullying will quickly erode trust and lead to customer churn.
Customers increasingly expect brands to safeguard their online interactions, and they quickly lose trust if harmful content is inadequately addressed. As such, CX leaders must prioritize creating environments that proactively moderate and swiftly address any harmful content.
Effective moderation solutions, combining automated systems with skilled human oversight, will significantly reduce abuse occurrences and help build safe, welcoming digital communities. Encouraging active customer engagement through accessible reporting mechanisms further reinforces trust, demonstrating that customer safety is a core brand value.
“For your moderation to be effective at scale, it requires balancing automation with human insight,” Alex explains. “While automated systems offer scalability and efficiency, relying solely on technology can lead to over-enforcement, under-enforcement, or misinterpretation due to the lack of contextual understanding. Integrating human review ensures nuanced, accurate moderation decisions, combining the best of both approaches.”
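One common pattern behind that balance is confidence-banded routing: automation acts autonomously only when the model is near-certain, and everything in the ambiguous middle band goes to human moderators. The Python sketch below illustrates the idea; the thresholds and names are hypothetical assumptions, not a description of any specific vendor’s pipeline.

```python
# Confidence-banded routing between automation and human review.
# Thresholds are hypothetical and would be tuned from real data.
AUTO_REMOVE_AT = 0.95   # near-certain the content violates policy
AUTO_APPROVE_AT = 0.05  # near-certain the content is benign

def route(violation_probability: float) -> str:
    """Decide how one piece of content is handled, given a model score."""
    if violation_probability >= AUTO_REMOVE_AT:
        return "auto_remove"    # clear violations never wait in a queue
    if violation_probability <= AUTO_APPROVE_AT:
        return "auto_approve"   # clearly benign content is not delayed
    return "human_review"       # the uncertain middle needs human context

for p in (0.99, 0.50, 0.01):
    print(f"score={p:.2f} -> {route(p)}")
```

Widening the human-review band reduces the over- and under-enforcement Alex describes, at the cost of moderator workload; narrowing it does the opposite. Tuning that trade-off is where the balance lives in practice.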
At the same time, striking the right balance between moderation effectiveness and freedom of expression remains crucial to maintaining transparency and preserving customer confidence.
“It’s important to develop clear, symptom-based decision trees for your moderation teams,” Alex points out. “This ensures they have precise guidance on what’s within scope and what isn’t, enabling them to enforce policies consistently, accurately, and objectively. Transparent workflows help reduce ambiguity and subjectivity, which in turn supports fair and effective moderation decisions.”
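One way to make such decision trees operational is to encode them as data rather than prose, so moderation tooling can walk a moderator through observable yes/no questions to a single, repeatable outcome. The questions and actions in the sketch below are hypothetical illustrations, not WebPurify’s actual policy.

```python
# A "symptom-based" decision tree encoded as nested data. Interior
# nodes are observable yes/no questions; leaves are final actions.
TREE = {
    "question": "Does the content target a protected group?",
    "yes": {
        "question": "Does it contain a threat or call to violence?",
        "yes": "remove_and_escalate",
        "no": "remove",
    },
    "no": {
        "question": "Is the content graphic or explicit?",
        "yes": "age_gate",
        "no": "no_action",
    },
}

def walk(node, answers):
    """Follow a moderator's recorded answers down the tree to an action."""
    while isinstance(node, dict):
        node = node[answers[node["question"]]]  # answers map question -> "yes"/"no"
    return node

answers = {
    "Does the content target a protected group?": "yes",
    "Does it contain a threat or call to violence?": "no",
}
print(walk(TREE, answers))  # remove
```

Because every path through the tree is explicit, two moderators given the same answers always reach the same action, which is exactly the consistency and objectivity Alex is describing.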
Best practices for mitigating online risks
Effective mitigation of online risks begins with clear, well-communicated community guidelines and user policies. These standards not only inform customers about expected behaviors but also empower moderators to consistently enforce rules. CX leaders who want to be proactive and stay ahead of threats should invest in comprehensive moderation solutions that combine advanced AI systems with trained moderation teams for optimal effectiveness.
Transparent user education about risks and proactive measures can dramatically reduce incidents, improving overall user experiences. Establishing airtight incident-response strategies, complemented by clear communication channels, ensures brands remain responsive and trustworthy when managing risk scenarios.
The CX leader’s role in trust and safety
Ultimately, CX leaders play an essential role in ensuring trust and safety remains at the forefront of digital strategies. Those who proactively address online risks not only protect their customer base but also build more resilient brands that become known for their integrity, safety, and reliability.
By understanding, anticipating, and effectively managing fraud, abuse, and harmful content, CX leaders position their businesses to foster lasting customer relationships built on trust and mutual respect.