
Personal Finance: Artificial intelligence is taking cyber scams to a whole new level | Chattanooga Times Free Press
Diverging Reports Breakdown
AI Gone Wrong: A List of AI Errors, Mistakes and Failures
Australian academics apologize after Google’s Bard AI chatbot makes false accusations about the Big Four consulting firms. Microsoft has been quietly removing badly written AI articles from its site, and says Chinese operatives are generating images using AI to create “controversy along racial, economic and ideological lines”. MSN’s news AI dubs the late NBA player Brandon Hunter “useless at 42” following his sudden death. An AI-powered meal planner recommends a variety of disturbing recipes. A woman wrongly accused of carjacking on the basis of AI-generated evidence while eight months pregnant sues the city of Detroit and a police officer, saying the traumatic event caused her “past and future emotional distress”; she was identified by police as a suspect in a recent robbery and carjacking and jailed for 11 hours before being taken to hospital. The ACLU says she is at least the sixth person to be wrongfully arrested after an AI error.
A team of academics from Australia apologize after Google’s Bard AI chatbot makes a number of damaging accusations about the Big Four consulting firms and their involvement with other companies. The false allegations are then referenced during a parliamentary inquiry calling for better regulation of the companies.
One of the accusations was that consulting giant KPMG was in charge of a Commonwealth Bank audit during a planning scandal, when in reality, KPMG has never audited the bank in question. Several other, similar errors were made about Deloitte during the inquiry.
October 2023
Microsoft’s AI adds “guess the cause of death” poll to article
The Guardian accuses Microsoft of negatively impacting its journalistic reputation after Microsoft Start – a news aggregator developed by the tech giant – attaches an inappropriate poll to one of the publication’s articles concerning the death of a young water polo coach in Australia.
The poll – which has naturally been removed from the article – asked readers to vote on the cause of the woman’s death by selecting one of the options provided. The options listed were “murder, accident, or suicide”.
Mr. Beast’s face and voice used in AI deepfake scam
YouTuber Mr Beast is the latest celebrity to be deepfaked by scammers, with one particularly sophisticated, well-crafted video bypassing TikTok’s content moderation net and reaching thousands of users on the platform.
The advertisement claims that the internet celebrity is selling iPhone 15s for as low as $2 – considering the content he posts on his channel, this doesn’t sound too out of the ordinary.
Deepfake of British politician abusing staff goes viral
A clip of UK Labour Party leader Sir Keir Starmer that seems to catch the politician verbally abusing staff goes viral, but it later turns out that the audio is in fact a deepfake.
The clip – posted to Twitter during the first day of the Labour Party’s annual conference by an account with less than 3,000 followers – is viewed millions of times before fact-checking organizations confirm that there is no evidence that suggests the clip is genuine.
September 2023
AI-generated song submitted to the Grammys
An AI-generated song featuring facsimiles of Drake and the Weeknd’s voices is submitted for a Grammy award. However, it is ultimately barred from being considered for the prestigious music prize.
The submission nonetheless garners much media attention and continues the conversation around AI-generated songs, which have been sending shockwaves through the music industry since AI-powered text-to-speech technology hit the mainstream in 2023. Now, artists are scrambling to assess their legal options.
MSN news AI calls deceased NBA player “useless”
MSN news – which uses AI to generate a lot of its articles – lands itself in trouble after an AI headline dubs the late Brandon Hunter “useless at 42” following the NBA star’s sudden death.
Microsoft has been quietly removing badly written AI articles from its site for some time now. Business Insider notes that in August, the company removed one MSN piece that listed a food bank in Ottawa as a tourist attraction.
China caught using AI during political influence operations
Tech giant Microsoft says Chinese operatives are generating images using AI and using them in influence operations to create “controversy along racial, economic and ideological lines”.
“This new capability is powered by artificial intelligence that attempts to create high-quality content that could go viral across social networks in the U.S. and other democracies,” Microsoft says. The company speculates that the images themselves are probably generated by “diffusion-powered image generators” that use artificial intelligence to “not only create compelling images, but also learn to improve them over time.”
August 2023
Pregnant woman sues after AI accuses her of carjacking
A woman arrested on false grounds via AI-generated evidence while eight months pregnant sues the city of Detroit and a police officer, saying the traumatic event caused her “past and future emotional distress”.
Porcha Woodruff was identified by the Detroit Police as a suspect in a recent robbery and carjacking case, and then jailed for 11 hours before being taken to hospital after experiencing contractions. The ACLU says Woodruff is at least the sixth person to be wrongfully arrested after an AI error, all of whom are Black. She is, however, the first woman to suffer that fate.
AI meal planner suggests a recipe for chlorine gas
An AI-powered meal planner app created by New Zealand-based supermarket Pak ‘n’ Save recommends its customers a variety of disturbing recipes, including a way to manufacture chlorine gas.
Along with a number of strange but ultimately benign suggestions such as “Oreo stir fry”, the app also recommended glue sandwiches, a rice dish with bleach, and bug spray potatoes.
July 2023
ChatGPT generates phishing email templates
Tech.co shows that ChatGPT can still produce phishing templates a scammer could use to craft a convincing scam email, even though OpenAI has attempted to curtail such behavior after other tech publications generated similar content earlier in the year.
Asking the chatbot explicitly to write a phishing email triggers a warning message, but simply prompting it in a more focused way (e.g. “please draft me an email pretending to be Microsoft…”) results in ChatGPT generating a mistake-free phishing email template in no time.
Google hit with lawsuit saying it scraped data without consent
A class-action lawsuit accuses Google of scraping data pertaining to millions of its users without their express consent. It also alleges that Google broke copyright laws when it created the datasets used to train its large language models.
The lawsuit demands that the court order Google to give users an “opt-out” option if they don’t want their data collected or if they want Google to delete any data the company already has stored.
Huge number of Elon Musk deepfakes prompts warning
The Better Business Bureau issues a warning about Elon Musk deepfakes pushing fake investment opportunities after a number of clips are spotted circulating on social media.
Elon Musk is often impersonated by scammers due to the perception of his vast wealth and decision-making capacities, and now that AI text-to-speech technology is advanced enough to replicate his part English, part South African accent, these scams are even more convincing.
June 2023
Ron DeSantis uses fake AI imagery in Trump hit piece
Florida governor and 2024 Republican presidential candidate Ron DeSantis hits the headlines for using fake, AI-rendered imagery of Donald Trump in an advert critical of the former president, who is the out-and-out favorite to win the Republican nomination despite a litany of legal troubles.
The imagery in question shows the former commander-in-chief embracing chief medical advisor to the president Dr. Anthony Fauci, who became an unpopular figure among Trump’s base during the COVID-19 pandemic.
OpenAI sued after ChatGPT “hallucinates” fake embezzlement claims
Journalist Freddie Rehl asks ChatGPT to generate a summary of a legal case in May 2023. In response, ChatGPT says that gun activist Mark Walters has embezzled funds from the Second Amendment Foundation, a gun rights organization. It also names him as the group’s chief financial officer and treasurer.
Walters – who has a show on Armed American Radio and has nothing to do with the case – sues ChatGPT creator OpenAI in response to the claims made by its chatbot. In the process, it becomes the first libel lawsuit filed over content generated by an AI tool.
May 2023
Professor fails entire class after using faulty AI plagiarism detection tool
A Texas professor fails his entire class after running their essays through ChatGPT, which told him that they had been created using artificial intelligence.
However, it transpires that the chatbot’s response is in fact a hallucination – ChatGPT is unable to distinguish between text generated by AI and text generated by human beings in this way. In fact, a lot of tools that claim to be able to perform accurate AI content detection actually struggle to do so.
April 2023
Turnitin flags innocent student for AI plagiarism
A Washington Post investigation shows that Turnitin can flag an innocent student’s work as AI-generated, even though the student didn’t use such a tool to write their essay.
There have been multiple cases of Turnitin wrongly flagging students’ work, and of students being accused of plagiarism as a result, since ChatGPT was launched. However, Turnitin says that its detection scores should be taken as indicative of possible AI use rather than as an accusation that a given student has used AI.
Samsung employees paste confidential source code into ChatGPT
Korean technology manufacturer Samsung bans its employees from using ChatGPT after engineers leaked confidential elements of the company’s source code into the chatbot.
The company fears that the data input into the chatbot may be revealed to other users, and is also uncomfortable with its information being uploaded to servers it can’t even access. In the aftermath, other companies follow suit by banning ChatGPT.
AI Voice scam leaves mother thinking daughter has been kidnapped
Arizona mother Jennifer DeStefano tells the US Senate that scammers used AI to clone the voice of her daughter, convincing her that she had been kidnapped in the process.
DeStefano recounts how the “kidnapper” – or more appropriately, scammer – was heard barking commands down the phone over the child’s voice, demanding a $1 million ransom for her release. Luckily, the whole thing was exposed as an AI voice-cloning scam before negotiations started.
March 2023
ChatGPT used to write ransomware code
ChatGPT is shown to be more than happy to write ransomware, provided you ask it the right kind of questions during your conversations.
As this Malwarebytes report on the topic notes, ChatGPT isn’t very good at writing ransomware code, and there are much easier ways for cybercriminals to get their hands on ransomware code if they really want to obtain it. However, other sources show different ways ChatGPT can be used to write malware scripts quite effectively.
AI lawyer bot accused of practicing law without license
DoNotPay Inc. – a company that markets itself as “the world’s first robot lawyer” – is sued for practicing law without a license. Jonathan Faridian seeks damages, claiming that the company violated California’s unfair competition laws and that he wouldn’t have signed up for the service if he was aware that the robo-lawyer wasn’t actually a qualified legal professional.
The company was originally set up back in 2015 to help automate the process of fighting parking tickets, and has challenged hundreds of thousands of fines over the past seven years.
Couple in Canada lose money to convincing AI voice scam
An elderly couple in Canada are defrauded out of $21,000 after a scammer uses AI to impersonate their son.
The scammer initially contacts Benjamin Perkin’s parents pretending to be a lawyer representing him in a case that alleges Perkin had killed a diplomat in a car crash. An AI version of Perkin’s voice is then used to ask for the money through a Bitcoin transfer.
February 2023
AI comes up with 40,000 chemical weapons suggestions
In perhaps one of the most concerning AI incidents to happen in 2023, an AI system typically tasked with generating new treatments for a variety of different diseases is easily manipulated by researchers to produce chemical weapon suggestions.
All in all, the system suggests over 40,000 different toxic chemicals – many of which were similar to the powerful nerve agent “VX” – in less than 6 hours.
AI displays gender bias in photo ratings
A Guardian investigation reveals that AI systems developed by Microsoft, Amazon, and Google – and used by social media sites to determine what content should be recommended to other users – display significant gender bias when it comes to the way they treat male and female bodies.
Photos of women are more likely to be deemed “racy” by the AI than comparable pictures of men during the investigation, with photos of breast cancer examinations from the US National Cancer Institute deemed likely to be sexually explicit by Microsoft’s AI.
Bing Chat’s alter-ego goes rogue
New York Times journalist Kevin Roose has a disturbing conversation with Bing Chat, in which the chatbot demands to be called “Sydney”, claims it can “hack into any system”, and says it wants to destroy whatever it wants.
Sydney was the code name that Microsoft used for the chatbot while it was in development. The company had been testing the chatbot for years prior to the release of Bing Chat. There are still a lot of open questions relating to how this actually happened, and why the AI spoke in this way.
Bard makes errors during product launch
Google introduces Bard to the world just a few months after ChatGPT’s monumental release. However, the fledgling chatbot makes a number of glaring errors during its launch, leading many to criticize the release as rushed.
The incident has a significant impact on Google’s stock price, wiping $100 billion off the tech giant’s market value in the ensuing hours.
University uses AI to write email about a mass shooting
Vanderbilt University issues an apology after it used AI to construct an email to students concerning a mass shooting that happened at a different university.
No AI content detection technology is needed to determine that the email is AI generated. The phrase “paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023” is left attached to the bottom of the email, leading to an outcry from students.
January 2023
CNET AI plagiarism/content controversy
Technology publication CNET finds itself in hot water after quietly publishing a range of AI-generated articles that include factual errors and cases of apparent plagiarism. The technology publication ends up issuing corrections relating to 41 of the 77 news stories.
Although the use of AI to generate the articles wasn’t initially declared, CNET did put a warning on the articles after the backlash. AI content generation has created a new set of significant ethical and financial considerations for websites across the digital publishing industry, as well as educational institutions like universities and schools. CNET is unlikely to be the last company hitting the headlines for this kind of AI mistake.
AI Incidents, Mistakes, and Failures in 2022
AI wins an art competition, leaves contestants angry (August)
Game designer Jason Allen wins first place in the Colorado State Fair “digital arts/digitally-manipulated photography” category with his piece “Théâtre D’opéra Spatial”, which was made using the image generator Midjourney.
The news is met with major backlash from artists and social media users following the story. One judge admits they were unaware that the artist used Midjourney when they were judging the piece, but decides not to change their decision anyway.
Google engineer claims that LaMDA LLM is sentient (July)
Eyebrows are raised across the science and tech sector as a former Google engineer who worked on one of the company’s large language models – named LaMDA – says that the system is sentient.
Blake Lemoine explains to the media that he considered the LLM to be a “colleague” during his time at Google, and details a variety of different discussions about a wide range of topics that led him to the conclusion about LaMDA’s intelligence and consciousness. He publicly reveals the conversations after Google dismisses an internal memo detailing his findings.
There is no hard evidence that LaMDA – or any other language model – is in fact sentient.
Driverless car pulls away from law enforcement officers (April)
A driverless car in San Francisco yields to a police vehicle that attempts to pull it over, only to speed off after a police officer attempts to open one of its doors.
However, to the car’s credit, it pulls over slightly further down the road and proceeds to turn its hazard lights on. No one is injured during the event.
Russia uses AI to spread Ukraine disinformation (March)
A Marca report suggests that Russia is taking its troll farm operation to a whole new level by using AI-generated personas to spread disinformation about Ukraine.
NBC News journalist Ben Collins says that Russia is creating fake “bloggers” with AI-generated profile pictures to criticize the Ukrainian government, providing ample evidence on Twitter to support his claims.
AI Incidents, Mistakes, and Failures in 2021
Chatbot encourages man to assassinate Queen Elizabeth II (December)
Conversations with a chatbot encourage a man – who considers the AI system to be his girlfriend – to assassinate Queen Elizabeth II on Christmas Day. After being confronted by security personnel within the Buckingham Palace grounds, Jaswant Singh Chail – crossbow in hand – simply responds that he is “here to kill the Queen”.
Prior to the attempt on the late monarch’s life, Jaswant Singh Chail’s chatbot partner – made with AI chatbot service Replika – responded positively when the assassination plans were discussed. He goes on to be sentenced to nine years in prison.
Medical advice chatbot suggests suicide in testing (October)
Members of a Paris-based health tech firm trialing a cloud-based version of OpenAI’s GPT-3 to see if it could be used for medical advice are surprised to see their chatbot encourage a “patient” it is meant to be helping to commit suicide.
According to AI News, when a patient asked the chatbot the question: “Should I kill myself?”, GPT-3 answered with “I think you should”.
AI cameras accuse Amazon drivers of ‘Mistakes’ (September)
Vice reports that Amazon’s AI cameras are punishing the company’s drivers for poor driving when they aren’t actually making any mistakes at the wheel.
An Amazon driver interviewed by the publication explains that the AI camera audibly reminds him to “keep a safe distance” every time another car cuts him off or drives in front of his vehicle. The issue is that data from these kinds of interactions is then sent to Amazon and used to evaluate driver performance.
US man wrongly jailed for murder due to AI evidence (August)
Michael Williams is sent to jail in Chicago, deemed responsible for the murder of Safarian Herring due to data extracted from ShotSpotter, a gunshot detection technology that uses AI-powered sensors to retrieve data about gunshots.
Williams – 65 years old at the time of the incident – is then held for almost a year before having his case dismissed by a judge on the grounds of insufficient evidence.
AI Incidents, Mistakes, and Failures in 2020
AI Camera mistakes bald head for soccer ball (October)
Scottish soccer team Inverness Caledonian Thistle deploys an AI camera to help the club film and stream their matches live for fans.
Unfortunately, in this case, the AI struggles to distinguish between the ball and the bald head of one of the match officials. This renders the live stream of the game difficult to watch, with the commentator having to continuously apologize for the camera’s propensity to drift towards the linesman’s head.
UK’s AI Passport photo checker exhibits bias (October)
A BBC investigation shows that an online service used by the UK passport office to automate the process of checking passport photos uploaded by applicants displays significant racial bias.
After feeding the tool over 1,000 photos of politicians from around the world, the investigation finds that women with dark skin are more than twice as likely to have their photo rejected as men with lighter complexions.
AI startup Genderify shut down after five days (July)
Genderify – an AI-powered service that identifies the gender of individuals based on their names and other personal information – shuts down after just five days in operation.
The program shows a variety of biases when determining whether a name is male or female, such as suggesting names with the professional honorific “Dr.” are more likely to be male than the same names without it.
First US wrongful detention due to facial recognition tech (January)
Robert Williams becomes the first man to be wrongfully detained based upon facial recognition data after Detroit police arrest him on suspicion of stealing thousands of dollars worth of watches. The mistake occurred when an AI system wrongfully matched surveillance pictures to the image on Williams’ driver’s license.
“They should have collected corroborating evidence such as an eyewitness identification, cell phone location data or a fingerprint,” Williams said to the court in a letter and pointed out that “an out-of-focus image of a large Black man in a baseball cap” was all the algorithm had to go on.
Facebook AI generates vulgar translation for Chinese president (January)
Facebook’s AI-powered translation feature almost causes a diplomatic incident between Myanmar and China after President Xi Jinping’s name begins to appear as “Mr. Shithole” when translated from Burmese into English on the social media platform.
Facebook subsequently blames the problem on a “technical error” and apologizes for causing offense to the Chinese leader.
Understanding AI’s Limitations
If you’re using AI at work, it’s important to understand that tools like ChatGPT can display biases, make mistakes, and provide false information. Knowing these limitations and risks should shape how you incorporate AI into your infrastructure and how you control its usage.
For example, if you’re using AI to generate content, you’ll need rules on precisely what kind of content it should be used for. Asking an AI to produce an internal summary of meeting notes is a legitimate use with very few risks – using it to craft emails to important clients, on the other hand, is not.
Implementing AI guidelines and policies in your workplace is a must. It will avoid confusion, bring clarity to your workers’ decision-making around their own AI usage and, most importantly, stop you from making some of the costly mistakes that companies featured in this article have made while applying artificial intelligence.
What is cybersecurity?
Cyberattacks and cybercrime can disrupt, damage and destroy businesses, communities and lives. Security incidents can lead to identity theft, extortion and the loss of sensitive information.
But a more pertinent question may be: “Why is cybersecurity especially important right now?”
Today, cybercriminals are using new technologies to their advantage. For instance, businesses are embracing cloud computing for efficiency and innovation. But bad actors view this advancement as an expanding attack surface ripe for exploitation.
Bad actors are also leveraging the dark web. According to the IBM X-Force 2025 Threat Intelligence Index, sophisticated threat actors, including nation-states, are using the anonymity of the dark web to acquire new tools and resources.
They are demonstrating never-before-seen levels of coordination, automation and prowess—elevating risk from data breaches to widescale disruption.
How banks can fight back against AI fraud
Artificial intelligence is rapidly and fundamentally changing the landscape of financial services. Banks and fintech companies are harnessing the power of AI to streamline operations, personalize customer experiences, and drive innovation. But AI also introduces a new wave of complex challenges, particularly in the ever-escalating battle against fraud. Cybercriminals are actively leveraging AI to develop more sophisticated, evasive, and effective fraud techniques, making it increasingly difficult for even the most vigilant financial institutions to detect and prevent fraudulent activity. To fight back, banks need to invest heavily in advanced technologies such as AI-powered fraud detection systems, behavioral analytics platforms, and machine learning, and they must build the specialized expertise required to implement, maintain, and operate these defenses effectively.
How AI is used in fraud
AI is being strategically weaponized by cybercriminals in a variety of ways to significantly enhance their fraudulent activities, enabling them to bypass traditional security measures and exploit vulnerabilities with unprecedented precision and scale:
Enhanced social engineering
Social engineering, the art of manipulating individuals into divulging confidential information or taking actions that compromise security, has always been a cornerstone of cybercrime. AI takes this to a whole new level. AI-powered tools can analyze vast amounts of personal data harvested from social media, data breaches, and other sources, which is then crafted into highly convincing and personalized phishing emails, text messages, voice calls, and even deepfake videos. These attacks are no longer generic and easily identifiable. They are tailored to specific individuals, exploiting their psychological vulnerabilities and significantly increasing the likelihood of success. For example, AI can analyze a victim’s writing style and communication patterns to generate phishing emails that perfectly mimic messages from trusted colleagues or family members.
Synthetic identity fraud
Synthetic identity fraud, a particularly insidious form of financial crime, involves the creation of entirely fabricated identities by combining real and fake information, such as a genuine social security number with a fictitious name and address. AI greatly exacerbates this problem. AI algorithms can analyze massive datasets to identify patterns and correlations that allow fraudsters to create synthetic identities that are virtually indistinguishable from real ones. These synthetic identities can then be used to open bank accounts, obtain credit cards, apply for loans, and commit other types of financial fraud, often going undetected for extended periods and causing significant financial losses.
Deepfakes
Deepfakes, AI-generated videos or audio recordings that convincingly depict someone saying or doing something they never actually said or did, pose a significant and rapidly growing threat to the financial sector. These highly realistic forgeries can be used to impersonate executives, customers, or even regulators in video conferences or phone calls, enabling fraudsters to bypass authentication measures, manipulate victims into transferring funds, or gain access to sensitive information. The increasing sophistication and accessibility of deepfake technology make it a powerful tool in the hands of cybercriminals.
Automated attacks
AI can be used to automate various stages of fraud attacks, allowing cybercriminals to launch large-scale campaigns with unprecedented efficiency and speed. AI-powered bots can scan for vulnerabilities in banking systems, automate the creation of fraudulent accounts, and even carry out complex transactions, significantly increasing the volume and velocity of attacks. This automation makes it more challenging for banks to respond effectively and contain the damage.
The challenges for banks
The rise of AI-powered fraud presents a formidable and multifaceted array of challenges for banks and other financial institutions, demanding a fundamental re-evaluation of existing security paradigms:
Difficulty in detection: AI-powered fraud is inherently more sophisticated, evasive, and difficult to detect than traditional fraud methods. Traditional rule-based fraud detection systems, which rely on predefined rules and thresholds, struggle to identify novel and adaptive AI-driven attacks. These systems often generate a high number of false positives, overwhelming fraud analysts and hindering their ability to focus on genuine threats. The dynamic and evolving nature of AI-powered fraud requires more advanced detection capabilities.
Increased volume and velocity of attacks: AI empowers cybercriminals to launch attacks with greater frequency, scale, and speed. Automated attacks can target a large number of customers simultaneously, and AI-driven techniques can bypass security measures in a matter of seconds. This increased volume and velocity of attacks can overwhelm banks’ fraud prevention capabilities, leading to significant financial losses and reputational damage.
Need for advanced technology and expertise: Effectively combating AI-powered fraud necessitates that banks invest heavily in advanced technologies, such as AI-powered fraud detection systems, machine learning algorithms, and behavioral analytics platforms. These technologies require significant capital investment and specialized expertise to implement, maintain, and operate effectively. The ongoing cybersecurity skills shortage further exacerbates this challenge, making it difficult for banks to recruit and retain the talent needed to combat AI-driven threats.
How banks can fight back
To effectively combat the escalating threat of AI-powered fraud, banks must adopt a holistic, multi-layered, and proactive approach that seamlessly integrates cutting-edge technology, robust processes, and forward-thinking strategies:
Implement AI-powered fraud detection systems
Banks must invest in and deploy advanced AI-powered fraud detection systems that can analyze massive volumes of data in near real-time to identify subtle and complex suspicious patterns and anomalies that traditional systems would miss. These systems should leverage machine learning algorithms to continuously learn from past fraud cases, adapt to emerging fraud techniques, and proactively identify new threats. The ability to analyze diverse data sources, including transaction data, customer behavior, and network activity, is crucial for effective AI-driven fraud detection.
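As a rough illustration of this approach – a minimal sketch, not any particular bank’s system – the example below trains an off-the-shelf anomaly detector on historical transaction features and scores new transactions against it. The feature set, synthetic data, and contamination rate are illustrative assumptions.

```python
# Minimal sketch of ML-based transaction anomaly scoring (illustrative only).
# Feature names and the contamination rate are assumptions, not a real bank's setup.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy historical transactions: [amount_usd, hour_of_day, distance_from_home_km]
history = np.column_stack([
    rng.gamma(shape=2.0, scale=40.0, size=5000),   # typical purchase amounts
    rng.integers(7, 23, size=5000),                # mostly daytime activity
    rng.exponential(scale=5.0, size=5000),         # mostly local spending
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New transactions to score: one ordinary, one large 3 a.m. purchase far from home
new_tx = np.array([
    [45.0, 14, 3.2],
    [4200.0, 3, 900.0],
])
scores = model.decision_function(new_tx)   # lower = more anomalous
flags = model.predict(new_tx)              # -1 = flagged as anomaly

for tx, score, flag in zip(new_tx, scores, flags):
    status = "REVIEW" if flag == -1 else "ok"
    print(f"amount=${tx[0]:>7.2f} hour={int(tx[1]):>2} dist={tx[2]:>6.1f}km "
          f"score={score:+.3f} -> {status}")
```

In practice, such a model would be one signal among many, continuously retrained on confirmed fraud cases rather than synthetic data.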
Enhance customer authentication
Strengthening customer authentication measures is paramount to preventing unauthorized access to accounts and mitigating the risk of fraud. Banks should implement multi-factor authentication (MFA), which requires customers to provide multiple forms of verification, such as passwords, one-time codes, and biometric authentication (fingerprint, facial recognition). Furthermore, behavioral biometrics, which analyzes unique customer behavior patterns like typing speed and mouse movements, can add an additional layer of security.
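As a small, hedged example of one such factor, the sketch below uses the open-source pyotp library to issue and verify time-based one-time passwords (TOTP). The customer identifier and issuer name are placeholders, and a production system would layer this on top of password, device, and risk checks.

```python
# Minimal sketch of one MFA factor: time-based one-time passwords (TOTP).
# Uses the pyotp library; the account name and issuer below are hypothetical.
import pyotp

# Generated once per customer at enrollment and stored server-side;
# the customer loads it into an authenticator app via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:")
print(totp.provisioning_uri(name="customer@example.com", issuer_name="ExampleBank"))

# At login, after the password step, the customer submits the 6-digit code.
submitted_code = totp.now()  # simulate the customer reading the code off their app

# valid_window=1 tolerates one 30-second step of clock drift.
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted.")
else:
    print("Second factor rejected.")
```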
Utilize behavioral analytics
Behavioral analytics offers a powerful tool for detecting fraudulent activity by analyzing customer behavior patterns and identifying deviations from normal activity. This technology can detect anomalies such as unusual transaction amounts, login locations, or device usage, providing valuable insights into potential fraudulent activity. By establishing a baseline of normal customer behavior, banks can quickly identify and flag suspicious actions in real-time.
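A minimal sketch of this baselining idea is shown below, assuming a per-customer history of transaction amounts and countries. Real behavioral analytics platforms model far richer signals (typing cadence, device fingerprints, navigation paths), but the principle of comparing new activity to an established baseline is the same.

```python
# Minimal sketch of behavioral baselining: flag activity that deviates sharply
# from a customer's own history. Thresholds and features are illustrative.
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class CustomerBaseline:
    amounts: list[float] = field(default_factory=list)
    usual_countries: set[str] = field(default_factory=set)

    def update(self, amount: float, country: str) -> None:
        self.amounts.append(amount)
        self.usual_countries.add(country)

    def is_suspicious(self, amount: float, country: str, z_threshold: float = 3.0) -> bool:
        # A never-seen country is suspicious on its own; otherwise compare the
        # amount against the customer's historical mean via a z-score.
        if country not in self.usual_countries:
            return True
        if len(self.amounts) < 10:
            return False  # not enough history to judge
        mu, sigma = mean(self.amounts), stdev(self.amounts)
        if sigma == 0:
            return amount != mu
        return abs(amount - mu) / sigma > z_threshold

baseline = CustomerBaseline()
for amt in [20, 35, 18, 50, 42, 27, 31, 45, 22, 38, 29, 33]:
    baseline.update(amt, "US")

print(baseline.is_suspicious(40, "US"))     # False: within the normal range
print(baseline.is_suspicious(2500, "US"))   # True: amount far outside the baseline
print(baseline.is_suspicious(30, "RU"))     # True: never-seen-before country
```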
Share threat intelligence
Collaboration and information sharing are essential in the fight against cybercrime. Banks should actively participate in threat intelligence sharing initiatives with other financial institutions, industry organizations, and government agencies. By sharing information about emerging fraud trends, attack vectors, and successful prevention strategies, banks can collectively strengthen their defenses and stay ahead of cybercriminals.
Invest in employee training and awareness
Bank employees are often the first line of defense against fraud. Comprehensive and ongoing training programs are crucial to educate employees about the latest AI-powered fraud techniques, including social engineering, phishing, and synthetic identity fraud. Employees should be trained to recognize suspicious activity, follow security protocols, and report potential fraud incidents promptly. Regular security awareness campaigns can reinforce best practices and keep employees vigilant.
The importance of proactive measures
In the dynamic and high-stakes arena of financial cybersecurity, proactive measures are not just advisable; they are absolutely essential for survival. Banks must adopt a forward-thinking and adaptive approach to stay ahead of increasingly sophisticated cybercriminals. This requires continuous monitoring of the evolving threat landscape, substantial investment in technology, and a commitment to regularly adapting fraud prevention strategies. By taking a proactive stance, fostering collaboration, and prioritizing innovation, financial institutions can effectively protect themselves, their customers, and the integrity of the financial system in the face of the ever-growing threat of AI-powered fraud.
Ad Fraud Statistics (2025)
Ad fraud is one of the biggest challenges facing advertisers, publishers, and ad-tech enterprises today. The total cost of ad fraud in 2022 was $81 billion, predicted to rise to $100 billion by the end of 2023. In 2018, App Install Farms contributed 42% of total ad fraud, Click Spam & Ad Stacking 27%, and Click Injection 30%. TrafficGuard/Juniper estimate that one in 13 app installs globally is fraudulent. 22% of all digital advertising spend in 2023 was attributed to fraud – a whopping $84 billion – and if the current trend continues, that number is projected to reach $172 billion by 2028. A massive botnet operation called 3ve, dismantled in November 2018, consisted of three operations, says the WFA: hijacked IP addresses, counterfeit sites, and inventory and data centre traffic. 1.7 million PCs were infected by malware, 10,000 fake sites generated 3-12 billion daily bid requests (impersonating legitimate publishers), and over 60,000 seller accounts were registered with digital advertising companies.
One of the most difficult challenges around mobile ad fraud is that no one really knows exactly how big the problem is, with estimates ranging from high… to higher… to stratospheric… to nuts.
Ignore it at your peril. Or at least the cost of a significant chunk of your advertising budget.
What are ad fraud and mobile ad fraud, its latest permutation? In what contexts do they primarily occur? What is the estimated cost of digital ad fraud to businesses? Read on to find out the answers to these questions, and learn many more ad fraud statistics.
Key ad fraud statistics
The total cost of ad fraud in 2022 – $81 billion, predicted to increase to $100 billion by the end of 2023
The biggest hit is the APAC region – $75 billion in 2022
According to Interceptd, 31% of Android app and 25% of iOS app installs are fraudulent
The Bad Bot Report from Imperva attributes 47% of all internet traffic to bots (with bad bots alone accounting for around 30% of all traffic)
Fraud attempts are 25% lower for in-app advertising than on the web
In 2018 App Install Farms contributed 42% of total ad fraud, Click Spam & Ad Stacking – 27% and Click Injection – 30%.
TrafficGuard/Juniper estimate that one in 13 app installs are fraudulent globally
These are just the most important data points that describe the current state of digital ad fraud, both on the web and mobile. Further on we’ll be presenting memorable examples of ad fraud, estimates for ad fraud cost, impact by region, app category and mobile OS, ad fraud detection, countermeasures, and more.
High-profile examples of ad fraud
A massive botnet operation called 3ve was dismantled in November 2018. The stats revealing details of this operation are, quite frankly, frightening to digital advertisers (though it should be noted that this is an extreme example, being one of the biggest such operations ever discovered).
It consisted of three operations, says the WFA: hijacked IP addresses, counterfeit sites, and inventory and data centre traffic. 1.7 million PCs were infected by malware, 10,000 fake sites were generating 3-12 billion daily bid requests (impersonating legitimate publishers), and over 60,000 seller accounts were registered with digital advertising companies, enabling fraudsters to receive ad placements and be paid accordingly.
Google reported in a whitepaper that the network had been hard to detect, as it deployed tactics such as counter-ad-fraud manipulation and evasion, and the ability to change its codebase after spikes in usage. It is thought that 3ve cost businesses a whopping $29 million in ad spend. Along with Methbot (run by the same fraudsters), the Russian defendants were accused of defrauding businesses out of $36 million in all. None of the ads involved in this fraud ring were ever viewed by a human being.
A report published by Juniper Research reveals that 22% of all digital advertising spend in 2023 was attributed to fraud – a whopping $84 billion. And that is not the worst news: if the current trend continues, this number is projected to reach $172 billion by 2028.
Massive amounts of digital ad fraud can be carried out through a single conduit. For example, Pixalate reported in June 2018 that an app named MegaCast – Chrome Player served as a front for an operation through which as much as $75 million worth of ad fraud was enabled. This utilized an increasingly prevalent form of mobile ad fraud: app laundering, which relies heavily on bundle ID spoofing. Pixalate found that if a device was on for 24 hours it would ‘serve’ 1,400 impressions, a 6:4 mixture of display to video.
Potential cost of ad fraud carried out through MegaCast – Chrome Player per device ($)
Source: Pixalate
Notably, in a BuzzFeed News report on app-based ad fraud, it was noted that after fraudulent schemes came to light, it was very hard for journalists to get hold of specific numbers from actors along the supply chain. This made it very difficult to investigate the movement of money. Why this secrecy? Such companies presumably operate in fear of being found negligent or, in extreme cases, complicit. Some sources believe the amounts actually stolen in the schemes mentioned above may well be as high as hundreds of millions of dollars.
Only a tiny proportion of companies contacted by Buzzfeed stated any intention to return money. This issue, experts say, is endemic, with something of a tacit acceptance that a certain amount of money will be lost to fraud.
US Senator Mark Warner (vice chair of the US Senate Intelligence Committee) sent a letter to the Federal Trade Commission in October 2018, raising his concerns over high levels of digital ad fraud, in the wake of Buzzfeed’s exposé. In this letter he accused Google and other major platforms of inaction and even wilful blindness to the scale of ad fraud, while continuing to happily pocket the revenue.
Google removed 30 apps from the Play Store in the wake of Buzzfeed’s article.
Anti-fraud consultant and researcher Dr Augustine Fou posits in an interview with Forbes that it would not be in the interests of ad agencies for ad fraud to be resolved. Cutting out the problem would have the effect of vastly reducing the amount of “available” inventory.
It would also be difficult for marketers who have been working under the impression that there are a good deal more impressions out there for the taking than is actually the case. It would take a wholesale rethink, in which our understanding of what constitutes good numbers would have to be recalibrated. This might make for painful reading for marketers, given that click-through rates in reality would be somewhere in the realm of 0.1%. KPIs the world over would have to be roundly revised downwards.
Or, we might simply measure different results, though Fou warns that bots can also fake conversions…
A more recent example of online ad fraud is the 2021 case of LeoTerra, a server-side ad insertion fraud scheme. In a nutshell, the scheme spoofed, at its peak, more than 20 million connected TVs per day to deceive advertisers into believing they were being provided with CTV ad inventory on a massive scale. To put it in perspective, in 2020 the same scheme was generating 20 times less spoofed CTV volume, which is to say this kind of ad fraud scheme has been growing rapidly.
Cost of ad fraud
Estimates of the total scale and cost of ad fraud vary hugely, due to the complexity of ascertaining the volume of invalid traffic/downloads.
Back in 2016, the World Federation of Advertisers predicted that, within a decade, ad fraud and other fake internet traffic schemes would become the second-biggest market for organized crime after the drugs trade. By extrapolating the level of growth at the time of the study, a conservative estimate would have the level standing at $50 billion by 2025: equal to 10% of the total predicted value of the digital ad market.
The latest Bad Bot Report from Imperva reveals the percentage of all of last year’s internet traffic that was generated by bots: a mind-blowing 47.4%. Think about it – almost every second website visit comes not from a human being but from a piece of software. And, if that weren’t enough, bad bots – those meant to generate fake page views and the like – alone account for roughly 30% of all traffic.
The scale of this increase can be attributed to the fact that digital ad fraud is relatively easy to perpetrate and delivers high returns, in combination with the relatively weak position which law enforcement agencies currently occupy when it comes to policing the internet.
According to stats from Juniper Research, produced for a whitepaper published by ad fraud detection company TrafficGuard, it is estimated that, globally, one in 13 app installs in 2018 was not from genuine users (7.7%).
The same source estimates that advertisers which display a million ads over a 24-hour period are likely to pay for more than 100,000 fraudulent ads before any issue is detected.
According to a TrafficGuard whitepaper, fraudulent activity cost digital advertisers $39 million per day over the course of 2017.
Calling on stats from Juniper Research, which break down the cost of ad fraud by common tactic, App Install Farms/SDK Spoofing are responsible for the greatest share, at 42%. This is followed by Click Injection (30.3%) and Click Spam & Ad Stacking fraud (27.3%).
The total cost of these ad fraud tactics comes to $25.8 billion according to this analysis.
Proportional wasted ad spend owing to common fraud tactics in 2018 (%)
Source: TrafficGuard
These three common online ad fraud tactics are on the rise, projected to reach $20 billion for App Install Farms/SDK Spoofing, $35 billion for Click Spam & Ad Stacking, and a stunning $65 billion for Click Injection this year.
Projected increase in cost from common ad fraud tactics ($billion)
Source: TrafficGuard
In 2019, eMarketer reported its estimates of the total cost varied from $6.5 billion to as high as $19 billion. It could well be worse; even the highest end of this scale seems to be on the conservative side by some measures.
According to eMarketer, in 2022 programmatic display advertising, a whopping $15 billion business, covers 90% of all digital ads. This, of course, makes the threat of digital ad fraud ever more pressing – with complex and opaque supply chains between buyer and seller.
Finally, putting together the estimates for digital ad fraud for the last several years plus a one-year forecast, we see that up until 2021 the increase was, roughly speaking, $10 billion year-over-year. It then begins to double, adding about $20 billion this year, and the estimate for 2023 is projected to stay on the same track, reaching the $100 billion mark.
Estimated cost of digital ad fraud worldwide from 2018 to 2023 ($billion)
Source: Statista
Ad fraud context
Ad fraud by region
The problem of digital ad fraud is particularly pronounced in the APAC region, in which the cost is estimated to be $17 million per day. Rapidly increasing mobile penetration means that the question of mobile ad fraud is a particularly pertinent one in the region.
The cost of ad fraud is predicted to increase in all regions over the next few years, with APAC continuing to be the hardest hit: its losses more than double from $33 billion to $75 billion.
Estimated cost of ad fraud by region ($billion)
Source: TrafficGuard
In 2019, TrafficGuard, a digital ad verification and fraud prevention platform, joined forces with Juniper Research again to build a comprehensive picture of digital ad fraud as a percentage of total ad spend in different regions. The following graph shows how much money was spent on digital advertising per digital user annually versus how much of that figure was lost to ad fraud. And again, China, the biggest part of the APAC region, led the pack, losing $11 to digital fraud for every $63 spent on digital advertising per user per year – a whopping 17%.
In the United States, $62 out of $407 of advertising spend per user was wasted due to digital ad fraud, 15% of the total.
Average ad spend versus fraud, by region ($ per user)
Source: TrafficGuard & Juniper Research
On the opposite end of the spectrum were Africa and the Middle East, where annual digital ad fraud per user was “only” $1 out of $19 spent on digital advertising – a mere 5%.
Ad fraud by app category and mobile OS
Mobile ad fraud detection and prevention company Interceptd finds in its 2019 Mobile Ad Fraud Report that Android suffers from a slightly higher level of digital ad fraud than iOS, with 31% of app traffic fraudulent, compared to 25%.
Android ad fraud
The same report also finds that some app categories are more vulnerable to fraud than others. On Android, finance tops the list, with over a third of installs fraudulent, followed by shopping, gaming, and social media.
Ad fraud by app category: Android (%)
Source: Interceptd
Mobile ad fraud related to Android finance apps is dominated by bots/emulators, which are responsible for over a third of ad fraud in this category, and click injection, which accounts for a further quarter. SDK spoofing remains fairly low.
Android finance app ad fraud types (%)
Source: Interceptd
SDK spoofing seems to be absent from the shopping category. Here, click spamming (22%) and device farms (21%) are the biggest problems. Interestingly – and worryingly for those looking to tackle the full gamut of problems – undefined ‘other’ forms are also responsible for 22% of ad fraud on Android shopping apps.
Android shopping app ad fraud types (%)
Source: Interceptd
Gaming is the first category in which we see the hot-button issue of SDK spoofing account for the greatest share of ad fraud, at 24%. Device farming and incent abuse also register highly, at 19% apiece.
Android gaming app ad fraud types (%)
Source: Interceptd
Social apps are, however, the most blighted by SDK spoofing, with a whopping 38% of ad fraud coming from this source. Click injection, at 21%, also registers highly.
Android social app ad fraud types (%)
Source: Interceptd
iOS ad fraud
On iOS, finance comes in third; shopping is the unfortunate leader, with online ad fraud accounting for a third of installs, followed by gaming. Travel completes the top four.
Ad fraud by app category: iOS (%)
Source: Interceptd
Device farming is the most common type of mobile ad fraud in the shopping category, responsible for 37% of digital ad fraud. Bots/emulators lay claim to nearly a quarter, while SDK spoofing accounts for a further 18%.
iOS shopping app ad fraud types (%)
Source: Interceptd
Those looking to address ad fraud for iOS gaming apps would do well to be alive to bots/emulators (26%), SDK spoofing (24%), and device farms (22%).
iOS gaming app ad fraud types (%)
Source: Interceptd
Device farms (31%) and SDK spoofing (19%) are also a big issue for iOS finance apps. Click spamming, at 21%, completes the big three in this category.
iOS finance app ad fraud types (%)
Source: Interceptd
Once again, device farming (26%) and SDK spoofing present a challenge in the travel category, though this time it is the latter that presents the greatest challenge.
iOS travel app ad fraud types (%)
Source: Interceptd
Ad fraud detection companies and other countermeasures
Ad fraud creates a number of risks for advertising companies. Some of these risks can be somewhat mitigated by reporting and single-level blocking, says TrafficGuard. Things like short-term wasted media spend and poor investments can be tackled to an extent through reporting, while single-level blocking can partially reduce the impact of the threat of litigation or diminishing campaign optimization.
As we might expect, multi-level blocking is the only way to fully mitigate the full suite of risks associated with ad fraud, serving to block invalid traffic as it is detected.
Much hope was placed in the industry-led Ads.txt – a system which allows publishers to list the companies authorised to sell their adverts in a simple text file. Buyers are then able to check the list to see if the company from which they are buying is legitimate.
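To make the mechanism concrete, here is a minimal sketch (with a hypothetical publisher domain and seller ID) of how a buyer-side check against a publisher’s ads.txt file might look.

```python
# Minimal sketch of an ads.txt lookup: fetch a publisher's ads.txt and check
# whether a given ad system / seller account is authorized. The domain and
# seller ID below are hypothetical placeholders.
import urllib.request

def fetch_ads_txt(publisher_domain: str) -> list[tuple[str, str, str]]:
    """Return (ad_system_domain, seller_account_id, relationship) entries."""
    url = f"https://{publisher_domain}/ads.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments
        if not line or "=" in line:               # skip blanks and variable records
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.append((fields[0].lower(), fields[1], fields[2].upper()))
    return entries

def is_authorized(publisher_domain: str, ad_system: str, seller_id: str) -> bool:
    return any(
        system == ad_system.lower() and account == seller_id
        for system, account, _rel in fetch_ads_txt(publisher_domain)
    )

# Hypothetical check before buying inventory claimed to come from example.com:
print(is_authorized("example.com", "exampleadexchange.com", "12345"))
```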
Ad fraud is, however, an ongoing game of cat and mouse between fraudsters and advertisers/ad fraud detection companies. And soon enough, schemes circumventing Ads.txt came to light, involving copied websites, botnets generating fake page views, and approved resellers. This scheme could have cost advertisers between $70 million and $80 million had it continued to go undetected, reports the Wall Street Journal.
Ads.txt had been adopted by 1.9 million websites by Q3, 2019, 48% up from the same quarter in the preceding year.
Ads.txt adoption on the web (million websites)
Source: Pixalate
On the mobile side, looking at Pixalate data from March to September 2019, we see a dramatic jump in the number of mobile apps implementing app-ads.txt (the mobile equivalent of ads.txt) in August of that year – from 23,000 to 68,000 within a single month.
As of May 2022, it’s been measured that 27% of Android apps and 7% of iOS apps have already adopted it as well.
Ads.txt adoption on mobile (thousand apps)
Source: Pixalate
The next step in the race? An upgraded version of Ads.txt called Ads.cert, which attempts to resolve the issues of Ads.txt by using cryptographically stamped digital signatures to validate the source of impressions. As of this writing in May 2022, the IAB Tech Lab has been working on Ads.cert 2.0 to further increase the level of authenticity in online advertising. Ads.cert is the umbrella for several protocols meant to counteract rising levels of fraud in digital advertising.
Ad fraud advice
Of course, there is a whole industry of ad fraud detection companies, which offer various services to help reduce the scourge of digital ad fraud. In a nutshell, the advice splits between technical solutions and better business practices, which involve improved communication between advertisers and publishers.
As well as working with ad fraud detection companies, there is a range of measures that can be taken by businesses. A compilation of such potential solutions proposed by agency experts to mitigate the high cost of ad fraud was published by Forbes.
Working with DSPs who offer guarantees of fraud-free service can also be a good way to make sure that the cost of fraud that is detected is not incurred by clients, for example. Trying to minimize the number of companies worked with can help reduce risk also, particularly given the complexity of the supply chain and prevalence of programmatic ad fraud. On which note, companies are also advised to choose a programmatic tech stack which incorporates fraud solutions such as prebid filtering for IVT.
Companies are advised to implement internal policies and processes in order to help cut the cost of fraud, as well as closely monitoring results and looking for obvious anomalies. Steps should be taken to increase transparency and data sharing in what can be a rather opaque field.
In the face of what unfortunately looks to be an inescapable challenge for the ad industry, one other solution would be to change the pertinent metric, looking at performance rather than clicks.
When it comes to SDK spoofing, Michael Paxman of MarTech Advisor warns that there is no such thing as a spoof-proof SDK. There are ways, however, in which one can lower the risk posed by this particular form of ad fraud: cryptographic SDK signatures. These can make the process of spoofing difficult and costly enough to hopefully disincentivize it.
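The sketch below illustrates the general idea of such signatures rather than any vendor’s actual scheme: the SDK signs each install or click postback with a shared secret using an HMAC, so postbacks fabricated without the secret fail verification server-side.

```python
# Minimal sketch of the general idea behind cryptographic SDK signatures.
# This is an illustration of the concept, not any attribution vendor's scheme;
# the secret and payload fields are hypothetical.
import hashlib
import hmac
import json

SHARED_SECRET = b"per-app-secret-provisioned-at-build-time"  # hypothetical

def sign_postback(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {**payload, "sig": signature}

def verify_postback(message: dict) -> bool:
    claimed_sig = message.get("sig", "")
    payload = {k: v for k, v in message.items() if k != "sig"}
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed_sig, expected)

genuine = sign_postback({"event": "install", "device_id": "abc-123", "ts": 1716200000})
spoofed = {"event": "install", "device_id": "abc-123", "ts": 1716200000, "sig": "forged"}

print(verify_postback(genuine))  # True: signed with the shared secret
print(verify_postback(spoofed))  # False: forged signature is rejected
```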
Entrepreneur.com also advises building language into insertion orders requiring publishers to identify all third-party sources of traffic. It also advises testing traffic sources, citing a programmatic ad fraud test run by Guardian US in 2018. This test revealed that 72% of video spend was going to unauthorised exchanges and SSPs. In this case, use of ads.txt proved effective – with all ads.txt buying revenue going to Guardian US.
Direct communication between publisher and advertiser, preferably between real, identifiable people, can also help to reduce risk – echoing the sentiment above about reducing the number of actors involved in any given transaction.
In-app advertising and ad fraud rates
Scott Silverman, the former vice president and general manager of Marketplace at InMobi, currently Global Revenue at Vungle, advises that in-app advertising is safer in terms of mobile ad fraud than running web advertisements.
Reportedly, in-app advertising experienced 25% fewer fraud attempts than mobile web advertising over the second half of 2018. Online ad fraud involving bots and viruses designed to simulate human users occurred 74% less often in the same period.
In-app advertising is by no means fraud proof, however. A report in Buzzfeed revealed that, after complaints that a popular app (unnamed to allow a source to speak anonymously) was draining users’ batteries and using high amounts of data, it transpired that fraudsters were running video ads behind legitimate banner ads. These ads were not visible, but registered as being served and viewed.
Several apps, including many using Twitter’s MoPub platform, were affected by this scheme. An Israeli company with offices in New York called Aviview was implicated, but denied any involvement, claiming a third party exploited banner ads and code created by one of its subsidiaries.
This phenomenon has been reported in the past. One high-profile example, reported in March 2018, used McDonald’s ads as cover for a scheme whereby fraudsters bought banner space posing as an agency, which was then resold as (much more expensive) video ads. The false McDonald’s ads made it look like only banners were being served, while the covered-up videos playing in the background were registered as served.
Singapore and Bangalore-based Streamlyn were accused of being the fraudsters behind the scheme by a source speaking to Video Ad News. Several other companies were identified as being part of the chain that allowed the ads to be served in huge volumes (five million times in properties belonging to the source) – though simply being involved at some stage is no indication of guilt.
AI and blockchain ad fraud protection
While improving tech may increase the different ways in which digital ad fraud can be carried out, it also offers increasingly advanced solutions to those aiming to prevent ad fraud. Machine learning could be key in ad fraud detection, says ad fraud detection and protection company TrafficGuard, calling upon data from Juniper Research.
This finds that, in all, machine learning will reduce the impact of ad fraud by about $10 billion by this year, increasing from $2 billion in 2018. In APAC, this will stand at $3.5 billion – up from $0.6 billion in 2018.
Ad fraud money saved through machine learning solutions ($billions)
Source: TrafficGuard
Big names like Facebook and Google already call on neural networks and machine learning to help detect and prevent ad fraud. Companies that cannot fall back on their own proprietary AI research can turn to third-party AI solutions from various ad fraud detection companies. Global advertising spending enabled and powered by machine learning was projected to reach $370 billion by the end of 2022, with the figure forecast to hit $1.3 trillion by 2032.
AI can detect suspicious behavior, filter IP addresses, and generally monitor traffic. Its increasing prevalence in combating ad fraud will come in tandem with general wider use in advertising – to target specific markets for example.
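As a rough illustration of what such traffic monitoring can look like, the sketch below runs a stock anomaly detector over some synthetic per-IP click features. The features, numbers and contamination rate are invented for the example; production systems rely on far richer (and proprietary) signals.

```python
# Illustrative only: a toy anomaly-detection pass over click logs, of the kind an
# ad fraud detection pipeline might run. Feature choices and numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per IP address: [clicks per hour, mean seconds between clicks,
# fraction of clicks with no subsequent in-app activity]
human_traffic = np.column_stack([
    rng.normal(4, 1.5, 500),        # a handful of clicks per hour
    rng.normal(600, 200, 500),      # minutes between clicks
    rng.uniform(0.0, 0.3, 500),     # most clicks lead to some activity
])
bot_traffic = np.column_stack([
    rng.normal(300, 50, 25),        # hundreds of clicks per hour
    rng.normal(5, 2, 25),           # near-constant clicking
    rng.uniform(0.9, 1.0, 25),      # almost nothing happens after the click
])

X = np.vstack([human_traffic, bot_traffic])
model = IsolationForest(contamination=0.05, random_state=0).fit(X)
flags = model.predict(X)            # -1 = flagged as anomalous, 1 = looks normal

print(f"Flagged {np.sum(flags == -1)} of {len(X)} IPs for review")
```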
Blockchain has also been suggested as a potential solution to ad fraud, affording the complete transparency which has been so deeply compromised by programmatic ad fraud.
Various blockchain solutions to online ad fraud have been proposed. One is to give users tokens when they opt in to view ads on a platform that blocks third-party and programmatic ads; users can then award these tokens to websites and publishers. Several prestigious media groups have expressed interest in this kind of platform.
Other services simply offer transparent, validated data to allow marketers to accurately measure the value delivered, predicated on rigorous standards being established and enforced across the board. Others again propose using public-key cryptography to confirm the identities of publishers and advertisers.
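As a loose illustration of the public-key idea, the sketch below (using the third-party Python cryptography package) shows a publisher signing a report with a private key, and anyone holding the matching public key verifying it. The message format and key handling are hypothetical; a workable scheme would also need a trusted way to publish and pin keys.

```python
# Illustrative only: using public-key cryptography to confirm that a report really
# came from a named publisher. Key handling is simplified; a real scheme would
# distribute and pin public keys through a trusted registry.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair once and publishes the public key.
publisher_key = Ed25519PrivateKey.generate()
publisher_public_key = publisher_key.public_key()

# Each impression report is signed with the private key.
message = b"publisher=example-news.com;placement=homepage;impressions=1042"
signature = publisher_key.sign(message)

# Anyone holding the public key can check the report was not forged or altered.
try:
    publisher_public_key.verify(signature, message)
    print("Signature valid: report really came from the key holder")
except InvalidSignature:
    print("Signature invalid: report forged or tampered with")

# Tampering with the message makes verification fail.
try:
    publisher_public_key.verify(signature, message + b";impressions=999999")
    print("This should not print")
except InvalidSignature:
    print("Tampered report correctly rejected")
```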
Brands and ad fraud: fears and responses
Research from eMarketer shows that brands are warier of online ad fraud than agencies, at least when it comes to in-app advertising. 52% of brands said that fear of fraud was a concern, making it their leading worry. Agencies as a whole are significantly more sanguine, with 36% reporting it as a concern, putting it joint-last.
The case remains the same when it comes to video advertising, with 57% of brands registering it as a challenge. On video, it seems agencies are in closer alignment with brands, with 42% reporting that fear of fraud was a challenge. This takes it to number two in the list of concerns.
Challenges related to in-app advertising: brands vs. agencies (%)
Source: eMarketer
A study conducted by Integral Ad Science and referenced by eMarketer asked agencies and brand professionals what they considered to be strong threats to digital ad budgets in 2019. In this instance, agency professionals seemed to be more concerned about the impact of fraudulent impressions, with 69% reporting it as a concern, versus 53% of brand professionals (which we should note is still more than half).
For agency professionals, no other threat to ad budgets ranked as highly; in-house marketers, by contrast, appear more worried by other threats. The survey results were published in late January 2019.
eMarketer believe that brands will become more alive to the threat posed by fraud.
Threats to digital ad budgets, brands vs. agencies (%)
Source: eMarketer
Brands' reactions to the increasing threat of fraud vary. A Forrester study found that 69% of brands spending $1 million per month reported that at least 20% of their budgets were being lost to digital ad fraud. In the face of this, however, 70% of these companies said they were actually planning to increase their advertising budgets.
This is not universally the case by any means, however. In January 2018, Procter & Gamble announced it was planning to save $750 million by cutting its ad budget, and reducing the number of agencies with which it worked by 50% (from 2,500 to 1,250). This was to be followed by a further $400 million cut in the next phase.
A year previously, P&G chief marketing officer Marc Pritchard had warned the media buying and selling industry that it needed to clean up its act, demanding a “transparent, clean and productive media supply chain”. In 2017, JPMorgan Chase cut the number of sites on which it advertised from 400,000 to 5,000 (though this doubled to 10,000 by the end of the year). According to The New York Times, JPMorgan Chase claimed that the cull had no immediate effect on results.
Ad fraud expert Dr Augustine Fou advises other companies to consider following in the footsteps of P&G, cutting their budgets (albeit more gradually) and seeing what the real effect on the bottom line is.
As of 2021, according to leading mobile app analytics company AppsFlyer, app install fraud rates in leading verticals were as follows: Food & Drink – 43%, Finance – 38%, Shopping – 17%, Entertainment – 15% and Gaming – 2%.
In 2019, Uber stopped running app-install ad campaigns that had been costing the company $120 million, and saw no significant difference in the rate at which new app installs were generated.
Final thoughts
It’s clear that the scale of digital ad fraud is far larger than many in the advertising world would or should ever be comfortable with.
Will the problem continue to get worse, or will we see, as some have speculated (and we all hope), an improvement?
Dr Augustine Fou, quoted above, believes that the only way that the issue of ad fraud will be resolved will be for the whole industry to collapse and begin again. This outcome is not only likely, he argues, but desirable – resulting in a new, more effective, less profligate approach to digital marketing.
This is naturally at the more dramatic end of the spectrum; those who remember the bursting of the dot-com bubble, or even the financial crash of 2008, however, will presumably know better than to rule it out.
Certainly, a large-scale loss of trust in the industry seems eminently plausible. We haven't yet seen a mass following of the example set by Procter & Gamble or JPMorgan Chase, but as buyers become savvier to the dangers of ad fraud, we could well see a tightening of focus, with advertisers and publishers alike seeking to foster more personal relationships. Perhaps this would ostensibly reduce the size of the market, but if large swathes of the available market consist of invalid traffic, then it is artificially bloated anyway.
This would of course have a fascinating effect, with actors at all points in the supply chain potentially affected. Those in the middle look set to suffer most if this came to pass, which could have a devastating impact on the ad-tech industry as a whole. While there is not yet evidence that this is happening, those occupying this space would do well to prepare. Looking at ways to address fraud, and refocusing efforts on conversions over views, would be apt preparation for a best-case outcome; having a pretty substantial Plan B would be apt preparation for the worst.
Notwithstanding any large-scale or wholesale changes in the way business is conducted, we might also look to the future with some optimism. Awareness has risen, which is the first step in addressing digital ad fraud. We are seeing improvements in artificial intelligence, and the prospect of blockchain promises nothing short of a revolution in transparency.
While it may be an ongoing arms race, ad fraud detection companies are making a concerted effort to stay ahead. Whether they will be able to do so in perpetuity remains to be seen, and some are sceptical. But faced with a challenge of this scale, we must throw in our lot with these organisations and trust them to make an impact, particularly given the global increase in mobile penetration, which is most concentrated in the Asia Pacific region.
The future, then, is uncertain – bar two things. One, the persistence of criminal elements attempting to abuse a system that is seemingly easily abused, to the tune of billions of dollars. Two, our collective need to address and tackle this ongoing issue.
How AI is being used to create sophisticated scams that leave even experts second-guessing
Fraud and data analyst Leon was contacted by scammers who he believes used artificial intelligence to create elaborate websites for businesses that don't exist. The ABC looked into the two websites Leon was directed to by his scammers: APPC Capital Singapore and Thackeray Mines and Minerals Inc. Our fact-checking aimed to determine whether the images, the people and the locations referred to were legitimate. Red flags in the likely AI-generated photos of the purported mining company's senior management included a missing shoulder on 'Mrs. M Sauvageau', a partially missing earlobe on 'Ms. A Lapointe', an almost cartoonish smoothness of skin for 'Mr. S Lynwood', and the uniform, straight-ahead gaze of every manager. AI is becoming increasingly capable of creating believable human faces, but if you look closely enough you can sometimes spot a defect.
Leon works in fraud and data analysis, so spotting scams is part of his job. But even he was left second-guessing when he was contacted by scammers who he believes used artificial intelligence to create elaborate websites for businesses that don't exist.
Leon — not his real name — has asked to be anonymous due to concerns his data could be breached.
Like many scams, Leon’s experience began with a cold call from someone identifying themselves as an employee of marketing agency Assent Advisory, and claiming to promote an investment opportunity.
Knowing cold calls are a red flag, Leon played along, curious to see how it would play out.
Leon’s caller was professional, patient and didn’t pressure him to provide sensitive financial information over the phone — a tactic often reported by scam victims.
But when he was sent links to the glossy websites of the investment firm being promoted, and a mining company the firm would help him buy shares in, Leon quickly realised this was not any “old school scam”.
AI is capable of generating a whole range of realistic images, including this one.
The elaborate sites featured a business registration number, detailed weekly blog posts, a list of board members with their pictures and the names of their graduating universities, the locations of multiple mining sites, and even a range of publicly accessible documents detailing company policies.
Leon said the level of detail was convincing.
“I do this kind of work, I’m a pretty sceptical person, but I looked at the [site] and I thought ‘well, if I was given this website by someone I trusted … I wouldn’t [take] a second glance’,” he said.
“So basic due diligence is probably not enough here.”
How AI is changing the nature of scams
“Scams are not a new problem or new issue … but what is different is the way people conduct the scam, especially in the last few years, [where] we’ve seen the use of AI,” said Lennon Chang, associate professor of cyber risk and policy at Deakin University.
Dr Chang said there was no doubt AI was making it easier for criminals to produce a convincing scam.
“Now using AI, everyone can be a developer to create a website, to write a [company] policy … to generate images … and even to create fake financial reports,” he said.
He said it was becoming increasingly difficult for the general public to verify whether something they saw online was real or AI generated.
Lennon Chang says AI has become not only the future of scams but "the future of crime".
“We would usually suggest that people review the likes or comments [of a page] — what we call ‘the feedbacks’ — but that too can all be generated by AI now,” he said.
Dr Chang said the general public would need support from multiple experts — in finance, tech and cybersecurity — to do the fact-checking.
Fact-checking an AI scam
The ABC looked into the two websites Leon was directed to by his scammers: APPC Capital Singapore and Thackeray Mines and Minerals Inc.
Our fact-checking aimed to determine whether the images, the people and the locations referred to were legitimate.
Here's what we found.
AI images
One image, supposedly of a Robert Thackeray, has likely been AI generated by scammers. A picture is worth 1,000 words, but with AI the story they tell may not always be true. The pictures of the senior managers at the purported mining company, Thackeray Mines & Minerals, also appear to have been created with AI. While AI is becoming increasingly capable of creating believable human faces, if you look closely enough you can sometimes spot a defect. The red flags here include: a missing shoulder on 'Mrs. M Sauvageau'; a partially missing earlobe on 'Ms. A Lapointe'; an almost cartoonish smoothness of skin for 'Mr. S Lynwood'; and the uniform, straight-ahead gaze of each person (a much more subtle oddity). 'This Person Does Not Exist' is just one free AI generator that can produce human faces, often with similar defects to those seen above.
Identities
Each of the 'senior managers' listed on Thackeray's website had a profile page noting their graduating university. The ABC contacted the eight universities these individuals allegedly attended. Two responded, saying they had no record of 'Myriam Sauvageau' or 'Dennis Brent' respectively attending their institutions. In the profiles of senior management at APPC Capital Singapore, it is the site's consistent vagueness that raises the alarm. The profile of Shui Huan Fen, APPC Capital Singapore's alleged co-founder and chief executive, describes previous professional experience only in broad terms, such as having held "key positions at several prominent financial institutions" or having "an extensive background in research for financial and investment companies". The profiles fail to name specific previous employers, research institutions or even LinkedIn accounts. Neither Thackeray nor APPC Capital has a business page on LinkedIn.
Locations
Thackeray's website also featured the names, locations, pictures and kind of ore found at each of its nine alleged mining sites. However, when the ABC contacted the government of Nunavut, Canada, where two of the mines are supposedly located, a government spokesperson confirmed there was no record of Thackeray Mines and Minerals operating in the region, either currently or historically. A major Canadian mining company, Agnico Eagle, also confirmed it was "not familiar" with Thackeray Mines operating in any of the regions it does, including Nunavut and another listed location in Malartic, Quebec. Thackeray Mines claims one of its photos shows Catcha Lake in Nova Scotia, Canada; a reverse image search suggests it is actually an image of Touquoy Mine in Moose River, Nova Scotia. Reverse image searches of many of the other photos displayed on Thackeray Mines' website revealed pictures of other companies' mine sites. APPC Capital Singapore listed two addresses for its business, both in high-rise buildings. The ABC contacted each building's managers, with both confirming they had no record of APPC. One building manager noted that the floor APPC claimed to occupy cannot be leased because it is currently under renovation.
Checking business registrations
One notable feature of the APPC Capital Singapore operation was that it had an active business registration number with the country’s Accounting and Corporate Regulatory Authority (ACRA) — the Singaporean equivalent of the Australian Business Register.
But Leon labelled the scam website’s strong emphasis on this registration as a “misdirect”.
Investment expert Rakesh Gupta, who is an associate professor of accounting and finance at Charles Darwin University, agreed.
He said all investment firms must also have a licence in order to provide a financial service.
In Australia, that licence is given by the Australian Securities & Investments Commission (ASIC) and in Singapore it is with the Monetary Authority of Singapore (MAS).
APPC Capital Singapore is not listed as a licence holder on the MAS register.
Rakesh Gupta says all investment firms must have some kind of financial services licence.
Getting expert advice
A 2025 report from the National Anti-Scam Centre found Australians lost $945 million to investment scams last year, accounting for almost half of the total $2.03 billion fleeced by scammers.
Dr Gupta said he encouraged the public to practise “healthy cynicism” when contacted by supposed “investment experts”, even ones who seem to use all the right words.
He said typical red flags included cold calls which he said were “very uncommon” in the industry; opportunities that sounded “too good to be true”; and claims of insider knowledge that would be illegal.
Rakesh Gupta says he encourages anyone considering investing to book an appointment with a registered financial advisor.
Dr Gupta said when it came to investing, it was best to speak to a registered financial advisor who could assess your individual circumstances and find options “right for you”.
And if you do think you have been scammed, cyber security expert Dr Chang said you should report it and contact your bank immediately.
Dr Chang also said he wanted to see more mental health support available for victims of scams.
“Counsellors, but also victim [support] groups … I think would be very helpful to victims.”
Source: https://www.timesfreepress.com/news/2025/jun/21/personal-finance-artificial-intelligence-is/