Phishing is the No. 1 method used by hackers to steal information. Spurred by the pandemic, it became the most common cybercrime of 2020, according to the FBI. With working from home becoming the new norm, phishing incidents have more than doubled since 2019, making us more vulnerable than ever.
It’s a complicated challenge for employers. Even if they provide cybersecurity software, these programs cannot physically stop an employee from interacting with phishing attacks that can expose a company. As phishing has evolved, it has become harder for people to distinguish real websites from fraudulent ones. So how can organizations help their employees better navigate what’s real?
This was the aim of a study led by researchers at the University of Notre Dame’s Mendoza College of Business. Ahmed Abbasi, the Joe and Jane Giovanini Professor of IT, Analytics, and Operations (ITAO), and David Dobolyi, assistant research professor of ITAO, examined the problem of phishing in their recent paper, “The Phishing Funnel Model: A Design Artifact to Predict User Susceptibility to Phishing Websites,” published in Information Systems Research. But instead of looking at improving anti-phishing tools that react to a potential threat, they wanted to create a tool that could predict a user’s susceptibility to phishing.
In the mid-1990s, phishing attacks were email based: a suspicious link or attachment could download malicious software, or malware, onto your computer. This was just one method hackers used to steal valuable data such as banking credentials or personal information.
But phishing today is different. An email from a prince or a forward from a friend aren’t the only avenues for stealing your data. Now, internet users are exposed to phishing via search engine results and through social media posts linking to phishing websites. Even when a computer’s cybersecurity program throws up a warning message, a user may think, “This program can’t make judgment calls” and continue to the suspicious website.
But that’s how it happens.
“Because threats are constantly changing and evolving, cybersecurity as a paradigm has been reactive to the threats,” said Abbasi, lead author on the study. “The big change in security is trying to become more proactive instead of reactive, and rather than just detecting phishing attacks … be proactive in protecting human assets.”
The study worked to evaluate susceptibility when anti-phishing security methods were in use. This is vital, because previous research has shown that users are noticeably bad at differentiating between legitimate websites and phishing websites even when anti-phishing programs, via cybersecurity software or built-in web browser applications, are in use. In part, this is due to the fact that phishing attempts tend to exploit and appeal to our human nature.
Abbasi and Dobolyi, along with collaborators at Temple University and the University of Wisconsin-Milwaukee, created the phishing funnel model, a tool that lays out the series of actions someone goes through as they interact with a phishing scam.
Traditionally used in e-commerce, funnel models represent the decision-making process to accomplish a particular goal. Marketing commonly uses the awareness-interest-desire-action funnel. This funnel maps the steps consumers need to take in order to make a purchase. The further someone moves down the funnel, the more likely it is for the consumer to purchase an item. Additionally, the funnel shape of the model assumes that fewer and fewer people will make it through each step.
The phishing funnel model functions similarly, mapping a user’s actions leading up to transacting with the phishing website. The model starts with the user visiting the phishing website, likely found through a search engine or social media. Then the user browses the site for whatever prompted them to click in the first place. Next, the user decides whether they consider the website legitimate, and finally whether they intend to transact with the site and hand over valuable data.
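The four stages just described can be sketched as a simple drop-off simulation. The stage names follow the article; the pass-through rates and user counts below are hypothetical illustrations, not figures from the study:

```python
# Illustrative sketch of the phishing funnel's four stages.
# Stage names follow the article; pass-through rates are made-up examples.
FUNNEL_STAGES = ["visit", "browse", "deem_legitimate", "transact"]

def simulate_funnel(pass_rates, n_users=10_000):
    """Count how many of n_users reach each stage, given a hypothetical
    per-stage probability of continuing. As in any funnel model, counts
    can only shrink from one stage to the next."""
    counts = {}
    remaining = n_users
    for stage, rate in zip(FUNNEL_STAGES, pass_rates):
        remaining = int(remaining * rate)
        counts[stage] = remaining
    return counts

# Example: half of exposed users visit, then 60%, 40% and 30% continue.
print(simulate_funnel([0.5, 0.6, 0.4, 0.3]))
```

An anti-phishing intervention at any stage lowers that stage's pass-through rate, which shrinks every count downstream of it.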
As with the e-commerce funnel, the more a user engages with a phishing site, the more likely they are to transact with it. But unlike the e-commerce funnel, where marketers want consumers to move through every step, the researchers’ goal is to stop users from progressing through the phishing funnel.
“This phishing funnel model essentially looks at an important idea that people are browsing the web usually with a purpose, like they need to make a transaction. We are using that idea to map human behavior and identify opportunities to stop users from giving out information,” said Dobolyi, a methodologist whose expertise spans an array of topics including cybersecurity, healthcare and criminal justice.
Modeling this decision-making process sets the study apart from other phishing research, which typically considers susceptibility in terms of a single choice. The phishing funnel model instead makes the case that a sequence of decisions and actions precedes any transaction.
So what makes an employee more or less susceptible to a phishing attempt?
There are three different factors that come into play when internet users are deciding whether or not to make a transaction on a phishing site: whether an anti-phishing tool is in use, user characteristics and the level of risk associated with the threat.
“Do you trust your browser or program’s phishing warnings? Or do you ignore them?” said Abbasi. “Your perceptions of how useful those warnings are matter.”
In considering the user, the study looked at factors such as demographics and web experience. Age was one of the main factors: older generations who didn’t grow up with the internet tend to be less web savvy. Beyond age, education and gender also influence decision making online. Prior web experience factors into susceptibility as well, including trust in institutional websites such as a bank’s online services, familiarity with particular sites, and past losses to phishing attacks.
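User characteristics like these are natural inputs to a predictive model. As a hedged sketch only: the feature names, weights and logistic form below are hypothetical illustrations, not the fitted model from the paper:

```python
import math

def susceptibility_score(features, weights, bias=0.0):
    """Logistic-style score in (0, 1); higher means more likely to progress
    through a funnel stage. Feature names and weights are hypothetical."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical user: previously phished, distrusts browser warnings.
user = {"previously_phished": 1.0, "trusts_warnings": 0.0, "years_online": 5.0}
weights = {"previously_phished": 1.2, "trusts_warnings": -0.8, "years_online": -0.05}
score = susceptibility_score(user, weights, bias=-0.5)
```

A score like this could then trigger stronger or more tailored warnings for higher-risk users, in the spirit of the personalized interventions the study tested.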
“If you’re phished once — meaning you’ve given out information that you shouldn’t — you’re likely to be phished again,” said Dobolyi. “There is a psychological tendency for some people to make the wrong decision over and over again.”
And lastly, the researchers considered the actual threat in play and the user’s perception of it. Each threat comes with a different risk. Maybe the threat is to be spammed with phone calls and text messages that are more annoying than dangerous. More serious attempts can lead to the loss of banking or credit card information and can cost employers in IT service hours and potentially even data breaches. Phishing threats span a spectrum, and the user’s perception of each one could impact whether the individual ultimately proceeds with the transaction.
“It’s the 80/20 rule,” said Abbasi. “Twenty percent of employees are generally going to be responsible for 80% of all of the susceptibility to phishing threats — and it’s often some of the best employees.”
An employer could consider blocking non-work-related websites, said Abbasi, but that likely would be too restrictive, potentially affecting workplace morale and job satisfaction. Instead of a one-size-fits-all policy, the research team believed predictive modeling could help create a more personalized approach to cybersecurity.
“It’s the idea that what works for me as a treatment might be unique to my experiences, my background and my proclivities. Same thing with security threats,” said Abbasi. “It’s not only just predicting it, but by doing [so] we can customize some warnings and the mitigation strategies. And that only works if you can predict accurately.”
To determine if the phishing funnel model could accurately predict individual susceptibility, the study pursued two field experiments. The first asked more than 1,200 employees at two different companies to respond to quarterly surveys and periodic pop-up questions. For 12 months, every participant’s computer was equipped with cybersecurity software and the phishing funnel artifact that displayed prominent warnings when a potential phishing website was clicked on. The goal was to see how effective the phishing funnel model was in predicting each user’s susceptibility over time.
As an employee interacted with a potential phishing website, the action was recorded. In that time, there were 49,373 phishing encounters, which is more than 4,000 phishing attacks per month on average.
Throughout this experiment, employees visited more than 50% of the phishing websites they encountered. Overall, the phishing funnel model outperformed competing models in accurately predicting phishing susceptibility.
For the second experiment, the same employees were asked to participate in a similar study over a three-month period. This time, however, the goal was to determine how well interventions based on susceptibility prediction improve phishing avoidance. The results provided evidence for something related studies had suspected: Users were more accepting of phishing warning messages that were tailored to them and therefore were less likely to move through the phishing funnel stages. Users guided by the phishing funnel model were one-half to one-third less likely to interact with a phishing threat.
The study’s results showed organization-wide benefits. The researchers found that each time an employee avoids the visiting, browsing or transacting stages of the phishing funnel model, the firm saves time, and therefore money. Each occasion an employee avoided transacting with a phishing website saved an hour of tech support time and effort. If the employee avoided browsing the site altogether, the savings were estimated to be even greater. Overall, a cost-benefit analysis showed interventions guided by the phishing funnel model resulted in phishing-related cost reductions of $1,900 per employee, or over $19 million for a firm with 10,000 employees.
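The headline figures scale as simple arithmetic. The sketch below models only what the text states (roughly an hour of support cost per avoided transaction), with the browse-stage savings left as a placeholder parameter since the article gives no exact figure:

```python
def phishing_cost_savings(avoided_transactions, support_hourly_cost,
                          avoided_browses=0, browse_savings=0.0):
    """Rough savings model implied by the article: each avoided transaction
    saves about an hour of tech-support cost; avoiding the browsing stage
    saves even more (browse_savings is a placeholder, not a study figure)."""
    return (avoided_transactions * support_hourly_cost
            + avoided_browses * browse_savings)

# Sanity check on the headline figure: $1,900 per employee scales to
# $19 million for a firm of 10,000.
assert 1_900 * 10_000 == 19_000_000
```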
At its core, the study is all about predicting human behavior, which is not something typically applied in the world of cybersecurity. However, this is very much part of what Abbasi strives to do as co-director of the Human-centered Analytics Lab (HAL) with fellow co-director Ken Kelley, the senior associate dean for faculty and research and the Edward F. Sorin Society Professor of ITAO.
“More broadly, with our research and our lab, we are trying to use interdisciplinary lenses to improve the human condition,” said Abbasi. “This study works at the intersection of information technology, psychology and data science, with the focal point being we want to improve the human condition.”
As for future directions, Abbasi said there is potential to look at protecting those most vulnerable to phishing. Although this study examined phishing from an employer’s perspective, attacks also target individuals, and further research could explore how the phishing funnel model might benefit those outside an organization.
“One of the most challenging applications is when you look at folks who are traditionally disadvantaged,” said Abbasi. “Those who come from lower income or lower education backgrounds tend to have the lowest phishing literacy, just like they tend to have the lowest technology literacy. And if you look at the average cost of a phishing attack … as a percentage of income, it is so devastating and heart-wrenching. In many ways, the most vulnerable in our society are often the most targeted.”
The study, funded by the National Science Foundation, works as a proof of concept for the phishing funnel model, and there is potential to apply this research in the real world. It gives the broader cybersecurity field, including practitioners building anti-phishing tools, a better sense of what else they can do to get better results from their applications, plug-ins and other products in a practical working environment.
“The real goal is to put something out there that can actually help companies protect their employees and reduce losses, because they have the potential to be massive,” said Dobolyi.
DAVID DOBOLYI is an assistant research professor of IT, Analytics, and Operations at the Mendoza College of Business and co-director of the Gaming Analytics & Business Research (GAMA) Lab.
AHMED ABBASI is the Joe and Jane Giovanini Professor of IT, Analytics, and Operations at the Mendoza College of Business and co-director of the Human-centered Analytics Lab (HAL).
Information Systems Research (Feb. 2021)
Ahmed Abbasi (Notre Dame), David Dobolyi (Notre Dame), Anthony Vance (Temple), Fatemeh Mariam Zahedi (UW Milwaukee)