July 25, 2021

Artificial intelligence reduces the big problem of cards (and it's not fraud) | Technology

Bank card fraud is tiny in Spain: 0.017% of the operations made in 2017 with cards issued in Spain were fraudulent, according to the Bank of Spain. Only one in every 100 cards was affected, for an average of 68 euros. In absolute terms it is not a small amount of money: 40 million euros. In the euro zone the figures are slightly worse: fraud stood at 0.041% in 2016, according to the European Central Bank.

That effectiveness in the fight against fraud creates a bigger challenge for banks, merchants and customers: false positives. A false positive is a legitimate card purchase that the bank blocks because its prevention system sees something suspicious in it. It is as if the system used a bedsheet to patch a hole in a net: it covers too much and ends up stopping legal payments.

One in six legitimate cardholders saw at least one payment declined in a year, according to a 2015 study by the consulting firm Javelin. The amount lost on the failed purchase is not the only cost of false positives: 26% of affected customers visit the store where it happened less often, and 32% avoid it altogether from then on. Users also make less use of the card that was declined, again according to Javelin.

BBVA went to MIT in 2016 to improve its anti-fraud system: "But working with them we saw that, with the current means, stopping more fraud would bring only a residual improvement," says Carlos Capmany, head of the project at BBVA. That was when they realized there was another, more feasible improvement: "Why don't we attack instead the false positives, which have a bigger impact on us, on merchants and on the customer community?" he adds. A new MIT system could hold the solution.

"The big challenge of the industry is the false positives," says Kalyan Veeramachaneni, co-author of Article where the model and principal investigator is explained in the MIT Decision and Information Systems Laboratory, in a press release from the center.

It is understandable that banks have traditionally watched fraud above all else. A transaction made by a cybercriminal is absorbed by the institution, while much of the impact suffered by merchants never reaches the banks' view. In an estimate made by the model's authors, false positives currently block some 289,000 operations out of every 1.8 million. The new system would stop only about 133,000, 54% fewer. These blocked transactions amount to approximately 190,000 euros. Bear in mind that BBVA handles 2 million operations a day, so, as the scientific article puts it, this is "a tiny figure of the total annual volume".

Since the 20th century

Machine learning models have been used to detect fraud since the late twentieth century. But they were models with few variables: they looked at amounts, frequencies, place of purchase and little else. If a card exceeded a spending limit, or was used very often or in unexpected places, the purchase was blocked. Today, however, that kind of use can be perfectly normal.
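As a rough illustration only, a rule-based filter of that generation might look something like the sketch below; the thresholds and field names are hypothetical, not taken from any bank's real system.

```python
# Minimal sketch of an old-style, rule-based card check.
# All thresholds and field names are hypothetical examples.

HOME_COUNTRY = "ES"
MAX_AMOUNT_EUR = 1000.0   # block single payments above this amount
MAX_DAILY_USES = 10       # block if the card was already used this often today


def should_block(transaction: dict, uses_today: int) -> bool:
    """Return True if the legacy rules would block this payment."""
    if transaction["amount_eur"] > MAX_AMOUNT_EUR:
        return True
    if uses_today >= MAX_DAILY_USES:
        return True
    if transaction["merchant_country"] != HOME_COUNTRY:
        return True  # the "unexpected place" rule
    return False


# A perfectly legitimate small purchase abroad gets blocked: exactly the kind
# of false positive the article describes.
print(should_block({"amount_eur": 35.0, "merchant_country": "FR"}, uses_today=2))  # True
```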

At MIT, a team in which Veeramachaneni took part had devised a system called Deep Feature Synthesis (DFS) that derives variables far more elaborate than the usual ones: about the technical characteristics of the terminal, about the features of the merchant, about the presence of the customer.

BBVA handed MIT an anonymized history of 900 million real transactions, which allowed the model to be refined. Its success is not total, but the program manages to eliminate half of the false positives produced by the previous method. How does it do it? By adding many more variables, so that the program looks for behavior patterns in each card and unusual uses therefore become easier to detect.
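A minimal sketch of that idea, written here with pandas: the column names and the handful of per-card aggregates are illustrative assumptions, far simpler than the hundreds of features the real model derives.

```python
import pandas as pd

# Toy transaction log; the bank's real history has hundreds of millions of rows.
tx = pd.DataFrame({
    "card_id":       ["A", "A", "A", "B", "B"],
    "amount_eur":    [20.0, 35.5, 950.0, 12.0, 14.0],
    "hour_of_day":   [10, 19, 3, 12, 13],
    "merchant_type": ["grocery", "restaurant", "electronics", "grocery", "grocery"],
})

# Aggregate each card's history into per-card behavior features...
card_profile = tx.groupby("card_id").agg(
    mean_amount=("amount_eur", "mean"),
    std_amount=("amount_eur", "std"),
    n_payments=("amount_eur", "count"),
    usual_hour=("hour_of_day", "median"),
).reset_index()

# ...and join those profiles back onto every transaction, so a model can ask
# "how far does this payment deviate from this card's own habits?"
features = tx.merge(card_profile, on="card_id")
features["amount_vs_card_mean"] = features["amount_eur"] / features["mean_amount"]
print(features[["card_id", "amount_eur", "amount_vs_card_mean"]])
```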

The MIT model created 236 features from the BBVA data. "You have to think up additional features to produce good information for machine learning to work," explains Capmany. "This used to be done by trial and error, and the way MIT works generates a series of additional data, many of which would not have occurred to us; it speeds up producing them and gives us novel ways to cross-reference information to train the systems."
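To close the loop, a hedged sketch of how such a feature matrix might be fed to a classifier. Everything here is synthetic and uses a generic scikit-learn model; it does not reflect BBVA's actual system, data or thresholds. The quantity the article cares about, false positives, is read straight off the confusion matrix.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a feature matrix like the one DFS produces
# (the real model derives 236 features from real transactions).
X = rng.normal(size=(5000, 20))
# Fraud flag loosely correlated with the first feature, purely for illustration.
y = ((X[:, 0] + rng.normal(scale=0.5, size=5000)) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# False positives are the legitimate payments the model would have blocked.
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
```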

The model's task is extremely complicated. A card does not have one fixed behavior that suddenly changes forever: "A card that is used fraudulently is not always used fraudulently. A normal user uses it on average 150-200 times over a year and suddenly, in one payment, for whatever reason, something anomalous appears," says Capmany. If a criminal gets hold of, say, 1,500 card numbers along with the identities of their owners, they can be used carefully to avoid detection. Many still manage it: it is a matter of trying. The evolution of fraud is a long-term challenge.

In the last six months, BBVA has replicated the experiment with updated data. The model maintains its success rate. The bank is about to introduce the algorithm into its system. The model would also work for other banks. "MIT has issued some open source guides," says Capmany.
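For readers who want to experiment, the Deep Feature Synthesis approach behind the model is implemented in the open-source Featuretools library. The sketch below assumes a recent 1.x release of that library (method names differ in older versions), and the dataframe and column names are invented for the example.

```python
import featuretools as ft
import pandas as pd

# Invented toy data: one row per card payment.
transactions = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "card_id": ["A", "A", "B", "B"],
    "amount_eur": [20.0, 950.0, 12.0, 14.0],
    "timestamp": pd.to_datetime(
        ["2021-07-01 10:00", "2021-07-02 03:00", "2021-07-01 12:00", "2021-07-03 13:00"]
    ),
})

es = ft.EntitySet(id="card_payments")
es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions,
                      index="transaction_id", time_index="timestamp")
# Derive a "cards" dataframe so DFS can aggregate each card's history.
es = es.normalize_dataframe(base_dataframe_name="transactions",
                            new_dataframe_name="cards", index="card_id")

# Deep Feature Synthesis: automatically stacks aggregations and transforms.
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="cards",
                                      agg_primitives=["mean", "std", "count"],
                                      trans_primitives=["hour"])
print(feature_defs)
```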
