
Mr.Tom

The world through the eyes of an anti-fraud system

Today I'll tell you why, for some time now, anti-fraud systems have been looking at us not through rose-colored glasses, but through a rifle scope.

Until recently, the most popular anti-fraud architecture was the Fraud score architecture. It collected individual parameters and fingerprints through the user's browser and then, using logical expressions and a statistical base, assigned each parameter or group of parameters a specific weight in the Risk Score, for example:

1. DNS country differs from the IP's country = +7% to Risk Score

2. DNS differs from the IP subnet = +2% to Risk Score

3. Unique Canvas fingerprint = +10% to Risk Score

4. Unique shader parameters = +5% to Risk Score

etc.

As a result of this analysis, the user accumulated a kind of "fraud probability rating". If this rating was below 35%, the protection system considered all of the user's actions legitimate; with a moderate increase in the rating, it limited the user's rights; and with a strong increase, it blocked him completely. There were exceptions and peculiarities, but on the whole everything worked that way.
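
To make the Fraud score logic concrete, here is a minimal sketch of such a weighted score. It is not taken from any real anti-fraud product; the parameter names, weights and thresholds are illustrative and only echo the examples above.

```typescript
// Minimal sketch of a Fraud-score style engine.
// Parameter names, weights and thresholds are illustrative, not from a real product.

interface BrowserProfile {
  dnsCountry: string;
  ipCountry: string;
  dnsSubnet: string;
  ipSubnet: string;
  canvasHashIsUnique: boolean;   // hash never seen in the statistical base
  shaderParamsAreUnique: boolean;
}

function riskScore(p: BrowserProfile): number {
  let score = 0;
  if (p.dnsCountry !== p.ipCountry) score += 7;   // DNS country differs from IP country
  if (p.dnsSubnet !== p.ipSubnet) score += 2;     // DNS differs from IP subnet
  if (p.canvasHashIsUnique) score += 10;          // unique Canvas fingerprint
  if (p.shaderParamsAreUnique) score += 5;        // unique shader parameters
  return score;
}

function decision(score: number): "allow" | "restrict" | "block" {
  if (score < 35) return "allow";       // considered legitimate
  if (score < 60) return "restrict";    // rights limited (upper threshold is made up)
  return "block";
}

const visitor: BrowserProfile = {
  dnsCountry: "DE", ipCountry: "US",
  dnsSubnet: "10.1.0.0/16", ipSubnet: "10.2.0.0/16",
  canvasHashIsUnique: true,
  shaderParamsAreUnique: false,
};

console.log(riskScore(visitor), decision(riskScore(visitor))); // 19 "allow"
```

Note how every parameter is judged in isolation: one suspicious fingerprint only adds its own weight, which is exactly the weakness discussed next.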

The Fraud score architecture was effective before the advent of advanced antidetect mechanisms, and it allowed the user to ignore changes in some fingerprints, which is why you could so often run into statements like "I work from a regular browser, I clean cookies, I use this or that plugin, and everything works for me." Over time the Fraud score architecture lost its effectiveness and is being replaced by the more advanced DGA architecture - Dedicated Group Analysis. Most modern anti-fraud systems are based on it.

The DGA uses the same statistical elements as the Fraud score architecture, but the processing logic has been fundamentally changed.

Let's give an example:

Imagine a school with three first-grade classes: 1A, 1B and 1C.

We are the cook at this school, and we need to work out what food to prepare for each class and how much. To solve this problem we will use the data we were given - the students' names.

Class 1A. Students:

Igor, Anton, Sasha, Vova, Gena.

Class 1B. Students:

Marina, Oleg, Aristarkh, Sergey, Olga.

Class 1C. Students:

Sayfuddin, Yuri, Pavel, Ilya, Maxim.

In order to understand what to cook for each class, we will assign each student a rating from 1 to 9, where 1 is the most "Russian" name and 9 is the most "foreign", and as a result we get:

Class 1A. Students:

Igor (1), Anton (1), Sasha (1), Vova (1), Gena (1)

Class 1B. Students:

Marina (1), Oleg (1), Aristarkh (5), Sergey (1), Olga (1)

Class 1C. Students:

Sayfuddin (9), Yuri (1), Pavel (1), Ilya (1), Maxim (1)

Once we have assigned a rating to each name, we will compute the overall uniqueness of each class using the ordinary arithmetic mean:

Class 1A. Rating:

(1 + 1 + 1 + 1 + 1) / 5 = 1

Class 1B. Rating:

(1 + 1 + 5 + 1 + 1) / 5 = 1.8

Class 1C. Rating:

(1 + 1 + 9 + 1 + 1) / 5 = 2.6

Based on the class ratings, we will prepare:

For class 1A - pies and tea

For class 1B - pies and tea

For class 1C - echpochmaks and koumiss

Accordingly, we conclude that because of one unique student, Sayfuddin, all the other students of class 1C will suffer, while Sayfuddin sits there with a contented face and drinks koumiss.

Further, for each class we would determine portion sizes by gender, but here the logic is obvious: class 1B gets the smallest portions because of its two girls.

Translating this example into the context of anti-fraud systems, we conclude that even when all of our parameters and fingerprints are changed but even one of them remains unique (Canvas, for example), our overall Risk Score in a DGA-based system rises to 26%, whereas in a Fraud score system it would grow by only 10%.
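
Carrying the school analogy into code, a minimal sketch (the 1-to-9 ratings and parameter names are purely illustrative, as above) of how a single unique parameter drags the rating of the whole parameter group up under group analysis:

```typescript
// Sketch: one unique parameter raises the rating of the entire parameter group.
// Ratings 1..9 and the parameter list are illustrative, mirroring the school example.

function groupRating(ratings: number[]): number {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

// Every parameter looks ordinary except Canvas, which is unique (rating 9).
const parameterRatings = [1, 1, 9, 1, 1]; // e.g. UserAgent, WebGL, Canvas, fonts, timezone

console.log(groupRating(parameterRatings)); // 2.6 -> "26%" for the whole group
```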

A key feature of the DGA architecture is that it applies stricter rules to fraudsters while not affecting the activity of real users.

The most perfect fingerprint. Today there are many different technologies with which a user can be identified. Some are old, some are new, but combined fingerprints are the best user identification option. A combined fingerprint is a technique in which a logical expression analyzes not one but two or more parameters of the user's PC, so that these fingerprints can reveal information about each other.

Currently the most advanced pair is Canvas-WebGL. Many of you know, or at least have heard, about these parameters, but almost nothing is known about the method of substituting them; at the same time, it is precisely the substitution of these fingerprints that hides the most interesting identification mechanisms.

Let me explain Canvas in simple language. To spoof Canvas fingerprints, modern antidetects use a simple pixel-color substitution technique: when the 2D Canvas image is drawn, a pixel is chosen - the 1st, the 5th, or the 125th, whatever the antidetect developer decides - and that pixel's color ratio / gamma / transparency is changed. It may be two pixels instead of one, or the 7th and the 500th, and changing the color of even one pixel changes the hash of the fingerprint.
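
A minimal sketch of that pixel-substitution idea, assuming a browser environment; which pixels are touched and by how much is entirely up to the antidetect developer, so the offsets and deltas here are arbitrary:

```typescript
// Sketch of the pixel-substitution trick described above (browser environment assumed).
// Pixel indices and color deltas are arbitrary illustrative values.

function perturbCanvas(canvas: HTMLCanvasElement, pixelIndices: number[], delta: number): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  for (const i of pixelIndices) {
    const offset = i * 4;                                              // RGBA: 4 bytes per pixel
    image.data[offset] = (image.data[offset] + delta) % 256;           // shift the red channel
    image.data[offset + 3] = Math.max(0, image.data[offset + 3] - 1);  // tweak transparency
  }
  ctx.putImageData(image, 0, 0);
}

// Changing even one pixel changes canvas.toDataURL() and therefore the fingerprint hash,
// while the picture still looks identical to the human eye.
```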

*************************************************

What is a hash?

A hash is the transformation of an array of data into a single short string. For example:

Ivanov Ivan Ivanovich 1950 Moscow Nakhimovtsev street 29, apt. 31 +79260014589

converted to hash:

ICAgMTk1MCAgLiAyOSwgLiAzMSArNzkyNjAwMTQ1ODk =

*************************************************
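
As a small sketch of the same idea: the value shown above happens to look like Base64, but the exact transformation does not matter; the point is that the whole data set collapses into one identifier. Here it is done with SHA-256 via Node.js crypto (my choice of algorithm, not the author's):

```typescript
// Sketch: collapsing an arbitrary data string into a single fixed string.
// SHA-256 + Base64 output is an illustrative choice, not the author's exact method.
import { createHash } from "crypto";

const record = "Ivanov Ivan Ivanovich 1950 Moscow Nakhimovtsev street 29, apt. 31 +79260014589";
const hash = createHash("sha256").update(record).digest("base64");

console.log(hash); // any change to the input, even one digit, produces a completely different hash
```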

Accordingly, changing the color of the image's pixels changes the Canvas fingerprint hash - this is the basis of Canvas fingerprint substitution. The question remains why, with some antidetects, the fingerprint uniqueness is 100% on a site like browserleaks. It turns out that such antidetects, instead of disguising you, set you apart from the crowd of other users.

Let me explain WebGL in simple language. WebGL is a 3D image: first a skeleton is formed from vertices and lines, and then the space between those vertices and lines is filled with a 2D image. It is important to understand that building a 3D image relies on 2D rendering. Simplifying greatly, underneath WebGL we end up with the same Canvas.

For those who want to read more deeply:

webglfundamentals.org - "How WebGL works", "How WebGL really works"

WebGL Specification

How WebGL substitution works in modern antidetects. The simplest spoofing method is the same color change, this time in the pixel shaders - just like with Canvas substitution, only in a different place (sometimes vertex coordinates are substituted instead, but that is the exception). Again the colors change, again there is a new hash; the result is achieved and users see the fingerprint change. But if there were a public WebGL fingerprint verification service, it too would show a WebGL fingerprint uniqueness of 100%. And that is not the worst of it…

But what if we compare how these two fingerprints are rendered? By comparing the rendering process, you can see differences in how the colors are formed and, accordingly, identify the use of an antidetect with 100% accuracy.

This process, in which different fingerprints verify each other, is combined browser fingerprinting.
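
Before the hands-on example below, a minimal sketch of the cross-check idea, assuming a browser environment: ask the 2D Canvas pipeline and the WebGL pipeline to produce the same solid color, then compare what actually comes back from each. The specific color and canvas size are arbitrary.

```typescript
// Sketch of a combined Canvas/WebGL cross-check (browser environment assumed).
// Draw the same color through both pipelines and compare the resulting pixels.

function readCanvas2dPixel(color: [number, number, number]): Uint8ClampedArray {
  const c = document.createElement("canvas");
  c.width = c.height = 8;
  const ctx = c.getContext("2d")!;
  ctx.fillStyle = `rgb(${color[0]}, ${color[1]}, ${color[2]})`;
  ctx.fillRect(0, 0, 8, 8);
  return ctx.getImageData(4, 4, 1, 1).data;            // RGBA of one pixel
}

function readWebglPixel(color: [number, number, number]): Uint8Array {
  const c = document.createElement("canvas");
  c.width = c.height = 8;
  const gl = c.getContext("webgl")!;
  gl.clearColor(color[0] / 255, color[1] / 255, color[2] / 255, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  const px = new Uint8Array(4);
  gl.readPixels(4, 4, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
  return px;
}

// If an antidetect perturbs only one of the two pipelines, the same requested color
// comes back differently from Canvas and from WebGL - and that mismatch is the tell.
const a = readCanvas2dPixel([120, 60, 200]);
const b = readWebglPixel([120, 60, 200]);
console.log([...a].slice(0, 3), [...b].slice(0, 3));
```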

Here's an example:

1. Launch the Chrome browser

2. In the Chrome Web Store, install the Don't FingerPrint Me (DFPM) developer tool:

chrome.google.com - Don't FingerPrint Me - A browser devtools extension for detecting browser fingerprinting.

3. Open the site facebook.com

4. Press F12 on the keyboard

5. In the DevTools toolbar we see the tabs "Elements, Console, Sources, Network"; open the drop-down list (>>) and select DFPM

6. Refresh the page facebook.com

7. We see the Canvas fingerprint request

8. Log in

9. We see the WebGL fingerprint request
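
Tools like this generally rely on intercepting the APIs that export Canvas data. A minimal sketch of that general idea, not DFPM's actual implementation: wrap toDataURL and log every time the page reads it.

```typescript
// Sketch of the interception idea such devtools extensions rely on
// (not DFPM's actual code): wrap the API that exports Canvas data.

const originalToDataURL = HTMLCanvasElement.prototype.toDataURL;

HTMLCanvasElement.prototype.toDataURL = function (
  this: HTMLCanvasElement,
  type?: string,
  quality?: any
): string {
  // Every call here is a potential Canvas fingerprint read by the page.
  console.warn("Canvas read via toDataURL on", location.hostname);
  return originalToDataURL.call(this, type, quality);
};
```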

Morons and browser fingerprints

How are browser fingerprints generated?

Our browser fingerprints are not some function added by the developers, nor some hidden feature; there are NO browser fingerprints inside the browser!

Everything we are used to seeing on checker sites and perceiving as fingerprints is actually just a construct of the checker developer's imagination and nothing more. Let's take a closer look.

Every browser has a foundation; that foundation is the OS and the hardware - in fact, simply our PC.

To make things convenient for the user, the browser transmits information about itself and about the PC to the sites it visits - and not only for convenience, but also so that sites can protect themselves from "morons". Indeed, in fact, 90% of users are complete morons.

For example:

1. UserAgent

You are looking for some program - any program you like - and you go to its site. There are four download links: Windows 7 x32 / Windows 10 x64 / MacOS / FreeBSD.

And although for everyone reading this article the choice is obvious, most users cannot solve this problem on their own and will try to install a .dmg on Windows, and so on. They need help, so the browser sees your OS and sends that data to the site, which automatically offers the right link, and everyone is happy. Is the browser doing something bad in this case? No…

2. Canvas

Canvas technology is used to render the visual elements of web pages. Until 2006, when surfing the web, the server had to transfer the site's visual elements to our PC to display a web page - graphics, tables, and so on - which heavily loaded the communication channel (remember the speeds of those days), or we had to use Macromedia Flash to watch videos or play basic games. Then Canvas came along, driven by JavaScript, and now the site does not transfer ready-made elements but simply sends us the text of a script, which is executed not on the server but ON OUR PC, using our browser and our hardware. Speed increased, the load on servers decreased, the possibilities expanded. Is the browser doing something bad in this case? No…

Similar examples can be given for any technology, and they all boil down to one main goal - improving usability - and one side goal - protecting against morons.

So where are our fingerprints then? Fingerprints are just derivatives; in other words, a by-product of event processing.

An example with the Canvas fingerprint:

1. The user visits the site

2. The site sends JavaScript to the user's PC, with which the user's browser automatically renders a picture with the specified elements and applies effects and shadows (this picture may even be hidden from the user's eyes). The image format is PNG, and to generate a PNG image the operating system library libpng is used, which represents the image in chunks - IHDR, IDAT, IEND (by the way, in IHDR you can directly record who processed this image and on which PC).

3. The whole picture consists of pixels, and each pixel carries chromaticity and transparency, so the picture is serialized into a byte array.

4. The byte array is encoded in Base64 format and transmitted to the site.

5. The site applies hashing or does not (it depends on the developer) and receives our pseudo-unique Canvas fingerprint - there it is, our fingerprint!
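
A minimal sketch of that pipeline, assuming a browser environment; the drawn text, colors, and the choice of SHA-256 via the Web Crypto API are illustrative, since the hashing step is up to each site's developer:

```typescript
// Sketch of the Canvas fingerprint pipeline described above (browser environment assumed).
// The drawn content and the SHA-256 hashing step are illustrative choices.

async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");   // never attached to the page: invisible to the user
  canvas.width = 220;
  canvas.height = 30;
  const ctx = canvas.getContext("2d")!;

  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(2, 2, 100, 20);                        // colored rectangle
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test, 😃", 4, 4);          // fonts and GPU rendering differ per machine
  ctx.shadowBlur = 3;
  ctx.shadowColor = "#900";
  ctx.fillText("fingerprint test, 😃", 6, 8);          // shadows and overlap add more variation

  const png = canvas.toDataURL("image/png");          // Base64-encoded PNG byte array
  const bytes = new TextEncoder().encode(png);
  const digest = await crypto.subtle.digest("SHA-256", bytes); // hashing is optional, per developer
  return btoa(String.fromCharCode(...new Uint8Array(digest)));
}

canvasFingerprint().then((fp) => console.log("pseudo-unique Canvas fingerprint:", fp));
```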

What is fingerprint uniqueness and why is it so important?

Many people know the site: https://browserleaks.com/canvas

And probably everyone wonders why the site correctly detects the operating system from a real PC, but not from an antidetect - and shows 100% uniqueness.

Without resorting to secret Masonic technologies, you can simply guess: the Browserleaks site records the users who visit it, logs each user agent and compares it with the canvas - that's all. At the time of this writing, the number of user agents in the Browserleaks database was 358,283. And that is just a small site known to a narrow circle of people - now imagine the statistical collection of Google, or Facebook, or Betfair, or PayPal.

Resources with millions of hits per day can, in the simplest way, collect internal statistics and establish that your unique fingerprint has not been used by any of their 100,000,000 users in the last year. Where does that lead you? It leads you to the ostrich effect: your head is in the sand ("I checked everything on browserleaks, everything's fine!"), but your rear end sticks out and sets you apart from the crowd of all the other ostriches, because the benefit of a 100% unique fingerprint is the same kind of myth as the myth about the ostrich hiding its head in the sand.

But besides setting you apart from the crowd of other users, such a fingerprint also harms the rest of your fingerprints…

Do you know why antidetects die?

Let's imagine a pristine payment system and a carder.

The carder enters this system from his real PC (don't forget about the morons) and steals money. The system lets him, but then it works on its mistakes, understands that a user with such fingerprints is a fraudster, and the second time it does not let him steal.

The carder then goes and buys an antidetect - all the fingerprints are new, everything is fine, and again he manages to steal money, then changes the fingerprints and steals again, and so on. But after a while the system, applying machine learning, artificial intelligence and voodoo rituals, develops the following policy:

UserAgent validity check:

1. HTTP Header - Chrome

2. Browser signature - Chrome

3. DynamicCompressor - Chrome

4. Mime Types - Chrome

5. ClientRect - Chrome

6. Canvas - UNKNOWN

And after analyzing the thefts that took place, the system concludes that when a unique Canvas is used and it cannot be matched to the OS version, all users with such data must fall under the SECURITY MEASURE restriction.

What the anti-fraud system did here is called independent derivation of protection logic from statistical data. Put simply: they watched and watched, and then they nailed you.
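
A minimal sketch of that kind of cross-check; the signal names and the rule itself are illustrative, echoing the list above rather than any real system's policy:

```typescript
// Sketch of the consistency rule described above (illustrative signal names and rule).
// Every individual signal says "Chrome", but the Canvas matches no known Chrome profile.

interface SessionSignals {
  httpHeaderBrowser: string;                 // from the User-Agent HTTP header
  jsSignatureBrowser: string;                // from JS engine quirks
  mimeTypesBrowser: string;
  clientRectBrowser: string;
  canvasProfileBrowser: string | "UNKNOWN";  // browser/OS family this Canvas hash was seen on before
}

function securityMeasure(s: SessionSignals): boolean {
  const claims = [s.httpHeaderBrowser, s.jsSignatureBrowser, s.mimeTypesBrowser, s.clientRectBrowser];
  const consistentClaim = claims.every((c) => c === claims[0]);
  // A consistent "Chrome" everywhere plus a Canvas never seen for that browser = antidetect suspicion.
  return consistentClaim && s.canvasProfileBrowser === "UNKNOWN";
}

console.log(securityMeasure({
  httpHeaderBrowser: "Chrome",
  jsSignatureBrowser: "Chrome",
  mimeTypesBrowser: "Chrome",
  clientRectBrowser: "Chrome",
  canvasProfileBrowser: "UNKNOWN",
})); // true -> SECURITY MEASURE
```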

Such independently derived protection algorithms are based solely on the analysis of actions that have already taken place, and only then are they introduced as front-line protection mechanisms in systems built on the DGA architecture - Dedicated Group Analysis (discussed in the first part).

However, anti-fraud systems have a few more aces up their sleeve, one of which is called Fuzzy Hash or fuzzy hashing.

Here's an example:

Ivanov Ivan Ivanovich 1950 Moscow Nakhimovtsev street 29, apt. 31 +79260014589

converted to hash:

ICAgMTk1MCAgLiAyOSwgLiAzMSArNzkyNjAwMTQ1ODk =

This is exactly how our fingerprints are converted - into one single hash. And what happens if Ivan Ivanov changes his phone number?

For example:

Ivanov Ivan Ivanovich 1950 Moscow Nakhimovtsev street 29, apt. 31 +79260014588

converted to hash:

ICAgMTk1MCAgLiAyOSwgLiAzMSArNzkyNjAwMTQ1ODkKCg ==

Having changed only 1 digit in the phone number, we already have a new hash and a new identity, but has the essence of Ivan Ivanov changed? No, it hasn't changed. What should be done in this case?

To solve this problem, fuzzy hashing is used, which allows a set percentage of changes to be ignored before the collected data is converted into a hash.

In simple terms: if Ivanov Ivan Ivanovich changes his phone, city, street or apartment, we will still recognize him. The anti-fraud system acts in exactly the same way, collecting information about you and comparing it with what it already has. This is an excellent mechanism for catching carders, fraudsters and the like who, using an antidetect, can "bypass" only the protection of the Browserleaks site.
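
A minimal sketch of the fuzzy-matching idea on the Ivanov example. This is a toy piecewise comparison of my own, not a real ssdeep/CTPH implementation: hash the record field by field instead of as a whole, then measure how many pieces still match.

```typescript
// Toy piecewise "fuzzy" comparison (illustrative, not a real fuzzy-hash algorithm):
// hash each field separately, then measure the overlap between two records.

import { createHash } from "crypto";

function pieceHashes(record: string): Set<string> {
  return new Set(
    record.split(/\s+/).map((field) => createHash("sha256").update(field).digest("hex").slice(0, 8))
  );
}

function similarity(a: string, b: string): number {
  const ha = pieceHashes(a);
  const hb = pieceHashes(b);
  let common = 0;
  for (const h of ha) if (hb.has(h)) common++;
  return common / Math.max(ha.size, hb.size);
}

const before = "Ivanov Ivan Ivanovich 1950 Moscow Nakhimovtsev street 29, apt. 31 +79260014589";
const after  = "Ivanov Ivan Ivanovich 1950 Moscow Nakhimovtsev street 29, apt. 31 +79260014588";

console.log(similarity(before, after)); // ~0.9: one changed field, still clearly the same person
```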

In simple words: if your antidetect produces unique fingerprints - Canvas, for example - throw it away. It's shit.
 