Digital ID: Why Compliance Will Lead Us into a Trap
By Zain Erikat - On October 3rd, 2025, a third-party service provider used by Discord for identity verification suffered a breach that exposed roughly 70,000 records of users' private information on the web. That information had been collected for account verification: when a user shows suspicious activity, Discord's automated system flags the account and imposes enforcement actions, such as limiting access. To lift those restrictions, the user submits their government-issued documents, which an employee then reviews manually.
These kinds of measures are part and parcel of any online service with a large user base. Online retailers use them to deter concession abusers and scalpers; social media companies use them to confirm that a user is not a bot or someone with an agenda. This raises the question: should I trust a company with my identity and personal information, even for the sake of a safer, cleaner internet?
The first thing to point out is that, despite millions or sometimes billions of dollars' worth of infrastructure, companies sometimes leave the back door open. Take the "Tea app leak". Tea is, simply put, a dating-safety app for women: it provides a community of women who help each other flag potential partners who raise red flags or might pose a threat. Registration requires a form of identification: the user uploads a photo of a government document, which is then reviewed for verification. On top of that, Tea's privacy policy states that the data is deleted once verification is complete.
It all sounds good on paper, but reality, as usual, is cruel. Tea used a Firebase bucket (a cloud storage container) to hold the documents awaiting verification. A very curious person poking around the web stumbled upon a public Firebase bucket that required only a link to access. It held thousands of government documents belonging to Tea app users. That person shared the link with strangers online, and the hordes broke loose: the files were greedily downloaded, spread across the internet, and made available via torrents. On top of that, the users who had trusted Tea with their identities as well as their love lives found a website dedicated to the incident, one that scraped their profile pictures and let random internet users rank them by how attractive they were.
It was simply rotten luck for the affected women that a random person found the bucket and decided to share it. Thankfully, we also have cybersecurity researchers who constantly prowl the internet's infrastructure for vulnerabilities and, when they find one, alert the responsible organization so it can be fixed rather than sharing it with friends and family.
One such instance happened very recently. Security researchers alerted an identity verification company called IDMerit to an exposed database of theirs holding over a billion records on individuals across 26 countries. All of that data sat unprotected. Did someone with malicious intent already know about it and quietly mine it as a treasure trove under IDMerit's nose? Who knows?
These mistakes are so stupidly simple they send shudders down anyone's spine. Anyone with even basic knowledge of cloud infrastructure knows never to leave anything public and to grant access only when the need arises. Everything should sit behind locked doors, with the keys held close to the heart.
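To make that principle concrete: Firebase, the service Tea reportedly used, lets developers lock storage down with security rules. A minimal deny-by-default sketch might look like the following. This is an illustration of the general technique, not Tea's actual configuration:

```
rules_version = '2';
service firebase.storage {
  // Deny-by-default: no client can read or write any object.
  // Sensitive files such as verification documents would then be
  // handled only by trusted backend code using the Admin SDK,
  // which bypasses these client-facing rules.
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if false;
    }
  }
}
```

A rule set like this is the opposite of what Tea's exposed bucket effectively had: with it in place, merely knowing the link is not enough to download anything.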
Those are just two widely known examples. What if I told you that around 80% of organizations in the UK are exposed through vulnerabilities like these? Businesses, charities, and the like are leaving the door open to attacks that will leak their data: individuals' addresses, phone numbers, emails, and whatever else was entrusted to the cloud. It is no longer a question of whether they will be breached, but how many times they already have been.
Clearly, ignoring basic security rules when handling sensitive data teaches very harsh lessons, even if those lessons become unlearned with time. But set aside the malicious outsiders for a moment and consider the people on the inside: the people you trust with your data.
Tea promised to erase the data once it was no longer needed (which turned out to be false), but what if it stays needed indefinitely? Going back to Discord, a policy update recently came under criticism. It makes it mandatory for users whose posting history, as analyzed by an automated system, suggests they are underage to provide a government-issued ID in order to lift restrictions on servers meant for adults. An account under those restrictions is said to be in "Teen mode".
Assuming Discord learned from its past lesson, and considering the nature of the platform, it sounds like a reasonable idea. Then people started digging and found that software backed by Peter Thiel, the co-founder of Palantir, was involved in processing those IDs. Discord subsequently severed ties with that software. The reason: it was found to be used in a US surveillance program, screening individuals and running them through a system to check whether they were already on a list.
Security researchers uncovered this, but in an unusual way: the whole architecture was accessible from the outside. It was fortunate that the system was left exposed, but what if it had not been? A better question still: which other services that require your ID are running it through some government surveillance program?
Australia, the UK, France, and Spain are moving toward requiring an ID to access social media or certain other content online, citing their citizens' safety. Those IDs will be processed by third-party government contractors who manage their own infrastructure, which raises the question: will they act in good faith, or take the path Tea took? You can leave an entire database exposed to the web, or leave the credentials needed to access it exposed instead. It is all too easy to slip up and leave traces that let unsavory parties reach that information, and from there, sell it on to anyone who wants to capitalize on your personal details.
Now suppose those databases cannot be hacked and you are perfectly safe. The revelation that Discord's user data was going to be run through a government surveillance program suggests that all those flashy companies and software-as-a-service providers might carry malicious intent. In a sense it is not their fault: government backing means a virtually infinite source of income, and all you have to do to keep the cash flowing and the workers paid is comply with the contract. It is, in my opinion, simply "what passes for survival".
You might say: I have nothing to worry about if I do nothing wrong. But that kind of thinking has always handed governments power over their people. Everyone knows how much the world of intelligence runs on paranoia. Where did you sleep tonight? Where do you work? How do you spend your income? Who are you related to? Who are your friends? Questions like these let someone draw lines between the dots until a picture emerges, and the picture drawn may not match reality.
Now suppose those questions become: Why did you react to this post? Why did you write that comment? Why are you following this person on social media? Why do these posts appear in your feed on this app? Now they can draw bigger pictures, ones that might implicate you in something you had nothing to do with.
When uploading your personal details becomes the norm for using an online service, ordinary private citizens will start turning up on watch lists for no reason. It will leave everyone exposed, and for what? To keep children safe from the evils of the internet? Why all of this for something that common-sense parenting can address? Why now, when children have been exposed to all of this since the internet's inception? Why promise me it will be fine when it has been proven otherwise? Why do the same hands that want to grab all my personal information keep comforting me?