The Unseen Risks of Data Profiling: How Your Digital Footprint Shapes Opportunities and Exclusions

The Invisible Web of Data Tracking

In today's data-driven world, your digital footprint extends far beyond the websites you visit or the purchases you make. Financial transactions, location data, and even seemingly anonymous online interactions contribute to extensive personal profiles that companies and data brokers compile. These profiles fuel targeted advertising but also influence hiring decisions, insurance rates, and financial opportunities in ways most individuals are unaware of.

How Financial Institutions and Data Brokers Trade Your Data

Many banks and financial institutions sell anonymized transaction data to third parties, including advertisers and hedge funds. While direct identifiers like names may be stripped, research has repeatedly shown that "anonymous" data can often be re-identified when cross-referenced with other datasets.

Notable Cases of Re-Identification

  • A 2015 study demonstrated that 90% of individuals could be re-identified using just four credit card transactions (MIT Study).
  • The 2025 Gravy Analytics breach revealed that location data could expose visits to sensitive sites such as therapists' offices (Biometric Update).

Mental Health Data and the Hidden Market

Many assume that health data is protected under HIPAA, but HIPAA covers only healthcare providers, health plans, and their business associates, not the apps, retailers, and data brokers that handle the same kinds of information. Mental health-related purchases, including therapy sessions, medication, and even mental health apps, can be tracked and sold by data brokers. Investigations have uncovered brokers selling lists of individuals identified as having conditions such as depression, ADHD, or anxiety (Duke Tech Policy).

AI-Powered Hiring Discrimination: A Silent Exclusion

AI recruitment tools are increasingly used to screen job candidates. While companies claim these systems reduce bias, studies suggest they frequently reinforce existing inequalities. Resume gaps, location history, and online activity can serve as proxies for mental health and stability, influencing hiring decisions without candidates' knowledge.

AI in Hiring: A Double-Edged Sword

  • AI Resume Screening Bias: A 2024 University of Washington study found AI screeners favored resumes with white-associated names 85% of the time and male-associated names 89% of the time (UW Study).
  • Neurodivergent Candidates at a Disadvantage: Many AI hiring tools rely on personality assessments whose standardized scoring can penalize candidates with ADHD or autism (HR Dive).
  • Culture Fit Labeling: AI hiring tools score candidates against a company's definition of "culture fit." This vague metric is often shaped by employer preferences, reinforcing homogeneity and excluding diverse backgrounds and ways of thinking.
  • Job-Hopping Risk Profiling: AI models infer job stability by analyzing employment history, penalizing candidates with frequent job changes. These decisions are made from aggregated data without individual context (see the sketch after this list).
  • Legal Precedents: In Mobley v. Workday (2024), the plaintiff alleged that Workday's AI-driven screening tools discriminated on the basis of race, age, and disability, including mental health conditions (Clark Hill).
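
To make the job-hopping proxy concrete, here is a minimal sketch of how a screening pipeline might reduce a work history to "stability" features when only start and end dates are available. The field names, thresholds, and flagging rule are hypothetical and not drawn from any specific vendor.

```python
# Hypothetical sketch: a work history reduced to "stability" features.
# All field names and thresholds are illustrative, not any vendor's logic.
from datetime import date

employment_history = [  # one candidate's jobs, oldest first
    {"start": date(2018, 1, 1), "end": date(2019, 3, 1)},
    {"start": date(2019, 9, 1), "end": date(2021, 6, 1)},
    {"start": date(2022, 4, 1), "end": date(2024, 5, 1)},
]

def stability_features(jobs):
    """Reduce start/end dates to tenure and gap features, with no context."""
    tenures = [(j["end"] - j["start"]).days / 365.25 for j in jobs]
    gaps = [(jobs[i + 1]["start"] - jobs[i]["end"]).days / 30.0
            for i in range(len(jobs) - 1)]
    return {
        "avg_tenure_years": round(sum(tenures) / len(tenures), 2),
        "max_gap_months": round(max(gaps, default=0.0), 1),
        "job_changes": len(jobs) - 1,
    }

features = stability_features(employment_history)
# A naive screener might apply blunt thresholds like these, treating a
# 10-month gap (illness, caregiving, a layoff) the same as any other.
flagged = features["max_gap_months"] > 6 or features["avg_tenure_years"] < 2
print(features, "flagged:", flagged)
```

The point is that a gap caused by illness, caregiving, or a layoff produces exactly the same feature value as any other gap, which is how individual context disappears from the decision.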

The Flow of Data: How AI Companies Use Purchased Data

  1. Data Collection: Banks, retailers, and online platforms collect user data from transactions, browsing habits, and subscriptions.
  2. Data Brokerage: Data brokers purchase this data, aggregate it, and sell anonymized profiles to various industries.
  3. AI Recruitment Integration: AI hiring platforms buy data from brokers to refine their candidate assessment models.
  4. Employer Implementation: Companies use AI hiring tools, often unaware of the hidden biases embedded in the purchased data (a simplified sketch of this flow follows).
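
The sketch below is a toy model of that four-step flow, assuming a broker sells profiles keyed on quasi-identifiers such as ZIP code and age band. Every field name, category, and scoring rule is invented for illustration; it is not any real vendor's pipeline.

```python
# Toy model of the four-step flow above. All fields and rules are invented.

def collect(bank_records):
    """Step 1, collection: raw transactions tied to named account holders."""
    return bank_records

def broker_aggregate(records):
    """Step 2, brokerage: strip names, keep behavior and quasi-identifiers."""
    return [{"zip": r["zip"],
             "age_band": r["age_band"],
             "spend_categories": r["categories"]}  # e.g. {"therapy", "pharmacy"}
            for r in records]

def enrich_candidate(candidate, broker_profiles):
    """Step 3, AI recruitment integration: match a candidate to a purchased
    profile on quasi-identifiers and attach the inferred traits."""
    for p in broker_profiles:
        if p["zip"] == candidate["zip"] and p["age_band"] == candidate["age_band"]:
            candidate["inferred"] = p["spend_categories"]
    return candidate

def screen(candidate):
    """Step 4, employer implementation: a score quietly shaped by purchased data."""
    penalty = 0.2 if "therapy" in candidate.get("inferred", set()) else 0.0
    return candidate["resume_score"] - penalty

profiles = broker_aggregate(collect([
    {"name": "A. Person", "zip": "98103", "age_band": "25-34",
     "categories": {"therapy", "pharmacy"}},
]))
applicant = {"zip": "98103", "age_band": "25-34", "resume_score": 0.9}
print(screen(enrich_candidate(applicant, profiles)))  # ~0.7, penalized without explanation
```

Note that the account holder's name never reaches the employer, yet an inference derived from identified financial data still shapes the candidate's score.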

Why Anonymized Data Is Not Truly Anonymous

Companies claim to use anonymized data, but research consistently shows that it can be re-identified with minimal effort. When multiple datasets (e.g., location tracking, purchase history, browsing behavior) are cross-referenced, individuals can be pinpointed with alarming accuracy.

  • Cross-Referencing Risks: A combination of demographic, location, and behavioral data allows companies to infer identities (see the sketch after this list).
  • Pattern Recognition: AI models detect behavior patterns that lead to re-identification even when no direct identifiers exist.
  • Data Brokerage Loopholes: Brokers market data as "anonymous," but many buyers have the tools to re-identify users and tailor their decisions accordingly.
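
Here is a minimal sketch of such a linkage attack, assuming the buyer holds an auxiliary identified dataset (scraped profiles, voter rolls, a prior breach) that shares quasi-identifiers with the "anonymous" records. All records below are fabricated; the join logic is the point. Latanya Sweeney's well-known finding that ZIP code, birth date, and sex alone uniquely identify a large share of the U.S. population is what makes joins like this work.

```python
# Minimal linkage-attack sketch. All records are fabricated for illustration.

anonymized = [  # "anonymous" broker data: no names, but quasi-identifiers remain
    {"zip": "98103", "birth_year": 1990, "sex": "F", "visits": ["therapist", "pharmacy"]},
    {"zip": "98103", "birth_year": 1971, "sex": "M", "visits": ["gym"]},
]

auxiliary = [  # identified data from another source
    {"name": "Jane Roe", "zip": "98103", "birth_year": 1990, "sex": "F"},
    {"name": "John Doe", "zip": "98103", "birth_year": 1971, "sex": "M"},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "birth_year", "sex")):
    """Join on quasi-identifiers; a unique match attaches a name to a record."""
    matches = []
    for a in anon_rows:
        hits = [x for x in aux_rows if all(x[k] == a[k] for k in keys)]
        if len(hits) == 1:  # unique combination of quasi-identifiers => re-identified
            matches.append((hits[0]["name"], a["visits"]))
    return matches

print(reidentify(anonymized, auxiliary))
# [('Jane Roe', ['therapist', 'pharmacy']), ('John Doe', ['gym'])]
```

Nothing in the "anonymous" dataset contained a name, yet the sensitive attribute (a therapist visit) ends up attached to one.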

The Limits of Transparency and Regulation

Current U.S. privacy laws provide limited protections against data-driven discrimination. The lack of oversight allows data brokers to continue collecting, selling, and combining data, leading to unintended but deeply impactful consequences.

  • Anonymization Loopholes: Many companies claim data is anonymized, but without strict regulations, re-identification remains easy.
  • Lack of Hiring Transparency: Companies rarely disclose how AI hiring tools function, preventing applicants from knowing why they were rejected.
  • Minimal Legal Protections: Unlike the EU's GDPR, the U.S. lacks comprehensive laws preventing discriminatory hiring based on inferred traits like mental health or neurodivergence.

What Can Be Done?

Individuals

  • Use privacy-focused tools like Brave Browser and Tor to limit tracking.
  • Regularly opt out of data brokers through services like OptOutPrescreen.
  • Be mindful of online activity and how it might be interpreted by AI-driven hiring systems.
  • Turn off location tracking on your phone when you don't need it.
  • Restarting or powering off your phone once a day can reduce some forms of persistent tracking.

Policy Changes Needed

  • Stronger Data Privacy Laws: The U.S. needs GDPR-like protections, restricting the sale of mental health-related data.
  • Transparency Requirements for AI Hiring Tools: Companies should be required to disclose what data their AI systems use for screening.
  • Bans on Discriminatory Proxies: Resume gaps, therapy-related transactions, and location data should not be used in hiring decisions.

The widespread use of data profiling presents unseen risks for individuals navigating the job market and financial systems. While these technologies offer efficiencies, they also create hidden biases that exclude qualified candidates based on proxies for health, stability, and economic background. Awareness, regulatory action, and corporate responsibility are needed to ensure these systems serve fairness rather than perpetuate systemic discrimination.