I’ve been wearing a fitness tracker for years — mostly out of curiosity about sleep patterns and step counts, sometimes to keep myself honest about workouts. Like many people, I don’t think of that little device as a surveillance tool. Yet the data it generates — heart rate, sleep stages, oxygen saturation, location when paired with a phone — is intensely personal. The question I keep coming back to is simple: are our laws keeping up with the rapid growth of consumer wearables and the ways companies share that health data?

Why this matters now

Wearables have moved from novelty to mainstream. Apple, Fitbit (now part of Google), Garmin, Oura, WHOOP and dozens of cheaper wristbands and smart rings are in millions of homes. These devices don’t just count steps; they detect irregular heartbeats, monitor stress, estimate respiratory rate, and sometimes flag potential COVID-19 symptoms. Developers and researchers are eager to use that data to build predictive health tools. Insurers see opportunities to tailor premiums. Employers explore wellness programs tied to devices. All of this promises benefits — earlier detection, more personalised care, new research insights — but it also multiplies risk.

What current privacy laws cover — and where they fall short

Different jurisdictions take different approaches. Broadly speaking, existing privacy laws were written in a pre-wearables world and struggle to reflect the realities of continuous biometric monitoring and commercialized health datasets.

| Law | Scope | Strengths | Gaps for wearables |
| --- | --- | --- | --- |
| GDPR (EU) / UK GDPR and Data Protection Act 2018 | Personal data, including special categories such as health | Strict consent standards, data subject rights, heavy penalties | Uneven enforcement against consumer platforms; weak control of secondary commercial use and algorithmic profiling |
| HIPAA (US) | Protected health information (PHI) held by covered entities | Strong protections for clinical records | Most consumer wearables fall outside HIPAA unless data flows to providers or insurers |
| CCPA/CPA (US states) | Consumer personal information | Data access/deletion rights, opt-out of sale | Patchwork rules; weaker protections for health-specific sensitivity |

In short: GDPR-style regimes offer the most robust framework for sensitive data, but enforcement is uneven and companies often rely on broad consent or contractual terms. HIPAA protects clinical environments, but most wearables and health apps operate outside that bubble. That regulatory mismatch creates ambiguity about who controls wearable-derived health data and how it can be repurposed.

Common questions I hear — and my answers

Q: Can companies sell my health data?
A: It depends where you live. In some places companies can monetise de-identified or aggregated datasets. Under the GDPR, health data is a special category that cannot be processed, let alone sold, in identifiable form without explicit consent or another narrow legal basis, but companies often claim the data is anonymised. The reality is complex: re-identification risks remain, especially when datasets are combined.

Q: Is sharing my data with an app the same as consenting?
A: Not really. Consent in practice is often buried in long terms and conditions. True informed consent means people understand what will be collected, how it will be used, whether it’s shared with third parties, and for how long. Many apps fail to make these uses clear.

Q: Will my insurer or employer see my wearable data?
A: They could — and some already do. Insurers may offer discounted premiums tied to wellness program participation. Employers sometimes build incentives around step challenges. That creates potential for discrimination or coercion: people may feel pressured to share data to keep a job or a discount. Legislators are only just waking up to these risks.

Key technical and policy challenges

There are several recurring challenges that legislation must confront:

  • Defining health data: Is heart rate “health data” if recorded during exercise? Laws need clearer thresholds for when biometric signals become sensitive health information.
  • Anonymisation vs re-identification: Most laws treat anonymised data as outside the strictest protections, but combining datasets can re-identify people. We need stricter standards and practical tests for anonymisation (see the sketch after this list).
  • Secondary uses and resale: Companies often use data for product improvement, research, and commercial purposes. Consumers rarely have meaningful control over downstream sales.
  • Cross-border flows: Wearable platforms are global, so data often crosses borders into jurisdictions with weaker protections.
  • Algorithmic transparency: When companies use machine learning on wearable data to generate health predictions, those algorithms should be auditable and explainable, especially when they inform medical or insurance decisions.
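
To make the re-identification point concrete, here is a minimal sketch, in Python with pandas, of a k-anonymity check: finding the size of the smallest group of people who share the same quasi-identifiers in a released dataset. The column names and example values are hypothetical, and real anonymisation testing involves far more than this, but a k of 1 means at least one person can be singled out even though no names appear.

```python
# Minimal k-anonymity check: a sketch, not a compliance tool.
# Column names ("age", "postcode", "resting_hr") are illustrative assumptions.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest group sharing identical quasi-identifier values.
    A small k means individuals are easy to single out, even without names."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return int(group_sizes.min())

# An "anonymised" wearable export that still carries quasi-identifiers.
records = pd.DataFrame({
    "age":        [34, 34, 51, 51, 29],
    "postcode":   ["EC1", "EC1", "SW9", "SW9", "N7"],
    "resting_hr": [58, 61, 72, 70, 55],   # the sensitive attribute
})

print(k_anonymity(records, ["age", "postcode"]))  # prints 1: the N7 record is unique
```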

Examples that highlight the stakes

Apple has set a high bar on device security and user control with HealthKit and end-to-end encryption for some data. That’s a model worth emulating. But even Apple has shared data with third parties for research when users opt in — a powerful demonstration of benefit, but also of how easily sensitive data can leave a consumer’s device.

Then look at Google’s acquisition of Fitbit. Regulators in the US and EU scrutinised that deal because of concerns about combining location and health data with Google’s ad business. The fact that regulators raised questions shows the potential for harm, but the final remedies felt limited to many privacy advocates.

What lawmakers and regulators should do

From my reporting and conversations with privacy experts, a few clear prescriptions emerge:

  • Update legal definitions to explicitly include continuous biometric signals as potentially sensitive health data.
  • Require meaningful consent mechanisms: concise, standardised notices that explain primary and secondary uses, retention periods, and the right to revoke consent.
  • Ban coercive uses by employers and insurers, or at minimum require strict opt-in and non-discrimination safeguards.
  • Set robust anonymisation and de-identification standards with independent testing to reduce re-identification risk.
  • Mandate transparency around algorithmic decision-making when health predictions affect care, employment or insurance outcomes.
  • Encourage technical norms like local data processing on devices, default encryption in transit and at rest, and data minimisation (a minimal sketch of minimisation follows below).
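
As a rough illustration of data minimisation, here is a sketch, assuming a hypothetical daily-summary schema and per-minute heart-rate samples, of aggregating readings on the device so that only a coarse summary ever leaves it. The resting-heart-rate heuristic (the mean of the lowest tenth of samples) is illustrative, not how any particular vendor computes it.

```python
# Data-minimisation sketch: reduce a raw heart-rate stream to a daily summary
# before anything is transmitted. Schema and heuristic are assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class DailySummary:
    date: str
    resting_hr: int   # beats per minute
    min_hr: int
    max_hr: int

def summarise(date: str, hr_samples: list[int]) -> DailySummary:
    """Collapse many per-minute readings into a few coarse numbers."""
    lowest = sorted(hr_samples)[: max(1, len(hr_samples) // 10)]
    return DailySummary(
        date=date,
        resting_hr=round(mean(lowest)),
        min_hr=min(hr_samples),
        max_hr=max(hr_samples),
    )

samples = [62, 58, 55, 80, 95, 120, 70, 64]   # raw readings stay on the device
print(summarise("2024-03-01", samples))       # only this summary is uploaded
```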

What consumers can do today

While legislation evolves, there are practical steps people can take:

  • Review app permissions and privacy settings. Turn off data sharing with third parties if you don’t need it.
  • Use device features to limit data collection (e.g., disable continuous GPS or health metrics you don’t use).
  • Prefer vendors with strong privacy defaults and transparent policies — Apple and some smaller health-focused startups tend to be better on this front than ad-driven platforms.
  • Ask questions before joining wellness programs linked to employers or insurers. Insist on written guarantees about how data will be used and whether participation is voluntary.
  • Support policy changes: contact representatives and back organisations pushing for stronger protections for biometric and health data.

I don’t think we need to halt innovation. Wearables hold genuine promise for early diagnosis, research, and personalised health. But that promise must be balanced with enforceable rights and technical safeguards that reflect the sensitivity of the data. If laws lag, trust erodes — and with it, the very research and health gains people tout when they promote wearable tech.