Ask A Lawyer: FAQs Non-Covered Entities Need to Know
Healthcare data privacy has expanded far beyond HIPAA. For organizations that don't fall under HIPAA's jurisdiction—pharmaceutical and medical device companies, health and wellness apps, telehealth platforms, and many direct-to-consumer (DTC) digital health businesses—a complex patchwork of federal and state privacy laws now governs how they can collect, use, and share health-related information.
To help marketers navigate this landscape, we sat down with Mason Fitch, a health privacy lawyer at Kelley Drye who specializes in marketing and data privacy compliance. Mason answered the most pressing questions about operating in this regulated environment.
Watch Mason answer these questions in the video below or keep reading for an analysis of his responses.
When HIPAA doesn't apply, who regulates health data and what counts as health information?
Mason explains that when people think about health information, most only consider HIPAA. But over the past few years, this has become one of the biggest areas of legal development.
The Federal Trade Commission (FTC) regulates health information under its Section 5 enforcement authority, which covers deceptive or unfair trade practices. The FTC has developed extensive guidance and enforcement actions around health information, particularly in recent years. They define health information broadly—generally, any information relating to an individual's health condition or diagnosis.
State Attorneys General also pursue enforcement actions under similar unfair and deceptive trade practices laws. They have additional enforcement authority under state comprehensive privacy laws, which govern "sensitive personal information." While the definition of sensitive information varies state to state, it generally works off the same concept: something that reveals a condition or diagnosis.
If you're advising a company or working in-house, you're dealing with a broad universe of information that far exceeds what HIPAA governs. At this point, there are probably 30 different regulatory bodies that care about this type of information. It's an area of significant risk given the number of laws and regulators involved.
👉 Listen to Mason answer this question
How should marketing teams approach this patchwork of privacy laws?
Should you follow the strictest state model or adopt a more flexible state-by-state approach? Mason acknowledges this is challenging, especially for online marketing where you don't always know who is where.
While there are ways to infer a person's location and adjust practices accordingly, that can get unwieldy. What Mason typically sees is marketing teams grouping different states into categories based on similar laws:
- States where you need consent before processing sensitive data
- States where you don't need consent before processing sensitive data
- States where it's become really difficult to use health data at all
Dividing states into broad categories like this strikes a good balance between addressing risk and preserving marketing opportunities.
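That bucketing logic can be sketched as a simple lookup. Everything below is illustrative only: the state assignments are hypothetical placeholders, not legal conclusions, and the category names are invented for this sketch.

```python
# Illustrative only: state assignments here are placeholders, not legal advice.
CONSENT_REQUIRED = {"CO", "CT"}   # hypothetical: opt-in consent before processing
NO_CONSENT_REQUIRED = {"UT"}      # hypothetical: notice-based regimes
HIGH_RESTRICTION = {"WA"}         # hypothetical: My Health My Data-style regimes

def category_for(state: str) -> str:
    """Return the compliance bucket for a two-letter state code."""
    if state in HIGH_RESTRICTION:
        return "restrict"   # avoid using health data for marketing at all
    if state in CONSENT_REQUIRED:
        return "consent"    # obtain opt-in consent first
    if state in NO_CONSENT_REQUIRED:
        return "notice"     # notice suffices under this sketch's assumption
    return "consent"        # unknown states fall back to the strictest common bucket
```

Defaulting unknown states to the strictest bucket mirrors the risk-averse posture Mason describes; a real program would keep this mapping current with legal review.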
👉 Listen to Mason answer this question
What is Washington's My Health, My Data Act and why does it matter?
My Health, My Data is a first-of-its-kind Washington State law governing consumer health data. As Mason notes, you might think "I don't process health data; I own a gym or grocery store." But the definition is extraordinarily broad.
The law can reach organizations like grocery stores, gyms, and health and wellness sites, because both the definition of consumer health data and the law's applicability are broad. This is a relatively new law, so enforcement activity will reveal how it plays out. But a strict reading of the statute extends it to processing activities that occur within Washington State—arguably including any processing that involves servers located in Washington.
Given Microsoft's and Amazon's presence in Washington, many people's health data could be processed in Washington regardless of where those people are actually located.
The most important reason this matters: the law has a private right of action, which means individuals can sue directly. You're not just dealing with 30 regulators in the health world, but with millions of potential plaintiffs and eager plaintiffs' attorneys.
👉 Listen to Mason answer this question
What did the Healthline case teach us about purpose limitation?
The Healthline case was an enforcement action in California under the CCPA. Mason explains the case involved a health informational website alleged to have pixels collecting information about articles individuals read.
For example, an article titled "Recently diagnosed with diabetes? Here are five things you need to know." The allegation was that marketing pixels were recording that a person viewed a particular article, and that information may have been used for advertising purposes.
The enforcement tool California regulators used was called the "purpose limitation test." This was notable for two reasons:
- It was the first time this particular CCPA provision was used in an enforcement action
- It's a subjective test that makes compliance more difficult
The purpose limitation test requires that data collected for one purpose not be used for another purpose unless the new use reasonably aligns with consumer expectations. So when evaluating whether data can be used for marketing, you need to perform a subjective analysis: does that use align with the original purpose for which the data was collected, and with the consumer's expectations given the context, the privacy policy, and the type of site?
It's a much more complicated analysis, and it's a stronger tool because regulators can take a point of view on whether something aligns with expectations.
Going forward, the Healthline case doesn't really change what we already knew: regulators care about health information, health-adjacent information, and particularly the use of health-related information for marketing. It's novel in that it introduced another tool, but it reinforces that health marketing groups need to continue to be careful and intentional about how they use data.
👉 Listen to Mason answer this question
When does website data become regulated and what raises the highest risk?
Mason is clear: the data becomes regulated as soon as you have it.
If browsing activity on a health-related site is collected or logged at any point—whether directly by you or, critically, by one of your service providers (which is how it's most often collected)—it becomes regulated at the point of collection.
Many state comprehensive privacy laws require consent prior to processing, and collection is a type of processing. So it becomes regulated as soon as you can see it, as soon as it's logged.
What raises the highest risk?
When evaluating tracking technologies in healthcare, consider two aspects:
- Identifiability: How confident are you that the data relates to a specific person, and who is that person? Consider whether browsing activity is logged from a logged-in or logged-out experience, and what data can be used in the logged-out experience to infer identity. Even logged-out activity can pretty easily be associated with more identifiable information.
- Certainty of health condition: How certain is it that the browsing activity indicates a particular condition or diagnosis?
For example, on a logged-out site, someone reading an informational article about diabetes could be curious, reading for a family member, or doing research for a school project—you could argue you don't know that person has diabetes. There's less sensitivity there, but it's still not zero.
Compare that to a checkout page for a purchase that requires a prescription. You're much more certain that the person making that purchase has the associated condition or diagnosis. Regulators will look very carefully at that type of data for advertising purposes.
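The two-axis evaluation above can be expressed as a toy rubric. The tier names and the low/high inputs are invented for illustration; a real assessment is a legal judgment, not a lookup.

```python
def risk_tier(identifiability: str, certainty: str) -> str:
    """Toy rubric combining the two axes above; inputs are 'low' or 'high'.
    Tier names are invented for this sketch."""
    if identifiability == "high" and certainty == "high":
        return "highest"    # e.g. a logged-in checkout for a prescription product
    if identifiability == "high" or certainty == "high":
        return "elevated"   # one axis is high: still merits caution
    return "lower"          # e.g. logged-out reading of an informational article
```

The point of sketching it this way is that neither axis alone clears the data: logged-out informational browsing lands in the lowest tier, but as Mason notes, even that tier is not zero risk.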
👉 Listen to Mason answer this question
How risky are integrated patient journeys with active tracking?
Anytime trackers or pixels are present on a site that relates to healthcare, there's a risk that information is sensitive information—or at the very least, the type of information regulators will care about and pursue a Healthline-type enforcement action for.
Even if it's not necessarily sensitive information by strict legal definition, it's still data that regulators care about and will apply tools like the purpose limitation test to.
Anytime you're on any health-related site and there are pixels and trackers, regardless of where a consumer is in the funnel, it's something you need to care about.
Can that data become PHI?
This requires an entity-by-entity analysis and legal determinations. But in general, if an entity is a covered entity or business associate under HIPAA and it's exposed to health information, there's a risk that data becomes PHI.
The Department of Health and Human Services has actually put out guidance about how to determine whether certain browsing activity is PHI. When you're dealing with a HIPAA-regulated entity, this is an area of extreme caution for marketing teams.
👉 Listen to Mason answer this question
What are the legal risks of using pixels for marketing?
There's a lot of risk, Mason explains. If you're working with data that is determined to be sensitive information—or data that is simply more sensitive, even if it doesn't strictly meet a law's definition of sensitive information—many considerations flow from that.
More practically:
- Consent requirements: In many states, if you're dealing with sensitive personal information, comprehensive privacy laws require that you obtain consent before processing the information.
- Notice requirements: Traditional privacy principles apply. When you're dealing with more sensitive information for purposes that aren't typical (or are typical only in this space), you might need to provide more notice than you're used to giving for other types of personal information.
The main risk is not complying with the many requirements that come from state privacy laws once you're processing sensitive personal information—namely consent.
But as we've seen in the Healthline case, it's not just about consent. There's also a more subjective analysis of whether processing activity is appropriate given individuals' reasonable expectations in relation to that data.
👉 Listen to Mason answer this question
Do session replay tools and chat widgets create legal risk?
The simple answer, Mason says, is yes—they do and they can.
Anytime any tool is exposed to or has the chance to collect sensitive information, that should be considered legally risky. You and your teams should approach any type of tool like this with intentionality. You need to be cognizant of:
- What type of information this tool is collecting
- For what purpose
- Who it can be disclosed to
This is true even if these tools aren't collecting traditional personally identifiable information like a name or email address.
One of the many lessons from privacy enforcement over the past few years: that type of information isn't required for browsing data to be considered identifiable, or to be considered personal data governed by state privacy laws.
Something like an IP address or device ID, given the advertising ecosystem we're in, can be used to identify someone directly. If it's something as simple as an IP address, it's likely going to be personal data. And attached to that IP address might be a session replay of a person browsing a health website. What information could be gleaned from that browsing activity?
When you're dealing with healthcare, even something as simple as browsing a health information site is something regulators are going to care about, which makes it legally risky and requires quite a bit of intentionality when integrating that tool onto your site.
👉 Listen to Mason answer this question
How can marketers reach patients if ad platforms restrict health targeting?
Mason corrects the premise: this isn't an "if" question—it's already happening. Ad platforms are restricting health-related advertising pretty widely. The short story is they don't want this type of data in many cases.
So how can you still have an effective and compliant marketing program?
Contextual advertising is an obvious alternative. It targets ads based on the content a person is currently viewing, rather than on information collected about them over time—it's not behavioral advertising. It can be effective, and it's certainly much safer from a processing standpoint.
But many organizations still want to do behavioral advertising. The challenge is doing it without using or disclosing health information.
When using tracking technology, health information can be included in all kinds of data:
- In a URL
- The name of an event
- What buckets you create on an ad platform
It requires a proactive compliance program: you need to know exactly what type of data you're collecting and have processes in place to ensure you're not disclosing anything that would constitute health information.
This means scrubbing URLs, being intentional about what you name particular events, and maintaining a comprehensive program that prevents the disclosure of health information. Without that, legal risk increases—and you risk violating platform policies as well.
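A URL-scrubbing step like the one described might look like the sketch below. The `SENSITIVE_TERMS` denylist is a hypothetical placeholder: a real program would curate such a list with legal review and likely use far more robust matching than substring checks.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical denylist; a real program would curate this with legal review.
SENSITIVE_TERMS = {"diabetes", "diagnosis", "condition", "prescription"}

def _clean(text: str) -> bool:
    """True if no denylisted term appears in the text."""
    lowered = text.lower()
    return not any(term in lowered for term in SENSITIVE_TERMS)

def scrub_url(url: str) -> str:
    """Drop path segments and query parameters containing health-related
    terms before the URL is shared with an ad platform."""
    parts = urlparse(url)
    path = "/".join(seg for seg in parts.path.split("/") if _clean(seg))
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if _clean(k) and _clean(v)])
    return urlunparse(parts._replace(path=path or "/", query=query))
```

For example, `scrub_url("https://example.com/articles/diabetes-guide?topic=diabetes&utm_source=search")` keeps the `utm_source` parameter but drops both the condition-revealing path segment and the `topic` parameter.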
👉 Listen to Mason answer this question
When does retargeting cross into unlawful profiling?
The laws here are relatively use-neutral, Mason explains. The question isn't necessarily what you're using it for, though that certainly matters (for example, with the purpose limitation test).
When assessing legal risk, it starts with: What type of data am I using?
If that data is health data or data that can constitute sensitive personal information under comprehensive privacy laws, any use of it for any purpose starts with some level of risk. And if you're using it for something far outside of expectations—like perhaps advertising in some contexts—the risk is even higher.
For retargeting or lookalike modeling, it depends on what type of information you're using for those seed audiences:
- Is it based on something very clearly and directly related to someone's health condition or diagnosis? That's more risky.
- Or is it something only tangentially related to someone's health? That's less risky.
How can companies stay on the right side?
This varies from marketing program to marketing program. There are different approaches:
- Get consent: "Yes, this is sensitive personal information and we're going to get consent to use it for this purpose." You have a consented data set.
- Sanitize the information: If there's a risk it's sensitive personal information or health information, you can sanitize it to make it less likely to constitute the type of information that would require consent.
In many instances, it's either going to be get consent or sanitize the information in such a way that consent is not required.
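The consent-or-sanitize choice can be sketched as a simple routing function. The field names and record schema here are hypothetical, and whether stripped data actually avoids a consent requirement is a legal question, not a code question.

```python
HEALTH_FIELDS = {"condition", "diagnosis", "rx"}   # hypothetical field names

def prepare_for_audience(record: dict) -> dict:
    """Route a record per the two approaches above: consented records pass
    through intact; otherwise health-related fields are stripped so the
    remainder no longer requires consent, under this sketch's assumption."""
    if record.get("consented"):
        return record
    return {k: v for k, v in record.items() if k not in HEALTH_FIELDS}
```

A record like `{"email": ..., "condition": "diabetes", "consented": False}` would reach the seed audience without the `condition` field; the same record with consent recorded would pass through whole.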
👉 Listen to Mason answer this question
Who's accountable if tracking continues after cookie opt-out?
The typical legal answer: it depends. But ultimately, who should be worried when this happens? The answer is both.
If you're on a website with a cookie banner where people can opt in or out, they opt out, and information is still sent—that's certainly a risk for the website hosting the tracker, even if a third party is doing the collecting.
The processing activity, the point of collection, is on your website. In many instances, the website has pretty high risk here.
For the vendor, it depends on how this configuration happened. What caused the information to still flow despite the individual's opt-out? If the cookie is not behaving as it should or not complying with certain standards, there is risk for the vendor.
But most likely, the area of higher risk is going to be the website, because that is the point of collection.
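Honoring an opt-out at the point of collection suggests a server-side gate in front of any vendor call. This sketch assumes a hypothetical event and consent schema; `send` stands in for whatever vendor call your stack actually uses.

```python
def forward_event(event: dict, consent: dict, send) -> bool:
    """Server-side gate honoring a cookie opt-out at the point of collection.
    `consent` maps a cookie category to the user's recorded choice (schema is
    hypothetical); `send` is the downstream vendor call."""
    category = event.get("category", "advertising")
    if not consent.get(category, False):   # no recorded consent means no send
        return False
    send(event)
    return True
```

Defaulting to "don't send" when no consent record exists keeps the website—the party Mason identifies as carrying the higher risk—from forwarding data after an opt-out, regardless of how the vendor's cookie behaves.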
👉 Listen to Mason answer this question
What enforcement trends should marketers watch?
The use of sensitive information for marketing has been a hot topic for regulators over the past several years, Mason notes. And we've seen that in plaintiffs' activity as well, though via different mechanisms.
Regulators have their own authority. Plaintiffs' firms have to find laws to pursue causes of action under, and the most common we've seen are wiretap laws and other laws with private rights of action, like the Video Privacy Protection Act.
But the common theme is the use of sensitive personal information—primarily in the health sphere, but to some extent in the financial industry—for marketing purposes.
What's one step you can take?
The first and most important step is to have a really comprehensive understanding of:
- What type of tracking technology and other advertising tech you have on your site
- What information that technology is collecting
That's the first step to an effective compliance program. It's the first step to introducing mitigations that would allow you to continue marketing while lowering your risk. And it's the first step to building a really effective, compliant marketing program in the health space.
👉 Listen to Mason answer this question
Privacy Compliance That Doesn't Kill Performance
For organizations that operate outside HIPAA's scope, navigating the FTC, state privacy laws, and emerging regulations like Washington's My Health, My Data Act can be dizzying.
But as complex as this regulatory patchwork is, it doesn't have to cripple your marketing effectiveness.
Freshpaint gives non-covered entities a privacy-first data foundation that keeps you compliant and lets you run modern, performance-driven marketing. Instead of turning off pixels and abandoning high-performing channels, you can:
- Replace risky third-party trackers with a BAA-covered tracking layer that filters health-related data server-side before it ever reaches ad platforms
- Enforce consent and governance at the data layer, so sensitive signals are either properly consented or fully sanitized
- Build compliant retargeting and lookalike audiences from first-party data, so you can reach high-intent prospects without exposing health information
- Send privacy-first, down-funnel conversion signals—like attended appointments or completed enrollments—back to ad platforms to unlock smarter optimization and lower acquisition costs
With Freshpaint, you get a unified system to safely collect, control, and activate health-related data across channels. You can expand beyond bottom-funnel search, regain confidence in measurement, and prove real ROI—without creating regulatory or platform policy risk.
Turn privacy from a constraint into a performance advantage. Talk to an expert today to see how Freshpaint can help your team grow responsibly.