Compliance / Healthcare AI

HIPAA-compliant AI development — what's actually possible.

The vendor ships the substrate. The covered entity signs the BAA.

TL;DR

True HIPAA compliance is a covered-entity certification — the application vendor (Wolrix) ships the substrate, the client signs the BAA and runs certification. At Wolrix we deliver HIPAA-aware AI builds: encrypted-at-rest schemas, audit log table from day one, row-level security in Postgres, secrets in Vercel-managed env. We shipped a HIPAA-aware telemedicine platform in 8 weeks — doctor portal, video consults, e-prescriptions, Stripe, S3. AI features always sit behind human-in-loop on irreversible PHI actions. We use the three-layer pattern (RAG + function calling + human approval) to keep AI from inventing clinical content.

What HIPAA actually requires

The phrase “HIPAA-compliant software” is a category error.

HIPAA is a regulatory framework that applies to covered entities — healthcare providers, health plans, and clearinghouses that handle PHI. Software vendors are business associates, governed by a Business Associate Agreement (BAA) signed with the covered entity.

A vendor cannot be “HIPAA-compliant” in isolation. The vendor builds technical safeguards that hold up under audit. The covered entity runs the compliance program, signs BAAs with every business associate (hosting, AI provider, email, analytics), and goes through the formal certification.

What to ask a vendor is not “are you HIPAA-compliant?” but “does your application layer pass a real HIPAA audit when our covered entity runs it?” For Wolrix builds the answer is yes — we've shipped builds that have passed.

Responsibility split

Vendor vs covered entity

What Wolrix ships (the substrate)

  • Encrypted-at-rest schemas in Postgres
  • Audit log table on every PHI read and write, shipped day one
  • Row-level security so a clinician sees only their patients
  • Secrets in Vercel + AWS-managed env, never in code
  • Role-based access on every API route
  • Human-in-loop gates on irreversible PHI actions
  • AI provider routing under BAA-eligible deployments (Bedrock or Azure OpenAI)
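The substrate items above are small, auditable pieces. As one illustration, the audit-log requirement reduces to a single append-only choke point that every PHI query passes through. The shapes and names below are a minimal sketch, not Wolrix's actual schema:

```typescript
// Minimal append-only audit record for PHI access.
// Field names are illustrative, not a production schema.
type PhiAction = "read" | "write";

interface AuditEntry {
  actorId: string;   // clinician or service account
  patientId: string; // whose PHI was touched
  action: PhiAction;
  resource: string;  // e.g. "clinical_note"
  at: string;        // ISO-8601 timestamp
}

const auditLog: AuditEntry[] = [];

// Every PHI read and write routes through this single choke point.
function recordPhiAccess(
  actorId: string,
  patientId: string,
  action: PhiAction,
  resource: string
): AuditEntry {
  const entry: AuditEntry = {
    actorId,
    patientId,
    action,
    resource,
    at: new Date().toISOString(),
  };
  auditLog.push(entry); // append-only: entries are never updated or deleted
  return entry;
}
```

On a real build this wrapper sits around every database call that touches PHI, so the audit table and the data access can never drift apart.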

What the covered entity owns (the certification)

  • Signing the BAA with hosting and AI vendors
  • Designating a HIPAA security officer
  • Running the certification process and audit
  • Workforce training and access reviews
  • Incident response process and breach notification
  • Annual risk assessment
  • Patient-facing notice of privacy practices

Anti-hallucination architecture

The three-layer pattern for healthcare AI

AI is a tool behind a human approval gate on irreversible actions, never the final decision layer. This is the architecture we ship on every healthcare build.

Layer 1

Retrieval against your data

AI features pull context from your authoritative data store (Postgres + pgvector or a managed RAG layer), not from the model's training set. The model never invents a drug name, a dosage, or a patient detail — it cites a retrieved record.
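A minimal sketch of the retrieval step, with hand-made toy vectors standing in for pgvector embeddings (identifiers and vectors are illustrative):

```typescript
// Toy nearest-record retrieval over pre-computed embeddings.
// A real build queries pgvector or a managed RAG layer instead.
interface DocChunk {
  id: string;
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the most similar stored record; store is assumed non-empty.
function retrieve(query: number[], store: DocChunk[]): DocChunk {
  return store.reduce((best, r) =>
    cosine(query, r.embedding) > cosine(query, best.embedding) ? r : best
  );
}
```

The model's prompt is then built from the retrieved record's text, so answers cite authoritative rows instead of training-set memory.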

Layer 2

Function calling for structured output

Where AI proposes a write action, output is constrained to a typed schema (Zod or JSON Schema). The application validates the proposal against business rules before it ever touches the database.
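On real builds this is Zod or JSON Schema; here is a dependency-free sketch of the same idea, assuming a hypothetical prescription-proposal shape and a stand-in formulary (all names and limits are illustrative):

```typescript
// An AI-proposed write is validated against a typed shape and
// business rules before anything reaches the database.
interface PrescriptionProposal {
  patientId: string;
  drug: string;
  doseMg: number;
}

// Stand-in formulary; a real build checks an authoritative drug table.
const KNOWN_DRUGS = new Set(["amoxicillin", "ibuprofen"]);

function validateProposal(raw: unknown): PrescriptionProposal {
  const p = raw as Partial<PrescriptionProposal>;
  // Schema check: every field present with the right type.
  if (
    typeof p.patientId !== "string" ||
    typeof p.drug !== "string" ||
    typeof p.doseMg !== "number"
  ) {
    throw new Error("schema violation: malformed proposal");
  }
  // Business rules: only known drugs, dose in a sane range.
  if (!KNOWN_DRUGS.has(p.drug)) {
    throw new Error(`business rule: unknown drug ${p.drug}`);
  }
  if (p.doseMg <= 0 || p.doseMg > 1000) {
    throw new Error("business rule: dose out of range");
  }
  return { patientId: p.patientId, drug: p.drug, doseMg: p.doseMg };
}
```

A rejected proposal never touches Postgres; the model gets the validation error back and retries within the schema.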

Layer 3

Human-in-loop on irreversible actions

Anything that touches PHI in a way you can't undo — sending a prescription, signing a note, billing a claim, messaging a patient — routes through a human approval queue. AI drafts; clinician signs.
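The approval gate can be sketched as a pending queue whose execute step refuses to run without a recorded human sign-off (all names are illustrative):

```typescript
// Human-in-loop gate: AI output lands in a pending queue and is
// executable only after an explicit clinician approval.
type Status = "pending" | "approved" | "rejected";

interface Draft {
  id: string;
  kind: string;       // e.g. "prescription", "patient_message"
  body: string;
  status: Status;
  approvedBy?: string;
}

const queue = new Map<string, Draft>();

// AI drafts land here with no execution power of their own.
function submitDraft(id: string, kind: string, body: string): Draft {
  const d: Draft = { id, kind, body, status: "pending" };
  queue.set(id, d);
  return d;
}

// Only a human actor can flip a draft to approved.
function approve(id: string, clinicianId: string): Draft {
  const d = queue.get(id);
  if (!d || d.status !== "pending") throw new Error("nothing to approve");
  d.status = "approved";
  d.approvedBy = clinicianId;
  return d;
}

// The irreversible action refuses to run without recorded approval.
function execute(id: string): string {
  const d = queue.get(id);
  if (!d || d.status !== "approved") throw new Error("blocked: no human approval");
  return `executed ${d.kind} ${d.id}`;
}
```

Because the approval lives on the record itself, the audit log can show who signed off on every irreversible action.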

Real shipped pattern

The 8-week telemedicine reference build

Doctor-patient portal with live video consults, e-prescription flow, Stripe billing, S3 document handling, audit log on every PHI read and write. Doctor and patient roles, scoped sessions, consent capture. Video consults via WebRTC with recordings encrypted at rest. E-prescription workflow with provider sign-off. Stripe payments with insurance-aware itemization. Anonymized; references on call after NDA.

FAQ

HIPAA + AI questions

Is your software HIPAA-compliant?

HIPAA compliance is a covered-entity certification — the application vendor (Wolrix) ships the substrate, the covered entity signs the BAA and runs the certification. We ship encrypted-at-rest schemas, audit log table from day one, row-level security, secrets in managed env, and human-in-loop on irreversible PHI actions. That substrate passes real HIPAA audits.

Do you sign a BAA?

Wolrix is the development vendor, not the hosting or AI vendor. The BAA is between the covered entity and the entities that store or process PHI (Vercel, AWS, Anthropic, OpenAI). We help structure the BAA chain on the spec call and recommend BAA-eligible deployment paths (AWS Bedrock for Claude, Azure OpenAI for GPT).

Can AI write to a patient record without a human approving?

Not on a Wolrix build. AI features always sit behind a human approval step on irreversible PHI actions. AI drafts the note, the clinician signs. AI proposes a prescription, the prescriber approves. AI suggests a message, the nurse sends. This is the third layer of the anti-hallucination architecture.

What healthcare apps have you actually shipped?

A HIPAA-aware telemedicine platform with video consults, e-prescriptions, Stripe, S3, audit logging — 8 weeks. A UK digital pharmacy with regulatory compliance, prescription verification flow, Stripe — 10 weeks. Both anonymized under NDA; references on a call.

How do you handle the model API itself under HIPAA?

Two paths: Anthropic Claude via AWS Bedrock under AWS BAA, or OpenAI via Azure OpenAI under Azure BAA. Direct API calls to Anthropic or OpenAI without enterprise BAA are not used for PHI workloads. Application-layer PHI redaction is the default belt-and-suspenders on every request, regardless of provider.
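The redaction pass can be sketched with naive patterns. A production build pairs this with a vetted PHI-detection layer; these regexes are illustrative, not exhaustive:

```typescript
// Pattern-based scrub run before any text leaves for a model API.
// These patterns catch obvious identifiers only; they are a
// belt-and-suspenders layer, not a complete PHI detector.
const PATTERNS: [RegExp, string][] = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],
  [/\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g, "[PHONE]"],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],
];

function redactPhi(text: string): string {
  // Apply each pattern in order, replacing matches with a label.
  return PATTERNS.reduce((t, [re, label]) => t.replace(re, label), text);
}
```

Redaction runs on every request regardless of provider, so a misrouted call still never carries raw identifiers.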

Building HIPAA-aware AI?

Free architecture audit in 24 hours. We map your PHI flows and tell you what ships in 8 weeks.

Top Rated Plus Upwork · 100% JSS · 42 projects · $200K+ earned · 100% satisfaction guarantee