⚗️ 32dots Learn is an experimental prototype; content and features may change at short notice.
Card 22 · Chapter safety

Privacy and sensitive data

n8n hard 55 min

Content

Privacy-aware design means limiting what data enters the system, what is stored, who can see outputs, and when AI should not be used at all. This is a system-design issue, not just a legal note. The best privacy control is a node that runs *before* the LLM call. Four layers:

  1. Don't collect what you don't need.
  2. Redact what you do collect.
  3. Log with hashes, not raw strings.
  4. Mark untouchable data and refuse to ingest it.

Example: A student assistant may process scheduling data safely, but it must never ingest identifiable health data (name + DOB + diagnosis) without explicit consent, redaction, and justification.
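Layer 4 (refuse, don't redact) can be a simple gate in front of the workflow. This is a hypothetical sketch: the regexes are assumptions standing in for a real classifier, and the co-occurrence rule (date of birth + clinical vocabulary) is one illustrative policy, not a complete one:

```javascript
// Refusal gate: reject inputs that look like identifiable health data
// instead of trying to redact them. Patterns are illustrative only.
const DIAGNOSIS_TERMS = /\b(diagnosis|diagnosed|ICD-10|therapy)\b/i;
const DOB = /\b\d{4}-\d{2}-\d{2}\b/;

function gate(input) {
  // Refuse outright when a birth date co-occurs with clinical vocabulary:
  // a redactor could miss a spelling variant, a refusal cannot.
  if (DOB.test(input) && DIAGNOSIS_TERMS.test(input)) {
    return { allowed: false, reason: "identifiable health data" };
  }
  return { allowed: true };
}

console.log(gate("Meeting room B, 2024-05-01").allowed); // → true
console.log(gate("Pat. K. Meier, 1998-04-02, diagnosis F32.1").allowed); // → false
```

The design point: the gate errs toward refusal, so a false positive costs one rejected request, while a false negative would leak health data downstream.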

✓ SELF-CHECK

Did you understand this?

  • [ ] 5 planted-PII test inputs are 100% redacted
  • [ ] 5 clean test inputs pass through unchanged
  • [ ] Audit log stores hashes or redacted forms, never raw PII
  • [ ] You can name one input category your system refuses entirely
  • [ ] You can explain in one sentence what you learned that you would tell a labmate tomorrow
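The first two checklist items can be run as an automated test. A minimal sketch: `redactEmail` is a hypothetical stand-in for your real redaction step, and the planted/clean inputs are made-up fixtures you would replace with your own:

```javascript
// Self-check as code: planted PII must be fully redacted,
// clean inputs must pass through byte-for-byte unchanged.
function redactEmail(text) {
  return text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED:EMAIL]");
}

const planted = ["write to a@b.de", "cc: x.y@uni-example.org please"];
const clean = ["schedule the lab meeting", "book room 12 for Tuesday"];

// 100% of planted inputs lose their PII marker...
const plantedOk = planted.every((t) => !/@/.test(redactEmail(t)));
// ...and 100% of clean inputs are untouched.
const cleanOk = clean.every((t) => redactEmail(t) === t);

console.log(plantedOk, cleanOk); // → true true
```

Keeping both halves is the point: a redactor that rewrites clean text "passes" the PII test while silently corrupting normal data.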
🔗 LIVE DEMO

Try it out directly

For data in your own research, which category would you rather refuse to ingest than try to redact — and why is refusing the safer design?
💬 AI TUTOR

Ask the tutor about this card

Socratic: the tutor responds with guiding questions instead of ready-made answers; you work out the solution yourself.

Ask a first question about this card below.