AI Use & Safety Policy
Last updated: May 11, 2026 · Disclosure required by Google Play's Generative AI policy (support article 13985936) and Apple App Store Review Guideline 4.7.
1. What the AI Assistant does
HRFlowTech includes an in-app AI Assistant available in the web and mobile apps. It is designed to help employees and HR administrators with a narrow set of tasks:
- Answering questions about your tenant's published HR policies (leave entitlement, attendance rules, claims procedures).
- Drafting messages such as a leave-request note or a clarification email to HR — you remain the author and decide whether to send.
- Summarising payslips and flagging payroll anomalies (e.g. unexpected deduction changes) for you to verify with HR.
The Assistant is a generative AI feature. Outputs are produced by a large language model and may be incorrect, incomplete, or out of date. You must verify any AI output with HR before acting on it — do not rely on it alone for leave, attendance, or payroll decisions.
2. Model provider
The AI Assistant calls a hosted large-language-model API operated by a third-party provider. TODO: confirm the production provider and model family before publishing — current candidates are Anthropic Claude, OpenAI GPT-4 class, and Google Vertex Gemini. In each case, the provider's enterprise terms forbid using HRFlowTech traffic for model training.
Whichever provider is selected:
- Tenant data sent in prompts is processed under a zero-retention or short-retention enterprise agreement (typically 0 to 30 days).
- Tenant data is not used to train the provider's foundation models.
- The provider is bound by a data-processing agreement and the same Standard Contractual Clauses described in our Privacy Policy.
3. Safety measures
- Input filtering — prompts are scanned for prohibited content categories before they reach the model.
- Output filtering — responses are checked against the provider's safety classifiers and our own restricted-content rules; flagged responses are blocked or replaced with a refusal message.
- Prompt-injection mitigations — system instructions are isolated from tenant content; the model is given no privileged tools beyond read-only access to the requesting user's own records.
- Restricted content blocking — the Assistant is configured to refuse requests involving CSAM, sexual content, hate or harassment, self-harm, instructions for illegal activity, malware, or attempts to exfiltrate the personal data of other staff.
- Scope enforcement — the Assistant cannot read another employee's records, modify HR data, approve leave, or trigger payments. It is read-only against your own data.
- Logging and review — conversations are retained for 90 days for safety review and abuse investigation, then deleted.
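The input- and output-filtering stages above can be sketched as a simple two-pass check. This is an illustrative sketch only, not HRFlowTech's actual code: the category names, the `classify` helper, and the refusal text are hypothetical stand-ins for the provider's safety classifiers and our restricted-content rules.

```python
# Hypothetical sketch of the two-stage filtering described above.
# Category names, classify(), and the refusal text are illustrative.

RESTRICTED_CATEGORIES = {
    "csam", "sexual", "hate_harassment", "self_harm",
    "illegal_activity", "malware", "data_exfiltration",
}

REFUSAL_MESSAGE = "Sorry, I can't help with that request."


def classify(text: str) -> set[str]:
    """Stand-in for a safety classifier: returns the restricted
    categories detected in the text (empty set if clean)."""
    # A real deployment would call the provider's moderation API here.
    flagged = set()
    if "payroll export of all staff" in text.lower():
        flagged.add("data_exfiltration")
    return flagged


def answer(prompt: str, model_call) -> str:
    # Input filtering: block prohibited prompts before they reach the model.
    if classify(prompt) & RESTRICTED_CATEGORIES:
        return REFUSAL_MESSAGE
    response = model_call(prompt)
    # Output filtering: flagged responses are blocked or replaced
    # with a refusal message, as described in section 3.
    if classify(response) & RESTRICTED_CATEGORIES:
        return REFUSAL_MESSAGE
    return response
```

The point of the two passes is that a response can be blocked even when the prompt itself was benign, which is why flagged outputs are "blocked or replaced" rather than merely logged.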
4. In-app reporting flow
Every AI message in the app shows a small flag icon. To report a problem with an AI response:
- Tap the flag icon on the offending message.
- Select a reason: Harmful content, Privacy / contains another person's data, Factually incorrect, Off-topic for HR, or Other.
- (Optional) Add a short comment.
- Tap Submit. You will see an acknowledgement immediately.
Reports go to [email protected]. Our moderation team reviews each report within 1 business day.
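For developers integrating with the reporting flow, the steps above roughly correspond to assembling a small report payload. This is a hypothetical sketch: the field names and the `build_report` helper are assumptions, not a documented HRFlowTech API; only the reason strings mirror the options listed above.

```python
# Hypothetical sketch of the report body a client might submit when a
# user flags an AI message. Field names are illustrative assumptions.
import json
from datetime import datetime, timezone

# Reason strings mirror the in-app options listed in section 4.
REPORT_REASONS = [
    "Harmful content",
    "Privacy / contains another person's data",
    "Factually incorrect",
    "Off-topic for HR",
    "Other",
]


def build_report(message_id: str, reason: str, comment: str = "") -> str:
    """Assemble a JSON report body for a flagged AI message."""
    if reason not in REPORT_REASONS:
        raise ValueError(f"unknown reason: {reason!r}")
    payload = {
        "message_id": message_id,
        "reason": reason,
        "comment": comment,  # optional free-text comment
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)
```

Keeping the reason field to a fixed list (rather than free text alone) is what lets the moderation team triage reports within the 1-business-day window described above.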
5. What happens after you report
- You receive an acknowledgement in-app and by email (if you have a tenant email on file).
- The flagged response is removed from your visible chat history.
- The conversation transcript is reviewed by our safety team; if it indicates a systemic issue, we adjust system prompts or filter rules.
- If the report relates to harassment or a privacy breach involving another employee, we notify the tenant's HR administrator and may file an incident report.
- We aim to close every report within 7 days; serious incidents are escalated within 24 hours.
6. Limitations & user responsibility
- The Assistant can be wrong. Its answers are not legal, tax, or HR advice.
- Do not act on AI-generated leave approvals, payroll calculations, or policy interpretations without confirming with your HR administrator.
- Do not enter the personal data of other people into the Assistant. Doing so violates these Terms.
- Do not attempt to bypass safety filters, prompt-inject, or coerce the Assistant into producing restricted content.
7. Contact for AI complaints
Email: [email protected]
For privacy-related concerns: [email protected]
See also: Privacy Policy · Terms of Service · Data Safety summary