Horizons Architecture Systems

AI Transparency

Last updated: 2026-03-09

Our Commitment to Transparency

Horizons Architecture Systems is committed to the transparent and responsible use of artificial intelligence. This page describes the AI systems integrated into the HA Learning Platform, their purposes, their limitations, and the safeguards we maintain. This disclosure is provided in accordance with the transparency requirements of Article 50 of the EU AI Act.

AI Systems in Use

The platform integrates several AI-powered features to support educational development:

Horizons Architecture Agent: An adaptive AI agent that analyzes your work across six dimensions, surfaces connections and patterns, and provides structured feedback to support your development. It operates within defined guardrails and does not make autonomous decisions.

Analytical Engine: Examines your entries and reflections to identify tensions, inconsistencies, or patterns in your reasoning. These are surfaced as development opportunities, not errors.

Engagement System: Generates timely prompts and suggestions based on your activity patterns, encouraging consistent engagement and reflective practice.

Progress Aggregator: Compiles periodic summaries of your development trajectory across all dimensions, creating a longitudinal record of your growth.

AI Provider

AI capabilities are provided through third-party large language model APIs under data processing agreements that prohibit the use of your data for model training. We evaluate providers based on capability, safety practices, and data protection standards. The specific provider may change over time; this page will be updated accordingly.

Human-First Design

AI features are designed to augment human judgment, not replace it. All AI-generated analysis is presented as suggestions for your consideration. You maintain full control over your learning trajectory and are never required to accept AI recommendations. Instructors retain oversight of educational assessment.

Guardrails and Safety

AI systems operate within defined boundaries: they cannot access data outside your authorized scope, are instructed to acknowledge uncertainty, do not provide professional advice (legal, medical, or financial), and are designed to flag potentially harmful content for human review.

Training Data

Your personal data, journal entries, reflections, and platform interactions are never used to train or fine-tune AI models. AI providers are contractually prohibited from using platform data for model improvement. The educational frameworks and dimensional analysis structures are proprietary to Horizons Architecture Systems.

Human Oversight

Instructors and platform administrators have oversight capabilities to review AI interactions, adjust AI behavior parameters, and intervene when necessary. Users can report problematic AI outputs at any time. We conduct regular audits of AI system performance and safety.

Bias and Fairness

We acknowledge that AI systems may reflect biases present in their training data. We actively monitor AI outputs for bias, particularly regarding cultural, linguistic, and demographic factors. The platform is designed to serve diverse educational contexts, and we continuously work to improve fairness and inclusivity.

Reporting Concerns

If you encounter AI behavior that seems inappropriate, biased, or concerning, please report it to info@horizonsarchitecture.ai. We investigate all reports and use them to improve our systems. You will receive a response within 10 business days.

Risk Classification

Under the EU AI Act risk framework, the HA Learning Platform operates as a limited-risk AI system. It is designed for educational support and does not make autonomous decisions with legal or similarly significant effects on users. We maintain documentation of our risk assessment and update it as the platform evolves.
