AI design principles
This page outlines our design principles for creating AI-powered experiences. We apply these principles to build what we call "smart systems": systems that combine multiple types of data, including historical, descriptive, predictive, and generative, with artificial intelligence to create smart moments across a user's experience.
Our AI principles are customer-driven, actionable guidelines for the entire product lifecycle, from ideation and design to development and prioritization. In the evolving landscape of generative AI, these principles guide product and experience decisions, defining what must be true for smart systems to be responsible, intuitive, and effective.
Grounded in research and tailored to the realities of insurance, these principles ensure consistency, reduce risk, and help us build solutions our customers can trust.
Our smart system principles
1. Responsible use and user protection
Design for the user, not capabilities.

AI systems are powerful, but power without guardrails is a risk. Users want assurance that AI systems are fair, unbiased, and secure. They expect us to monitor for harm and bias, protect their data, and encourage thoughtful interactions.
In practice:
- Add friction where needed. For example, require the user to press a button to move forward rather than letting the system complete the task automatically.
- Use guardrails to ensure generative tools don't overwrite human expertise or propagate bias or faulty information.
- Prioritize purpose and safety over adding AI everywhere.
2. Transformation, not translation
Reimagine the problem, not just the solution.

Before applying AI to an existing task, ask: What could this look like if we weren't limited by today's workflows or assumptions? Smart systems aren't just upgrades. They're chances to fundamentally reshape how we solve problems.
In practice:
- Use AI not to automate a form, but to eliminate it.
- Don't just summarize; ask what the user really needs to know.
- Consider where AI can uniquely bring value: real-time decision support, proactive insight delivery, or impactful assistance in the moment.
3. Transparency
Let me see the magic and know it's magic.

AI moments shouldn't be hidden. When something smart happens, users need to know about it and understand it. A well-placed indicator can create trust. It also invites curiosity.
In practice:
- Use icons (like a magic wand) to signal AI-generated suggestions.
- Clearly highlight when a prediction or summary comes from a model.
- If something looks human-made but isn't, call it out.
- Make it clear what the source of the generated information or task is.
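One lightweight way to make the source of generated content explicit is to attach provenance metadata to every piece of AI output, so the UI can render an indicator (such as a wand icon) and a plain-language source label from the same data. The shapes below are a hypothetical sketch, not a prescribed schema.

```typescript
// Attach provenance to AI output so the UI can always disclose its origin.
// Field and function names are illustrative.

interface AiProvenance {
  generatedByModel: boolean; // drives the AI indicator in the UI
  source: string;            // plain-language source, e.g. "summary model"
  generatedAt: string;       // ISO timestamp
}

interface LabeledContent {
  text: string;
  provenance: AiProvenance;
}

function labelAiContent(text: string, source: string): LabeledContent {
  return {
    text,
    provenance: {
      generatedByModel: true,
      source,
      generatedAt: new Date().toISOString(),
    },
  };
}

// The UI derives its disclosure string from the metadata, never ad hoc.
function disclosureLabel(content: LabeledContent): string {
  return content.provenance.generatedByModel
    ? `AI-generated (source: ${content.provenance.source})`
    : "Human-authored";
}
```

Because the indicator is derived from metadata rather than hand-placed, generated content cannot silently "look human-made": anything without the flag is simply not rendered as human-authored.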
4. Explainability
Help me understand what the system can (and can't) do, and how.

If users can't make sense of an AI system, they won't use it, or worse, they'll be inadvertently misled. Clear explanations about what the system does, how confident it is, and when not to trust it are essential.
In practice:
- Offer previews of what AI features can do.
- Include context-specific tooltips or blurbs.
- Make limitations visible and use plain language, not model jargon.
- Explain how a particular result was generated.
5. Harmonization
Don't disrupt my flow. Enhance it. Make smart systems feel like part of my team.

AI systems must feel like natural extensions of a user's workflow, not interruptions. When AI complements rather than competes with existing processes, adoption grows.
Protect users from informational noise and visual fatigue by enabling them to query data conversationally.
In practice:
- Augment tasks rather than replace them.
- Embed smart features within the natural flow, like generating a summary on demand or customizing suggestions to fit existing work styles.
- Keep AI visual indications and disclaimers concise and purposeful.
- Enable "assist me" moments where users can query the system and get focused, actionable guidance.
6. Human ownership
Keep the human in the loop.

Users need to feel in control. That means retaining the final say, easily editing AI-generated outputs, and understanding that accountability doesn’t disappear just because AI is involved.
In practice:
- Let users edit before proceeding.
- Design interfaces that encourage review and refinement.
- Provide guidance on prompting, and invite users to co-create, not just consume.
- Don't complete a task without user oversight and transparency.
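The edit-before-proceeding workflow above can be sketched as a draft object that keeps the AI original for accountability while the user revises and explicitly approves the final text. This is a minimal sketch under assumed names (`Draft`, `edit`, `approve`), not a reference implementation.

```typescript
// Human-ownership workflow: AI drafts, the user reviews and edits,
// and only the user-approved version is accepted. Names are illustrative.

interface Draft {
  aiText: string;        // original AI-generated draft, kept for audit
  userText: string;      // the text the user sees and can edit
  editedByUser: boolean;
  approved: boolean;
}

function createDraft(aiText: string): Draft {
  return { aiText, userText: aiText, editedByUser: false, approved: false };
}

function edit(draft: Draft, newText: string): Draft {
  // Editing never mutates the AI original, preserving accountability.
  return { ...draft, userText: newText, editedByUser: true };
}

function approve(draft: Draft): Draft {
  // Approval is an explicit user action; the task is never completed silently.
  return { ...draft, approved: true };
}
```

Keeping both `aiText` and `userText` also makes the transparency principle cheap to honor: the interface can always show what came from the model versus what the human changed.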
7. Feedback and iteration
Design systems that learn and evolve.

AI systems must never be static. Feedback loops, both implicit and explicit, help improve both the system and the user experience over time.
In practice:
- Use reactions (like thumbs-up) to gather input on AI-generated results.
- Let users refine preferences.
- Always design with iteration in mind.
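The reaction-based feedback loop described above can be made concrete by tying each reaction to the specific AI result it rates, so the signal can feed evaluation and iteration. A hypothetical sketch, with assumed names (`recordFeedback`, `approvalRate`):

```typescript
// Explicit feedback capture: reactions are tied to the exact AI result
// they rate, so they can inform evaluation and iteration. Illustrative only.

type Reaction = "thumbs_up" | "thumbs_down";

interface FeedbackRecord {
  resultId: string;
  reaction: Reaction;
  comment?: string;
  recordedAt: string;
}

const feedbackLog: FeedbackRecord[] = [];

function recordFeedback(
  resultId: string,
  reaction: Reaction,
  comment?: string
): FeedbackRecord {
  const record: FeedbackRecord = {
    resultId,
    reaction,
    comment,
    recordedAt: new Date().toISOString(),
  };
  feedbackLog.push(record);
  return record;
}

// A simple aggregate the team can review when prioritizing iteration.
function approvalRate(log: FeedbackRecord[]): number {
  if (log.length === 0) return 0;
  const ups = log.filter((r) => r.reaction === "thumbs_up").length;
  return ups / log.length;
}
```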
How to use these principles
We use these principles to plan, build, and prioritize our AI applications. We recommend teams:
- Review the 7 principles.
- Identify which principles apply to your work at different stages of design and development.
- Consider these principles when making product trade-offs, always prioritizing based on the risk and importance to your users.
How we identified these principles
We took a human-centered approach to developing these principles. This effort is customer-driven and informed by industry best practices as well as research into human-computer interaction.
- User-centered: The principles are grounded in our customers' needs, driven by insights from research, observations, and interviews with user groups across insurers.
- Overarching: They are observed consistently across several insurers and lines of business.
- Evidence-based: They are informed by industry best practices.
Our core AI commitments
Generative AI is rapidly evolving from a novelty into a core differentiator in enterprise software. Leading platforms are setting new UX standards with co-pilots, embedded agents, and adaptive dashboards that respond in real time to user context and intent. Interfaces are becoming conversational, configurable, and self-adjusting, built around AI that assists, adapts, and learns. Customers now expect AI-driven workflows that feel effortless, trustworthy, and personalized.
This presents a transformative opportunity: to lead with intelligent, responsive user experiences grounded in trust, performance, and industry depth.
This leads to our core AI commitments:
- Adopt leading enterprise UX patterns to design seamless, embedded AI experiences.
- Prioritize configurability of AI agents, empowering business users to tailor assistants to their needs.
- Design dynamic UIs that adjust based on user goals and context, shifting from linear flows to adaptive experiences.
- Use familiar visual cues to clearly signal AI actions and progress.
- Build trust into the interface with explainability, human fallback, and the ability to edit or "undo" AI responses.
- Embed ethical guardrails into the design system to address hallucination risks, bias, and regulatory constraints.
- Empower human correction and control by letting users adjust prompts, regenerate answers, or provide feedback effortlessly.
- Systematize a GenAI UX framework for consistency across applications, including design patterns, behaviors, accessibility, and trust.
- Prototype micro-interactions that make AI "feel" responsive, such as transitions, loading states, and intelligent nudges.
- Map end-to-end journeys where GenAI adds real value, from intake to decision-making.