American Family Insurance
SingleSource
Project Summary
Designed a data entry method for insurance agents that made entering large volumes of property and risk data into a proprietary policy system faster and easier, reducing job stress and allowing agents to complete and submit more quotes to underwriters per day.
The Goal
Drive growth and profitability by making the policy and quoting process faster and easier for agents.
Timeline
9 months
Collaboration
CFR (Commercial, Farm, Ranch) division Vice Presidents (3)
CFR Business Stakeholders (5)
Product Manager
Product Owner
Developers (2)
Risk Data Analyst
Underwriters (2)
Agents (2)
Tools
Figma
Axure
Miro
Zoom
UserTesting.com
Jira
Confluence
My Role
UX Lead
Research Lead
Improving Agent Efficiency
The final design created a more pleasant and satisfying policy writing and quoting experience that:
- Made task completion faster and made it feel easier.
- Empowered agents by giving them more choices.
- Allowed agents to complete 1.2 more quotes per day, increasing growth and profitability.

Business Hypothesis
After several meetings about how to achieve their goal, vice presidents, stakeholders, and business leads believed that using highly accurate API-sourced data to automatically populate fields in the legacy system would make the policy writing and quoting process faster and easier.
They assumed agents would accept the data as provided, which would reduce the time spent filling out forms.
Getting To Know Our Agents
After spending time virtually shadowing support staff agents as they filled property data into forms, and reviewing previous user research from another project team, I learned many things about our agents, and what I learned indicated that the conclusion the business had reached wouldn't work well for them.

Key Persona: The Support Staff Agent
Startling Discoveries
I created personas for each of our agent types after shadowing and interviewing them. We focused on the support staff agent, the agent who does the majority of the data entry for quoting.
The most staggering finding was that of the 40-60 pieces of data agents were responsible for locating and entering, each data point was double-checked, and often triple-checked, for accuracy. A variety of external, third-party sources were used to verify the data.
Agents made sure the data about the property was correct and up to date, both to avoid rework later in the process and to avoid confrontations with sales agents, customers, and underwriters.
Comments from agents when asked if they would trust an API source in Policy Center form fields.

Agent Challenges and Design Focus

Disempowered
Challenge: Agents don't like being told they must use data they don't trust and that doesn't always reflect real-world conditions.
Design focus: Enable agents to review, modify, and validate system-provided data.

Undervalued
Challenge: Agents who entered policy data felt disrespected because their work was treated as less important than the work of sales agents, customers, and underwriters.
Design focus: Position automation as an aid to expert decision-making, not a substitute for agent expertise and authority.

Strained
Challenge: Data inaccuracies in the workflow frequently triggered escalations and negative feedback directed at agents from sales agents, customers, and underwriters.
Design focus: Make it easy for agents to validate, correct, and trust the data they submit.

Time-limited
Challenge: Agents are limited in the number of quotes they can process each day by the time spent repeatedly validating and sourcing required data.
Design focus: Reduce overall time on task without compromising agents' need to thoroughly validate data.
Beyond The Original Hypothesis
Given what we now knew about our users, focusing solely on automatically populating form fields with API data to accelerate data entry would not meet most of our users' unique needs, mindsets, and goals.
- Agents said the API data they had been shown in the past did not have a high degree of accuracy.
- Even if it did, agents insisted they would verify all prefilled data with a third-party source. They would not blindly accept what was prefilled.


Much More Than Just Accelerating Data Entry
The original business hypothesis was flawed, but the goal of reducing time on task still needed to be met. I wanted to achieve it without sacrificing our agents' needs or their preferred ways of completing tasks.
Initial drafts began with these challenges in mind.
Business Proposal - Automatic Prefill
The VPs and stakeholders mocked up how they thought the fields on the page should look, with the data prefilled into the form fields. We noted the following points.
- If the prefilled data was incorrect, agents would have to delete it and type the correct data into the field, which took time.
- The source of the data was a mystery.
- The asterisk next to every form field was redundant, adding visual clutter to the form.
Original mockup proposed by business stakeholders.
Data sources and their descriptions, triggered by an information icon, were added.
Proposed Concept - Suggested Prefill
My design addressed the issues in the mockup I had been given by business stakeholders.
- Users can speed up the entry process with a new option to fill in all data fields at once. They can also remove the sources (ATTOM MLS, Zillow, etc.) if they do not find them useful.
- A single message about required fields, instead of an extraneous asterisk before each form field, made for a cleaner interface.
- More space between entry fields made the form easier to read.
- An icon to the right of each field was added so agents can simply click to apply the API data. The suggestion text also triggers this action. This means agents do not have to take time deleting and re-entering incorrect data.
- The data source is shown as clickable text rather than a small icon, so the target is larger and easier to click.
The suggested prefill design.
Apply Every Suggestion Solution
This checkbox was added to the suggested prefill design to give agents the choice of an automatic prefill experience: it lets agents populate all of the API data into the fields in one click.
This met the intent of the business stakeholders' design while giving agents one more option for controlling how they entered data.
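To make the interaction concrete, here is a minimal React/TypeScript sketch of the suggested prefill pattern described above. Everything in it is illustrative: the field names, data sources, and component structure are my own assumptions, not the actual Policy Center implementation.

```tsx
import { useState } from "react";

// Hypothetical shape of one form field plus its API-sourced suggestion.
// Field names, sources, and values are illustrative, not the real
// Policy Center schema.
interface SuggestedField {
  label: string;
  value: string;      // what the agent has typed or accepted
  suggestion: string; // value returned by the API source
  source: string;     // e.g. "ATTOM MLS", "Zillow"
}

const initialFields: SuggestedField[] = [
  { label: "Year built", value: "", suggestion: "1987", source: "ATTOM MLS" },
  { label: "Square footage", value: "", suggestion: "2140", source: "Zillow" },
];

export function SuggestedPrefillForm() {
  const [fields, setFields] = useState(initialFields);

  // Accept a single suggestion: triggered by the icon or the clickable
  // suggestion text to the right of the field.
  const applyOne = (index: number) =>
    setFields((fs) =>
      fs.map((f, i) => (i === index ? { ...f, value: f.suggestion } : f))
    );

  // "Apply every suggestion": one click recreates the automatic prefill
  // experience for agents who want it.
  const applyAll = () =>
    setFields((fs) => fs.map((f) => ({ ...f, value: f.suggestion })));

  return (
    <form>
      <p>All fields on this page are required.</p>
      <label>
        <input
          type="checkbox"
          onChange={(e) => e.target.checked && applyAll()}
        />
        Apply every suggestion
      </label>
      {fields.map((f, i) => (
        <div key={f.label}>
          <label>
            {f.label}
            <input
              value={f.value}
              onChange={(e) =>
                setFields((fs) =>
                  fs.map((x, j) =>
                    j === i ? { ...x, value: e.target.value } : x
                  )
                )
              }
            />
          </label>
          {/* Suggestion rendered as clickable text: a larger, easier target
              than an icon alone, and the source stays visible. */}
          <button type="button" onClick={() => applyOne(i)}>
            Use {f.suggestion} ({f.source})
          </button>
        </div>
      ))}
    </form>
  );
}
```

The key design choice is that the suggestion is a large clickable target beside the field: accepting a value is one click, and correcting one never requires deleting prefilled text first.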

The reasoning behind designing suggested prefill
The Power Of Choice
In the real world, if we see someone who may need help, the polite thing to do is offer assistance and then listen to their answer before jumping in; the person may not always want or need the help we wish to give.
A suggested prefill asks users for permission to help them. Automatic prefill assumes control before asking users how they want to work.
The suggested prefill approach increased agents' sense of autonomy and worth. They reported feeling the company valued their expertise and judgment, treating them as active, knowledgeable contributors.
Agents reported that automatic prefill made them feel as if anyone, even a machine, could do their job, which left them feeling undervalued.
- A real agent quote from a user test.
Agent Frustration
Agents reported that automatic prefill left them feeling they had to live with incorrect data and accept the consequences. Suggested prefill empowered them to accept or fix API data errors.
Repeatedly fixing a mistake you didn't make takes a psychological toll. It’s less frustrating to correct your own errors than someone else’s, and these types of small annoyances compound over time.
This buildup leads to corner-cutting, which compromises long-term data quality rather than saving time.
Improved Data Quality
Participants did not inherently trust prefilled data—even when labeled with known sources—and reported they would continue verifying data with third-party tools regardless of the system’s claims.
Suggested prefill helps agents catch errors more effectively. While reviewing data, agents may occasionally overlook fields. Automatic prefill makes it harder to tell which data has been corrected, skipped, or left unchanged, often resulting in unnoticed errors because skipped fields fail to trigger error messages. Suggested prefill uses visual cues to prompt agents to engage with each data point, making mistakes easier to spot and correct.
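One way to picture why skipped fields become detectable: if each field carries an explicit state, anything the agent never acted on stands out before submission. The short TypeScript sketch below illustrates the idea; the state names and field names are assumptions for illustration, not the production code.

```ts
// Illustrative field states for the suggested prefill pattern. The state
// names and fields are assumptions for this sketch, not production code.
type FieldState = "untouched" | "accepted" | "edited";

interface TrackedField {
  name: string;
  state: FieldState;
}

// With automatic prefill every field starts out filled, so a skipped field
// looks identical to a reviewed one and triggers no error. With suggested
// prefill, a field the agent never acted on remains "untouched" and can be
// flagged before the quote is submitted.
function findSkippedFields(fields: TrackedField[]): string[] {
  return fields.filter((f) => f.state === "untouched").map((f) => f.name);
}

const example: TrackedField[] = [
  { name: "yearBuilt", state: "accepted" },
  { name: "roofType", state: "untouched" }, // overlooked field gets flagged
  { name: "squareFootage", state: "edited" },
];

console.log(findSkippedFields(example)); // ["roofType"]
```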
Begin User Testing
Participants
- 6 agents from across the US, spanning 3 time zones, in one-on-one Zoom sessions with an interactive prototype
- We asked for the agent in each office who spends the majority of their time entering data into our quoting system
Research topics
- 22 questions total (determined by Kyle and teammates)
- Risk Score Model Graphic (SingleSource)
- Generate Replacement Cost Graphic (SingleSource)
- Compare suggested prefill and automatic prefill patterns (Interactive Prototype)
Test Method
- Qualitative (observation & interview)
- Quick way to learn users’ perceptions, thoughts, and feelings
- Results are qualitative, not quantitative
Methods And Participants
The study was qualitative, relying on observational and interview data. Agents interacted with a working Axure prototype for both automatic and suggested prefill interfaces using realistic scenarios and were asked to provide feedback on concepts like usability, data trust, and preference.
Data sources and their descriptions appear next to automatically prefilled form fields, intended to help agents accept the data without verifying it elsewhere.
- Participants were asked to imagine that a new section had been added to Policy Center. The automatic prefill screen was shown first.
- They were given data, told to imagine they had already verified it, and asked to fill out the page using it.
- The same scenario was repeated for the suggested prefill prototype screen, with different data.
- Each data correction list had its own unique data to prevent memorization of answers from page to page.
As shown below, data and fields in red denote changes that had to be made in the corresponding fields. The agents were not given these color-coded views because they did not reflect the challenges agents face day to day.
Automatic Prefill Required 7 Corrections
Suggested Prefill Required 8 Corrections
Testing Results
As suspected, automatic prefill was not a success. But...
Suggested prefill was a success!
Reasons for this success include the following.
Eighty percent of participants preferred the suggested prefill model for entering data into Policy Center.*
* The remaining participant said they liked both designs equally, not that they preferred the automatic prefill approach.
- Participants would like an "apply all suggestions" feature, but most would prefer not to see data sources. Data mistrust influenced everything we tested in this session, including the risk score and the replacement cost generator.
- 100% of participants said they would not trust data provided in Policy Center form fields.
- 100% of participants said they would verify any data given to them with a third-party source, sometimes two or three, because information might not be correct or up to date. Providing the source of the information did not make users more confident that the data was correct.
- Agents have been trained to verify data with third-party sources. You would have to specifically retrain agents not to verify if you wanted them to accept automatically prefilled data without checking it first.
Key Findings
Participants reported that suggested prefill reduces the unnecessary rework of erasing and retyping incorrect data, offers flexibility when applying data, supports their workflows by promoting verification rather than blind trust, and improves information retention and accuracy.
While some appreciated the implied efficiency of automatic prefill, they ultimately prioritized accuracy and control.
Participants stated that they did not trust the accuracy or currency of the prefilled data, even from reliable, known sources, and would continue verifying data with third-party tools, disproving the original hypothesis that providing data sources would increase trust.

Agent challenges and their outcomes based on the new design
Disempowered - Solved
Suggested prefill empowers agents to decide when to accept API data and when to rely on their own trusted data sources.
Undervalued - Solved
Getting to choose which data to accept or reject gave agents a feeling of autonomy, and they began to feel their input and work gained importance in the eyes of sales agents, customers, and underwriters.
Strained - Solved
Sales agents, underwriters, and business stakeholders were consulted many times in group discussions and meetings with data entry agents about this redesign. The people in these roles learned how knowledgeable the data entry agents were and what their workload was like. With that knowledge came growing empathy for the data entry agents, who reported fewer escalations and less negative feedback directed at them from sales agents and underwriters.
Time-limited - Solved
Time on task decreased because agents could click to apply data rather than typing every data point, and the number of quotes they processed each day increased.
Final Outcomes and Impacts
Based on user feedback, the automatic prefill method failed to align with agents' real-world responsibilities and risk aversion; efficiency alone won't drive adoption or trust. The design did not acknowledge agents' risk sensitivity, need for accuracy, or desire for autonomy.
The suggested prefill model respects these needs and offers a scalable, user-centered approach that enhances satisfaction and business outcomes while reducing cognitive friction, providing better error detection, and aligning more closely with agents' established workflows and psychological needs. It treats the agent as a data expert rather than a passive error-corrector.
While automatic prefill created the illusion of quicker task completion, the suggested prefill model did not increase time on task, delivered a more satisfying experience, and encouraged greater willingness to write more policies, supporting the original profitability goal through user satisfaction rather than speed alone.