SingleSource
Overview
Agents research and enter large volumes of property and risk data into a proprietary policy system—a time-consuming process. The business hypothesized that prefilled data from trusted sources would speed this process by reducing the need for verification.
But after shadowing and speaking with agents, I found that this solution didn’t reflect how they actually worked or address their unique needs, mindsets, and goals.
I designed an alternative approach that better aligned with their workflows, and we tested both methods to see which better supported user needs and business goals.
* Some images are intentionally blurred to comply with legal and confidentiality requirements related to my work with an insurance company.
Prefill Concept Testing
The test would inform how API-sourced data should be introduced into the policy system interface.
The initial belief was that showing trusted sources and descriptions for data automatically prefilled into form fields would lead agents to accept the data without further verification.
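As a rough illustration of the concept under test, each prefilled field would carry its value plus the source metadata the interface could surface. The names below are hypothetical; the actual system and API are proprietary.

```typescript
// Hypothetical sketch of the prefill payload the concept assumed.
// Field and type names are illustrative, not the production API.
interface PrefillSuggestion {
  fieldId: string;      // which policy form field this targets
  value: string;        // the API-sourced value to prefill
  source: string;       // e.g. "County Assessor Records"
  description?: string; // short note explaining the data's origin
  retrievedAt: Date;    // when the source was queried
}

// Automatic prefill: values land in the form immediately, with
// source labels shown to (in theory) preempt manual verification.
function applyAutomatically(
  form: Map<string, string>,
  suggestions: PrefillSuggestion[],
): void {
  for (const s of suggestions) {
    form.set(s.fieldId, s.value);
  }
}
```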
Agents emphasized that avoiding incorrect or out-of-date information in the data fields was their top concern, since errors invite issues and criticism from both underwriters and customers. Agents verify each piece of data two to three times before entering it into a form field!

Methods and Participants
I conducted one-on-one sessions with insurance agents from different regions. Participants were selected based on their primary responsibility for entering data into the policy system.
The study was qualitative, relying on observational and interview data. Agents interacted with both automatic and suggested prefill interfaces using realistic scenarios and were asked to provide feedback on usability, data trust, and preference.
Apply All Suggestions and Show Sources
An Apply all suggestions checkbox was added to the suggested prefill screens for agents who might want to quickly and easily apply all the data to the fields, mirroring the automatic prefill concept from the original hypothesis. It gave agents one more option for controlling how they entered data.
A Show sources checkbox was added to reduce screen clutter for agents who found the source labels unhelpful.
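A minimal sketch of how these two controls might gate behavior follows. All names are hypothetical, since the real screens are confidential; the point is that both toggles leave the final decision with the agent.

```typescript
// Hypothetical UI state for the suggested prefill screen.
interface PrefillScreenState {
  applyAllSuggestions: boolean; // one click applies every suggestion,
                                // approximating automatic prefill
  showSources: boolean;         // toggle source labels to cut clutter
}

// Per-field decision: a suggestion is applied only if the agent
// accepted it individually or opted into "Apply all suggestions".
function shouldApply(
  state: PrefillScreenState,
  acceptedFieldIds: Set<string>,
  fieldId: string,
): boolean {
  return state.applyAllSuggestions || acceptedFieldIds.has(fieldId);
}
```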
Key Findings
80% of the participants preferred the suggested prefill approach.
The remaining participants said they liked both approaches.
Participants reported that suggested prefill reduces the unnecessary rework of erasing and retyping incorrect data, offers flexibility when applying data, supports their workflows by promoting verification rather than blind trust, and improves information retention and accuracy.
While some appreciated the implied efficiency of automatic prefill, they ultimately prioritized accuracy and control.
Participants stated that they did not trust the accuracy or currency of prefilled data even when it came from reliable, known sources, and that they would continue verifying data with third-party tools. This disproved the original hypothesis that showing data sources would increase trust.
Why the suggested prefill method was preferred
Increased Agent Confidence
In the real world, if we see someone who may need help, the polite thing to do is offer assistance and listen to their answer before jumping in. The person may not always want or need the help we wish to give.
A suggestion asks permission (“Can we help you?”), unlike automatic prefill, which assumes control (“There. I ‘helped’ you.”). Asking permission, a UX best practice, increases agents’ sense of autonomy and worth: they feel the company values their expertise and judgment, treating them as active, knowledgeable contributors. Agents reported that automatic prefill makes them feel as if anyone, or even a machine, could do their job, which leaves them feeling undervalued.
Improved Data Quality
Participants did not inherently trust prefilled data—even when labeled with known sources—and reported they would continue verifying data with third-party tools regardless of the system’s claims.
Suggested prefill helps agents catch errors more effectively. While reviewing data, agents may occasionally overlook fields. Automatic prefill makes it harder to tell which data has been corrected, skipped, or left unchanged, often resulting in unnoticed errors because skipped fields fail to trigger error messages. Suggested prefill uses visual cues to prompt agents to engage with each data point, making mistakes easier to spot and correct.
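One way to model the visual cues described above is a simple per-field review status, so a skipped field stays visibly unresolved instead of silently passing as entered data. This is a sketch under assumed names, not the production implementation.

```typescript
// Hypothetical per-field review status driving the visual cues.
type FieldStatus =
  | 'suggested'  // suggestion shown, agent has not acted yet
  | 'accepted'   // agent applied the suggested value
  | 'edited'     // agent replaced the suggestion with their own value
  | 'dismissed'; // agent rejected the suggestion, field left empty

// Fields still awaiting a decision; these get the attention-drawing
// cue so overlooked data points are easy to spot before submission.
function unreviewedFields(
  statuses: Map<string, FieldStatus>,
): string[] {
  return [...statuses]
    .filter(([, status]) => status === 'suggested')
    .map(([fieldId]) => fieldId);
}
```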
Reduced Agent Frustration
With suggested prefill, agents feel empowered to ignore incorrect suggestions, whereas automatic prefill forces them to fix system errors by deleting the wrong value and entering the correct one.
Repeatedly fixing mistakes they didn’t cause takes a psychological toll. It’s less frustrating to correct your own errors than someone else’s, and small annoyances compound over time. This buildup often leads to corner-cutting, which compromises long-term data quality rather than saving time.
Conclusion
Based on user feedback, the automatic prefill method failed to align with agents’ real-world responsibilities and risk aversion. Efficiency alone won’t drive adoption or trust. The design did not acknowledge agents’ risk sensitivity, their need for accuracy, or their desire for autonomy.
The suggested prefill model respects these needs and offers a scalable, user-centered approach: it reduces cognitive friction, supports better error detection, and aligns more closely with agents’ established workflows and psychological needs, improving both satisfaction and business outcomes. It respects the agent’s role as a data expert rather than a passive error-corrector.
While automatic prefill may theoretically speed up task completion, the suggested prefill model delivers a more satisfying experience and may encourage greater willingness to write more policies—supporting the original profitability goal through user satisfaction rather than speed alone.