Policygenius is designed to be the easy way to compare and buy insurance; starting in life insurance, the company has expanded into new product verticals. I worked on launching the new home & auto insurance vertical, which started with a small team and grew rapidly as the product far exceeded expectations. When I came on, the product was a one-page lead-gen form, with the rest of the process handled over the phone by our single sales agent and the team's product manager. I worked closely with our agent and PM to map out what a full home & auto insurance online form flow would need; the spreadsheet pictured here is where I mapped out the fields and question sequences of our competitors. The flow iterated forward through 50+ UXPin prototypes and 200+ UserTesting.com sessions that helped guide our product direction.
As we built out the experience, I did extensive user research: dozens of phone calls with our clients, Respondent.io market research interviews, and a SurveyGizmo survey I set up at the end of the online form flow. We also learned from Mouseflow recordings and their page-by-page and field-by-field funnel tracking, as well as from user events passed through Segment into BigQuery and later Tableau. There was also a lot of competitive research and benchmarking against competitors and adjacent industries to make sure we were considering all options. As our traffic increased, we were able to ramp up A/B testing with Optimizely bucketing and our own analytics; before that, every new idea was thoroughly researched with UserTesting, user interviews, survey questions, and conversations with our sales agents.
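To illustrate the kind of page-by-page funnel analysis described above, here is a minimal sketch in pandas. The event names and numbers are hypothetical stand-ins, not the actual Segment/BigQuery schema:

```python
import pandas as pd

# Hypothetical flow events; real event names and schema would come
# from the Segment -> BigQuery pipeline.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "step":    ["landing", "form_start", "form_submit",
                "landing", "form_start",
                "landing", "form_start", "form_submit", "quote_view"],
})

funnel_order = ["landing", "form_start", "form_submit", "quote_view"]

# Count distinct users reaching each step, in funnel order.
reached = (events.groupby("step")["user_id"].nunique()
                 .reindex(funnel_order, fill_value=0))

# Step-over-step conversion rate (the first step converts at 1.0).
conversion = reached / reached.shift(1).fillna(reached.iloc[0])
print(pd.DataFrame({"users": reached, "step_conversion": conversion.round(2)}))
```

A field-by-field version of the same idea, with one event per field interaction, is how drop-off on individual form fields can be spotted.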
To get the information necessary to accurately quote clients, the proof-of-concept for home & auto insurance had agents asking for people's current insurance policy (their declarations page) over the phone. This was proposed as something to bring into the online flow, and I was concerned that few people would 1) know what a declarations page was, 2) have it accessible, or 3) be willing to share it, and that asking for it could tank conversion. I set out to assess the risk of this feature by conducting over a dozen phone calls, adding survey questions to the end of the flow, and building and testing a few prototypes with a declarations page uploader. The feedback was surprisingly positive: many people were excited by the idea, and most had their policy documents already on hand because they were referencing them as they shopped around. We launched the feature as an A/B test, still expecting conversion to take a sizable hit, but again we were surprised at how willing people were to share these documents.
Our marketing department was hard at work producing high-quality SEO content that drove more traffic and leads than our ops team could handle, so the focus shifted from optimizing the funnel to qualifying leads. Based on our end-of-flow survey, dozens of research calls and UserTesting sessions, and conversations with our sales agents, we knew there was a mismatch between what people expected when they started the flow and the reality of receiving their quotes days later. So the main way we decided to qualify leads better was to temper some unrealistic expectations while also promoting the unique value we offer. We brainstormed as a team, came up with key points to communicate, and then I explored a variety of places we could inject those points, including a detailed progress bar, a loading interstitial between pages, and extra information below the form fields.
One of the early experiments we launched for better expectation management was an introductory page. I explored a number of initial ideas with UserTesting, but we saw testers mindlessly move past the content we were trying to communicate; when asked at the end about the key expectation points, they didn't seem to have processed or retained them. So we decided to be a little heavy-handed and make a relatively slow animated page that required a click to continue. Counter-intuitively, when A/B testing this animated intro page we saw funnel conversion go up, along with a positive bump in down-funnel sales conversion.
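For context on how an A/B result like this can be read, here is a minimal two-proportion z-test on conversion counts. The numbers are made up for illustration, not our actual test data:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: control vs. the animated intro page variant.
z, p = two_proportion_z(conv_a=420, n_a=5000, conv_b=480, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice Optimizely reports significance itself; a check like this is just a sanity pass on the raw counts.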
Expectation management continued into the home & auto landing pages that much of our content marketing traffic flowed through. The thought was that, following the success of the intro animation page, we could shift expectation management further up the funnel. A few options were narrowed down through UserTesting, and then an A/B test was launched, with positive results. You can see an alternative design below with the hero photo that was in use at the time. The spreadsheet image is a summary of 20 UserTesting sessions that ended with questions concerning the main points we wanted our expectation management messaging to communicate. These tests helped us narrow down the options for what the engineering team would build out to A/B test.
Through phone interviews and UserTesting, we had seen the expectation-reality gap really hit users at the end of the online flow, on the confirmation page where they are told they will not be getting instant quotes. To reduce the disappointment for those who still hadn't fully processed the expectation management injected earlier in the flow, we worked on spelling out exactly what to expect for the rest of the process, along with some work humanizing the experience of our expert advice. A few options were run through UserTesting and the final design was A/B tested. Our expectation management survey questions were also compared between test groups to help us decide whether we had solved the problem.
As the online form flow matured, we dug in on phone interviews with users who had converted or dropped off later in the process to see where the remaining problems were. While our longer wait time for quotes was certainly an issue, and one the operations team was actively working on, some clients communicated that trust had been a factor in choosing a better-known insurance company or a local agent over us. I explored various ways of increasing trust, including using our (very positive) Trustpilot reviews, press mentions, and company stats. After some UserTesting sessions, I was able to launch a couple of Optimizely tests to get a quick read on impact. Survey questions around trust were also added to make sure trust was actually being affected.
We had learned from phone interviews, survey results, and talks with our agents that uncertainty around appropriate coverages and deductibles was a point of concern for our clients. To address that, we explored an in-content recommender widget as well as a more fleshed-out end-of-flow experience charting out what different coverages and deductibles meant, along with our personalized recommendations for each. Survey questions were used to de-risk the initiative, followed by UserTesting of a prototype, and then the feature was launched as an A/B test with encouraging results.
From my competitive research, I identified a handful of different form flow formats that could be worth testing. Some were more visual, like new graphic headers I ran comparative UserTesting on. Other explorations were more extreme format changes, like sentence-style forms and a chatbot. The feedback from users on the phone and from UserTesting was that our current flow was simple, clean, and intuitive, and while maybe a little bland, the overall impression of the design and style was quite positive. Because the research did not support this as a big problem, and a significant style change would impact all of the other product verticals, this effort was put on the back burner until the marketing creative team completed their new brand guidelines. The bullet list shown here is a catalogue I made of all the elements I could find in different form flows, to help guide our choices.
I also explored some alternative form field styles. In UserTesting and in our Mouseflow recordings, I would see users click on the label box instead of the input field and express some frustration, as the label area looks like a field itself. For various designs, I ran a batch of UserTesting sessions to compare usability and design metrics against the control (one test analysis is shown in the spreadsheet image). Scores did not show much improvement over the control, which suggested that while a visual change might make us designers feel better, it might not have much of an impact on our users.
One problem identified early for home and auto insurance shoppers was that it was hard to compare rates provided by different carriers with different terminology and formats. We had built a basic comparison chart PDF to send to clients when their comparison quotes were ready, and I was iterating on it to factor in the complexity of bundling and mixed-carrier comparisons. I had also heard clients talk about savings as percentages or as absolute dollar amounts, so I added a clear callout for how much savings were offered and for the differences between columns. These designs were then run through UserTesting to test for comprehension and value, with a very positive reception.
While I was mainly focused on our new home & auto insurance consumer-facing product, I also mentored and workshopped with a Senior Product Designer who was focused on our custom CRM tool, Umbrella. We would meet twice a week and I'd provide feedback and guidance on her projects. Outside of her regular workstream, this included formalizing the design system for Umbrella, auditing the current uses of UI patterns, and working together to clean up the design language of the platform, particularly around the areas we identified in a design workshop: 1) consistency, 2) hierarchy, and 3) best practices, to start improving the usability of the internal platform. Unfortunately, due to the internal nature of the product, I can't show much here.
At two separate internal hackathons, I worked with a small group of PMs and engineers, including the VP of Engineering, on some fun data science and machine learning projects. One involved identifying which content articles to show next to help maximize down-funnel lead conversion. The second involved cluster analysis of our users based on on-page behavior, then using linear regression to predict how those clusters (and form flow answers) impacted lead and in-force conversion for our life insurance customers. I did hands-on work in Datalab with pandas, numpy, and various charting libraries, as well as some scikit-learn linear regression.
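A rough sketch of the second hackathon approach, using synthetic stand-in data (the real features came from on-page behavior and form flow answers, which I can't share here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for on-page behavior features
# (e.g. time on page, fields edited, scroll depth).
behavior = rng.normal(size=(500, 3))

# Cluster users by behavior, as in the hackathon project.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
clusters = kmeans.fit_predict(behavior)

# One-hot encode cluster membership, combine with a form-flow answer,
# and fit a linear regression against a (synthetic) conversion signal.
cluster_onehot = np.eye(4)[clusters]
form_answer = rng.integers(0, 2, size=(500, 1))
X = np.hstack([cluster_onehot, form_answer])
y = rng.random(500)  # stand-in for lead / in-force conversion

model = LinearRegression().fit(X, y)
print("per-cluster coefficients:", model.coef_[:4].round(3))
```

The per-cluster coefficients are what let us talk about how each behavioral cluster related to conversion, relative to the other inputs.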