I believe all good product design starts with a deep understanding of our users and their unique problems and needs. When starting on a project, I like to begin with user phone interviews to get a first-hand sense of the value people derive from the product and where problems/opportunities might lie. After listening for specific insights, I like to then validate those at scale through wider surveys.
Getting products and features in front of the target audience early and often keeps the concepts grounded and can validate (or invalidate) assumptions. Whether that means getting feedback on mocks or interactive prototypes, or further testing released features, looking for friction points along with moments of frustration and confusion can help make for a successful product.
I believe all user research should produce actionable insights, things that can be done to solve issues that arose in usability testing, or possible adjustments to test in following sessions. At small scale, not all issues are going to generalize to the larger population, so it’s important not to fixate on individual complaints, particularly expressed opinions, but to look for patterns, or at least take note of issues that can be validated at scale through analytics or a wider survey. In general, you get more accurate data observing what people do than asking what they think they will do.
To me, design is problem solving. Either we are starting with a validated user problem or alternatively, a business goal, like moving a specific KPI or OKR. Ultimately, it’s not the user’s job to propose solutions to their problems, it is our job as product designers to come up with elegant solutions. Articulating clear jobs-to-be-done from user research can be very effective in guiding solutions that are solving real problems rather than just brainstorming solutions in search of problems.
Once a user problem or business goal has been established, I like to start with competitive research, not just within the industry, but across any product that has solved a similar problem. I look for parallel structures and metaphors that could apply to the problem at hand. That builds awareness of current design patterns and informs how to combine design elements into a cohesive whole that feels familiar and intuitive while still artfully solving the problem by pulling inspiration from diverse sources.
I typically start my ideas as quick sketches on paper, trying out variations, exploring concepts and listing questions that could be answered through further thought or research.
My process from there depends on who is doing the final mockups. If it's me, I might skip mid- and high-fidelity wireframes and jump straight to high-fidelity mocks, which is fast these days with well-structured Sketch assets and style guides. Interaction notes might live as annotations on the final designs or as bullet points on a Trello/Jira ticket for the engineers. It's also great when the engineers sit nearby and we can fine-tune things together as they build; it all depends on how the team likes to operate.
I’m a little skeptical when I hear of UX designers who have no skill in visual design. Wireframing is fundamentally a skill of layout, one where even at low fidelity the shapes, proportions, and positioning of elements all come together to create a usable, intuitive, and beautiful composition.
Having done visual design along with my work in product planning and wireframing, the end product stays in my mind, as that is what the user will be experiencing.
With the rise of ubiquitous mobile devices and the design systems proliferated by Apple and Google, the average person has come to expect decent design as table stakes just as apps are expected to function according to basic rules of usability. It’s never been easier to copy functionality, and when two products with similar features are competing in a global marketplace, great design can be the meaningful differentiator, creating an emotional brand connection highlighted by designed moments of delight.
I believe that bringing an idea to life with an interactive prototype is one of the best ways to test an initial concept with users, and also one of the best ways to get stakeholder buy-in. Catching an awkward interaction or confusing copy in a user testing session before a feature gets built can save user frustration and engineering churn. Animated prototypes are also very helpful in communicating interactions and flows to engineers. How something looks versus how it feels on a device can be rather different, so getting the interactions feeling just right is key.
I’ve always enjoyed working closely with engineers, product managers, and data analysts, as well as marketing and sales. Understanding the needs, goals, and activities of other parts of the organization informs how I think about product. User experience covers the intersection of many different customer touch points, so having a holistic perspective on the larger business efforts helps inform my own product decisions.
In leading design teams, I see my job as enabling talented designers to come together to meet the design needs of the organization. That means planning design sprints, running design ideation sessions, organizing design reviews, and mentoring other designers to help them solve problems and level up their skill sets.
I am a strong proponent of A/B testing, having run dozens of experiments to quantitatively evaluate product hypotheses. A/B testing is the scientific method applied to business practice, validating product ideas under realistic market conditions. In my experience, A/B testing can be used for much more than small copy and color tweaks: full features can be released to a small percentage of the user base to judge user response before rolling out to the full population. (It’s also handy for dark-launching features for QA purposes.)
You can do plenty of small scale user testing and collect qualitative information, but until the product is used in its natural context with real users with real intent, you don’t really know how it’s going to perform. A/B testing lets you do that, and make sure you’re actually making the product better, not worse.
A/B testing can be a great way to keep making iterative improvements to the product, and it is particularly well suited to optimizing conversion funnels. Testing results can even get engineers excited, as they see unambiguous evidence that their efforts are making an impact. A/B testing also helps temper the HiPPO (highest paid person’s opinion) with irrefutable data, where otherwise losing ideas might be over-invested in or winning ideas dismissed.
To make an accurate assessment of an A/B test, it is essential to have your analytics instrumentation well structured and to clearly define what success or failure looks like before you start the experiment. Making calls on tests with data that you don’t trust is not advised.
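As a concrete illustration of defining success up front: a conversion-rate A/B test is commonly read with a two-proportion z-test, where the success threshold (say, p < 0.05) is fixed before the experiment begins. A minimal sketch in Python, with hypothetical conversion counts:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of control (A) and variant (B).

    Returns the z statistic and a two-sided p-value using the
    pooled-proportion standard error.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 480/10,000 conversions for control,
# 560/10,000 for the variant
z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
```

With these made-up numbers the lift clears the conventional 0.05 significance bar; the point is that the bar itself was chosen before looking at the data.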
When we’ve settled on a viable solution for a user problem or a business goal, next I ask: how will we evaluate its success or failure? Collecting more qualitative feedback from interviews or surveys can provide insight into problem-solution fit, but peering directly into users’ behaviors is the best way to see if your work has changed anything about your business.
Having good user tracking is essential for A/B testing, but it also reveals where friction is happening, where drop-offs in the funnel are occurring, and what patterns exist in user behavior, all of which can be used to identify targets for future product improvements. Those improvements can then be evaluated against a defined baseline rather than relying on customer feedback, where product success is defined by “our users seem to like it.”
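To make the funnel drop-off point concrete, here is a minimal sketch that counts how many users reached each step of an ordered funnel and the drop-off rate between consecutive steps. The step names and event data are hypothetical; real analytics tools compute this for you, but the underlying arithmetic is this simple:

```python
from collections import Counter

def funnel_dropoff(events, steps):
    """Given per-user event sets and an ordered list of funnel steps,
    return (step, users_reached, dropoff_rate_from_previous_step)."""
    reached = Counter()
    for user_events in events.values():
        for step in steps:
            if step in user_events:
                reached[step] += 1
            else:
                break  # user dropped out here; skip later steps
    report, prev = [], None
    for step in steps:
        count = reached[step]
        drop = 0.0 if prev in (None, 0) else 1 - count / prev
        report.append((step, count, drop))
        prev = count
    return report

# Hypothetical event log keyed by user id
events = {
    "u1": {"visit", "signup", "purchase"},
    "u2": {"visit", "signup"},
    "u3": {"visit"},
    "u4": {"visit", "signup"},
}
report = funnel_dropoff(events, ["visit", "signup", "purchase"])
```

In this toy log, the biggest drop-off is between signup and purchase, which is exactly the kind of target for future improvement the paragraph above describes.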
I have worked on setting up product analytics instrumentation for event and user-property tracking, along with troubleshooting the system to make sure our data could be trusted. Making decisions off of bad data is worse than having no data. Clean data also allows you to slice into cohorts where trade-offs might be occurring and make informed decisions about those costs. For example, it’s easy to create a big, bold, attention-grabbing new feature, and just by the attention it demands, it will get usage. But if that usage comes at a cost to retention, to other more important features that are now overshadowed, or to positive brand association (“it’s spammy” is not what you want to hear), then those trade-offs need to be weighed.
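One lightweight way to keep event data trustworthy is to validate incoming events against a tracking plan before they pollute the analytics. A minimal sketch, with a hypothetical schema (real tracking plans vary by analytics tool):

```python
# Hypothetical tracking plan: event name -> required properties
SCHEMA = {
    "signup_completed": {"plan", "referrer"},
    "purchase": {"sku", "price"},
}

def validate_event(name, properties):
    """Flag events that would make the data untrustworthy:
    unknown event names or missing required properties."""
    if name not in SCHEMA:
        return [f"unknown event: {name}"]
    missing = SCHEMA[name] - set(properties)
    return [f"missing property: {p}" for p in sorted(missing)]

errors = validate_event("purchase", {"sku": "A-1"})
```

Running a check like this in CI or on a staging firehose catches instrumentation drift early, which is cheaper than discovering mid-experiment that a key property was never sent.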
In product team discussions, assertions are often made about what our users do or don’t do, and I find that having a definitive source for user behavior can dramatically cut down on those arguments.
I’m a supporter of the Lean Startup methodology, where beginning with an MVP means testing your assumptions as quickly as possible to learn and course-correct with each iteration. A/B testing comes in handy for evaluating each iteration, to know whether you are moving in the right direction. Without that build-measure-learn feedback loop, product development can fall into the realm of guesswork. Of course, you have to start somewhere, which is where a deep understanding of your users and their problems allows the initial product to be more than throwing things at the wall to see what sticks.
When it comes to optimizing the product, especially funnel optimization, iterating through novel variations can achieve meaningful lift over time. I find that ideas gleaned from competitive research can yield a long list of approaches and copy to test. (It’s a fair guess that many top apps optimize their own funnels, so trying out their approaches is a great place to start.)
The value of iterations is knowing that we are moving the product forward, toward either solving user problems or achieving business goals. Each iteration should be evaluated through further user testing to discover whether the initial problems were well solved and what new problems may now be at the forefront. Product metrics also inform how the product is progressing. Factoring in qualitative data (to check whether the larger problems are being solved) helps reduce the odds of hitting a local maximum.