UXR teams run concept tests to help establish product-market fit. They run usability tests to ensure products are easy to understand and use. And they run post-launch evaluative interviews (or surveys) to enable smart iterations and hopefully a delightful experience. In a perfect world, those “research lines of defense” would suffice, and we’d end up with a bunch of delightful experiences and therefore no churn.
Software has become exponentially more complex, and creating usable, let alone delightful, user experiences has become increasingly challenging. Over the last decade, we've sliced and diced data in every way known to humanity, yet we still fail to understand people at scale. We need more research.
One thing stands out in the discourse on empathy in UX research: the focus is on the one-directional empathy of a researcher toward the user they are attempting to understand. What about the other direction? What's the impact of users' empathy toward the researchers interviewing them? That's a question we've not seen discussed.
AI is already driving profound innovation and leading us to societal transformation. Yet, as systems like Genway AI gain more traction in their respective areas, legitimate concerns arise regarding transparency, safety, and accountability. The answer to these concerns doesn't lie in halting technological progress but rather in crafting a foundation of trust from the get-go. Building trust-first AI software means ensuring it aligns with foundational humanistic values, safeguards user data, and operates within ethical boundaries.
Genway AI DEI Series, Chapter 1: Navigating Technology

My grandmother never learned to use Google Maps; she couldn't be bothered. But she hasn't tried any other kind of map either… So whenever she gets lost around town and needs help getting home, she calls her daughters, or worse.