Following user feedback is a Product Management virtue. But is there an actual way to implement it, between all the noise, bugs, and stakeholder requests?

Most teams claim they are customer-driven. Yet the moment you open Zendesk, App Store reviews, survey results, and Slack threads, you instantly remember why everyone quietly avoids this work. Feedback is everywhere: contradictory, emotional, duplicated, and nearly impossible to turn into decisions. It is chaos disguised as “insights.”

This is why the new Amplitude AI Feedback release caught my attention and made it an easy decision to partner with them on this update. It connects what users say with what they actually do, in one workflow. No extra tools. No extra tabs. You see their words, frustrations, and praise. You see their behavior. And AI transforms it into ranked themes, rising trends, top requests, and complaints.

Noise turns into clarity. Opinions turn into patterns. Patterns turn into action.

And because it is native inside Amplitude, it kills the biggest problem in feedback work: fragmentation. Everything flows into analytics, session replay, and cohorts, creating a full loop from insight to fix. You can trace why an issue matters, how many users care, how it impacts behavior, and which actions you should take. Finally, a single source of truth for PMs, UX, CX, and marketing.

I’m also genuinely impressed with the supported feedback sources: App Store, Google Play, Zendesk, Intercom, Freshdesk, Salesforce Service, Gong, Trustpilot, G2, Reddit, Discord, and X. Slack arrives in Q1, and there will be more.

If you have ever felt overwhelmed by feedback, this is one of the first attempts I have seen that genuinely solves the operational pain, not just the reporting part.

It launches… today! Take a look: https://lnkd.in/dAJKeTez

What is the most successful update you know of that came from a product’s users? Let me know in the comments.

#productmanagement #productmanager #userfeedback
UX Design Feedback Loops
Explore top LinkedIn content from expert professionals.
-
The frequency of design reviews and the speed of user feedback are the best indicators of amazing design projects. The process itself is less critical. It doesn't matter whether your team uses design sprints, loops, or diamonds. The goal is to align the complexity of the user journey with the design process.

A design process aligns stakeholders to create user experiences that drive the business forward. It should match the business's goals with user needs. However, this often fails when the user becomes an abstract concept in the design (it’s hard to manage this messiness).

The process alone will not produce the perfect design. The success of a design reflects the team creating it and the audience it serves, considering timelines, feasibility, and project management. All these factors matter. However, after years of running design projects, two indicators have had a disproportionate impact on design success:

• Regular design reviews
• Fast user research

Why?

Consistent design reviews offer opportunities to align:
→ Stakeholders with shared goals
→ The team collaborating and sharing ideas
→ Business goals with clear direction
→ Technical limits within available tech
→ Market trends with industry standards
→ Compliance with laws and regulations

Fast user research and feedback provide:
→ Understanding of user needs and behaviors
→ Regular iteration and improvement
→ Ongoing usability testing for better designs

This combination keeps everyone on the same page, informed by the latest user data, and able to make quick, informed design decisions. I’ve seen it create more effective and timely design outcomes. Stakeholders are usually happier, too.

While upfront user research is important to understand the problem, regular reviews and consistent testing with a targeted audience will quickly reveal gaps.

#productdesign #productdiscovery #userresearch #uxresearch
-
In UX, we talk a lot about what users think, but we rarely study how their attitudes actually change over time. Most research still relies on one-time surveys like SUS, NPS, or post-test ratings. These snapshots are useful, but they tell us almost nothing about how trust grows, how frustration accumulates, or how confidence rises and collapses after a single confusing update. Attitudes are not steady states. They are trajectories shaped by experience.

There are scientific ways to track those trajectories.

Continuous-Time SEM lets researchers measure how satisfaction or trust evolves in real time, even if we collect feedback at irregular moments. A streaming app can trigger a question after each session and see exactly when enjoyment starts to drop, so recommendations can intervene before disengagement sets in.

Latent Transition Analysis helps us understand how people move between hidden states such as novice, intermediate, competent, or stuck. Instead of guessing who needs help in onboarding, we can calculate the probability a user will progress or remain frustrated and then redesign tutorials to move them forward.

Bayesian Hierarchical Models solve a common UX problem: what if we do not have huge samples like consumer apps do? With twenty or thirty enterprise users, traditional statistics break down, but Bayesian methods can still model growth and decline in attitudes. They can reveal that confidence improves for new employees but decreases for experts after a redesign, a pattern that would otherwise remain invisible.

Joint Modeling goes further by connecting attitude trends with real outcomes such as churn. It can show that a drop in usability or motivation predicts cancellation two weeks before users actually leave, turning measurement into prevention.

One of the most powerful and practical tools is Hidden Markov Modeling. Instead of relying on surveys, it infers emotional states from behavior like hesitation, rage clicks, repeated backtracking, or abandoned tasks. It detects frustration even when people are silent, revealing emotional shifts that traditional surveys fail to capture.

If you want to go deeper into these methods and see more concrete examples, I put together a full breakdown on the blog. You can read it here: https://lnkd.in/eY_Nwme2
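To make the Hidden Markov idea concrete, here is a minimal forward-algorithm sketch that infers a hidden "frustrated" state from a stream of observable behaviors. The states, event types, and every probability below are illustrative assumptions for the sake of the example; a real model would estimate them from session data (e.g., with a library like hmmlearn).

```python
# Toy two-state HMM: infer P(frustrated) from observed behavior events.
# All probabilities are invented for illustration, not calibrated values.

STATES = ["engaged", "frustrated"]

# P(next state | current state) -- assumed
TRANS = {
    "engaged":    {"engaged": 0.9, "frustrated": 0.1},
    "frustrated": {"engaged": 0.3, "frustrated": 0.7},
}
# P(observed event | hidden state) -- assumed
EMIT = {
    "engaged":    {"normal_click": 0.8, "rage_click": 0.05, "backtrack": 0.15},
    "frustrated": {"normal_click": 0.2, "rage_click": 0.45, "backtrack": 0.35},
}
START = {"engaged": 0.95, "frustrated": 0.05}

def frustration_probability(events):
    """Forward algorithm: P(hidden state | events so far), renormalized each step."""
    belief = dict(START)
    for e in events:
        # predict: propagate the belief through the transition model
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
        # update: weight by how likely each state is to emit this event
        belief = {s: predicted[s] * EMIT[s][e] for s in STATES}
        total = sum(belief.values())
        belief = {s: v / total for s, v in belief.items()}
    return belief["frustrated"]

session = ["normal_click", "rage_click", "rage_click", "backtrack"]
print(round(frustration_probability(session), 2))
```

A run of rage clicks and backtracking pushes the frustration posterior well above the baseline, which is exactly the "detects frustration even when people are silent" effect described above: no survey question was asked, only behavior was observed.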
-
We spent $2M building "the most comprehensive patient experience dashboard in the industry." Hospital executives loved the demos. The visualizations were beautiful. The data was clean.

Nobody used it.

Three months in, I finally understood why: we'd built a quantitative masterpiece that ignored qualitative reality. Our dashboard could predict average length of stay across thousands of patients. But it couldn't tell our clinical lead what she actually needed at 2 PM on a Tuesday: whether patient A in room 123 was getting anxious about discharge.

That's the trap most data product teams fall into. We pick a side. The quant folks build dashboards and A/B tests: great for "what" questions but terrible for "why." The qual folks run user interviews and read support tickets: rich context, but it doesn't scale. Both miss the magic that happens when you combine them.

Here's what changed for us: we built what Sachin Rekhi calls "feedback rivers," continuous streams of customer feedback that merge quantitative signals with qualitative context in real time. (We didn't have a name for it back then.)

Traditional approach: schedule focus groups, design surveys, manually dig through tickets. Takes weeks. Our NLP-powered feedback system surfaced this in 30 minutes:
→ A dozen support tickets: "confusing medication reminders"
→ Multiple support calls: "managers don't understand the app"
→ Interview quote: "its pretty but i don't know what to do about it"

We simplified the interface. Two weeks later:
→ 30% improvement in completion rates
→ 25% increase in adherence scores

It was about connecting quantitative signals with qualitative context instantly.

I just published a deep dive on this: how to build your own feedback river, avoid common pitfalls (drowning in data, over-relying on AI summaries), and create a culture where stories and stats inform each other. It also includes a 30-day action plan to get started. Link in comments. 👇
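The core mechanic of a feedback river, tagging raw qualitative feedback with themes and attaching a quantitative signal (how many tickets hit each theme), can be sketched in a few lines. This is a toy keyword-matching version, not the NLP system the post describes; the theme names, keyword sets, and sample tickets are all invented for illustration.

```python
# Toy "feedback river" step: tag tickets with themes, then rank themes by volume.
# Theme keywords and sample tickets are illustrative assumptions.
from collections import Counter

THEME_KEYWORDS = {
    "confusing_reminders": {"reminder", "reminders", "confusing"},
    "unclear_dashboard":   {"dashboard", "understand", "chart"},
    "slow_app":            {"slow", "loading", "lag"},
}

def tag_themes(text):
    """Return every theme whose keyword set overlaps the ticket's words."""
    words = set(text.lower().split())
    return [theme for theme, kws in THEME_KEYWORDS.items() if words & kws]

def rank_themes(tickets):
    """Merge the qualitative tags into a quantitative signal: counts per theme."""
    counts = Counter()
    for t in tickets:
        counts.update(tag_themes(t))
    return counts.most_common()

tickets = [
    "The medication reminders are confusing",
    "I keep missing reminders",
    "Managers do not understand the dashboard",
    "App is slow when loading charts",
]
print(rank_themes(tickets))  # most frequent theme first
```

In a real pipeline, the keyword matching would be replaced by embedding-based clustering or an LLM tagger, but the shape is the same: qualitative text in, ranked quantitative themes out, continuously rather than in week-long batches.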
-
We don’t guess what users want; we ask. That’s how we build digital products users rely on.

Here’s how we make feedback the superpower behind great UX 👇

Step 1: Listen Deeply
We run:
‣ 1:1 user interviews
‣ In-app surveys & session recordings
‣ Live usability testing

Step 2: Turn Chaos into Clarity
We map raw feedback into themes:
‣ Usability issues (e.g. confusing navigation)
‣ Feature gaps (e.g. missing integrations)
‣ Friction points (e.g. slow checkout)

Step 3: Design, Test, Validate
We co-create with your team:
‣ Interactive prototypes (Figma)
‣ Real user validation before dev
‣ Accessibility & performance checks

Step 4: Ship Fast, Measure Faster
Every improvement is:
✔️ A/B tested
✔️ Backed by analytics
✔️ Tied to measurable ROI

Who This Helps
‣ SaaS & Tech → Reduce churn, improve onboarding
‣ Fintech → Simplify UX, boost adoption
‣ Healthcare → Design for clarity & trust
‣ Enterprise tools → Optimize internal workflows

What You Get
✅ UX audit + feedback dashboard
✅ High-fidelity mockups & tested flows
✅ Real user insights + recordings
✅ Optional: Monthly UX performance reports

💡 User feedback is the fastest way to build what people love. Let’s make it part of your product growth strategy.
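The "A/B tested" check in Step 4 boils down to asking whether an observed lift could plausibly be chance. One common way to answer that is a two-proportion z-test on conversion or completion rates; the sketch below uses invented sample numbers purely for illustration, and a real experiment should fix its sample size before looking at results.

```python
# Sketch of a two-proportion z-test for an A/B test on completion rates.
# Counts below are invented for illustration.
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# control: 120/1000 completed; variant: 160/1000 completed
p = two_proportion_p_value(120, 1000, 160, 1000)
print(f"p-value: {p:.4f}")  # small p -> the lift is unlikely to be chance
```

The same function run on a tiny lift (say 100 vs. 105 conversions per 1000) returns a large p-value, which is the signal to keep collecting data rather than ship a conclusion.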
-
Relying on intermittent feedback to drive product decisions is a risky move.

I see this a lot with teams just getting started with quantitative research. They’ll run an ad hoc survey, get a snapshot, and use that to guide decisions. But snapshots don’t tell the whole story.

Think of it like running a restaurant. Would you rather see performance across all seven days, or just Friday night? Friday might look great, but it could mask problems on Saturday through Thursday. It’s the same with one-off surveys: if your feedback is incomplete or outdated, your decisions can be, too.

Continuous feedback (i.e., getting data every night) solves this. More advanced teams like Ramp, Figma, and HelloFresh run ongoing surveys to make sure they’re always working from fresh, relevant insights. It’s a live dataset, not a frozen one. And it gives them the full picture.

So I’ll leave you with this: which dataset would you rather use to guide your next product decision?
-
A few days ago, I was reviewing a client’s product with a fresh pair of eyes. Everything looked fine… until I watched a few new users try it.

Clicks in the wrong places. Confusion over labels. Abandoned tasks.

But this was not a bad-user problem. It was a lack-of-feedback problem. If you’re not actively asking users what they think while you build, you’re designing blind.

Simple ways to fix it:
— Quick surveys after key actions
— In-app feedback buttons
— Regular user testing sessions

Every insight gathered is a tiny roadmap. It tells you what works, what frustrates, and what could be better.

Your product doesn’t have to be perfect. It just has to keep learning. And that learning comes from the people who actually use it. 💡

#software #UX #SaaS