🥇 “How To Prioritize Design System Requests” (+ Figma templates) (https://lnkd.in/eTsVNdcU), a step-by-step approach to managing and prioritizing requests in your design system: scored against reusability, product area, alternative solutions and effort, then reviewed, groomed and broken down into tasks by an on-call squad. A practical case study by Alexander Fandén and the wonderful Agoda team. 👏🏼👏🏽👏🏾

Guide + video: https://lnkd.in/e2x78wuC
New component request (Figma): https://lnkd.in/ezxSbX8r
Component improvement template (Figma): https://lnkd.in/e_4A_-a3
Icon request template (Figma): https://lnkd.in/erwnwAiZ
Presentation + Notes: https://lnkd.in/e9UgB_Qc

🤔 As design teams grow, so do requests for the design system.
🤔 Different teams have conflicting needs → conflicting requests.
🤔 With 60 product teams and 1,000 running A/B tests, time is critical.
🚫 Poor coordination → misaligned priorities, dropped requests.
🚫 If a design system can’t deliver on time, it becomes a bottleneck.
✅ Set up a new board exclusively for feature requests.
✅ It’s organized by status and priority (highest → lowest).
✅ 4 request types: features, visual assets, tokens, tooling.
✅ Set up problem statement/solution kits and Figma templates.
✅ Figma templates include design specs, use cases, context.
✅ Requests are scored (high → won’t fix) on 4 key criteria:
↳ Product area, Reusability, Alternative solutions, Effort.
✅ Set up a rotating on-call squad: designer, engineer, PM, QA.
✅ The squad reviews requests; the team grooms them every 2 weeks.
✅ Store tickets in separate boards for each scrum team.

Personally, I love how simple yet well-structured the process is. Too often, decisions are made based on the loudest voice in the room, without any workflow that prioritizes the work with the highest impact and the highest relevance for all product teams. This approach changes that.
Plus, as Alexander noted, it’s important that stakeholders can track progress by viewing the status of all linked tickets within the feature request. They can also add themselves as watchers to receive automated updates on any changes or comments, along with automated Slack announcements. And: for any process to be followed, it’s not enough to make it easy to follow. What has been helpful is to also make sure that it’s difficult not to use it. That’s where templates in Jira and in Figma can help, and make sure that we don’t miss any critical details, dependencies, variants and use cases. Kudos to the Agoda team for the fantastic work and for sharing their insights and Figma templates in public! 👏🏼👏🏽👏🏾 #ux #DesignSystems
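The scoring step described above (each request rated on product area, reusability, alternative solutions and effort, then bucketed from high priority down to won’t fix) can be sketched in a few lines. This is a hypothetical illustration: the weights, thresholds, and the `priority` function are my own assumptions, not Agoda’s actual rubric.

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    product_area: int   # 1-5: how strategic is the requesting product area?
    reusability: int    # 1-5: how many teams could reuse this?
    alternatives: int   # 1-5: 5 = no reasonable workaround exists
    effort: int         # 1-5: 5 = very expensive to build

def priority(req: Request) -> str:
    # Higher value and lower effort -> higher priority (illustrative thresholds).
    value = req.product_area + req.reusability + req.alternatives
    score = value / req.effort
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    if score >= 1.5:
        return "low"
    return "won't fix"

reqs = [
    Request("New date-picker component", 5, 5, 4, 2),
    Request("Team-specific gradient token", 2, 1, 2, 4),
]
for r in reqs:
    print(r.name, "->", priority(r))
```

A broadly reusable, hard-to-work-around component scores high; a single-team cosmetic token with cheap workarounds falls to won’t fix, which mirrors the intent of the board described in the post.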
Engineering Design Process Steps
-
Collaboration is the Real Design Language

Every space we create begins with an idea. Yet the idea is only the starting point. What gives it shape, depth, and purpose is the collaboration that follows.

Over the years, I have realised that design moves forward when people do. When architects, engineers, various stakeholders and clients work together with openness, the process becomes far richer than any single vision could achieve.

Collaboration, for me, is less about coordination and more about chemistry. It is that invisible connection between disciplines that allows creativity to flow freely. When everyone at the table feels ownership of the outcome, something remarkable happens: ideas begin to build on each other instead of competing for space.

Architecture has taught me that every expert brings their own layer of intelligence. The engineer who simplifies a complex detail. The client who sees how people will use the space. The craftsman who turns drawings into something tangible. When these perspectives converge, design starts to breathe.

At JTCPL Designs, this spirit defines the way we work. We believe that good design is never an individual pursuit. It grows out of dialogue, trust, and shared intent. Collaboration is where design finds its rhythm.

#Design #Leadership #Creativity #Collaboration #Success
-
Innovation doesn’t happen in isolation. It happens when teams, disciplines and companies decide to build real relationships—the kind that push boundaries instead of protecting comfort zones.

That’s why the story of IDEA Design Mindset in Spain stands out: a real reminder of the power of collaboration done right.

IDEA Design started as a product-development studio in Murcia with a clear aim: blend strategy, engineering and design into solutions that genuinely solve problems. Their work now spans medical devices, industrial design, packaging, and technical product development.

What matters isn’t just the portfolio—it’s how they operate. They partner deeply, stay close to customer challenges, and co-create instead of designing in a vacuum. That relationship-first mindset is why their journey has been packed with global recognition:

- iF Design Awards in the Medicine/Health category
- New York Product Design Awards
- Red Dot and BIG SEE accolades across multiple years

Awards don’t matter on their own. What matters is why they’ve won them: because they build trust with clients, learn the nuances of the industries they serve, and create long-term engagement instead of transactional output.

In healthcare and medtech—where risk is high, timelines are tight, and user experience is mission-critical—this approach isn’t optional. It’s the difference between shipping a product and shaping a market.

Their work with companies like INBENTUS Medical Technology, developing rugged field-ready ventilators, is the perfect example. That type of device doesn’t happen without tight collaboration between designers, engineers, clinicians and manufacturers. It takes aligned teams, clear communication and shared accountability. It’s a demonstration of how the right relationships multiply capability.

And that’s the point worth highlighting: IDEA Design’s journey is proof that strong partnerships drive stronger outcomes.
Looking ahead, their future will be shaped by the same principles that built their past: deep collaboration with clients, cross-functional development, and a commitment to understanding needs before solving them. Good people making a difference sounds so simple. But it’s the simple things people miss, and those are what really make a difference!
-
The usual thinking often goes, "We're changing the website/platform, so there's no point optimizing what we already have." This perspective, while common, can inadvertently equate experimentation solely with optimisation, potentially overlooking the enormous benefits of integrating a truly experimental approach into development and innovation.

A replatforming or redesign project typically involves a complex decision-making and MoSCoW-style exercise centered around a set of features. It's often impossible to exactly replicate old features on a new platform, meaning crucial decisions must be made about what's essential and what might be dropped. Likewise, new platforms can introduce various potential new features, but are they truly worth the investment?

These decisions can become complex, political, and increasingly stressful as deadlines loom. The risk is that choices are made based on internal influence rather than what will genuinely serve the customer, which is inherently difficult to guess.

How can you better manage this process? How can you genuinely know what will deliver the best customer experience and commercial outcomes? EXPERIMENTATION!

When done properly, experimentation (including but not limited to A/B testing) can fast-track this entire process and help you deliver a project that actually works. Consider starting by creating a comprehensive list of all feature disparities that need to be addressed. Then, establish an initial prioritization. Next, plan and run experiments for each consideration. Finally, assess the likely benefit.

Some experiments are remarkably straightforward. If a new platform won't include a particular feature "out of the box," you could A/B test removing it from your existing site to understand its true importance. Others might be more challenging. If a new platform offers recommendations but at additional cost, you could conduct more rudimentary experiments on your existing site to test the core concept.
Moreover, these features don't have to be front-end; the same process can be applied to backend operational features if you have the right expertise. Experimentation isn't just optimisation; it's a critical tool for informed innovation. #experimentation #cro #productmanagement #growth #digitalexperience #experimentationledgrowth #elg #growthexperimentation
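The "assess the likely benefit" step above usually comes down to asking whether an observed difference in conversion is real. A minimal sketch, using a standard two-proportion z-test with only the standard library; the function name and the traffic/conversion numbers are invented for illustration:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control keeps the feature, variant removes it (made-up numbers):
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=430, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value would suggest the feature genuinely drives conversion and should survive the replatforming; a large one is evidence it can be dropped without much risk.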
-
Cross-functional collaboration isn’t just a “nice to have” — it’s what makes products succeed. In larger organizations especially, it’s critical to know who to involve, when, and how. Without clarity, tasks get handed over like hot potatoes… and too often, things get lost in translation. The reality: handovers break context. Every time work moves from one function to another, you risk misalignment, delays, or simply missing the why behind a decision. The ideal? A flow where teams move together, not in silos. Where PMs, designers, engineers, data, and PMMs aren’t just “next in line” but actively engaged across the product journey. That’s what cross-functional teamwork does: it reduces handovers, increases shared ownership, and creates a smoother path from discovery to go-to-market. 👉 If you’re in product, you’ve probably felt the difference between a handover-heavy project and one where collaboration flows. The latter is faster, clearer, and way less messy.
-
As Business Analysts, we often face a mountain of stakeholder requirements—but not all can be delivered at once due to time, budget, or resource constraints. That’s where requirement prioritization techniques come in—to help teams focus on what delivers maximum value first.

👇 Here are 7 practical techniques I use (with real-world examples):

1️⃣ MoSCoW Technique (Must, Should, Could, Won’t)
✅ Used in: Agile projects with tight sprints.
Example: In a mobile banking app:
- Must: User login and money transfer
- Should: View recent transactions
- Could: Set custom notifications
- Won’t: Currency conversion (for this release)
👉 Helps align delivery with MVP scope.

2️⃣ Kano Model
✅ Used in: Product feature analysis based on user satisfaction.
Example: For a food delivery app:
- Basic Needs: Track order, payment integration
- Performance Needs: Fast delivery, real-time tracking
- Delighters: AI-based food recommendations
👉 Helps differentiate must-haves from innovation drivers.

3️⃣ Value vs. Complexity Matrix
✅ Used in: Sprint planning or roadmap decisions.
Example: In a healthcare dashboard:
- High Value, Low Effort: Show patient vitals summary
- High Value, High Effort: Integration with wearable devices
- Low Value, High Effort: Dark mode for admin panel
👉 Focus first on quick wins and high-impact items.

4️⃣ WSJF (Weighted Shortest Job First)
✅ Used in: SAFe (Scaled Agile) environments.
Formula: WSJF = (User/Business Value + Time Criticality + Risk Reduction) / Job Size
Example: In a regulatory compliance portal, WSJF helps prioritize GDPR compliance (high risk reduction, medium effort) over UI enhancement (low risk, high effort).
👉 Promotes economic decision-making in large programs.

5️⃣ 100-Dollar Test
✅ Used in: Stakeholder workshops.
How it works: Stakeholders are given “$100” to allocate across features based on value.
Example: In a CRM tool upgrade:
- Lead Scoring: $40
- Email Automation: $30
- Social Media Integration: $20
- Custom Dashboard: $10
👉 Useful for collaborative and quantifiable feedback.

6️⃣ RICE Scoring (Reach, Impact, Confidence, Effort)
✅ Used in: Product-led companies and SaaS prioritization.
Example: For a subscription service platform:
- Reach: Will it affect many users?
- Impact: How much will it improve their experience?
- Confidence: How sure are we of success?
- Effort: How many hours/weeks of work?
👉 Ideal for objective scoring and backlog management.

7️⃣ Eisenhower Matrix (Urgent vs. Important)
✅ Used in: Time-sensitive, operational projects.
Example: In IT Service Management tool enhancement:
- Urgent & Important: Fix for ticket assignment bug
- Not Urgent but Important: Knowledge base restructuring
- Urgent but Not Important: Color change in UI
- Neither: Feature used by very few users
👉 Great for visual prioritization and firefighting tasks.

🎯 Key Takeaway
Prioritization isn’t just about ranking features. It’s about strategic decision-making that balances value, effort, risk, and urgency—all while keeping stakeholders aligned.

BA Helpline
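The WSJF formula from technique 4️⃣ is easy to turn into a backlog sort. A small sketch; the items and the 1-10 ratings are illustrative, echoing the compliance-portal example above:

```python
# WSJF = (User/Business Value + Time Criticality + Risk Reduction) / Job Size
def wsjf(business_value: int, time_criticality: int,
         risk_reduction: int, job_size: int) -> float:
    return (business_value + time_criticality + risk_reduction) / job_size

# Illustrative ratings on a 1-10 scale (not from any real backlog):
backlog = {
    "GDPR compliance": wsjf(business_value=7, time_criticality=9,
                            risk_reduction=9, job_size=5),
    "UI enhancement":  wsjf(business_value=5, time_criticality=2,
                            risk_reduction=1, job_size=8),
}

# Highest WSJF score goes first.
for item, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {score:.1f}")
```

Here GDPR compliance scores (7+9+9)/5 = 5.0 and the UI enhancement (5+2+1)/8 ≈ 1.0, so the high-risk-reduction, medium-effort item outranks the low-risk, high-effort one, exactly the trade-off the technique is meant to surface.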
-
Many products are unable to get traction because companies still think of #Sales, #Marketing, and other relevant skills as separate "teams" instead of one collaborative unit.

For a great product to be continuously built, effectively taken to market, and continuously grown, different people must collaborate intensely. And that goes beyond the skills you typically have in a cross-functional product team.

Most companies architect their teams by "discipline." As a result, Sales gets disconnected from Product Management, Product Design gets disconnected from Marketing, Engineering gets disconnected from Customer Success, and so on.

But here's the thing: Sales talk to customers every single day and are incredibly familiar with the buyer's journey. These insights are pure gold for a Service Designer or a Product Manager in, for example, B2B. The Product Manager may learn continuously from user interviews that customers search for X, Y, and Z to find their product. These insights are gold for Marketing, especially for whoever is dealing with SEO. Customer Success is extremely aware of the common malfunctions users experience with various products. These insights are vital for Product Design, Product Management, and Engineering.

Yes, some companies have various systems and habits in place to track and cross-share all of the above, also meeting regularly to share these insights and "update" each other. But I don't think this is enough in many cases. In some orgs, there are still very different and competing incentives between each "discipline," which causes conflict and blurs the real mission we're all here for: creating value for our customers and for our business. This usually also leads to a deep lack of empathy for each other's craft (e.g. Product doesn't get Sales, Sales are frustrated with Product, Engineering complains about Design, etc.).

But this happens mostly because they are not an actual team. Instead, I'd love to see more cross-functional teams as an actual discipline, with Product, Engineering, Design, Product Marketing, and someone representing Sales, Customer Success, and other relevant core skills.

Keep teams small. But think more about the "jobs to be done" in order to create value in your context, and how you might leverage different skills to do those "jobs" better, and less about the disciplines / roles / "departments" that are already established.

A simple adjustment in this direction is to bring a Product Marketing Manager into the already established cross-functional product team (Engineering, Product, Design), and complement it with someone representing Sales and Customer Success. They all share the same OKRs, and all incentives are aligned with the true north star rather than "discipline" performance. The environment is designed for a small group of people to continuously discover, deliver, and market value that is aligned with both customer and business outcomes.
-
Interdisciplinarity is not a challenge for design science: it is our superpower! 🦸 🦸‍♀️

That is one of the key insights that emerged while working on our new paper (thanks for making it open access, SBUR!): “Design Science Across Disciplines: Building Bridges for Advancing Impactful Business Research”, co-authored with René Mauer, Jan vom Brocke, Marvin Hanisch, Stephanie Schrage, Orestis Terzidis, and Prof. Dr. Barbara E. Weißenberger.

Across information systems, strategy, business ethics & sustainability, entrepreneurship, and accounting, we found something remarkable: each discipline brings its own rich traditions of problem framing, normative reasoning, artefact design, evaluation logic, and engagement with practice. Design science is not one method or one lineage. It has many flavors and strong traditions within each discipline — yet all are united by an interest in addressing questions of “how things should be” and “how to get there.” In my view, this diversity is exactly what makes the DS research community so powerful.

🌍 Business Ethics & Sustainability brings deep normative thinking
🧩 Information Systems brings strong artefact and evaluation methods
💡 Entrepreneurship brings experimentation and action
🔍 Accounting brings institutional perspectives
🎯 Strategy brings tools for shaping desirable futures

Instead of trying to unify these traditions, what if we started intentionally recombining them? Imagine:
- Strategy scholars drawing on design echelons and artefact logic from information systems.
- Sustainability researchers using evaluation methods from design-oriented system development.
- Entrepreneurship scholars integrating normative frameworks from ethics and political philosophy.
- Accounting researchers using design thinking and experimentation to build new institutional solutions.

What new forms of design knowledge could emerge if we proactively borrowed, blended, and hybridized methods across our disciplinary borders?
For me, that is one of the biggest opportunities ahead: 👉 The more diverse our design science traditions become, the more powerful the approach gets in addressing real-world problems. I would love to hear your thoughts: Which tradition from your field has untapped potential to strengthen the broader design science community? DS:E - Center for Design Science in Entrepreneurship ESCP Business School ERCIS German Association for Business Research
-
One of the hardest challenges for product teams is deciding which features make the roadmap. Here are ten methods that anchor prioritization in user data.

MaxDiff asks people to pick the most and least important items from small sets. This forces trade-offs and delivers ratio-scaled utilities and ranked lists. It works well for 10–30 features, is mobile-friendly, and produces strong results with 150–400 respondents.

Discrete Choice Experiments (CBC) simulate realistic trade-offs by asking users to choose between product profiles defined by attributes like price or design. This allows estimation of part-worth utilities and willingness-to-pay. It’s ideal for pricing and product tiers, but needs larger samples (300+) and heavier design.

Adaptive CBC (ACBC) builds on this by letting users create their ideal product, screen unacceptable options, and then answer tailored choice tasks. It’s engaging and captures “must-haves,” but takes longer and is best for high-stakes design with more attributes.

The Kano Model classifies features as must-haves, performance, delighters, indifferent, or even negative. It shows what users expect versus what delights them. With samples as small as 50–150, it’s especially useful in early discovery and expectation mapping.

Pairwise Comparison uses repeated head-to-head choices, modeled with Bradley-Terry or Thurstone scaling, to create interval-scaled rankings. It works well for small sets or expert panels but becomes impractical when lists grow beyond 10 items.

Key Drivers Analysis links feature ratings to outcomes like satisfaction, retention, or NPS. It reveals hidden drivers of behavior that users may not articulate. It’s great for diagnostics but needs larger samples (300+) and careful modeling, since correlation is not causation.

Opportunity Scoring, or Importance–Performance Analysis, plots features on a 2×2 grid of importance versus satisfaction.
The quadrant where importance is high and satisfaction is low reveals immediate priorities. It’s fast, cheap, and persuasive for stakeholders, though scale bias can creep in.

TURF (Total Unduplicated Reach & Frequency) identifies combinations of features that maximize unique reach. Instead of ranking items, it tells you which bundle appeals to the widest audience - perfect for launch packs, bundles, or product line design.

Analytic Hierarchy Process (AHP) and Multi-Attribute Utility Theory (MAUT) are structured decision-making frameworks where experts compare options against weighted criteria. They generate transparent, defensible scores and work well for strategic decisions like choosing a game engine, but they’re too heavy for day-to-day feature lists.

Q-Sort takes a qualitative approach, asking participants to sort items into a forced distribution grid (most to least agree). The analysis reveals clusters of viewpoints, making it valuable for uncovering archetypes or subjective perspectives. It’s labor-intensive but powerful for exploratory work.
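Of the methods above, TURF is simple enough to compute directly once you know which respondents each feature appeals to: exhaustively check every k-feature bundle and keep the one with the largest unduplicated reach. A small sketch with made-up feature names and respondent data:

```python
from itertools import combinations

# feature -> set of respondent ids it appeals to (invented survey data)
appeal = {
    "dark_mode": {1, 2, 3},
    "offline":   {3, 4, 5},
    "sync":      {1, 2},
    "widgets":   {5, 6},
}

def best_bundle(appeal: dict, k: int) -> tuple:
    """Exhaustively search all k-feature bundles for maximum unique reach."""
    return max(
        combinations(appeal, k),
        key=lambda bundle: len(set().union(*(appeal[f] for f in bundle))),
    )

bundle = best_bundle(appeal, k=2)
reach = set().union(*(appeal[f] for f in bundle))
print(bundle, "reaches", len(reach), "unique respondents")
```

Brute force is fine for typical feature counts; with dozens of features and large k, real TURF tools switch to greedy or heuristic search, since the number of bundles grows combinatorially.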
-
Stop Trying to Rank Stories by Business Value

Ranking user stories is fundamentally more challenging than ranking features or epics due to the granular and context-specific nature of stories. Features and epics are larger, cohesive units of value that can be evaluated against strategic priorities like business value, customer impact, and urgency. These higher-level items lend themselves well to frameworks like WSJF (Weighted Shortest Job First), which leverage quantifiable attributes such as Cost of Delay and Job Size to provide clear prioritization.

At the story level, though, these attributes become difficult to define and apply. Stories are small, incremental pieces of work, often so narrow in scope that evaluating their individual "business value" becomes impractical. This mirrors the challenges of "hedonic pricing models," where assigning value to small components of a product (like a gasket in a washing machine) is nearly impossible without context. A single story may not deliver direct, visible value on its own but instead contributes to the larger functionality of a parent feature or epic. Its importance lies in its sequence, dependencies, or role in enabling other stories rather than its standalone value.

Prioritization at the story level requires a nuanced approach that accounts for their role in enabling larger outcomes. Instead of relying solely on "business value" assessments, story ranking must consider factors such as:

1) Feature-Driven Prioritization: Align story prioritization to the WSJF-ranked features or epics they belong to, focusing first on stories that unblock or complete critical functionality.
2) Dependencies: It's not always possible to eliminate dependencies between stories. In such cases, rank stories based on their ability to unlock downstream value or de-risk related work.
3) Risk Reduction and Learning: Prioritize stories that reduce technical uncertainty or compliance risks, or which provide critical feedback.
4) Flow Efficiency: Focus on minimizing WIP and maximizing delivery flow by prioritizing smaller stories or those that clear bottlenecks.
5) Complexity vs. Urgency (Mini-WSJF): Adapt WSJF principles at the story level using proxies for Cost of Delay (e.g., urgency or risk impact) and Job Size (e.g., story points).
6) Customer-Centric Focus: Prioritize customer-visible stories unless technical stories block essential functionality.
7) Hedonic or Functional Contribution: Evaluate stories based on their contribution to the overall functionality of the parent feature or epic (similar to assigning functional value in hedonic pricing).

Whereas features and epics can often be ranked based on clear, high-level business priorities, prioritizing user stories demands a deeper understanding of context, dependencies, and workflows. Teams need dynamic and situational prioritization techniques to maintain alignment with their overarching goals.
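Factors 2 and 5 above combine naturally in code: order stories with a topological sort so no story runs before its prerequisites, and among the currently unblocked stories, pick the highest mini-WSJF (an urgency proxy divided by story points) first. A minimal sketch; the story names, urgency values, and point estimates are all invented:

```python
import heapq

stories = {
    # name: (urgency_proxy 1-10, story_points, prerequisite stories)
    "db-schema":    (5, 3, set()),
    "api-endpoint": (8, 5, {"db-schema"}),
    "ui-form":      (6, 2, {"api-endpoint"}),
    "audit-log":    (9, 3, set()),
}

def order_stories(stories: dict) -> list:
    """Kahn's topological sort; ready stories are popped by mini-WSJF."""
    indegree = {s: len(deps) for s, (_, _, deps) in stories.items()}
    dependents = {s: [] for s in stories}
    for s, (_, _, deps) in stories.items():
        for d in deps:
            dependents[d].append(s)
    # heapq is a min-heap, so negate mini-WSJF to pop the highest score first.
    ready = [(-u / p, s) for s, (u, p, deps) in stories.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, s = heapq.heappop(ready)
        order.append(s)
        for t in dependents[s]:          # unblock downstream stories
            indegree[t] -= 1
            if indegree[t] == 0:
                u, p, _ = stories[t]
                heapq.heappush(ready, (-u / p, t))
    return order

print(order_stories(stories))
```

Here "audit-log" (mini-WSJF 9/3 = 3.0) jumps ahead of "db-schema" (5/3 ≈ 1.7) because nothing blocks it, while "ui-form" waits for its whole dependency chain despite a respectable score, which is exactly the dependency-aware behavior the list above argues for.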