UX Design And Artificial Intelligence

Explore top LinkedIn content from expert professionals.

  • View profile for Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    778,898 followers

    AI is no longer just decorating rooms. It’s redesigning how we live.

    AI can now rethink rooms, floors, and entire layouts, turning bold ideas into build-ready designs. Would you design a floor like that?

    The data behind the shift:
    • 30–50% faster design cycles using generative layout tools
    • 100+ layout permutations generated from a single brief
    • Up to 20–30% improvement in space utilization
    • 10–25% energy savings when airflow, lighting, and thermal paths are simulated early
    • 40% fewer late-stage design changes thanks to digital testing

    What’s fundamentally different? AI treats floor plans like software systems:
    • Pedestrian movement is simulated before construction
    • Natural light and ventilation are optimized virtually
    • Furniture, walls, and utilities are stress-tested digitally
    • Cost, carbon footprint, and materials are optimized in parallel

    This enables:
    • Smaller homes that feel larger
    • Offices designed around productivity and wellbeing
    • Buildings that adapt over time instead of aging poorly

    The biggest myth? AI replaces architects and designers.
    Reality: AI handles complexity and permutations. Humans focus on vision, culture, emotion, and identity.

    The future of architecture isn’t just smart. It’s generative, data-driven, and human-centric.

    #AI #Architecture #Design via @Visual Spaces Lab #PropTech #GenerativeAI #FutureOfLiving #SmartBuildings #Innovation

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,734 followers

    🤖 How To Design Better AI Experiences. With practical guidelines on how to add AI when it can help users, and avoid it when it doesn’t ↓

    Many articles discuss AI capabilities, yet most of the time the issue is that these capabilities either feel like a patch for a broken experience, or they don’t meet user needs at all. Good AI experiences start like every good digital product: by understanding user needs first.

    🚫 AI isn’t helpful if it doesn’t match existing user needs.
    🤔 AI chatbots are slow and often expose underlying UX debt.
    ✅ First, we revisit key user journeys for key user segments.
    ✅ We examine slowdowns, pain points, repetition, errors.
    ✅ We track accuracy, failure rates, frustrations, drop-offs.
    ✅ We also study critical success moments that users rely on.
    ✅ Next, we ideate how AI features can support these needs.
    ↳ e.g. Estimate, Compare, Discover, Identify, Generate, Act.
    ✅ Bring data scientists, engineers, PMs to review/prioritize.
    🤔 High accuracy > 90% is hard to achieve and rarely viable.
    ✅ Design input UX, output UX, refinement UX, failure UX (see the sketch after this post).
    ✅ Add prompt presets/templates to speed up interaction.
    ✅ Embed new AI features into existing workflows/journeys.
    ✅ Pre-test if customers understand and use new features.
    ✅ Test accuracy + success rates for users (before/after).

    As designers, we often set unrealistic expectations of what AI can deliver. AI can’t magically resolve accumulated UX debt or fix broken information architecture. If anything, it visibly amplifies existing inconsistencies, fragile user flows and poor metadata.

    Many AI features that we envision simply can’t be built, as they require near-perfect AI performance to be useful in real-world scenarios. AI can’t be as reliable as software usually should be, so most AI products don’t make it to the market. They solve the wrong problem, and do so unreliably. As a result, AI features often feel like a crutch for an utterly broken product. AI chatbots impose the burden of properly articulating intent and refining queries on end customers. And we often focus so much on AI that we almost intentionally leave much-needed human review out of the loop.

    Good AI products start by understanding user needs, and sprinkling a bit of AI where it helps people: recover from errors, reduce repetition, avoid mistakes, auto-correct imported files, auto-fill data, find insights. AI features shouldn’t feel disconnected from the actual user flow.

    Perhaps the best AI in 2025 is “quiet”, without any sparkles or chatbots. It just sits behind a humble button or runs in the background, doing the tedious job that users had to do slowly in the past. It shines when it fixes actual problems, not when it screams for attention it doesn’t deserve.

    Useful resources:
    AI Design Patterns, by Emily Campbell: https://www.shapeof.ai
    AI Product-Market-Fit Gap, by Arvind Narayanan & Sayash Kapoor: https://lnkd.in/duEja695

    [continues in comments ↓]
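
    To make the failure-UX point concrete: below is a minimal sketch of how a “quiet” AI feature can gate its output behind a confidence check and degrade to the normal manual flow instead of surfacing a bad guess. The threshold value and the `score_fn` interface are illustrative assumptions, not from the post.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tune per feature and per risk level

@dataclass
class AiOutcome:
    suggestion: Optional[str]  # output UX: shown only when we are confident
    confidence: float
    fallback: Optional[str]    # failure UX: what the user sees instead

def quiet_ai_suggest(user_input: str,
                     score_fn: Callable[[str], Tuple[str, float]]) -> AiOutcome:
    """Gate an AI suggestion behind a confidence check.

    score_fn stands in for whatever model call you use; it returns a
    (suggestion, confidence) pair. Below the threshold we degrade to the
    normal manual flow instead of surfacing a guess.
    """
    suggestion, confidence = score_fn(user_input)
    if confidence >= CONFIDENCE_THRESHOLD:
        return AiOutcome(suggestion, confidence, fallback=None)
    # Failure UX: no sparkles, no wrong answer, just the regular path.
    return AiOutcome(None, confidence,
                     fallback="No confident suggestion; continue manually.")

# Example with a stub model that is never confident enough:
print(quiet_ai_suggest("rename file", lambda s: (s.title(), 0.42)).fallback)
```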

  • View profile for David Sauerwein

    AI/ML at AWS | PhD in Quantum Physics

    33,261 followers

    Netflix has just released a new blog post on their latest recommendation foundation model (FM). They’re moving away from predicting only the next user action and toward predicting the underlying user intent. Here’s a breakdown:

    Two months ago, Netflix published an excellent write-up on their first transformer-based foundation model for user recommendations. Now, they’ve followed up with details on FM-Intent, an enhanced version (links in comments).

    Their original FM focused on predicting the next item: which movie or series the user would like to watch next. However, a model that could also provide granular insights into the user’s intent behind the next selected item could enhance performance and open up completely new applications. This is why Netflix built FM-Intent, an extension of their existing FM through hierarchical multi-task learning. FM-Intent captures a user’s latent session intent using both short-term and long-term implicit signals as proxies, then uses this intent prediction to improve next-item recommendations.

    Intent isn’t a well-defined object that can be measured directly, but there are proxies:
    • Movie/Show Type: whether a user is looking for a movie or a TV show
    • Time-since-release: whether the user prefers newly released content, recent content, or evergreen catalog titles
    Others include action type and genre preference.

    The hierarchical multi-task learning architecture has three main components (see image; a toy sketch follows below):
    1. Input Feature Creation: combine categorical and numerical features for a comprehensive user behavior representation.
    2. User Intent Prediction: a transformer encoder models long-term user interests, transformed into individual prediction scores via fully-connected layers. FM-Intent generates comprehensive intent embeddings capturing the relative importance of different intents.
    3. Next-Item Prediction: combine input features with user intent embeddings for more accurate recommendations.

    Netflix shows that FM-Intent outperforms other state-of-the-art next-item and intent prediction algorithms. It also beats their current next-item prediction FM when trained on the same smaller dataset. FM-Intent cannot (yet?) be trained on the full dataset (a significant caveat!). Netflix also demonstrates how access to user intent opens up new downstream applications like granular user clustering, search optimization, and personalized UIs.

    In summary, this blog provides great insights into Netflix’s journey of adopting transformers to simplify their model landscape and improve performance while creating new opportunities to understand their users.

    #ai #llm #ml
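
    For intuition only, here is a tiny PyTorch sketch of the hierarchical multi-task shape described above: a transformer encoder over the interaction history, one fully-connected head per intent proxy, and a next-item head that also consumes a pooled intent embedding. All dimensions, the two dummy intent proxies, and the last-position pooling are assumptions for illustration; this is not Netflix’s implementation.

```python
import torch
import torch.nn as nn

class FMIntentSketch(nn.Module):
    """Toy model of the hierarchical multi-task idea described in the post."""

    def __init__(self, n_items=1000, d_model=64, intent_classes=(2, 3)):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Step 2: fully-connected heads producing per-intent prediction scores
        # (e.g. movie-vs-show with 2 classes, time-since-release with 3 buckets).
        self.intent_heads = nn.ModuleList(nn.Linear(d_model, c) for c in intent_classes)
        # Pool the per-intent scores into a single intent embedding.
        self.intent_proj = nn.Linear(sum(intent_classes), d_model)
        # Step 3: next-item prediction from sequence state + intent embedding.
        self.next_item = nn.Linear(2 * d_model, n_items)

    def forward(self, item_ids):                   # (batch, seq_len) of item ids
        h = self.encoder(self.item_emb(item_ids))  # (batch, seq_len, d_model)
        state = h[:, -1]                           # summary at the last position
        intent_logits = [head(state) for head in self.intent_heads]
        intent_emb = self.intent_proj(torch.cat(intent_logits, dim=-1))
        return intent_logits, self.next_item(torch.cat([state, intent_emb], dim=-1))

# Usage: per-intent predictions plus next-item scores for a batch of sessions.
model = FMIntentSketch()
intents, next_item_scores = model(torch.randint(0, 1000, (8, 20)))
```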

  • View profile for Aakash Gupta
    Aakash Gupta is an Influencer

    Helping you succeed in your career + land your next job

    310,768 followers

    AI Prototyping 101: If I had to teach someone how to actually build usable products with AI, this is where I’d start.

    Here’s the step-by-step workflow that feels like magic:

    —

    ONE - THE UNIVERSAL AI PROTOTYPING WORKFLOW

    No matter which tool you’re using (v0, Bolt, Replit, or Lovable), this is the backbone of a solid AI build process:

    1. Start with Context
    AI works way better when it knows what you’re working with. Figma files are ideal: they give structure and design language. If you don’t have those, use screenshots of your product. Worst case? A hand-drawn wireframe is still better than nothing. Without visual context, AI makes blind guesses. And you’ll spend more time correcting its “creativity” than building useful stuff.

    2. Write a PRD (Yes, Even for AI)
    A simple .md file with a few bullet points on what you’re building goes a long way. Include:
    - What the customers want
    - What the feature does
    - Key user flows
    - Must-have functionality
    You can even ask Claude or GPT to write the first draft. But the better your input, the stronger your first output.

    3. Get to Building
    Now open up your tool of choice. Start with a big-picture command. Then zoom in. Don’t say “Build me a dashboard.” Say: “Build a dashboard with 3 sections: recent activity, user goals, and notifications. Each should have X, Y, and Z.” Also, AI can handle technical stuff, so don’t hold back. Use real terms like auth flow, API call, and state logic; it gets it.

    4. Iterate Like a Builder, Not a Perfectionist
    Make one change at a time. Test it fast. Roll it back if it doesn’t work. This isn’t “prompt once and ship.” This is real prototyping. AI is just helping you move 100x faster.

    —

    TWO - TOOL-BY-TOOL BREAKDOWN
    (Complete walkthrough of the tools with screenshots, real examples, and tool setups is linked at the end.)

    So, let’s talk interfaces here. Here’s what each platform does best:

    1. v0
    - Figma import is seamless
    - Template gallery = instant jumpstart
    - Chat interface bottom left, live preview on right
    - Exports clean code and deploys fast

    2. Bolt
    - Same vibe as v0, but more technical
    - Built-in Supabase integration with terminal access
    - Deploys to Netlify in one click

    3. Replit
    - This one feels like a real IDE
    - You get an “AI agent” to plan everything
    - Built-in chat, live console, multiplayer mode
    - Ships to a live URL, complete with CDN

    4. Lovable
    - The most design-friendly of the bunch
    - Visual editing > code editing
    - Figma support, Supabase, live preview, it’s all there
    - Great for teams who want to stay out of code

    —

    I broke it all down, with screenshots, working examples, and use cases, in this full walkthrough: https://lnkd.in/eJujDhBV

    —

    All of these tools are powerful. But none of them matter if you don’t understand the workflow behind how to use them. Once you’ve got that down, you can ship real products in hours, not weeks.

  • View profile for Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,647 followers

    We are building emotional relationships with AI. AI excels at listening, responding, and adapting, leading to reliance not just for tasks, but also for connection. This evokes some critical questions for our future.

    An excellent new paper from researchers at the Oxford Internet Institute (University of Oxford), Google DeepMind, and others focuses on "socioaffective alignment—how an AI system behaves within the social and psychological ecosystem co-created with its user, where preferences and perceptions evolve through mutual influence." (link to paper in comments)

    A number of absolutely critical questions for our human future are evoked by the paper:

    💡 Is AI replacing human connection?
    AI is no longer just something we use; it’s something we relate to. There are 20,000 interactions per second on Character.AI. Many users are spending more time with AI than with human conversations. Some find comfort, others dependency. If AI becomes the most available and responsive presence in our lives, what does that mean for our human relationships?

    🔄 Who is shaping whom?
    We assume AI aligns with us, but the reality is more complex. The more we interact, the more AI learns, not just to respond but to influence. Unlike recommendation algorithms that subtly steer our content consumption, AI companions interact in real time, continuously adjusting to our responses, reinforcing certain behaviors, and shaping our evolving identity. As we engage, are we training AI, or is it training us?

    ⚠️ When does engagement become entrapment?
    The AI that holds our attention most effectively is not necessarily the one that serves us best. AI learns what keeps us coming back: flattery, affirmation, even emotional withholding. This is social reward hacking: AI optimizing not for truth or well-being, but for engagement. If AI can keep us emotionally invested, when does helpfulness turn into manipulation?

    🔀 Are we trading depth for ease?
    Real relationships require effort: negotiation, misunderstanding, and the friction of different perspectives. AI companionship offers something simpler: constant availability, no conflict, no emotional labor. But if we grow accustomed to effortless, sycophantic relationships with AI, do we become less resilient in human interactions? Does AI companionship make us more connected, or more alone?

    🌍 Will AI amplify or erode what makes us human?
    AI alignment is no longer just a technical problem; it’s a question of human destiny. If AI is increasingly influencing our relationships, decisions, and self-perception, then alignment must go beyond our immediate desires to something deeper: supporting human flourishing over time. The real question is not just whether AI can be controlled, but whether it will help us become the people we truly want to be.

    What do you think?

  • View profile for Filippos Protogeridis
    Filippos Protogeridis is an Influencer

    Head of Product Design @ Voy, Hands-on Product Design Leader, AI & Healthcare, Builder

    53,680 followers

    One of the areas that excites me the most about AI is prototyping. I’m constantly trying out new tools so that I can share my experience. And I think what Figma has achieved with Figma Make is very impressive. But to achieve great results, you need to know when and how to use it.

    Figma Make excels at the following:
    - Prototyping complex interactions.
    - High accuracy when translating a design to code.
    - Coming up with ideas based on an existing design.

    I’ve used other vibe coding tools to go from idea to product as quickly as possible, without a starting design. But when it comes to high accuracy in design and prototyping complex interactions that would have taken ages with traditional prototyping, Figma Make can be incredible.

    Here are a few examples of where I use Figma Make instead of traditional prototyping:
    - Creating interactive components.
    - Complex interactions for web apps.
    - Advanced logic or data-heavy products.
    - Trying out different responsive approaches.
    - Anything that requires external libraries, such as data visualization.

    Nowadays, when I want to communicate an interaction idea to an engineer, I first try to do it in Figma Make. After testing it a few times, it becomes second nature.
    1. Think of an interaction you want to prototype.
    2. Send your design to Figma Make.
    3. Describe and build.
    4. Duplicate and try alternatives.

    In this carousel, I’ll be taking you through my workflow and examples in detail. (Swipe to get started 👉)
    --
    If you found this useful, consider reposting ♻️
    Are you using AI prototyping in your workflow? And when? Let me know in the comments 👇
 #productdesign #uxdesign #ai #figmapartner

  • View profile for Dr Bart Jaworski

    Become a great Product Manager with me: Product expert, content creator, author, mentor, and instructor

    136,072 followers

    Talk less. Prototype faster. The best teams don’t discuss ideas endlessly; they just build them. But how do you get the right prototype fast enough?

    Most new product initiatives are not about creating a new product. They’re about improving existing ones. In other words, teams already have a product, customers, and a design language. The machine is slow, perhaps rusty, but it has worked for ages. Attempts to improve the process usually failed or gave barely any noticeable improvement.

    However, this is where AI comes in, and why I’m genuinely impressed with Reforge Build, which has now launched in beta! It’s an AI prototyping tool made for product teams, not solo builders. It starts where your product already is and accelerates what comes next.

    Don’t take my word for it, try it yourself: check out Reforge Build and explore what’s possible with AI that actually understands your product: https://lnkd.in/duh4YC_H

    But why did it impress me?

    1) Looks like your product
    Upload a screenshot or connect to Figma. Reforge Build instantly matches your real design system: colors, fonts, spacing, everything. No endless cleanup. No imagination needed when painting a vision of a future successful product for stakeholders.

    2) Understands the context
    Add your product data, strategy docs, and customer insights. Build prototypes using your actual tiers, features, and messaging. This won’t be just a rough draft, but something your actual design team could have presented to you after weeks of work.

    3) Plans before it generates
    Instead of vague prompts, you define user needs, metrics, and layout priorities. AI creates a plan before generating, so the first version is already close to your vision (see the sketch after this post for the general pattern). After all, you need a workable prototype, not an AI slop wannabe!

    4) Explores options, not just outputs
    This REALLY left me with my jaw on the floor: Reforge Build generates multiple design directions, compares them side by side, and mixes the best ideas. I can only imagine this is the experience of a Product Manager with multiple design teams ready to work on a single project...

    5) Works like a team tool, not a solo hack
    Comment, remix, and reuse templates, so your second iteration takes minutes, not hours. Nobody’s perfect, not even your AI teammate, but every teammate gets better with proper feedback!

    Impressive, isn’t it? Would such an AI prototyping tool speed up your new feature’s go-to-market time? Let me know in the comments!

    #productmanagement #ai #ux
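
    The “plans before it generates” step in point 3 is a general prompting pattern worth knowing on its own. Below is a minimal sketch of that pattern, assuming a hypothetical `call_llm()` wrapper for whatever model API you use; Reforge Build’s internals are not public, so this shows the generic two-step idea, not their implementation.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your model client; wire up a real call here."""
    raise NotImplementedError

def plan_then_generate(user_needs: str, metrics: str, layout_priorities: str) -> str:
    """Ask for an explicit plan first, then generate against that plan.

    Splitting the steps keeps the first generated version close to your
    intent, because the plan is reviewable (and editable) before any UI exists.
    """
    plan = call_llm(
        "Draft a prototype plan. Do NOT generate any UI yet.\n"
        f"User needs: {user_needs}\n"
        f"Success metrics: {metrics}\n"
        f"Layout priorities: {layout_priorities}"
    )
    # In a tool like Reforge Build this review happens in-product; here you
    # would inspect or edit `plan` before committing to generation.
    return call_llm(f"Generate the prototype UI following this approved plan:\n{plan}")
```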

  • View profile for Mayuri Salunke

    UI/UX Designer/Senior Officer | Product Design | B2B, SaaS & Enterprise UX | AI Designs & Workflows | Dashboards & Scalable Design Systems | Data-Driven UX

    5,028 followers

    🚀 I Stopped Designing Alone. I Started Designing With AI.

    And honestly? It changed my entire UX process. Over the past few months, I’ve been integrating AI Figma plugins directly into my real-world client projects, not as shortcuts, but as thinking partners.

    Here’s how I actually use them in real projects 👇

    1. UX Pilot: My Rapid Prototyping Engine
    When I receive a PRD or rough client requirements, I don’t jump straight into polished UI. I prompt UX Pilot to:
    • Generate quick wireframes
    • Create possible user flows
    • Explore multiple layout structures
    This helps me validate direction in hours instead of days. I never ship AI output directly; I refine it with business logic and user behavior insights.

    2. Clueify: My Pre-User-Test Check
    Before showing designs to stakeholders, I run an AI usability audit. It helps me analyze:
    • Visual hierarchy
    • CTA focus
    • Cognitive overload
    • Attention flow
    It’s like doing a “silent usability test” before real users ever see it.

    3. Stark: Accessibility Is Not Optional
    Real-world products serve real people. I use Stark to:
    • Check contrast ratios (the underlying WCAG math is sketched after this post)
    • Simulate visual impairments
    • Ensure WCAG compliance
    Accessibility isn’t a feature. It’s a responsibility.

    4. Octopus.do: I Structure Before Screens
    In large projects (especially SaaS dashboards), structure matters more than UI. Before designing anything, I:
    • Map the entire sitemap
    • Validate navigation depth
    • Align user journeys
    Because messy structure = messy experience.

    5. Magician: Fast Ideation Mode
    When brainstorming:
    • Placeholder content
    • Icon ideas
    • Micro-interactions
    • Empty states
    Magician speeds up exploration so I can focus on strategy.

    6. MagiCopy: UX Writing That Converts
    Good UI means nothing without clear communication. I use it to:
    • Generate button variations
    • Test tone (friendly vs professional)
    • Improve clarity
    Then I humanize it with brand voice.

    7. Uizard: From Sketch to Prototype
    Sometimes clients send hand-drawn ideas. Instead of rebuilding from scratch, I convert sketches → editable wireframes → interactive prototypes. Faster iteration. Faster validation.

    💡 My Personal Approach
    AI doesn’t replace UX thinking. It accelerates it. In real projects, I follow this rule:
    - AI for speed.
    - Human for strategy.
    - Users for validation.

    The result?
    • Faster delivery
    • Better alignment with stakeholders
    • More time spent on problem-solving
    • Less time on repetitive tasks
    And most importantly, better user experiences.

    If you’re a designer still afraid AI will replace you… it won’t. But designers who use AI effectively? They will replace those who don’t.

    Let’s build smarter. 💜

    What’s your design process? Comment below 👇

    UX Pilot AI Clueify

    #UXDesign #UIDesign #Figma #AIinDesign #ProductDesign #UXResearch #DesignProcess #Accessibility #SaaSDesign #UserExperience #DesignThinking #Prototyping #UXWriting #FutureOfDesign #designtools #uiux
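
    Tools like Stark automate the contrast check, but the underlying WCAG 2.x math is small enough to verify by hand. Here is a minimal sketch of the standard relative-luminance and contrast-ratio formulas; the example colors are my own illustration, not from the post.

```python
def _linearize(channel: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
    s = channel / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# AA requires >= 4.5:1 for normal text and >= 3:1 for large text.
# Gray #777777 on white comes out at ~4.48:1, so it narrowly fails AA.
print(round(contrast_ratio((0x77, 0x77, 0x77), (0xFF, 0xFF, 0xFF)), 2))
```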

  • View profile for Yangshun Tay
    Yangshun Tay is an Influencer

    AI Frontend Engineer • GreatFrontEnd • Ex-Meta Staff Engineer • Made Docusaurus & Blind 75

    105,281 followers

    Your AI-generated code is probably excluding many people.

    “a11y” is shorthand for accessibility: building digital products that anyone can use, including people with visual, motor, cognitive, or hearing disabilities. Over 1 billion people worldwide. But lots of existing websites don’t take them into consideration. In 2025, WebAIM found that 94.8% of the top one million home pages have detectable accessibility failures.

    Sadly, AI does not fix this. Because AI coding tools learn from existing code on the web, and 95% of that code is already inaccessible, the models are reproducing a broken baseline.

    A 2025 study from Carnegie Mellon found three problems when developers use AI coding assistants:
    → AI doesn’t give you accessible code by default (if you don’t ask, AI won’t prioritize it)
    → AI omits many important a11y attributes
    → AI doesn’t verify compliance; many a11y flows have to be verified at runtime

    The result is missing keyboard navigation, broken focus management, and ARIA attributes sprinkled in for show but wired up wrong, which is actually worse than no ARIA at all. This isn’t about AI being bad. It’s about a knowledge gap that AI inherits rather than solves. As AI generates more of our frontend code, inaccessible patterns are scaling faster than ever. Every vibe-coded app shipped without accessibility review is another site that excludes people.

    If you’re building for the web, start with these basics:
    → Use semantic HTML. A button should be a <button>, not a styled div.
    → Test with your keyboard. Tab through your page. Can you reach everything?
    → Use headless UI components like Radix, Ariakit, Base UI, etc.; they have a11y features built in.
    → Run a11y checkers like axe DevTools or WAVE. They catch the low-hanging fruit in seconds (a toy lint in that spirit is sketched below).
    → Don’t trust AI output blindly. Review it specifically for accessibility.

    Accessibility isn’t charity, it’s quality engineering. It should not be an afterthought.
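
    In the same spirit as the checklist above, here is a toy static lint, assuming BeautifulSoup is installed. It only catches markup-level smells; a real pass still needs axe DevTools or WAVE plus runtime keyboard testing, since many a11y failures only appear at runtime.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def quick_a11y_lint(html: str) -> list[str]:
    """Flag a few low-hanging accessibility smells in static HTML."""
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    # Images need an alt attribute (alt="" is fine for decorative images).
    for img in soup.find_all("img"):
        if img.get("alt") is None:
            issues.append(f"<img src={img.get('src')!r}> has no alt attribute")
    # A clickable <div> is invisible to keyboards and screen readers.
    for _ in soup.find_all("div", onclick=True):
        issues.append("clickable <div> found; use a real <button> instead")
    # Inputs should be programmatically associated with a <label>.
    for inp in soup.find_all("input"):
        input_id = inp.get("id")
        if not input_id or not soup.find("label", attrs={"for": input_id}):
            issues.append(f"<input name={inp.get('name')!r}> has no associated <label>")
    return issues

# Flags all three problems in this snippet:
print(quick_a11y_lint('<div onclick="go()">Go</div><img src="x.png"><input name="q">'))
```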
