Designing For Multimodal Interactions


  • View profile for Scott King

    Director @ Full Fathom | Sensory Branding, Design & Experiences Specialist

    20,329 followers

    Smelling with your EYES and EARS.

    Digital retail is visual and auditory by nature. But what about the senses we can’t physically activate through a screen?

    In Full Fathom’s research collaboration with the University of Leeds, we explored how multisensory cues work in online environments, particularly for scent-led categories. The standout insight: 76% of participants were able to identify a fragrance using audiovisual cues alone.

    This reinforces something powerful: the brain doesn’t need direct physical stimulation to construct sensory experience. When cues are designed intentionally, people can perceive scent without actually smelling it.

    Key takeaways from the study:
    • Non-olfactory cues can trigger clear scent perceptions
    • Layered multisensory inputs improve identification accuracy
    • Sensory congruency strengthens understanding of scent character
    • Well-matched stimuli can positively influence purchase intent online

    For sectors like beauty, wellness, and personal care, where scent plays a central role, this opens up important strategic opportunities. Digital environments don’t have to be sensory limitations. They can be carefully orchestrated sensory translations.

    In increasingly saturated markets, multisensory thinking isn’t a creative extra - it’s a competitive differentiator.

    How are you translating your physical brand and product experiences into digital ones?

    #multisensory #branding #digitalexperience #retailinnovation #designstrategy #beauty #personalcare #wellness

  • View profile for Srinivas Mahesh

    AI-Martech & GTM Expert | 🚀 120K+ Followers | 📈 700 Million Annual Impressions | 💼 Ad Value: $23.75M+ | LinkedIn Top Voice: Marketing Strategy | 🚀 Top 1% of LinkedIn’s SSI Rank | 📊 Digital CMO | 🎯 StartupCMO

    124,616 followers

    🛋️✨ What if the future of furniture is not just beautiful… but intelligent enough to improve health, comfort, and sustainability?

    The science behind Smart Furniture Design is becoming impossible to ignore. Recent research shows smart furniture is increasingly being built around 3 core pillars: sensors for data collection, IoT for processing, and actuation for real-time response. One 2025 systematic review also found that health monitoring is the dominant application area, while most concepts are still early-stage, which means the biggest breakthroughs are still ahead. (MDPI)

    Here is the real shift 👇 The next generation of furniture will not compete only on style. It will compete on:
    ✨ ergonomics
    ✨ adaptive comfort
    ✨ space optimization
    ✨ wellness monitoring
    ✨ sustainable materials
    ✨ human-centered personalization

    Research in ergonomics and smart seating already shows growing focus on how furniture can influence posture, sitting behavior, environmental comfort, and overall well-being. At the same time, new design studies are combining AI, VR, deep learning, and user-review mining to create furniture that responds better to real human needs. (ScienceDirect)

    📌 The problem: traditional furniture is static in a dynamic world.
    📌 The solution: smart, responsive, data-informed furniture systems.
    📌 The benefit: healthier living, better productivity, safer environments, and more sustainable spaces. (ScienceDirect)

    🌍 My view: the future of furniture design belongs to creators who combine design thinking + AI + sustainability + human behavior science. Within the next few years, smart furniture may move from a premium niche to a serious design standard in homes, offices, healthcare, and public spaces. That is where innovation becomes truly life-changing. (MDPI)

    💬 What’s your perspective — will smart furniture become a luxury trend, or the new global standard for design?

    Credits: 🌟 This write-up was researched and written by me (P.S. Mahesh). All rights for visuals belong to their respective owners. 📚

  • View profile for Samuel Hess

    Boost Revenue Per User by 10% in < 6 Months | Over $248M added with A/B-Tests for HelloFresh, SNOCKS, and 250+ other DTC brands

    77,584 followers

    A voice note on product pages boosted ARPU by 1.72%... for men's apparel.

    We added a simple audio clip describing key features right under the ratings on men's PDPs. Think boxers and socks: a quick 30-second rundown of fit, fabric, and why it stands out.

    Hypothesis? Auditory cues pair with visuals to amp up engagement (hello, Dual Coding Theory), cutting decision time and making the pitch more vivid.

    Results across 66k users:
    ARPU: +1.72% (from €9.46 to €9.63)
    Conversion rate: +2.74%
    AOV: -0.99% (slight dip)
    All statistically significant.

    Mobile users loved it most (+2.88% ARPU), while desktop was neutral. Extra revenue during the test? Over €8k. Scaled monthly? Mid-five figures.

    Why it clicked: Shoppers skim text but tune into a confident voice. It feels personal and builds trust fast... especially for guys who want facts over fluff. Even non-listeners (93% didn't play) sensed the transparency, sparking that "this brand gets me" vibe.

    Pro tip: For men's lines, voice notes beat walls of copy. They humanize the sell without overwhelming.

    The lesson? In ecom, ears can out-earn eyes. Test audio on your PDPs... before competitors do.

    Follow for more CRO wins from Drip.
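The arithmetic behind results like these is easy to sanity-check. A minimal sketch: the ARPU lift from the euro figures above, plus a two-proportion z-test for a conversion-rate difference. The test choice and the user counts passed to it are assumptions for illustration only; the post does not say which test or exact split was used.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic for a conversion-rate lift.
    One common choice for A/B conversion tests; illustrative only."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# ARPU lift from the euro figures reported in the post
arpu_control, arpu_variant = 9.46, 9.63
lift = (arpu_variant - arpu_control) / arpu_control
print(f"ARPU lift: {lift:.2%}")  # the rounded euro values give ~1.8%;
                                 # the reported +1.72% presumably reflects
                                 # unrounded per-user averages

# Conversion check with invented counts, just to show the mechanics
z = two_proportion_z(conv_a=2640, n_a=33000, conv_b=2712, n_b=33000)
print(f"z = {z:.2f}")
```

In practice ARPU itself is usually tested with a t-test on per-user revenue rather than a proportion test, since revenue is continuous.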

  • View profile for Maham Zafar

    Biotechnologist for Scientific Writing | Digital Marketer | Founder & CEO – Ilm o Hunar | Graphic Designer at RSG Pakistan

    10,988 followers

    Blinking is usually thought of as a simple reflex that protects and moistens the eyes, but new neuroscience research shows it also affects how the brain handles sound.

    Each blink briefly suppresses visual input, creating a short interruption in sensory flow. During this moment, the brain does not simply pause. Instead, it rapidly adjusts how attention is distributed across senses, influencing how auditory information is processed immediately afterward.

    Studies examining brain activity and behavior found that sound perception changes in the moments following a blink. Neural networks involved in attention and sensory integration showed altered timing, suggesting the brain temporarily recalibrates how it prioritizes incoming sounds as vision resumes. Participants in these experiments displayed subtle shifts in reaction speed and auditory sensitivity right after blinking. These effects were small but consistent, demonstrating that even brief interruptions in vision can ripple across other sensory systems.

    These findings suggest blinking plays a broader role in perception than previously assumed. Rather than being a passive maintenance function, blinking appears to help coordinate sensory processing in dynamic environments. By briefly shifting the balance between visual and auditory attention, the brain may optimize how it samples information from the world. This work adds to growing evidence that perception is highly interconnected, with even simple actions influencing how the brain integrates sights and sounds.

    Research Paper 📄 DOI: 10.1177/23312165251371118

  • View profile for Nick Tudor

    CEO/CTO & Co-Founder, Whitespectre | Advisor | Investor

    13,827 followers

    Blending AI and IoT isn’t just about connecting devices - it’s about ensuring intelligence, reliability, and trust at every layer.

    Here are the key factors to consider when designing AI-powered IoT systems:

    ➞ 1. Data Readiness: AI is only as good as the data it learns from. Focus on accuracy, timestamps, and data coverage across environments. Historical and real-time data, sensor metadata, and contextual information are key.

    ➞ 2. Model Selection: Pick models that fit the use case - not the trend. Balance between rule-based, machine learning, and deep learning approaches. Choose models that are maintainable and aligned with performance goals.

    ➞ 3. Edge vs Cloud AI: Decide where intelligence lives - on the edge or in the cloud. Latency, compute power, and connectivity constraints define the best strategy. Consider edge-only, cloud-only, or hybrid setups based on system needs.

    ➞ 4. Model Evaluation: Models must perform consistently in real-world conditions, not just labs. Prioritize precision, consistency across environments, and monitoring for drift. Ensure models maintain accuracy and reliability over time.

    ➞ 5. Interpretability: Transparency builds trust in AI decisions. Enable explainable insights and root cause analysis. Ensure operators and users understand model outputs clearly.

    ➞ 6. Automation Strategy: Define the right balance between autonomy and human oversight. Decide whether systems trigger alerts or take automated actions. Integrate automation tightly with control systems and business workflows.

    Building Smart IoT is about combining AI intelligence with IoT reliability - where every decision counts.

    🔁 Repost if you're building for the real world, not just connected demos.
    ➕ Follow Nick Tudor for more insights on AI + IoT that actually ship.
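The "monitoring for drift" point in factor 4 can be made concrete with a minimal sketch: compare a rolling window of recent sensor values (or model errors) against a reference distribution and flag large mean shifts. The class name, window size, and threshold below are illustrative assumptions, not values from the post; production systems typically use richer statistics (PSI, KS tests) on the same principle.

```python
from collections import deque

class DriftMonitor:
    """Minimal mean-shift drift check against a reference window.
    Illustrative sketch: window size and threshold are not tuned values."""
    def __init__(self, reference, window=100, threshold=2.0):
        self.ref_mean = sum(reference) / len(reference)
        var = sum((x - self.ref_mean) ** 2 for x in reference) / len(reference)
        self.ref_std = var ** 0.5 or 1e-9  # guard against zero variance
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        """Feed one new reading; return True once drift is detected."""
        self.window.append(value)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data yet
        recent_mean = sum(self.window) / len(self.window)
        # Flag drift when the recent mean moves more than `threshold`
        # reference standard deviations away from the reference mean.
        return abs(recent_mean - self.ref_mean) / self.ref_std > self.threshold

# Reference data around 20.0-20.4; then the sensor starts reading 25.0
monitor = DriftMonitor(reference=[20.0 + 0.1 * (i % 5) for i in range(500)], window=50)
drifted = any(monitor.update(25.0) for _ in range(60))
print("drift detected:", drifted)  # True once the window fills with shifted values
```

On constrained edge devices the same check runs cheaply in firmware; heavier distribution tests can live cloud-side.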

  • View profile for Will Bremridge

    Communication Consultant to Tech & Finance Leaders | I support executives and their teams to present, pitch, and lead conversations with confidence and credibility.

    14,162 followers

    If you want people to pay attention to you, here is what makes a great speaker...

    It all comes down to how well you make people see, hear, and feel your message. The framework I teach leaders is called V.A.K. - Visual. Auditory. Kinaesthetic.

    👉 Visual: Paint the picture for your audience.
    Numbers and abstract statements wash over people. But when you show them where things were and where they are now, this becomes more visual. Use contrast. Use timelines. Give people a mental image they can hold onto. If your audience can't visualise what you're describing, you've probably lost them.

    👉 Auditory: Your voice is an instrument. Play it.
    The words you say matter far less than how you deliver them. A well-placed pause creates tension. A shift in pace signals importance. When you speak at the same speed and volume the entire time, your audience stops hearing you. They might be looking at you, but their brain checked out two minutes ago.

    👉 Kinaesthetic: Get them to feel something.
    This is the one that separates forgettable speakers from persuasive ones. Stop telling people something is important. Instead, show them what happens if it goes wrong. What's at risk? What do they stand to lose? Data alone has never changed anyone's mind, but emotion has.

    See it. Hear it. Feel it.

    I broke down the full framework in under 90 seconds in the video below 👇

    💬 Which one do you struggle with most? Visual, Auditory, or Kinaesthetic? Tell me in the comments.
    🔁 Share this with someone preparing for a big presentation this week.
    🔔 Follow Will Bremridge for more on how to become one hell of a good communicator.

  • View profile for Ofir Zan

    AI Inference SME at VAST ▪️ Everything beyond storage 💿

    10,046 followers

    “Find every scene in a 3-hour film where a [lead character] shows [frustration] while mentioning [a specific phrase], then match similar visual style and soundtrack for an epic trailer creation.”

    This used to take weeks. You needed:
    📄 text transcription
    🎬 video tagging
    🎵 audio analysis
    💱 multiple models, and a lot of glue code to align everything

    Each modality lived in its own vector space, and you needed to stitch them together. But a quiet shift is happening. Instead of forcing every piece of data (text, images, video clips, audio) into separate silos and then trying to glue them together, leading models now turn all of them into comparable vectors in a shared space. That’s the power of #multimodal #embedding 🔥

    Models like Google Gemini Embedding 2 and TwelveLabs Marengo 3.0 are starting to represent text, images, video, and audio in a shared space. And when paired with a strong vision language model (VLM), you get:
    * More expressive queries across time and modalities
    * More coherent understanding of unstructured content
    * Fewer hallucinations in retrieval-augmented setups

    🚘 Another example in autonomous systems: you can now retrieve precise driving footage segments that combine specific visual conditions (rain + pedestrian gesture), audio cues (horn sounds), and telemetry data for better edge-case analysis and simulation.

    The current leaders in this space are:
    Gemini Embedding 2
    Qwen3-VL embeddings
    TwelveLabs Marengo 3.0
    Jina Embeddings v4
    Voyage Multimodal

    VAST Data is integrating with all of them as part of the AI OS! I’m expecting this to quietly become the new foundation for any system that needs to reason over real-world, multi-sensory data.

    #multimodal #embedding #realworld #contextunderstanding
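The core mechanic of a shared embedding space is simple: once text, video, and audio items are all vectors in the same space, one cosine-similarity ranking retrieves across every modality at once. A minimal sketch, where the 4-dimensional vectors and file names are invented stand-ins; real embeddings would come from a multimodal model like those named above and have hundreds or thousands of dimensions.

```python
import math

# Toy stand-ins for shared-space embeddings across three modalities.
# In practice each vector would come from one multimodal embedding model.
catalog = {
    "clip_rain_pedestrian.mp4":  [0.9, 0.0, 0.3, 0.0],  # video
    "clip_horn_audio.wav":       [0.1, 0.9, 0.0, 0.7],  # audio
    "scene_frustrated_line.txt": [0.8, 0.2, 0.9, 0.1],  # transcript text
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def search(query_vec, catalog, top_k=2):
    """Rank items from ANY modality against one query vector.
    This only works because all vectors live in one shared space."""
    scored = sorted(((name, cosine(query_vec, v)) for name, v in catalog.items()),
                    key=lambda s: -s[1])
    return scored[:top_k]

query = [0.85, 0.15, 0.85, 0.05]  # imagine: the embedded text query above
for name, score in search(query, catalog):
    print(f"{name}: {score:.3f}")
```

The pre-shared-space world the post describes is the alternative: one index per modality, plus alignment code to merge three incompatible ranked lists.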

  • View profile for Amin Shad

    Founder | CEO | Visionary AIoT Technologist | Connecting the Dots to Solve Big Problems

    8,439 followers

    Why Is Hardware-Software Co-Design Non-Negotiable?

    Dangerous assumption: design hardware and software independently, then stitch them together later. From my experience building scalable, field-tested industrial IoT solutions, I can confidently say this approach is flawed, costly, and the cause of many failures in industrial deployments.

    Whether you're monitoring pressure in oil & gas pipelines or automating maintenance in smart city infrastructure, the reliability, scalability, and total cost of ownership of an IoT system depend deeply on how well the hardware and software are integrated—side by side—from day one.

    Technical Reasons

    1. Power efficiency and performance
    Battery-operated devices, especially in LPWAN and NB-IoT environments, require tightly optimized firmware that aligns with hardware capabilities (sleep modes, sensor wake cycles, transmission windows, and many other factors). Designing software without a deep understanding of the hardware's physical and firmware limitations results in shorter lifespans, inconsistent data, or both.

    2. Connectivity optimization
    Protocols like LoRaWAN, NB-IoT, or Cat-M1 are not just plug-and-play. Reliable transmission depends on antenna design, shielding, payload formatting, and retry mechanisms that must be embedded in both hardware specs and software logic—together.

    3. Real-time fault detection and recovery
    Industrial environments are noisy—electrically, physically, and digitally. Integrating diagnostics, fallback strategies, and sensor validation into both firmware and the cloud platform ensures that small glitches don't turn into expensive field failures.

    4. OTA updates and lifecycle management
    Without co-design, firmware updates become a logistical nightmare. A unified design ensures that remote updates are reliable, secure, and hardware-aware—so they don't brick your devices in the field.

    Non-Technical (But Just as Critical) Reasons

    1. Lower long-term cost
    Reworking firmware or cloud APIs post-production is exponentially more expensive than doing it right upfront. Co-design reduces iteration cycles, deployment delays, and support overhead.

    2. Faster time to market
    When teams work in silos, integration becomes a bottleneck. Side-by-side development removes surprises and streamlines validation—cutting months off your release timeline.

    3. Better user experience
    From installation to data visualization, a co-designed solution feels cohesive. Installers don't struggle with mismatched instructions. Platform users don't question sensor data accuracy. Everyone wins.

    4. Future-proofing the solution
    When hardware and software evolve in sync, scaling to new features or integrating with third-party platforms becomes a natural progression—not a painful migration.

    So, ask yourself: are your hardware and software designed in the same room, by teams who speak the same language? If not, you're probably not building a solution. You're building a future problem.

    Let's build smarter.

    #lpwan #IoT #lorawan #nbiot #ellenex
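The retry mechanisms mentioned under "Connectivity optimization" are a good example of logic that must know the hardware: retry timing interacts with radio duty-cycle limits and battery budget. A minimal sketch of exponential backoff with jitter, in Python for readability (real firmware would be C on the device). The function name and `transmit` callback are hypothetical; the sketch records delays rather than sleeping, so the logic stays testable.

```python
import random

def send_with_backoff(transmit, max_retries=4, base_delay_s=2.0, rng=random.random):
    """Retry a transmission with exponential backoff and jitter.
    `transmit` is a hypothetical callback returning True on an ACK.
    Returns (success, delays_used) so callers can account for the
    airtime and battery budget each failure cost them."""
    delays = []
    for attempt in range(max_retries + 1):
        if transmit():
            return True, delays
        # Exponential backoff with jitter spreads retries so co-located
        # devices don't all collide again in the same retry slot.
        # Real firmware would sleep(delay) here; we only record it.
        delay = base_delay_s * (2 ** attempt) * (0.5 + rng())
        delays.append(delay)
    return False, delays

# Simulated noisy link that succeeds on the third attempt
attempts = iter([False, False, True])
ok, delays = send_with_backoff(lambda: next(attempts))
print(ok, len(delays))  # True 2
```

Co-design shows up in the parameters: `base_delay_s` and `max_retries` cannot be chosen sensibly without the hardware team's numbers for airtime per packet and regional duty-cycle limits.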

  • View profile for Dr. Leon Pietschmann

    Driving XR Innovation | PhD in XR & HCI | prev. BCG, Cambridge, Harvard

    4,232 followers

    💐 As XR becomes more intelligent, design must become more empathetic.

    AI can make immersive systems adaptive, but without understanding human perception, cognition, and comfort, “intelligence” can quickly turn into overload.

    A couple of years ago, my entire PhD focused on this exact gap: how to guide attention, reduce cognitive load, and improve performance in immersive environments. One thing became very clear: in XR, small design choices create big outcomes.

    A few examples that keep showing up in practice:
    🧠 Where you place information can make or break decision-making
    🎯 How you guide attention can boost performance or disrupt the user
    🤢 If you disrupt the user too many times, comfort (and trust) disappears

    As we move toward AI-driven XR systems, we shouldn’t optimise only for efficiency. We need to design for trust, wellbeing, and human agency, especially when systems start adapting in real time.

    I am convinced: the future of XR isn’t just about smarter systems. It’s about systems that understand and serve us better.

  • View profile for Steven Dodd

    Transforming Facilities with Strategic HVAC Optimization and BAS Integration! Kelso | Your Building’s Reliability Partner

    31,523 followers

    BAS Retrofit Design in Occupied Buildings — Designing for Reality, Not the Original Drawings

    Most BAS retrofits don’t fail because of technology — they fail because we design to the original intent, not the current reality. After 5–20 years, buildings rarely operate as designed:
    • Spaces repurposed
    • Loads shifted
    • Tenants changed
    • Equipment added or bypassed
    • Staff reduced
    • Cyber risks increased

    If we simply “replace controls like-for-like,” we automate yesterday’s problems. A high-performance retrofit starts with operations first, controls second. Here’s the framework we use:

    1) Operational Discovery (not just drawings)
    Interview operators. Review work orders. Trend alarms. Identify chronic overrides and nuisance trips. These are your weak signals.

    2) Validate Current Loads & Sequences
    Re-baseline occupancy, schedules, and diversity. Many legacy sequences are oversized or fighting each other.

    3) Standardize & Simplify
    Reduce custom logic. Use repeatable templates and naming. Fewer exceptions = fewer failures at 2am.

    4) Design for Decision-Making Under Pressure
    Clear graphics, actionable alarms, defined ownership. Operators must instantly know: what happened, why, and who acts.

    5) Plan Phased Cutovers
    Occupied buildings require surgical change windows, temporary controls, and rollback plans. Reliability > speed.

    6) Cyber + Remote Ops by Design
    Secure architecture, remote visibility, and analytics should be native — not bolted on later.

    7) Commission for Drift, Not Day-One
    Trend for weeks. Prove stability under real weather and real occupancy.

    Bottom line: Retrofits should deliver fewer alarms, clearer authority, and faster decisions — not just new hardware. Design for how the building is actually used today. That’s where resilience, efficiency, and trust converge.

    #BuildingAutomation #SmartBuildings #BAS #HVAC #FacilityManagement #Retrofit #DigitalBuildings #OperationalExcellence
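Step 7 ("trend for weeks, prove stability") can be reduced to a simple rule over trend data: flag any control point that spends too many samples outside tolerance of its setpoint. A minimal sketch, where the point names, tolerance, and violation threshold are invented for illustration; real commissioning would pull weeks of trend logs from the BAS historian.

```python
def flag_unstable_points(trends, setpoints, tolerance=1.0, max_violation_pct=0.05):
    """Flag control points whose trend samples deviate from setpoint
    more often than max_violation_pct of the time. A sketch of the
    'commission for drift' idea; thresholds here are illustrative."""
    flagged = []
    for point, samples in trends.items():
        sp = setpoints[point]
        violations = sum(1 for s in samples if abs(s - sp) > tolerance)
        if violations / len(samples) > max_violation_pct:
            flagged.append(point)
    return flagged

# Invented trend data: one stable supply-air temp, one zone temp hunting
trends = {
    "AHU1_SAT": [55.0, 55.4, 54.8, 55.1] * 25,         # steady near 55°F
    "VAV12_ZNT": [72.0, 74.5, 75.2, 71.0, 74.8] * 20,  # hunting around 72°F
}
setpoints = {"AHU1_SAT": 55.0, "VAV12_ZNT": 72.0}
print(flag_unstable_points(trends, setpoints))  # ['VAV12_ZNT']
```

The point of a rule this simple is that it runs over real weather and real occupancy for weeks, which is exactly what a day-one functional test cannot show.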
