If you work with the NorSand constitutive model, you know that calibrating elasticity and hardening parameters via visual fitting can be a tedious and subjective task. In our new paper, we propose a better way: a Bayesian approach that integrates experimental triaxial data with predefined priors to objectively estimate the most likely parameter values. This method removes the bias of manual fitting and quantifies the uncertainty in your parameters. We validated the methodology using Fraser River sand and have released the Python code so you can try it on your own datasets. 🔗 Paper: https://lnkd.in/eub3ShpC 🔗 GitHub Repository: https://lnkd.in/edshx9v9 #Geotechnics #ConstitutiveModeling #DataScience #Engineering #Mining Thanks to all the co-authors: Luis-Fernando, Humberto and Alexandra SRK Consulting Geosyntec Consultants Red Earth Engineering A Geosyntec Company The University of Western Australia
Bayesian Analysis in Engineering
Explore top LinkedIn content from expert professionals.
Summary
Bayesian analysis in engineering is a statistical approach that uses prior knowledge and observed data to update beliefs and make decisions in complex engineering problems. This method helps engineers better estimate uncertain parameters, prioritize experiments, and personalize models for a wide range of applications—from material science to product design.
- Quantify uncertainty: Always report not just a single value, but also the range of plausible parameter estimates, so others can understand how confident you are in your model’s predictions.
- Prioritize experiments: Select measurements or tests that will most reduce uncertainty, saving time, effort, and resources compared to uniform or random sampling approaches.
- Respect individual variability: Use hierarchical or Bayesian models to account for differences between users, materials, or processes, rather than assuming everyone behaves the same way.
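As a minimal illustration of the first point, here is a toy conjugate Normal-Normal update (all numbers hypothetical) that reports a credible interval alongside the point estimate:

```python
import numpy as np

# Prior belief about a parameter, e.g. a stiffness in GPa (hypothetical).
prior_mean, prior_var = 50.0, 10.0**2

# Noisy measurements with known observation variance.
obs = np.array([62.0, 58.0, 61.0])
obs_var = 5.0**2

# Posterior precision is the sum of prior and data precisions.
post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)

# Report a 95% credible interval, not just the point estimate.
lo = post_mean - 1.96 * np.sqrt(post_var)
hi = post_mean + 1.96 * np.sqrt(post_var)
print(f"posterior mean {post_mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Even this two-line update captures the habit the bullet describes: the data pull the estimate away from the prior, and the interval communicates how much trust the result deserves.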
-
A recurring challenge across science & engineering: you need to align a computationally expensive black-box simulator (PDEs, etc.) to data in order to infer hidden parameters like material coefficients or boundary conditions. In many such cases, you don't have access to gradients, adjoints, etc. If you only want point estimates, then Bayesian optimisation (BO) is an option. But if you care about the full posterior distribution, Monte Carlo or MCMC quickly become infeasible. You could fall back on Laplace approximations, but for most PDE-based inverse problems the posteriors are horrible: multimodal, non-identifiable, with tangled geometries, reflecting sensitivity scales and invariances. ABC is an option: but this typically requires huge amounts of evaluations, and has a tendency to inflate posteriors. So the homework question was: just as BO uses Gaussian Process surrogates and acquisition strategies to explore costly functions, can we design sampling strategies the same way, to approximate a posterior under a fixed compute budget? With the brilliant Takuo Matsubara, Simon Cotter, and Konstantinos Zygalakis, we introduce Bandit Importance Sampling (BIS): • A new class of importance sampling that designs samples directly via multi-armed bandits. • Combines space-filling sequences (Halton, QMC) with GP surrogates to adaptively focus where evaluations matter most. • Comes with theoretical guarantees and works well on multimodal, heavy-tailed, and real-world Bayesian inference problems. Takeaway: BIS works well: it can cut evaluations by orders of magnitude. For problems with ~10–20 parameters, it’s a very viable option. Preprint here: https://lnkd.in/egrZX_NJ Next steps: packaging this up for the community.
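For readers new to the underlying estimator: BIS builds on self-normalized importance sampling (SNIS). The toy sketch below shows plain SNIS on a bimodal 1-D target with a broad Gaussian proposal; it is not the authors' bandit or GP-surrogate machinery, just the baseline estimator those components improve on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized bimodal "posterior": mixture of Gaussian kernels at +/-2.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

# Broad Gaussian proposal standing in for a designed sampling scheme.
prop_mean, prop_std = 0.0, 3.0
xs = rng.normal(prop_mean, prop_std, size=5000)
log_q = (-0.5 * ((xs - prop_mean) / prop_std) ** 2
         - np.log(prop_std * np.sqrt(2.0 * np.pi)))

# Self-normalized importance weights (stable in log space).
log_w = log_target(xs) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * xs)      # E[x] under the target (~0 by symmetry)
ess = 1.0 / np.sum(w ** 2)      # effective sample size diagnostic
print(post_mean, ess)
```

The effective sample size is the quantity adaptive schemes fight for: when the proposal ignores where the posterior mass actually sits, ESS collapses, which is exactly the failure mode a surrogate-guided design of evaluation points is meant to avoid.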
-
"Ah yes, Bayesian optimization... but I don't really do DOE." I hear this often when talking with lab scientists about Bayesian statistics. The assumption seems to be that Bayesian ideas only matter if you're running formal design of experiments or optimization campaigns. But Bayesian statistics is more than that. It's a way of thinking about exploring the unknown. You start with what you believe, you measure something, you update that belief, and you decide what to do next based on that updated belief instead of gut feeling. A recent ChemRxiv paper provides a very concrete and relatable example of what this looks like in the lab: measuring the critical micelle concentration (CMC) of a surfactant. Traditionally, you prepare a large number of solutions across a wide concentration range, measure the surface tension for each, and only afterward identify where the transition happens. It works, but it treats all experiments as equally informative, even though some measurements clearly teach you more than others. The authors reframe this as an iterative learning process that quantitatively determines the most informative new measurement, allowing you to zero in on the CMC faster. Here's what each iteration looks like: 🔹Measure surface tension at one concentration 🔹Update your belief about where the CMC likely is by fitting all data collected so far to a thermodynamic model, which also quantifies your remaining uncertainty 🔹Pick the next concentration by calculating which measurement would reduce that uncertainty the most By strategically choosing the most informative measurements, the authors reached the same CMC results with half the experiments, meaning less time, effort, and chemical waste for a task many labs perform routinely. If you're already thinking "why not just use bisection instead of uniform sampling?", you're on the right track. That intuition is closely aligned with Bayesian thinking. 
I'd encourage you to read the paper to discover the connection and explore what lies beyond simple bisection. 📄 Adaptive, Bayesian experimental design to efficiently determine the critical micelle concentration of a surfactant, ChemRxiv, December 9, 2025 🔗 https://lnkd.in/eteJejGF
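The iteration the authors describe can be caricatured in a few lines. The sketch below uses a made-up piecewise-linear tension model and a grid posterior over candidate CMC values, picking each next concentration where the candidate curves disagree most; the paper's thermodynamic model and acquisition rule are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: tension falls linearly until the CMC, then stays flat.
true_cmc, noise = 0.8, 0.5

def tension(x, cmc):
    return 70.0 - 20.0 * np.minimum(x, cmc)

grid = np.linspace(0.1, 1.5, 141)        # candidate CMC values (flat prior)
log_post = np.zeros_like(grid)
candidates = np.linspace(0.1, 1.5, 29)   # concentrations we could measure

data_x, data_y = [], []
for _ in range(8):
    # Posterior-weighted disagreement of model curves at each candidate.
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    preds = tension(candidates[None, :], grid[:, None])   # (grid, cand)
    mean = post @ preds
    var = post @ (preds - mean) ** 2
    x_next = candidates[np.argmax(var)]   # measure where models disagree most

    y = tension(x_next, true_cmc) + rng.normal(0.0, noise)
    data_x.append(x_next); data_y.append(y)
    log_post += -0.5 * ((y - tension(x_next, grid)) / noise) ** 2

post = np.exp(log_post - log_post.max()); post /= post.sum()
cmc_est = float(post @ grid)
print(cmc_est)
```

Each loop iteration is exactly the three bullets above: measure once, refit the grid posterior over the CMC, then choose the concentration that most shrinks the remaining uncertainty.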
-
Most choice models assume everyone shares the same preferences. But in the real world, that assumption rarely holds. Users bring their own priorities, price sensitivities, and product biases into every decision. Some are laser-focused on cost, others won’t consider anything without a specific brand, and many fall somewhere in between. If we ignore this individual variability, we risk building models that are too simplistic to guide real product or marketing decisions. That’s exactly why Hierarchical Bayes (HB) choice modeling is so powerful. Rather than fitting a single model for the entire population, HB models estimate individual-level part-worth utilities - that is, how each person values different product features - while still leveraging shared structure across the full sample. It’s like running a custom model for every respondent, but doing it in a way that’s statistically grounded and efficient, even with limited data per person. Technically, HB choice modeling involves two levels. At the first level, each individual is assumed to make decisions by maximizing their own utility function. At the second level, we assume these individual utility parameters are drawn from a population distribution (often multivariate normal). Bayesian inference ties these levels together: the individual’s estimates get “shrunk” toward the population mean unless there’s strong evidence in their own choices to pull them away. This makes HB especially useful in survey-based conjoint tasks where each person only answers a small number of questions. In R, this is typically implemented using the ChoiceModelR package, which fits HB models using Markov Chain Monte Carlo (MCMC). The package handles data preparation, defines priors for the utility coefficients, and provides posterior draws of both individual and aggregate part-worths. You can use these results to segment customers, predict individual behavior, or simulate responses to new product configurations. 
What makes this even more useful is that HB models don’t just give you a number - they give you a distribution of plausible values. That means you can estimate uncertainty, construct credible intervals, and quantify how stable a user’s preferences seem to be. This level of granularity opens the door for richer personalization, demand forecasting, and strategic product optimization. In essence, HB choice modeling is about respecting human variability while maintaining statistical rigor. It’s computationally more intensive than standard logit models, but the payoff is clear: you gain insight not only into what people prefer, but how preferences differ from one person to the next. As personalization becomes table stakes across UX, e-commerce, and product strategy, tools like HB modeling are helping researchers move from averages to nuance - giving us a better chance at designing experiences people actually want.
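The "shrinkage" mechanic is easy to see in a toy Normal-Normal setting. This is not ChoiceModelR's MCMC, and all numbers are hypothetical; it only shows why respondents with little data get pulled toward the population mean:

```python
import numpy as np

# Assumed population distribution of a price-sensitivity part-worth.
pop_mean, pop_var = -1.0, 0.5 ** 2
noise_var = 1.0                        # sampling variance per choice task

raw = np.array([-2.5, -0.2, -1.1])     # raw per-respondent estimates
n_choices = np.array([3, 3, 30])       # choice tasks answered by each

# Posterior mean = precision-weighted blend of individual data and prior;
# sparse respondents are shrunk harder toward the population mean.
shrink = (noise_var / n_choices) / (noise_var / n_choices + pop_var)
pooled = shrink * pop_mean + (1.0 - shrink) * raw
print(pooled)
```

The third respondent, with 30 answers, keeps an estimate close to the raw value, while the two sparse respondents move noticeably toward -1.0: strong personal evidence overrides the pooling, weak evidence does not.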
-
Materials, like people, have many different but related qualities: Materials properties are not independent—they are interconnected, each representing a different facet of the same material. Leveraging these correlations can significantly enhance multi-objective materials optimization. I'm excited to share our recent paper, led by my student Ahnaf Alvi, with my colleagues, Danny Perez, Jan Janssen and Douglas Allaire, published in Acta Materialia. We demonstrate the substantial advantages of incorporating Deep Gaussian Processes (DGP) into multi-objective Bayesian Optimization for materials discovery. Unlike conventional methods, DGPs effectively encode complex, nonlinear correlations between properties, accelerating convergence and improving prediction accuracy. A key insight from our work is that this method strategically exploits differences in property evaluation costs. By measuring cheaper-to-assess properties more frequently, we infer expensive-to-measure properties with fewer experiments, greatly enhancing resource efficiency in materials discovery campaigns. This approach is particularly impactful in complex design spaces like high-entropy alloys (HEAs), where traditional optimization methods struggle. Read more about our findings here: https://lnkd.in/ewppE6jp #MaterialsDiscovery #BayesianOptimization #GaussianProcesses #HighEntropyAlloys #MachineLearning #MaterialsScience
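The cheap-versus-expensive-property idea can be illustrated with a plain bivariate Gaussian conditional, a stand-in for the paper's deep GP; all numbers are hypothetical:

```python
import numpy as np

# Prior over two correlated properties: [cheap-to-measure, expensive].
mu = np.array([100.0, 50.0])
rho = 0.9                      # assumed correlation between the properties
s1, s2 = 10.0, 8.0             # prior standard deviations
cov = np.array([[s1**2,      rho*s1*s2],
                [rho*s1*s2,  s2**2]])

y_cheap = 112.0                # one measurement of the cheap property

# Gaussian conditional: expensive property given the cheap measurement.
cond_mean = mu[1] + cov[1, 0] / cov[0, 0] * (y_cheap - mu[0])
cond_var = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
print(cond_mean, np.sqrt(cond_var))
```

One cheap measurement cuts the standard deviation of the expensive property well below its prior value, which is the resource-efficiency argument in miniature: correlated information is information.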
-
I am delighted to share that our latest research has been published in the ASCE Journal of Materials in Civil Engineering! Title: Probabilistic Assessment of Chloride Ion Migration in Concrete: Enhancing Predictive Accuracy and Gaining Insights into Distribution Patterns. This study addresses chloride ion ingress, one of the primary causes of reinforced concrete deterioration and reduced service life. We developed a Bayesian-adjusted probabilistic model that more accurately predicts chloride distribution under different conditions of temperature, humidity, and water–cement ratio. Experimental validation confirmed that the model significantly improves predictive accuracy, with all test results falling within the predicted confidence intervals. The findings provide a stronger scientific basis for durability assessment and offer valuable guidance for extending the service life of reinforced concrete structures. I am grateful to the entire team for their contributions and collaboration, and I extend my special thanks to Professor Antonio Nanni for his unparalleled support and guidance throughout this work. Stay tuned... there will be more. Link: https://lnkd.in/ePinaUZ8 For my father.
-
I'm happy to share our latest paper published in Computer Methods in Applied Mechanics and Engineering entitled "Bayesian neural networks for predicting uncertainty in full-field material response." Most uncertainty quantification methods predict uncertainty in low-dimensional quantities of interest derived from a more complicated model. In this work, we use Bayesian convolutional neural networks to predict uncertainty in the full-field mechanical response of heterogeneous materials under specified loading conditions. The neural network predicts the stress in the material at each discretized point in space, along with a measure of uncertainty in the prediction at each point. We compare three different strategies for Bayesian neural network implementation: Hamiltonian Monte Carlo (HMC), the variational Bayes by Backprop (BBB) algorithm, and Monte Carlo dropout. The results show that HMC can be computationally tractable, even for some large, high-dimensional problems, while variational BBB provides robust uncertainty estimates that are largely consistent with HMC. Monte Carlo dropout, meanwhile, shows inconsistencies with the other methods and is very sensitive to its design parameters. For more details, see the paper below: https://lnkd.in/eh--Zh3r The credit for this work goes to our awesome postdoc, George Pasparakis, and my incredible collaborator Lori Graham-Brady. This work has been supported under the Center for High-Throughput Materials Discovery for Extreme, funded by the U.S. Army DEVCOM Army Research Laboratory. We are grateful for their support! #bayesian #neuralnetworks #computationalmechanics Johns Hopkins Whiting School of Engineering Johns Hopkins Department of Civil and Systems Engineering Hopkins Extreme Materials Institute at Johns Hopkins University
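For intuition, Monte Carlo dropout, the simplest of the three strategies, can be sketched in a few lines of numpy (untrained toy weights, not the authors' convolutional BNN): dropout stays active at prediction time, and the spread over stochastic forward passes serves as a per-point uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "trained" weights for a tiny two-layer network.
W1 = rng.normal(0.0, 1.0, (8, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))
p_drop = 0.2

def stochastic_pass(x):
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # dropout kept ON at test time
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return h @ W2

x = rng.normal(0.0, 1.0, (5, 8))             # 5 query points
samples = np.stack([stochastic_pass(x) for _ in range(200)])

pred_mean = samples.mean(axis=0)   # point prediction per input
pred_std = samples.std(axis=0)     # dropout-based uncertainty per input
print(pred_mean.ravel(), pred_std.ravel())
```

The sensitivity the paper reports is visible even here: the uncertainty estimate depends directly on design choices like the dropout rate and the number of stochastic passes, with no posterior to calibrate them against.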