Machine Learning Based Optimization - Methods & Use Cases for Design Engineers
Various algorithms can support product design engineers in their daily product optimization tasks.
Machine learning, a branch of artificial intelligence, has attracted a lot of attention and is already helping companies and consumers in various contexts, such as email spam and malware filtering or face recognition. We will discuss how machine learning techniques based on neural networks can dramatically improve the throughput of product design departments.
We will focus on examples of machine learning for engineers in design departments of automotive or aerospace manufacturers.
Although the concept of an optimization algorithm is not new, a design optimization revolution is occurring thanks to AI and methods based on deep neural network technology.
We will explore the underlying aspects of Expert AI and see how it can be translated into AI for design teams (see figure), with practical use cases built on an optimization algorithm. We will show how a learning-based optimization algorithm can boost your daily operations (without requiring a PhD in maths) in a collaborative scenario where product designers have access to Production AI.
How to Create an Optimal Shape: Machine Learning-Based Optimization Method
Considering the design of an industrial object, the question is: what is the best shape to meet performance targets and requirements, such as constraints on materials?
As strange as it may seem, design optimization is sometimes considered a mere geometrical modification. However, there is much more to it than changing a shape.
Once we have created a new shape, there are several requirements to address:
- Space requirements, which are usually immediately visible by inspection and, in any case, solvable within CAD.
- Industrial functional requirements such as manufacturability.
- Engineering performance requirements that need multi-disciplinary expertise. For example, will the new design for a car's rear window also be aerodynamically efficient? Will a new mechanical bracket be resistant to fatigue, on top of being lighter?
Therefore, how to meet functional requirements is the most critical question to ask!
The question of how to create an optimal shape is technically decoupled into two topics: simulation and optimization.
We need a way to assess an object’s behaviour starting from its shape, constraints and operating conditions (for example, mechanical loads). For efficiency, these assessments must take place before prototypes are built for final laboratory tests to ensure the object can be safely sent to production. CAE simulation (CFD, FEA) answers this question, at the cost of waiting times ranging from minutes to hours or even days. Therefore, an optimization loop of usually hundreds of design modifications could last for days or weeks and require substantial computing resources. Companies would benefit from solutions that offer CAE capabilities but deliver results in almost real time.
The question, then, is: how can we increase CAE speed and make it usable within optimization algorithms?
- Increasing CAE throughput via HPC investment. Using current technologies accelerates the process, but without dramatic leaps. HPC clusters (either private or on the Cloud) require initial CAPEX and further OPEX for IT maintenance, upgrades, and energy consumption.
- Increasing CAE throughput via GPUs. GPUs are designed to perform calculations in parallel, making them ideal for CFD. GPU-accelerated CFD simulations are often faster than CPU-only simulations, but as of today, not all CFD software is GPU-accelerated; you may need specialized software.
- Increasing CAE throughput via Quantum Computing. Quantum algorithms have the potential for dramatically better performance than conventional algorithms and have been demonstrated for various uses ranging from materials science to random number generation. However, quantum computing is not at the stage of delivering answers for industrial CAE, and will not be for years to come; any such claim at the time of writing is highly unrealistic.
- Increasing CAE throughput via AI. The proposed method is to leverage algorithms coming from AI and data science. A deep neural network can deliver real-time simulation to be used in an optimization loop by exploiting the proprietary datasets that all major companies hold in their PLM, CAD, and CAE systems.
The second step, either in traditional CAE-based or Machine Learning-Based Optimization, is optimization.
A blind procedure of trying a new design and checking the simulation would lead to an enormous amount of trial and error. Also, we generally think of space as three-dimensional, but a design space can have any number of dimensions, making it practically impossible to hit an optimum by blind search.
We need an algorithm that can consider multiple objective functions for an object’s performance in a design space while simultaneously satisfying constraints imposed by materials, costs, or the assembly of the designed part within a larger system.
Available Solutions and the Way Forward with AI-Based Optimization Algorithms
Many software solutions are available on the market to simulate components for one or more physical behaviours: mechanical, fluid, thermal, or electromagnetic.
CAE (computer-aided engineering) starts from the information on the industrial object produced by design engineers in CAD (computer-aided design) and stored in the company PLM (Product Lifecycle Management) system. There are several choices available provided by CAD-PLM companies or specialized CAE companies.
The question is, what is the most efficient simulation software for design optimization in an industrial environment? Artificial intelligence with a machine learning-based optimization algorithm gives smart, efficient answers to the question.
We will review the available optimization methods, such as metaheuristic optimization algorithms. First, however, we need to lay down the role of deep learning as the proposed algorithmic solution. The optimization process steers an initial design towards an optimum solution, converging to the required targets for the objective functions. Before making a design better, we need to be able to assess it as efficiently as possible. Deep learning is the solution.
What Is CAE Software?
Engineers can test and improve their designs with Computer-Aided Engineering (CAE) software by simulating how a product or system will operate in various scenarios before ever constructing a physical prototype. Finding and fixing issues early in the design process saves time and money, helps win over the competition, and enhances the company's reputation.
FEA and CFD, i.e. Finite Element Analysis and Computational Fluid Dynamics, are the main recognized branches of CAE tools, dealing respectively with solid mechanics aspects, such as stress and strain in mechanical parts, and with fluid/thermal aspects, as in aircraft aerodynamics.
What Is Optimization Software?
Optimization software can solve many problems, such as finding the shortest path between two points, maximizing profits, or minimizing costs. Many different types of optimization software are available, ranging from simple spreadsheet-based tools to specialized software packages that use advanced algorithms to solve complex optimization problems.
Genetic algorithms are a type of optimization algorithm inspired by natural evolution. They are used to find the best solution to a problem by simulating natural selection.
In a genetic algorithm, a set of solutions (called chromosomes) to a problem is represented as a population. Each chromosome is evaluated according to an objective function, which measures how good a solution is. The best solutions (those with the highest objective function values) are then selected to reproduce and create new solutions. This reproduction process involves combining and mutating the selected chromosomes to create a new population of solutions. This is repeated over multiple generations until a satisfactory solution is found.
Genetic algorithms help solve problems that are difficult to solve using traditional techniques. They are beneficial for solving problems with many variables or when the relationships between variables are complex. However, they can be computationally intensive and may not always find the globally optimal solution to a problem.
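As a minimal illustration of the selection, crossover and mutation loop described above, here is a short Python sketch. The one-variable objective and all numerical settings are toy assumptions chosen for clarity, not part of any real design problem:

```python
import random

def fitness(x):
    # Toy objective with its peak at x = 3 (higher is better).
    return -(x - 3.0) ** 2

def genetic_algorithm(pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    # Initial population of candidate solutions ("chromosomes").
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover and mutation: blend two parents, then perturb.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append(0.5 * (a + b) + rng.gauss(0.0, 0.2))
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Because the fittest individuals survive each generation unchanged, the best solution found never gets worse; the mutation noise keeps the search from collapsing prematurely.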
What Is the Proposed Algorithm, Deep Learning?
The first step in assessing an industrial design, starting from its three-dimensional CAD representation, is traditionally thought to be computer-aided engineering (CAE). However, while giving the highest-end results in the field, CAE has the disadvantage of requiring massive computational resources. This disadvantage is usually tolerated for a “single-shot” analysis, say one per day or week.
As we will see, any such algorithm will require running several hundred or even thousands of design iterations, and therefore, a real-time replacement (surrogate model) for CAE would be desirable. This is provided by simulation with deep learning.
Deep learning is a specialized branch of machine learning based on deep neural networks.
Using a visual example, we can see that the “deep” in deep learning stands for the hidden layers of neurons in the artificial neural network. These layers introduce the capability to recognize, during the learning process, the geometrical features describing an object.
What is the connection between artificial intelligence and data science? Contemporary AI exploits data to build a predictive engineering model, i.e. data-driven predictions. Data points arranged consistently (for example, all the temperature and pressure fields for all the design changes in past projects and the associated CAD designs) represent the so-called dataset.
Both learning and optimization methods exploit neural networks with very different objective functions:
- Neural network learning aims to build a predictive model that minimizes deviations between predictions and the reference data (ground truth).
- Engineering optimization aims to minimize the deviation between a design and its objectives.
We will see how a big advantage of machine learning-based optimization is a subtle mathematical detail called differentiability that helps to manage the above-mentioned deviations.
The Importance of Differentiability in Machine Learning-Based Optimization
Artificial neural networks are differentiable because they are constructed using differentiable functions, such as the sigmoid function or the rectified linear unit (ReLU) function, as the activation functions for the artificial neurons. This means that the output of a neural network can be computed as a differentiable function of its inputs, which allows for the application of gradient-based optimization algorithms, such as the backpropagation algorithm, to find the optimal set of weights for the network.
This important mathematical tool will be easier to grasp after the separate section below, which explicitly explains the concept of the gradient.
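To make differentiability concrete, the sketch below builds a toy single-neuron model (all values are chosen purely for illustration), computes the analytic gradient via the chain rule, which is the core of backpropagation, and checks it against a finite-difference estimate:

```python
import math

def sigmoid(z):
    # A differentiable activation function.
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    # One artificial neuron: differentiable activation of a weighted input.
    return sigmoid(w * x)

def loss(w, x, y_true):
    # Squared error between the prediction and the ground truth.
    return (forward(w, x) - y_true) ** 2

def grad_loss(w, x, y_true):
    # Chain rule (backpropagation for this tiny network):
    # dL/dw = 2*(y - y_true) * y*(1 - y) * x, with y = sigmoid(w*x).
    y = forward(w, x)
    return 2.0 * (y - y_true) * y * (1.0 - y) * x

# Check the analytic gradient against a central finite difference.
w, x, y_true, eps = 0.7, 1.5, 0.2, 1e-6
numeric = (loss(w + eps, x, y_true) - loss(w - eps, x, y_true)) / (2 * eps)
analytic = grad_loss(w, x, y_true)
```

The two numbers agree to many decimal places, which is exactly the property that lets gradient-based optimizers train networks with millions of weights.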
The Importance of Data
Data are essential for deep learning (DL): they are the raw material the model uses to learn and to help users make intelligent decisions. DL models learn from data by finding patterns and features that allow them to make predictions. The more data the model has, the better it can learn, and the more accurate its predictions will be.
In this phase, we will restrict AI to prediction and delegate any decisions to a user. Still, moving on, we will see that at least a down-selection of optimal solutions can be trusted to AI.
Additionally, the data quality is also essential. If the data is noisy or has errors, it can negatively impact the model’s ability to learn and make accurate predictions.
The importance of data is so overwhelming that many teams decide to walk away from DL even before starting, believing that thousands or millions of data points are needed. How much data do we need for training? It is possible to start a DL project with a few hundred cases of simulation and CAD, and the experience gained can reduce the number needed for further DL projects by 90% or more, making the DL approach very appealing even for sceptics.
Using Statistical Models to Understand Relationships
Design of Experiments (DoE) and Response Surface Models (RSM) are different but closely related:
- DoE is used to collect data on the response of a system to different levels of predictor variables.
- RSM is used to model the relationship between the response and predictor variables and to predict the system's response to different levels of the predictor variables.
Collecting Data - Design of Experiments
Design of Experiments (DoE) is a statistical methodology used to vary the factors that influence a system to understand the relationships between the inputs and the output. It is often used with optimization techniques to find optimal settings.
Understanding Data - Response Surface Models
Response Surface Models (RSM) are statistical models that describe the relationship between a response variable and one or more predictor variables. The response variable measures the outcome or dependent variable being predicted or modelled, and the predictor variables are the independent variables used to predict the response.
RSM is often used in engineering and statistics to optimize processes and design experiments. They can be used to predict the response of a system to different levels of the predictor variables and to identify the optimal combination of predictor variables that results in the best response.
Example of DoE+RSM Application - Automotive Engineering
One example of the use of Response Surface Methods (RSM) in automotive engineering is in the optimization of fuel consumption for a vehicle. In this case, the response variable might be the vehicle’s fuel consumption, and the predictor variables might include factors such as car speed, engine load, and transmission efficiency.
To optimize fuel consumption, experiments might be designed with DoE and conducted to collect data on the vehicle’s fuel consumption at different speeds, loads, and transmission efficiencies.
An RSM could then be fitted to this data and used to predict the vehicle’s fuel consumption at different speeds, loads, and transmission efficiencies. The RSM could also identify the optimal combination of speed, load, and transmission efficiency that results in the lowest fuel consumption.
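The DoE-plus-RSM workflow above can be sketched in a few lines. The fuel consumption numbers below are purely illustrative, and for simplicity only one predictor (speed) is used; the quadratic response surface is fitted by ordinary least squares:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# DoE: fuel consumption (L/100 km, invented numbers) measured at
# several speed levels (km/h) chosen by the experimenter.
speeds = [40, 60, 80, 100, 120]
fuel = [6.1, 5.2, 5.0, 5.6, 6.9]

# RSM: fit a quadratic response surface fuel ≈ b0 + b1*v + b2*v²
# via the normal equations (X^T X) b = X^T y.
X = [[1.0, v, v * v] for v in speeds]
XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
Xty = [sum(X[i][r] * fuel[i] for i in range(len(X))) for r in range(3)]
b0, b1, b2 = solve(XtX, Xty)

# The fitted surface predicts the speed that minimizes fuel consumption.
v_opt = -b1 / (2.0 * b2)
```

With this toy data, the fitted parabola opens upward (b2 > 0) and places the most economical speed between the tested levels, which is precisely the kind of answer an RSM delivers.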
What Are the Classic Requirements for Optimization Methods?
Optimization theory studies algorithms and strategies for finding the optimum solution to a problem.
In engineering, optimization techniques are often used to design systems or processes that operate efficiently and effectively.
Some of the primary optimization algorithms used in engineering include:
- Linear and quadratic programming. These techniques optimize a linear or quadratic objective function subject to linear constraints. They are often used in engineering and finance.
- Nonlinear programming. This is a technique for dealing with an objective function that is not necessarily linear and is used to solve problems with both nonlinear objectives and constraints.
- Genetic algorithms. Genetic algorithms are algorithms that are inspired by the process of natural evolution. They are often used in engineering for complex, nonlinear systems.
- Gradient-based optimization. These are algorithms that involve computing the gradient of the objective function and updating the parameters in the direction that reduces the objective.
Objective Functions in Deep Learning
In Supervised Learning, the artificial neural network's goal is to use known data to minimize a “cost function” C.
The C function is the difference between the neural network’s prediction (y) and the actual solution (Y) and is also a function of the inputs (x); therefore, C = C(x;y, Y) over the entire dataset. The concept is that of working statistically over a whole dataset composed of samples, each with its own "loss" (i.e. the deviation for a single data point, whereas the cost is for all the data points).
The minimum of this function is the artificial neural network's learning solution. There are several ways to express the cost function mathematically, and a few of them are listed here:
- Mean Absolute Error (MAE). MAE is one of the loss functions used in regression problems. The MAE of a neural network is calculated by taking the mean of the absolute differences between the predicted and actual values.
- Mean Squared Error (MSE). Compared to MAE, MSE squares the individual errors instead of taking their absolute values.
- Root Mean Squared Error (RMSE). RMSE is computed by taking the square root of the MSE.
- R2-score. The R2-score or R², known as the coefficient of determination, gives the proportion of the variance in the response variable of a regression model that the predictor variables can explain. This value ranges from 0 to 1. The higher the R² value, the better a model fits a dataset.
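These four measures are straightforward to compute; a minimal sketch, applied to made-up regression values, follows:

```python
import math

def regression_metrics(y_true, y_pred):
    # Compute MAE, MSE, RMSE and the R² score for a set of predictions.
    n = len(y_true)
    errors = [p - t for p, t in zip(y_pred, y_true)]
    mae = sum(abs(e) for e in errors) / n            # mean absolute error
    mse = sum(e * e for e in errors) / n             # mean squared error
    rmse = math.sqrt(mse)                            # root mean squared error
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)              # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

m = regression_metrics([3.0, 5.0, 7.0, 9.0], [2.8, 5.1, 7.3, 8.8])
```

On these invented values the predictions track the ground truth closely, so the R² score comes out just below 1.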
It is possible to talk about cost functions because, in contrast to other approaches like reinforcement learning, the actual solution is known in supervised learning. As a result, one predictive model will outperform another simply because it will predict better.
At this stage, focused on learning, we are still focused on “optimally realistic” predictions rather than optimal designs.
The Loss Landscape
What is a loss landscape? We will give a simple explanation and come back to the same concept in a more mathematical form in the next chapter.
A loss landscape represents how well a machine learning model predicts the correct output for a given input.
Imagine having a machine learning model that is trying to predict the price of a house based on its size, number of bedrooms, and location. You can think of the model’s prediction as a point on a graph and the actual house price as a target the model is trying to hit. The difference between the model’s prediction and the actual price is the loss for that prediction.
A loss landscape is a graph that shows the loss for all possible combinations of the model’s parameters, i.e. the internal weights the model adjusts during training (not the inputs, such as the house size or location). For a model with two parameters, each parameter would be on one axis, and the height of a point on the graph would represent the loss for that combination of parameter values; with more parameters, the landscape becomes a higher-dimensional surface.
The training stage of a machine learning model aims to find the combinations of parameters that result in the lowest loss, like finding the lowest point on the loss landscape. If the loss landscape has many low points, it can be difficult for the model to find the very lowest one. On the other hand, if the loss landscape has only one low point, it will be easier for the model to find it.
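For a model with only two parameters, the loss landscape can be sampled directly. The sketch below (toy house price data, invented for illustration) evaluates the mean squared error of a linear model over a grid of slope and intercept values and picks the lowest grid point:

```python
# Toy dataset: house size (in hundreds of m²) vs price (arbitrary units).
sizes = [1.0, 2.0, 3.0, 4.0]
prices = [2.1, 3.9, 6.2, 7.8]

def loss(w, b):
    # Mean squared error of the linear model price ≈ w*size + b.
    return sum((w * s + b - p) ** 2 for s, p in zip(sizes, prices)) / len(sizes)

# Sample the loss landscape on a coarse grid of the two parameters
# (slope w from 0 to 4, intercept b from -2 to 2, in steps of 0.1).
grid = [(w / 10.0, b / 10.0) for w in range(0, 41) for b in range(-20, 21)]
w_best, b_best = min(grid, key=lambda p: loss(*p))
```

The lowest grid point sits close to the least-squares solution; training a real model does the same search, but with gradients instead of exhaustive grid evaluation.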
Objective Functions in Deep Learning - Gradient Descent Method
Please bear with us; before tackling the subject of optimized object shapes, we have to lay the foundations of optimized learning. Previously, we introduced the concept of a cost function, C, in a loss landscape. This function is generally a function of some unknowns x, and we need to know the change in C (shorthand: ΔC) resulting from a change in x (shorthand: Δx).
The Gradient Vector
“∇C” is the symbol that represents the rate of change of C (increment/decrement in cost) with respect to Δx (increment/decrement in a variable on which the cost depends). This happens in a multi-dimensional x space (we usually visualize just two x variables). Hence ∇C is a multi-dimensional vector called the gradient vector. The symbol ∇ is called the gradient operator, and ∇C relates changes in the x variables to changes in the cost function C.
Let us now lay down a straightforward equation that lets us see how to choose a change in x, Δx, to make the variation in the cost, ΔC, negative (remember: we must reach a minimum cost, so negative variations are good).
In particular, we choose Δx = − η ∇C, where η is a small, positive parameter known as the learning rate.
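This update rule is only a line of code. Applied to a toy one-dimensional cost C(x) = (x − 2)², an assumption chosen for illustration (any differentiable cost works the same way), repeated steps Δx = −η∇C drive x to the minimum:

```python
def grad_C(x):
    # Gradient of the toy cost C(x) = (x - 2)², whose minimum is at x = 2.
    return 2.0 * (x - 2.0)

eta = 0.1  # learning rate η: a small, positive parameter
x = 10.0   # arbitrary starting point
for _ in range(100):
    x += -eta * grad_C(x)  # Δx = -η ∇C: step against the gradient
```

Each step shrinks the distance to the minimum by a constant factor, so after 100 iterations x sits essentially at 2.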
The Gradient Descent and the Loss Function (LF)
Let us put together the two previously introduced concepts.
The shape of the loss landscape can give insight into how the model is behaving and how difficult it is to find the minimum loss. For example, a landscape with many local minima (valleys) may be more challenging to optimize than a landscape with only a single global minimum (valley). During the optimization, the algorithm uses the gradients of the LF with respect to the model parameters to update the parameters in a direction that will reduce the loss. This process continues until the LF is minimized or until the algorithm reaches a predefined stopping criterion.
So, in short, the descent optimization method is used to find the minimum of the LF, and the loss landscape is a visualization of the LF in relation to the model parameters.
Objective Functions - ADAM
ADAM (Adaptive Moment Estimation) is a popular choice for training deep neural networks since it can outperform other optimizers on various tasks. ADAM is a default choice when training a machine learning model, especially for high-dimensional parameter spaces and noisy, complex loss landscapes.
ADAM is an algorithm that can be used to update the parameters of a machine learning model.
Like gradient descent, ADAM uses the gradient of the loss function with respect to the model parameters to update the model in the direction that reduces the loss. However, ADAM also includes additional terms that allow it to adaptively adjust the learning rate η for each parameter based on the historical gradient information. This can help to stabilize training and improve the convergence rate in neural networks.
ADAM has several hyperparameters that can be adjusted, including the learning and decay rates for the moment estimates. Using the default values for these hyperparameters is generally recommended, although some fine-tuning may be necessary for some problems.
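For readers who want to see the moving parts, here is a compact sketch of the Adam update applied to the same kind of toy one-dimensional cost used above (the function and all settings are illustrative; real frameworks ship tested implementations that should be preferred in practice):

```python
import math

def adam_minimize(grad, x0, eta=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    # Adam keeps running averages of the gradient (m) and of its square (v)
    # and uses them to adapt the effective step size per parameter.
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= eta * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize the toy cost C(x) = (x - 2)² starting far from the minimum.
x_min = adam_minimize(lambda x: 2.0 * (x - 2.0), x0=10.0)
```

The default values beta1 = 0.9, beta2 = 0.999 and eps = 1e-8 are the ones commonly recommended; only the learning rate usually needs tuning.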
How to Optimize a 3D Shape
Starting from a digitalized representation of a product's shape via CAD, which options are available to designers to get assistance from algorithms to produce shapes that are better in terms of functional requirements, such as weight and mechanical robustness?
Some common approaches to coupling CAD with optimization algorithms include:
- In topology optimization, numerical simulations find the optimal distribution of material within a given volume, subject to certain constraints such as stress and displacement. Topology optimization can be used to create highly efficient and lightweight designs.
- In parametric design, the prerequisite is creating a design that can be easily modified by adjusting a set of parameters. Given the amount of control and constraint imposed by the parametric box embedding the project, and provided that manufacturing processes or materials are embedded in the description, the outcome of the design is, in general, acceptable without refinements. The question is: will it be the optimal result or just an improvement?
- In shape optimization, you search not just for a better solution but for the best one, or a good approximation to it. This is possible thanks to a more flexible and accessible approach than parametric design, as described above. Here, algorithm-driven modifications improve the performance of an existing design's shape, for example, by reducing aerodynamic drag or increasing structural stiffness. This is the most general approach and has several advantages, especially over parametric design, since shape optimization does not impose extra parametric work. It is thus a free-form deformation, as opposed to a parametric design, which can be imagined as a (multi-dimensional) box bounding the system's freedom to evolve its shape.
The idea is now to train the artificial neural network on the AI software to yield, for example, the hydrodynamic coefficients of a wing. The three-dimensional optimization based on machine learning is performed on meshes, whose shape evolves for every iteration until it reaches an “optimal” form for the given constraints.
What Are the Best Algorithms for Machine Learning-Based Optimization?
While there is no single best optimizer for every machine learning problem, here are a few factors influencing the choice of an optimizer:
- The type of model. Different optimizers work better with different types of models. For example, gradient descent is well-suited for linear models, while more complex models, such as deep neural networks, may benefit from more advanced optimizers such as Adam or RProp.
- The nature of the optimization problem. Some optimizers are better suited for convex optimization problems, while others are more robust in the presence of local minima or saddle points.
- The size of the training dataset. Some optimizers, such as stochastic gradient descent, are more efficient when the training dataset is large, while others may be more suitable for smaller datasets. Note that, of the total data available, one fraction should be allocated as training data and the remainder reserved as testing data or for hyperparameter tuning.
- Availability of computational resources. Some optimizers are more computationally intensive than others, so an approach based on the least computational power is more attractive and usable in various contexts without access to big HPC clusters.
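The data-allocation point above can be sketched as a simple shuffled split (the 80/20 fraction used here is a common convention, not a rule):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    # Shuffle, then carve off a held-out test set; the model must never
    # see test samples during training or hyperparameter tuning.
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Split 100 (toy) samples into 80 for training and 20 for testing.
train, test = train_test_split(list(range(100)))
```

Fixing the random seed makes the split reproducible, which matters when comparing optimizers or hyperparameters fairly.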
Applications of Machine Learning Based Optimization: Case Studies
The use of machine learning-based algorithms to find optimal solutions has several benefits, and the main two are automation and speed:
- Machine Learning Automation: Optimization based on machine learning can be automated, enabling quick and effective optimization without human interaction.
- Machine Learning Speed: Compared to conventional optimization techniques, machine learning-based methods can frequently find answers to optimization problems significantly more quickly.
Case Study 1: Heat Exchanger Optimization
Introduction - Heat Exchangers for the Automotive Industry
Heat exchangers are devices that transfer heat between two or more fluids.
Heat exchangers are widely used in many industries and applications, for example, in HVAC systems for the automotive industry, to control the degree of comfort for drivers and passengers in the vehicle. There is a wide range of acceptable designs for heat exchangers, and the number of parameters describing the heat exchanger geometry can quickly rise as the design becomes more and more complex.
Several design parameters must be considered when designing heat exchangers for the automotive industry. These include:
- Heat transfer rate: The rate at which heat is transferred between the two fluid streams must be sufficient to meet the system's needs.
- Pressure drop: The pressure drop across the heat exchanger should be minimized to reduce energy losses and increase efficiency, for two main reasons. First, a large pressure drop requires a larger pressure difference to drive the fluid flow, which requires more energy; this leads to increased pumping losses and reduced system efficiency. Second, a large pressure drop can reduce the fluid velocity, which lowers the heat transfer coefficient and the overall heat transfer rate; this degrades the heat exchanger's performance, decreases system efficiency, and can even lead to events such as boiling. By minimizing the pressure drop, the fluid velocity can be maintained at a high level, which helps to increase the heat transfer coefficient and the overall heat transfer rate, translating into improved performance and increased system efficiency.
- Size and weight: The size and weight of the heat exchanger should be minimized to save space and reduce the vehicle's overall weight.
- Durability: The heat exchanger must withstand the harsh operating conditions of the automotive environment, including high temperatures, vibration, and corrosion.
Since the corresponding numerical simulation is computationally costly, the design engineer can only afford to iterate on a few design parameters to try and improve the system.
Heat Exchanger Optimization With Neural Concept Shape (NCS)
PhysicsX, a team of UK-based scientists and engineers born out of F1, collaborated with Neural Concept to build a surrogate model to predict and optimize the performance of various heat exchanger designs with different topologies in real time.
The predictive performance of the trained Neural Concept Shape network was excellent (an R² score of 1 means perfect prediction compared to the ground truth; in this case, the ground truth was Siemens Simcenter STAR-CCM+ simulations):
- Mean Efficiency R² = 0.993
- Temperature (in the hot fluid) R² = 0.993
- Pressure drop (across the cold fluid) R² = 0.906
- Pressure drop (across the hot fluid) R² = 0.994
As anticipated, pressure drop is an overall quantity of importance for design engineers. The high R² scores mean that, for design purposes, the neural network predictions are reliable, especially on the hot fluid side. A value of 0.993 means that the predictive model explains 99.3% of the variation in the data.
The neural network model also predicts the detailed flow within the volume. This provides engineers with a deeper understanding of the phenomenon and the capability to investigate the heat exchanger point by point in 3D space (as with CFD).
On top of the surrogate model, the optimization library of Neural Concept was used, bringing substantial improvements to the final design.
Thus, the prediction-plus-optimization programme stated at the beginning of the article has been followed. Using a mirror reflection, the figure shows an accurate spatial comparison between the NCS AI prediction and the CFD ground truth (reference values) for temperature and fluid velocity.
Case Study 2: UAV Optimization
The project aimed to satisfy an unmanned aerial vehicle’s (UAV) aerodynamic and geometric requirements while optimizing its aerodynamic performance. In other words, as is common in engineering, the algorithm had to strike a balance between pursuing the best possible engineering performance and satisfying hard requirements.
During the optimization loop, simulation software (CAE) must be used to predict the aerodynamic performance of the UAV for different configurations. The optimization algorithm uses these simulations to evaluate the performance of different design alternatives and guide the optimization procedure.
We will show how NCS shifted the paradigm from difficult, time-consuming traditional simulation to lightweight AI prediction.
UAV Case Study: Levels of CFD simulation
We will now give more details for a use case developed in collaboration between Neural Concept, Sensefly and AirShaper (with a downloadable white paper), dealing with the optimal aerodynamic shape for a drone.
For the CFD simulation, the designers had access to three different levels of simulation.
- The first level provides a rough estimate and corresponds to a simplified CFD solver. This level also provides an overview of the optimization process.
- The second simulation level was an AirShaper concept simulation.
- The third simulation level was the most accurate one, and it was also the one that corresponded to a detailed AirShaper simulation.
Due to the high cost of running the third, detailed level of simulation, the total budget for such simulations was limited to just fifty (50) iterations, an amount insufficient to effectively improve a design inside our high-dimensional design space. How was it possible, then?
The project began with a complete optimization loop that included low-fidelity CFD. The neural network was pre-trained with this optimization, and intriguing design solutions were found with its assistance.
UAV Case Study: Objective and Procedure
The lift-to-drag ratio, also known as the L/D ratio or "glide ratio", is a measure of the aerodynamic efficiency of an aircraft or other object capable of generating lift. It is defined as the ratio of the lift force acting on an object to the drag force acting on the object.
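In code, the definition is trivial; since lift and drag are forces in the same units, the ratio is dimensionless:

```python
def lift_to_drag(lift_n, drag_n):
    """Glide ratio: lift force divided by drag force (same units)."""
    return lift_n / drag_n

# For example, 120 N of lift against 8 N of drag gives L/D = 15.
ratio = lift_to_drag(120.0, 8.0)  # 15.0
```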
The project's primary goal was to have a better lift-to-drag ratio, thus also increasing the UAV autonomy during operations, a crucial parameter for drones.
The figure depicts the L/D ratio across the iterations of the optimization process. The neural network and CFD predictions are shown together with their respective centred moving averages, and the relative error is reported alongside its moving average.
UAV Case Study: Optimization Framework
The optimization algorithm framework within NCS lets us pre-train a network with lower accuracy simulations, which is then fine-tuned with higher accuracy ones to do the final optimization at the most detailed level of simulation accuracy.
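A toy illustration of this pre-train/fine-tune idea, assuming a one-parameter surrogate and made-up fidelity data (not NCS's actual networks): many cheap, slightly biased samples get the model close, and a handful of accurate samples correct it.

```python
def fit(w, data, lr=0.1, epochs=500):
    """Train a one-parameter 'surrogate' y = w * x by gradient
    descent on mean squared error, starting from weight w."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Hypothetical fidelity levels: the cheap solver is slightly biased
# (coefficient 1.8), the detailed solver gives the true value (2.0).
low_fidelity = [(x, 1.8 * x) for x in (1.0, 2.0, 3.0, 4.0)]  # many cheap runs
high_fidelity = [(x, 2.0 * x) for x in (1.5, 3.5)]           # few expensive runs

w_pre = fit(0.0, low_fidelity)                 # pre-train on cheap data
w_fine = fit(w_pre, high_fidelity, epochs=50)  # fine-tune on accurate data
```

Because pre-training already places the weight near the answer, the fine-tuning stage needs only a few accurate samples and far fewer epochs, which mirrors the limited budget of 50 detailed simulations.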
A parameterization of the vertices was introduced to reduce the number of variables, ensure a smooth surface deformation, and handle the geometric constraints. Neural Concept Shape lets the user conveniently define a set of parametrization modules that can be stacked to generate and optimize deformations starting from an initial design.
All the modules can be interchanged at any point in the optimization process. The shape constraints representing onboard electronics that cannot be subject to interference can also be defined through a "projection parameterization".
The geometric constraints are shown in the figure; the objective block lets the user define the objective function to optimize.
The NCS framework allows us to define all these components separately and connect them within the framework to perform the algorithm and, in particular, to propagate the derivatives for gradient-based methods.
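Propagating derivatives through stacked components is, in essence, the chain rule. The sketch below uses two hypothetical scalar modules (a scaling deformation and a squaring "performance" map) rather than NCS's real parameterization blocks:

```python
def chain(modules, x):
    """Forward pass through stacked modules, accumulating the total
    derivative by the chain rule so a gradient-based optimizer can
    differentiate through the whole pipeline."""
    grad = 1.0
    for f, df in modules:
        grad *= df(x)  # local derivative, evaluated before applying f
        x = f(x)
    return x, grad

# Hypothetical two-module stack: a scaling deformation followed
# by a squaring 'performance' map, i.e. y = (3x)^2 = 9x^2.
scale = (lambda x: 3.0 * x, lambda x: 3.0)
square = (lambda x: x * x, lambda x: 2.0 * x)

value, derivative = chain([scale, square], 2.0)  # 36.0 and d/dx(9x^2) = 36.0
```

Real frameworks automate this bookkeeping (automatic differentiation), but the principle of composing local derivatives is the same.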
How Was the Shape Deformed?
The key technology to evolve from a basic design to an optimal one, in terms of shape deformation, was Radial Basis Functions (RBFs).
An RBF is a function whose value depends only on the distance between a point and a fixed centre. In the context of shape optimization, radial basis functions can be used to represent the shape of an object: a set of RBFs is defined, each anchored at a different point on the object's surface, and their values define the object's shape at these points. Because the shape is represented in this way, optimization algorithms can adjust the parameters of the RBFs to improve it. This is done by optimizing an objective function, such as maximizing the L/D ratio in our case.
In the UAV project, 50 control points were chosen on the drone surface using a farthest point sampling heuristic. Every point was free to move in the three spatial directions, giving 3 × 50 = 150 RBF parameters.
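A minimal sketch of the farthest-point heuristic, reduced to one dimension for readability (the real project samples 50 points on a 3D surface mesh):

```python
def farthest_point_sampling(points, k):
    """Greedy heuristic: repeatedly pick the point farthest from all
    points selected so far, spreading control points over the shape."""
    selected = [points[0]]  # arbitrary seed point
    dist = [abs(p - selected[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=dist.__getitem__)
        selected.append(points[i])
        # keep, for each point, its distance to the nearest selected one
        dist = [min(d, abs(p - points[i])) for d, p in zip(dist, points)]
    return selected
```

Spreading the control points this way avoids clustering them in one region of the surface, so the 150 RBF parameters cover the whole drone.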
RBF is a powerful and flexible technique for shape modelling with many advantages over traditional methods.
UAV Case Study: Results
The lift-to-drag ratio was raised by 4.25% compared to the original design, while drag decreased by 6.25% at the highest L/D ratio. Notably, the optimization process can already settle on substantially more efficient designs using only a small number of detailed simulations (40).
Interestingly, the automated optimization algorithm converged, on its own, to design features that engineers have used before but that were never shown to the AI.
The final result was a more “organic” looking shape, featuring an interesting anhedral wing layout. The optimized design’s wings have a more pronounced anhedral setup (wings pointing downward), as seen in the figure.
This configuration will affect the pressure pattern on the wings and, while not the purpose of this optimization, will produce a more dynamic reaction of the UAV (as opposed to the self-stabilizing effect of a dihedral setup, with wings pointing upward).
Many other examples are available for HVAC and Heat Exchangers.
Further Examples and Conclusion
Two successful examples highlighted how shape optimization may be applied to complicated industrial devices in the automotive and aerospace industries. The presented use cases were a heat exchanger and the entire body of a UAV. For the aerospace and automotive industries, many more external aerodynamics simulation examples show the applicability of AI.
Simulation and optimization software has been on the market for 30+ years. However, using CAE and optimization as decision-making tools has been constrained by two factors:
- The time necessary for execution (hours or days).
- The specialized knowledge needed (typically only about 10% of engineers are trained to run simulations).
By offering real-time simulation and democratized custom solutions accessible to all engineers, deep learning simulation and optimization shatter the constraints of conventional methodologies.
As a result, AI is ready to help engineers design faster, better products.