Discovering the mechanics of artificial and real meat

Artificial meat is an eco-friendly alternative to real meat that is marketed to have a similar taste and feel. The mechanical properties of artificial meat significantly influence our perception of taste, but how precisely the mechanics of artificial meat compare to real meat remains insufficiently understood. Here we perform mechanical tension, compression, and shear tests on isotropic artificial meat (Tofurky® Plant-Based Deli Slices), anisotropic artificial meat (Daring™ Chick’n Pieces), and anisotropic real meat (chicken) and analyze the data using constitutive neural networks and automated model discovery. Our study shows that, when deformed by 10%, artificial and real chicken display similar maximum stresses of 21.0 kPa and 21.8 kPa in tension, -7.2 kPa and -16.4 kPa in compression, and 2.4 kPa and 0.9 kPa in shear, while the maximum stresses for tofurky were 28.5 kPa, -38.3 kPa, and 5.5 kPa. To discover the mechanics that best explain these data, we consulted two constitutive neural networks of Ogden and Valanis-Landel type. Both networks robustly discover models and parameters to explain the complex nonlinear behavior of artificial and real meat for individual tension, compression, and shear tests, and for all three tests combined. When constrained to the classical neo Hooke, Blatz Ko, and Mooney Rivlin models, both networks discover shear moduli of 94.4 kPa for tofurky, 35.7 kPa for artificial chick’n, and 21.4 kPa for real chicken. Our results suggest that artificial chick’n succeeds in reproducing the mechanical properties of real chicken across all loading modes, while tofurky does not, and is about three times stiffer. Strikingly, all three meat products display shear softening, and their resistance to shear is about an order of magnitude lower than their resistance to tension and compression. We anticipate our study to inspire more quantitative, mechanistic comparisons of artificial and real meat.
Our automated-model-discovery based approach has the potential to inform the design of more authentic meat substitutes with an improved perception of taste, with the ultimate goal to reduce environmental impact, improve animal welfare, and mitigate climate change, while still offering the familiar taste and texture of traditional meat. Our source code, data, and examples will be available at https://github.com/LivingMatterLab/CANNs.

Figure 1: Mechanical testing of artificial and real meat. Tofurky® deli slices, Daring™ artificial chick'n, and real meat, from left to right, tested in uniaxial tension using the Instron 5848 test device, top, and in compression and shear using the AR-2000ex torsional rheometer, bottom. The artificial chick'n and real chicken were loaded along the fiber direction in all testing modes.

Sample Preparation
For the tension tests, we prepared strip samples of 2 cm in width to match the width of the specimen holder. We aligned the fiber direction of the artificial and real chicken with the direction of loading. For the compression and shear tests, we prepared cylindrical samples of 8 mm diameter using a biopsy punch to extract full-thickness cores from the center of each material. We aligned the fiber direction of the artificial and real chicken with the long axis of the cylindrical punch.

Sample Testing
For all test modes, we tested the samples raw and at room temperature of 25 °C. We performed all uniaxial tension tests using an Instron 5848 (Instron, Canton, MA) with a 100 N load cell, see Figure 1, top row. We mounted the sample, applied a small pre-load of 0.5 N, and calibrated the initial gage length L. We then increased the stretch quasi-statically at a rate of λ̇ = 0.2%/s for t = 50 s to a total stretch of λ = 1.1. We performed all uniaxial compression and shear tests using an AR-2000ex torsional rheometer (TA Instruments, New Castle, DE), see Figure 1, bottom row. We mounted the sample, applied a small pre-load of 0.5 N, and calibrated the initial gage length L. We then compressed the tofurky samples quasi-statically at the rheometer's minimum rate of 10 µm/s and the real chicken and artificial chick'n samples at a rate of λ̇ = 0.2%/s for t = 50 s, all to a total stretch of λ = 0.9. For the shear tests, we applied a small compressive pre-load and calibrated the initial gage length L. We then rotated the sample quasi-statically at a torsion rate φ̇ that corresponds to a shear rate of γ̇ = 0.2%/s for t = 50 s, to a total shear of γ = 0.1. To prevent slippage of the samples during the shear tests, we used a sandpaper-covered base plate of 20 mm diameter and a sandpaper-covered top plate of 8 mm diameter.

Analytical Methods and Data Processing
For each sample and each test mode, we used MATLAB (Mathworks, Natick, MA, USA) to smooth the curves using smoothingspline with SmoothingParam = 1. We interpolated the fit curve over 20 equally spaced points in the ranges 1.0 ≤ λ ≤ 1.1 for tension, 1.0 ≥ λ ≥ 0.9 for compression, and 0.0 ≤ γ ≤ 0.1 for shear. Finally, we averaged the five interpolated curves to obtain the mean and standard error of the mean.
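For readers who prefer Python, this processing step can be sketched as follows; this is only an illustration, with linear interpolation standing in for MATLAB's smoothingspline and five synthetic curves standing in for the measured data:

```python
import numpy as np

def resample_curve(stretch, stress, lo=1.0, hi=1.1, n=20):
    """Interpolate a measured stress-stretch curve onto n equally spaced
    stretch points in [lo, hi] (hypothetical helper; the paper smooths
    with MATLAB's smoothingspline before this step)."""
    grid = np.linspace(lo, hi, n)
    return grid, np.interp(grid, stretch, stress)

# five synthetic tension samples, each a slightly perturbed curve
rng = np.random.default_rng(0)
curves = []
for _ in range(5):
    lam = np.linspace(1.0, 1.1, 101)
    P = 21.0 * (lam - 1.0) / 0.1 * (1 + 0.02 * rng.standard_normal(lam.size))
    _, c = resample_curve(lam, P)
    curves.append(c)

curves = np.array(curves)
mean = curves.mean(axis=0)                      # mean curve over the samples
sem = curves.std(axis=0, ddof=1) / np.sqrt(5)   # standard error of the mean
```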

Kinematics
During testing, particles X of the undeformed sample map to particles x of the deformed sample via the deformation map ϕ such that x = ϕ(X). Similarly, line elements dX of the undeformed sample map to line elements dx of the deformed sample via the deformation gradient F such that dx = F · dX. The deformation gradient F is the gradient of the deformation map ϕ with respect to the undeformed coordinates X. Its spectral representation introduces the principal stretches λᵢ and the principal directions Nᵢ and nᵢ in the undeformed and deformed configurations, where F · Nᵢ = λᵢ nᵢ. Here we assume that all samples are perfectly incompressible, and their Jacobian, J = det(F), always remains equal to one, J = 1. For simplicity, we also assume that all samples are isotropic and have three principal invariants, I₁ = λ₁² + λ₂² + λ₃² and I₂ = λ₁²λ₂² + λ₂²λ₃² + λ₁²λ₃² and I₃ = λ₁²λ₂²λ₃² = J², which are linear, quadratic, and cubic in terms of the principal stretches squared. The principal stretches depend on the type of experiment.
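These kinematic quantities can be checked numerically by computing the principal stretches and invariants from a given deformation gradient, here an incompressible uniaxial stretch:

```python
import numpy as np

def principal_stretches(F):
    """Principal stretches: square roots of the eigenvalues of C = F^T F."""
    C = F.T @ F
    return np.sqrt(np.linalg.eigvalsh(C))

def invariants(lams):
    """Principal invariants I1, I2, I3 in terms of the squared stretches."""
    l2 = lams ** 2
    I1 = l2.sum()
    I2 = l2[0] * l2[1] + l2[1] * l2[2] + l2[0] * l2[2]
    I3 = l2.prod()
    return I1, I2, I3

# incompressible uniaxial stretch: F = diag(lam, 1/sqrt(lam), 1/sqrt(lam))
lam = 1.1
F = np.diag([lam, lam**-0.5, lam**-0.5])
I1, I2, I3 = invariants(principal_stretches(F))
```

For this deformation the third invariant equals one, consistent with J = 1, and the first two reduce to the tension and compression expressions given below.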

Tension and compression
In the tension and compression experiments, we apply a stretch λ = l/L, that we calculate as the ratio between the current and initial sample lengths l and L. We can write the deformation gradient F in matrix representation as F = diag(λ, λ^(-1/2), λ^(-1/2)), (3) and immediately identify the principal stretches, λ₁ = λ and λ₂ = λ₃ = λ^(-1/2). (4) In tension and compression, the first and second invariants and their derivatives are I₁ = λ² + 2/λ and I₂ = 2λ + 1/λ² with ∂_λ I₁ = 2λ − 2/λ² and ∂_λ I₂ = 2 − 2/λ³. (5)

Shear
In the shear experiment, we apply a torsion angle φ, that translates into the shear strain, γ = rφ/L, by multiplying it with the sample radius r and dividing by the initial sample length L. We can write the deformation gradient F in matrix representation as F = [[1, γ, 0], [0, 1, 0], [0, 0, 1]], (6) and calculate the principal stretches, λ₁ = γ/2 + √(1 + γ²/4) and λ₂ = 1 and λ₃ = −γ/2 + √(1 + γ²/4). (7)
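The closed-form principal stretches for simple shear can be verified against the eigenvalues of C = FᵀF; note that the product λ₁λ₃ equals one, consistent with incompressibility:

```python
import numpy as np

gamma = 0.1  # total applied shear, as in the experiments

# simple shear deformation gradient
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

# principal stretches from the eigenvalues of C = F^T F, sorted descending
lams = np.sort(np.sqrt(np.linalg.eigvalsh(F.T @ F)))[::-1]

# closed-form values
l1 = gamma / 2 + np.sqrt(1 + gamma**2 / 4)
l3 = -gamma / 2 + np.sqrt(1 + gamma**2 / 4)
```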

Constitutive Equations
Constitutive equations relate a stress like the Piola or nominal stress P, the force per undeformed area that is commonly measured in experiments, to a deformation measure like the deformation gradient F. For a hyperelastic material that satisfies the second law of thermodynamics, we can express the Piola stress, P = ∂ψ(F)/∂F, as the derivative of the Helmholtz free energy function ψ(F) with respect to the deformation gradient F, modified by a pressure term, −p F^(-T), to ensure perfect incompressibility. Here, the hydrostatic pressure, p = −⅓ P : F, acts as a Lagrange multiplier that we determine from the boundary conditions. Instead of formulating the free energy function directly in terms of the deformation gradient ψ(F), we can either express it in terms of the invariants, ψ(I₁, I₂, I₃), to yield the following expression for the Piola stress, P = 2 [∂ψ/∂I₁ + I₁ ∂ψ/∂I₂] F − 2 ∂ψ/∂I₂ F · Fᵀ · F − p F^(-T), (9) or in terms of the principal stretches, ψ(λ₁, λ₂, λ₃), to result in the following Piola stress, P = Σᵢ₌₁³ [∂ψ/∂λᵢ − p/λᵢ] nᵢ ⊗ Nᵢ. (11) Here we focus on principal-stretch based constitutive models.
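As a concrete instance of the principal-stretch-based Piola stress with the pressure determined from the boundary conditions, consider an incompressible neo Hookean solid in uniaxial tension; this sketch borrows the 21.4 kPa shear modulus discovered for real chicken purely as an illustrative number:

```python
import numpy as np

mu = 21.4e3  # Pa, shear modulus discovered for real chicken in this study

def piola_uniaxial_neo_hooke(lam, mu):
    """Piola stress P11 for an incompressible neo Hookean solid in
    uniaxial tension; the pressure p follows from P22 = P33 = 0."""
    # principal stretches for incompressible uniaxial deformation
    l1, l2 = lam, lam**-0.5
    # psi = mu/2 * (l1^2 + l2^2 + l3^2 - 3), so dpsi/dl_i = mu * l_i;
    # P_ii = dpsi/dl_i - p/l_i, and P22 = 0 gives p = mu * l2^2
    p = mu * l2**2
    return mu * l1 - p / l1

P11 = piola_uniaxial_neo_hooke(1.1, mu)
```

The result reduces to the familiar P₁₁ = µ [λ − 1/λ²], and it vanishes in the undeformed state λ = 1.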

Neural network modeling
Motivated by these kinematic and constitutive considerations, we reverse-engineer two families of principal-stretch based neural networks that satisfy the conditions of thermodynamic consistency, material objectivity, material symmetry, incompressibility, constitutive restrictions, and polyconvexity by design [20,41]. Instead of building these constraints into the loss function [8,38], we hardwire them directly into our network input, output, architecture, and activation functions [27] to satisfy the fundamental laws of physics. We compare two different network architectures, an Ogden type network in Section 3.1 and a Valanis-Landel type network in Section 3.2. Special members of this family represent well-known constitutive models, including the neo Hooke [42], Blatz Ko [3], and Mooney Rivlin [32,39] models, for which the network weights gain a clear physical interpretation [28,40].

Ogden type neural network
Our first neural network is inspired by the Ogden model [35] and uses a free energy function that is parameterized in terms of the principal stretches, λ₁, λ₂, λ₃, ψ = Σᵢ₌₁ⁿ µᵢ/αᵢ [λ₁^αᵢ + λ₂^αᵢ + λ₃^αᵢ − 3]. (12) In this general form, the Ogden model consists of n terms and introduces 2n parameters, n stiffness-like parameters µᵢ and n nonlinearity parameters αᵢ, which collectively translate into the classical shear modulus µ from linear theory [19]. Motivated by the free energy function (12), we reverse-engineer a hyperelastic, perfectly incompressible, isotropic, principal-stretch-based Ogden type neural network [40]. The network takes the deformation gradient F for tension and compression (3) or shear (6) as input and computes the associated principal stretches λ₁, λ₂, λ₃ using (4) or (7). From these stretches, it determines n = 20 Ogden terms, with fixed exponential coefficients, here ranging from α₁ = −30 to αₙ = +30 in increments of three, that make up the n nodes of the hidden layer of the model. The sum of all Ogden terms defines the strain energy function ψ as the network output. Figure 2 illustrates our Ogden type network with n = 20 nodes, for which the free energy function takes the following form, ψ = Σₖ₌₁²⁰ wₖ Σᵢ₌₁³ [λᵢ^αₖ − 1]. (13) The network represents a 20-term Ogden model with fixed exponents αₖ ranging from -30 to +30 in increments of three. It takes the deformation gradient F as input and computes the three principal stretches λ₁, λ₂, λ₃. From them, it calculates the Ogden terms for the 20 nodes of the hidden layer, multiplies them by the network weights wₖ, and sums up all terms to the strain energy function ψ(F) as output. The derivative of the strain energy function defines the Piola stress, P = ∂ψ/∂F, whose components P₁₁ or P₁₂ enter the normal force N or torque M in the loss function to minimize the error between the model and the tension, compression, and shear data.
Here, wₖ are the network weights and Σᵢ₌₁³ [λᵢ^αₖ − 1] are the activation functions. From the free energy (13), we calculate the stress using equation (11), in terms of the derivative of the free energy ψ with respect to the stretches λᵢ, ∂ψ/∂λᵢ = Σₖ₌₁²⁰ wₖ αₖ λᵢ^(αₖ−1). (14) The network weights wₖ are non-negative [2], and relate to the stiffness-like parameters µₖ and fixed exponential coefficients αₖ as wₖ = µₖ/αₖ ≥ 0. For our specific network with n = 20, the αₖ coefficients are αₖ = 3k − n − 13 for k ≤ 10 and αₖ = 3k − n − 10 for k ≥ 11. For this particular network, we recover the classical shear modulus µ from the linear theory in terms of the stiffness-like parameters µₖ as µ = ½ Σₖ₌₁²⁰ µₖ αₖ, or, equivalently, in terms of the network weights wₖ as µ = ½ [30² w₁ + 27² w₂ + .... + 27² w₁₉ + 30² w₂₀]. During training, our network autonomously identifies the best subset of activation functions from (2²⁰ − 1) = 1,048,575 possible combinations of terms, and discovers the best model from more than a million possible models. At the same time, it naturally trains the weights of the less important terms to zero.
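The Ogden type free energy and its linearized shear modulus in terms of the network weights can be sketched as follows; the sparse weight vector at the end is hypothetical, chosen purely for illustration:

```python
import numpy as np

# fixed Ogden exponents: -30, ..., -3, +3, ..., +30 in increments of three
alphas = np.array([3 * k - 33 for k in range(1, 11)] +
                  [3 * k - 30 for k in range(11, 21)], dtype=float)

def ogden_energy(lams, w):
    """20-term Ogden free energy: psi = sum_k w_k * sum_i (lam_i^alpha_k - 1)."""
    lams = np.asarray(lams, dtype=float)
    return sum(wk * np.sum(lams**ak - 1.0) for wk, ak in zip(w, alphas))

def shear_modulus(w):
    """Linearized shear modulus: mu = 1/2 * sum_k w_k * alpha_k^2."""
    return 0.5 * np.sum(w * alphas**2)

# hypothetical sparse weight vector: only the lam^+3 term is active
w = np.zeros(20)
w[10] = 2.0  # node k = 11 carries exponent alpha = +3
```

During training, most entries of w shrink to zero, so the discovered model is a sparse sum of a few such terms.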

Valanis-Landel type neural network
Our second neural network is inspired by the Valanis Landel model [44], which postulates that the free energy function ψ can be expressed as the sum of a subfunction f of the principal stretches, λ₁, λ₂, λ₃, ψ = f(λ₁) + f(λ₂) + f(λ₃). (15) Comparisons with experimental data showed that logarithmic subfunctions f(λᵢ) = 2µ λᵢ [ln(λᵢ) − 1] with derivatives ∂f/∂λᵢ = 2µ ln(λᵢ) perform well in fitting uniaxial and biaxial data from natural rubber [44]. Other possible choices for the subfunction are the terms of the classical Ogden model [35] from equation (12). Motivated by the free energy function (15) and these considerations for the subfunction f(λᵢ), we reverse-engineer a hyperelastic, perfectly incompressible, isotropic, principal-stretch based Valanis-Landel type neural network. Similar to the Ogden type network, the Valanis-Landel type network takes the deformation gradient F for tension and compression (3) or shear (6) as input and computes the associated principal stretches λ₁, λ₂, λ₃ using (4) or (7). From these stretches, it calculates the positive and negative second and fourth powers, λᵢ^(+2), λᵢ^(−2), λᵢ^(+4), λᵢ^(−4), and, for all four, calculates the Ogden type, exponential, and logarithmic terms. To give the network the additional freedom to discover larger Ogden type powers, we also include two terms, [λᵢ^(−w(•)) − 1] and [λᵢ^(+w(•)) − 1], with trainable exponents, −w(•) and +w(•), resulting in a total of n = 4 × 3 + 2 = 14 terms. The sum of all Valanis-Landel terms defines the strain energy function ψ as the network output. Figure 3 illustrates our Valanis-Landel type network, whose free energy function (16) sums all fourteen terms: the network raises the principal stretches to their powers in the first hidden layer, multiplies them with their weights w1,(•), calculates their Ogden type terms, exponentials, and logarithms in the second hidden layer, multiplies them with their weights w2,(•), and adds all terms to the strain energy function ψ(F) as output.
The derivative of the strain energy function defines the Piola stress, P = ∂ψ/∂F, whose components P 11 or P 12 enter the normal force N or torque M in the loss function to minimize the error between the model and the tension, compression, and shear data.
Here w1,k are the unit-less network weights from the first hidden layer and w2,k are the stiffness-type network weights from the second hidden layer as shown in Figure 3. From the free energy (16), we calculate the stress using equation (11), in terms of the derivative of the free energy ψ with respect to the principal stretches λᵢ. The network has 14 unit-less weights w1,(•) between the two hidden layers and 14 stiffness-type weights w2,(•) after the second hidden layer, with four redundant weight products, (w2,1 w1,1), (w2,5 w1,5), (w2,8 w1,8), (w2,12 w1,12), resulting in a total of 24 independent weights.
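The Valanis-Landel ansatz and its logarithmic subfunction can be checked numerically; the added constant in f below is an assumption of this sketch, normalizing the energy to vanish in the undeformed state, and does not affect the derivative:

```python
import numpy as np

def f_log(lam, mu):
    """Valanis-Landel logarithmic subfunction f(lam) = 2*mu*[lam*(ln(lam)-1)+1],
    whose derivative is f'(lam) = 2*mu*ln(lam); the +1 is a normalization
    constant assumed here so that f(1) = 0."""
    return 2.0 * mu * (lam * (np.log(lam) - 1.0) + 1.0)

def psi_vl(lams, mu):
    """Valanis-Landel ansatz: the free energy is the sum of one
    subfunction evaluated at each principal stretch."""
    return sum(f_log(l, mu) for l in lams)

# verify the stated derivative by a central finite difference
mu, lam, h = 1.0, 1.3, 1e-6
num = (f_log(lam + h, mu) - f_log(lam - h, mu)) / (2 * h)
```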

Special cases
Our constitutive neural network in Figure 3 is a generalization of popular constitutive models. Specifically, we obtain the following one- and two-term models by setting the remaining network weights to zero.
The neo Hooke model [42] only uses the fixed exponent +2. Its free energy function is ψ = ½ µ [λ₁² + λ₂² + λ₃² − 3] and its shear modulus is µ = 2 w₁ w₂, twice the product of its two active network weights. The Blatz Ko model [3] only uses the fixed exponent −2. Its free energy function is ψ = ½ µ [λ₁⁻² + λ₂⁻² + λ₃⁻² − 3] and its shear modulus is µ = 2w1,1 w2,1.

Loss function
Our constitutive neural networks learn the network weights, w = {w₁, ..., wₙ}, by minimizing a loss function L that penalizes the mean squared error, the L2-norm of the difference between model and data divided by the number of training points n_train. Specifically, the loss function contains the mean squared error between the normal forces of the model N(λᵢ) and the stretch-force pairs, {λ, N}ᵢ, of the tension and compression experiments, and between the torque of the model M(φᵢ) and the torsion angle-torque pairs, {φ, M}ᵢ, of the shear experiment. To reduce potential overfitting, we also study the effects of Ridge or L2 regularization, L → L + α ||w||₂², where α is the penalty parameter or regularization coefficient and ||w||₂² = Σₖ wₖ² is the weighted L2 norm.
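The structure of this loss can be sketched as follows, in a minimal version that omits the force- and torque-specific terms:

```python
import numpy as np

def mse_loss(pred, data):
    """Mean squared error: squared L2-norm of (model - data) divided by
    the number of training points."""
    pred, data = np.asarray(pred), np.asarray(data)
    return np.sum((pred - data) ** 2) / pred.size

def ridge_loss(pred, data, w, alpha):
    """MSE supplemented by L2 (Ridge) regularization alpha * sum_k w_k^2."""
    return mse_loss(pred, data) + alpha * np.sum(np.asarray(w) ** 2)

L0 = mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])               # plain MSE
L1 = ridge_loss([1.0, 2.0, 3.0], [1.0, 2.0, 4.0], [0.5, 0.5], 0.001)
```

The regularization term penalizes large weights and drives the weights of unimportant terms toward zero, which is what produces the sparse discovered models.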

Tension and compression
In the tension and compression experiments, the data are recorded as stretch-force pairs, {λ, N}ᵢ, and the model output is the normal force as a function of the stretch, N(λᵢ), which we calculate by integrating the normal component of the network model stress P₁₁ across the cross section, dA = r dr dθ, of the sample [16]. The normal stress P₁₁ follows from the general stress equation (9), to obtain the following explicit uniaxial stress-stretch relation for a hyperelastic, isotropic, incompressible material, P₁₁ = 2 [∂ψ/∂I₁ + 1/λ ∂ψ/∂I₂] [λ − 1/λ²]. (25) Alternatively, we can evaluate the general stress equation (9) in the principal stretch space, Pᵢᵢ = ∂ψ/∂λᵢ − p/λᵢ for i = 1, 2, 3, using a principal-stretch-based formulation. Here, p denotes the hydrostatic pressure that we determine from the zero normal stress condition, P₂₂ = 0 and P₃₃ = 0, as p = −2λ ∂ψ/∂λ, to obtain an explicit equation for the normal stress, P₁₁ = ∂ψ/∂λ, (26) where ∂ψ/∂λ is the derivative of the free energy ψ with respect to the applied stretch λ according to equations (16) or (17). From equations (25) and (26) we conclude that the normal stress is constant across the cross section, independent of the radius r, and we can evaluate the normal force explicitly as N = P₁₁ A, (27) where the cross section area is A = π r² for cylindrical compression samples and we replace this term by A = b t for rectangular tension samples. This implies that we can translate the recorded stretch-force pairs {λ, N} into stretch-stress pairs {λ, P₁₁} with P₁₁ = N / A. (28)
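The force-to-stress conversion of equations (27) and (28) amounts to a division by the cross section area; the sample dimensions below are hypothetical, for illustration only:

```python
import numpy as np

def tension_stress(N, b, t):
    """Nominal stress P11 = N / A for a rectangular tension sample of
    width b and thickness t (force in N, lengths in m)."""
    return N / (b * t)

def compression_stress(N, r):
    """Nominal stress P11 = N / (pi r^2) for a cylindrical sample."""
    return N / (np.pi * r**2)

# illustrative numbers: 0.5 N on a 20 mm wide strip of assumed 2 mm thickness
P11 = tension_stress(0.5, 0.020, 0.002)   # 12.5 kPa
```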

Shear
In the shear experiment, the data are recorded as torsion angle-torque pairs, {φ, M}ᵢ, and the model output is the torque as a function of the torsion angle, M(φᵢ), which we calculate by integrating the shear component of the network model stress P₁₂ times its moment arm r across the cross section, dA = r dr dθ, of the sample [16], M = ∫_A P₁₂ r dA. (29) The shear stress P₁₂ follows from the general stress equation (9), and we obtain the following explicit shear stress-stretch relation for a hyperelastic, isotropic, incompressible material, P₁₂ = 2 [∂ψ/∂I₁ + ∂ψ/∂I₂] γ. (30) Unlike the normal stress in equations (25) and (26), the shear stress in equation (30) is not constant across the cross section but varies with the radius r. In other words, the derivatives dψ(r)/dI₁ and dψ(r)/dI₂ can be functions of the radius r through the explicit dependence on the shear strain, γ = rφ/L. This implies that we can obtain a relation between the torque M and the torsion angle φ, but we cannot simplify the integral any further. Here, we evaluate the integral along the radius r using numerical integration [12]. For example, using the trapezoidal rule, ∫₀ʳ f(r̃) dr̃ ≈ r [f(0) + f(r)]/2, we obtain explicit relations between the torque M, torsion angle φ, shear γ = F₁₂, and shear stress P₁₂. These equations are exact for the popular examples of the neo Hooke and Blatz Ko models with ψ = µ [I₁ − 3]/2 and ψ = µ [I₂ − 3]/2, for which we recover the classical explicit linear relations between the torque M, torsion angle φ, shear γ, and shear stress P₁₂, M = π µ r⁴ φ / (2L) with P₁₂ = µ γ. For this special case, we can translate the recorded torsion angle-torque pairs {φ, M} into shear stretch-stress pairs {γ, P₁₂} with γ = r φ / L and P₁₂ = 2 M / (π r³). (34) Traditionally, rheometer tests have been performed on stiff materials with small deformations, well in the linear regime, and have assumed this linear constitutive behavior a priori.
For hyperelastic soft materials, with finite deformations and a nonlinear constitutive behavior [18], depending on whether the shear stress-stretch curve is concave or convex, equations (32) to (34) might very well under- or over-estimate the shear stress P₁₂.
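For the linear special case, the conversion of equation (34) from torsion angle-torque pairs to shear-stress pairs can be sketched as follows; the sample radius and height below are hypothetical:

```python
import numpy as np

def shear_pairs(phi, M, r, L):
    """Convert torsion angle-torque pairs (phi, M) to shear-stress pairs
    (gamma, P12), assuming the linear relations that are exact for the
    neo Hooke and Blatz Ko models."""
    gamma = r * phi / L              # shear at the outer radius
    P12 = 2.0 * M / (np.pi * r**3)   # shear stress at the outer radius
    return gamma, P12

# illustrative values: r = 4 mm, L = 5 mm (assumed sample height)
gamma, P12 = shear_pairs(phi=0.125, M=1.0e-4, r=0.004, L=0.005)
```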
Motivated by these considerations, we reparameterize the loss function in terms of the mean squared error between the normal stresses of the model P₁₁(λᵢ) and the stretch-stress pairs, {λ, P₁₁}ᵢ, of the tension and compression experiments, and between the shear stress of the model P₁₂(γᵢ) and the shear-stress pairs, {γ, P₁₂}ᵢ, of the shear experiment, both scaled by the maximum absolute stress, maxᵢ{||P₁₁,ᵢ||} or maxᵢ{||P₁₂,ᵢ||}, supplemented by L2 regularization, where α is the penalty parameter or regularization coefficient and ||w||₂² = Σₖ wₖ² is the weighted L2 norm. We train the network by minimizing these loss functions and learn the network parameters wₖ using the ADAM optimizer, a robust adaptive algorithm for gradient-based first-order optimization, and constrain the weights to always remain non-negative, wₖ ≥ 0.
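A minimal training sketch on synthetic one-term data illustrates the constrained fit; the paper uses the ADAM optimizer, while this sketch substitutes plain projected gradient descent to stay short, keeping the same non-negativity constraint on the weight:

```python
import numpy as np

# synthetic tension data from a one-term model with mu = 21.4 kPa
mu_true = 21.4
lam = np.linspace(1.0, 1.1, 20)
basis = lam - lam**-2            # uniaxial stress basis, P11 = w * basis
P_data = mu_true * basis

# projected gradient descent on the mean squared error, with w >= 0
w, lr = 0.0, 10.0
for _ in range(500):
    grad = 2.0 * np.mean((w * basis - P_data) * basis)
    w = max(w - lr * grad, 0.0)  # non-negativity projection after each step
```

Because the synthetic data are noise-free, the weight recovers the generating stiffness exactly, up to numerical precision.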

Training and Testing Data
We train and test our Ogden and Valanis-Landel networks using tension, compression, and shear data from the tofurky, artificial chick'n, and real chicken as reported in Table 1. We perform single-mode training using a single loading case, either tension, compression, or shear, as training data and the remaining two cases as held-out test data. We perform multi-mode training using all three loading cases simultaneously as training data. Figure 4 reports the tension, compression, and shear data for the tofurky, artificial chick'n, and real chicken converted into stretch-stress and shear-stress pairs using equations (28) and (34) with trapezoidal-rule type numerical integration. The means ± standard error of the means reveal that there is a relatively small sample-to-sample variation. Notably, tofurky displays the stiffest response of all three materials, in all three modes, in tension, compression, and shear. Artificial chick'n and real chicken exhibit similar mechanical behaviors. We use the mean curves and values reported in Table 1 to train and test our neural networks.
Table 1: Tofurky, artificial chick'n, and real chicken tested in tension, compression, and shear. Stresses are reported as means from the loading and unloading curves of n samples tested in the ranges 1.0 ≤ λ ≤ 1.1 for tension, 1.0 ≥ λ ≥ 0.9 for compression, and 0.0 ≤ γ ≤ 0.1 for shear.
Figure 5 illustrates the automatically discovered constitutive models for tofurky using the twenty-term isotropic, perfectly incompressible Ogden type network from Figure 2. The three rows illustrate the Piola stress as a function of the stretch or shear for tension, compression, and shear. The first three columns indicate single-mode training for tension, compression, and shear, with training on the diagonal and testing on the off-diagonal. The last column shows the results of multi-mode training with all loading modes as training data. Each graph reports the goodness of fit R² for either training or testing.
The circles indicate the experimental data, while the color-coded regions designate the contributions of the twenty model terms to the free energy function ψ. Warm red-type colors indicate that the exponential power is negative, while cold blue-type colors indicate that the exponential power is positive. Table 2 shows the discovered weights and shear modulus for multi-mode training. For training on the tension data, the network discovers a single negative term, the orange λᵢ⁻¹⁸ term in the left column, while for the other loading modes, it discovers a wide range and number of terms. Figure 6 illustrates the automatically discovered models for artificial chick'n using the twenty-term isotropic, perfectly incompressible Ogden type network from Figure 2. Table 2 shows the discovered weights and shear modulus of 31.00 kPa for multi-mode training. First, we note that for single-mode training, the model succeeds in fitting the individual sets of training data with R²train values of 0.9967, 0.9409, and 0.9705 for tension, compression, and shear. Second, for single-mode training, the model performs well in predicting the shear data for training on compression with R²test of 0.9682. Third, the network finds a moderately good fit of the data for multi-mode training, with training fit R²train of 0.6079, 0.7627, and 0.8567 for tension, compression, and shear. Fourth, single-mode training with tension or shear finds negative red-type terms, while single-mode training with compression or multi-mode training finds positive blue-type terms. Lastly, all types of training discover a small subset of the twenty total possible terms even without additional regularization, with between one and five terms visibly contributing to the stress function. Multi-mode training discovers only five positive terms from λᵢ⁺¹⁸ to λᵢ⁺³⁰. Figure 7 illustrates the automatically discovered models for real chicken using the twenty-term isotropic, perfectly incompressible Ogden type network from Figure 2.
Table 2 shows the discovered weights and shear modulus of 21.39 kPa for multi-mode training. First, we observe that for single-mode training, the principal-stretch-based model succeeds in fitting the individual sets of training data with R²train values of 0.9994, 0.9764, and 0.9727 for tension, compression, and shear. Second, for single-mode training, the model performs well in predicting the tension data for training on compression with R²test of 0.9827. Third, the network is unable to find a single model to fit the data for multi-mode training, with training fit R²train of 0.3226, 0.3424, and 0.0000 for tension, compression, and shear. Fourth, for single-mode training with tension, the discovered terms are primarily negative red-type terms, for single-mode training with compression the terms are primarily positive blue-type terms, for single-mode training with shear there are both positive and negative terms, and multi-mode training also finds both positive and negative terms. Lastly, all types of training discover a small subset of the twenty total possible terms even without additional regularization, with between two and seven terms visibly contributing to the stress function. Multi-mode training discovers only two terms, the dark blue and dark red λᵢ⁻³⁰ and λᵢ⁺³⁰ terms in the right column. Figure 8 illustrates the automatically discovered constitutive models for tofurky using the fourteen-term isotropic, perfectly incompressible, Valanis-Landel type network from Figure 3 with a penalty parameter of 0.001 for L2 regularization. First, for single-mode training, the model succeeds in fitting the individual sets of training data with R²train values of 0.9993, 1.0000, and 0.9989 for tension, compression, and shear, respectively. In tension, the negative exponent trains to -14.42, while the positive exponent is not activated significantly. In compression, both exponents train to non-zero values, -11.43 and +13.14.
In shear, they train to comparable values of -13.76 and +12.83. Second, for single-mode training, the model performs moderately well in predicting the tension data for training on compression with R²test of 0.6908 and for predicting compression data for training on shear with R²test of 0.7257. Third, the network is able to find a good fit to the data for multi-mode training, with training fit R²train of 0.8753, 0.9583, and 0.8268 for tension, compression, and shear, respectively. In multi-mode training, the exponents train to -23.25 and +16.43. Fourth, single-mode training on tension data activates primarily the red-colored negative exponent while the other training modes activate both the red- and turquoise-colored exponents with large contributions to the stress function.
Table 3: Tofurky, artificial chick'n, and real chicken and discovered Valanis-Landel type models and parameters. Models and parameters are discovered for simultaneous training with tension, compression, and shear data using the Valanis-Landel type neural network from Figure 3. Tofurky weights for regularization parameter α = 0.1 from Figure 11. Artificial chick'n and real chicken weights for α = 0.001 from Figures 9 and 10. Summary of the weights wₖ and goodness of fit R² for training in tension, compression, and shear, with n = 5, 5, 5 samples for tofurky, chick'n, and chicken.
Figure 9 illustrates the automatically discovered constitutive models for artificial chick'n using the fourteen-term isotropic, perfectly incompressible, Valanis-Landel type network from Figure 3 with a penalty parameter of 0.001 for L2 regularization. Table 3 shows the discovered weights for multi-mode training. First, for single-mode training, the model succeeds in fitting the individual sets of training data with R²train values of 0.9998, 0.9867, and 0.9989 for tension, compression, and shear, respectively.
In tension, the negative exponent trains to -12.89, while the positive exponent is not activated significantly. In compression, the positive exponent trains to +10.13, while the negative exponent is not activated significantly. In shear, both exponents contribute with comparable values of -9.23 and +9.19. Second, for single-mode training, the model performs moderately well in predicting the shear data for training on compression with R²test of 0.8649 and for predicting compression data for training on shear with R²test of 0.7257. Third, the network finds a moderately good fit to the data for multi-mode training, with training fit R²train of 0.5368, 0.9352, and 0.8345 for tension, compression, and shear, respectively. Fourth, multi-mode training does not activate either of the power terms and only discovers four terms that contribute to the stress function, the yellow λᵢ⁻⁴ term, the green ln(λᵢ⁻⁴) term, the blue λᵢ⁺⁴ term, and the dark blue ln(λᵢ⁺⁴) term in the right column. Figure 10 illustrates the automatically discovered constitutive models for real chicken using the fourteen-term isotropic, perfectly incompressible, Valanis-Landel type network from Figure 3 with a penalty parameter of 0.001 for L2 regularization. Table 3 shows the discovered weights for multi-mode training. First, for single-mode training, the model succeeds in fitting the individual sets of training data with R²train values of 0.9999, 0.9949, and 0.9995 for tension, compression, and shear, respectively. In tension, the exponents train to -8.90 and +12.43. In compression, the positive exponent trains to +11.15, while the negative exponent is not activated significantly. In shear, both train to -8.32 and +9.96. Second, for single-mode training, the model performs well in predicting the compression data for training on tension with R²test of 0.9774 and in predicting tension data for training on compression with R²test of 0.9773.
Third, the network is unable to find an adequate fit of the data for multi-mode training, with training fit R²train of 0.2821, 0.2857, and 0.0000 for tension, compression, and shear, respectively. In multi-mode training, the red exponent trains to -42.23. Fourth, single-mode training discovers a wide range and number of terms while multi-mode training only discovers two terms, the red λᵢ⁻ⁿ term and the dark blue ln(λᵢ⁺⁴) term in the right column. Figure 11 illustrates the effect of added L2 regularization in multi-mode training for tofurky. The L2 penalty parameter α takes the values 0.100, 0.010, 0.001, and 0.000. Table 3 summarizes the discovered weights for multi-mode training with the α = 0.100 parameter. First, the number of discovered terms ranges from eight with zero penalty, on the right, to four with the largest penalty, on the left. With a penalty parameter of 0.1, the four discovered terms are the red λᵢ⁻ⁿ term, the turquoise λᵢ⁺ⁿ term, the yellow λᵢ⁻⁴ term, and the blue λᵢ⁺⁴ term, with the first two terms dominating in the left column. The exponents train to -21.19 and +20.54. Second, with an increasing penalty parameter α, the red and turquoise power terms contribute more to the stress function while all other terms contribute less. Third, increasing the penalty parameter has a negligible effect on the combined fit of the data, with R²train values varying by less than 0.01 across all penalty parameters and loading modes. Figure 12 illustrates the automatically discovered parameters for tofurky when the Valanis-Landel type network from Figure 3 is restricted to the green neo Hooke term, the dark red Blatz Ko term, the dark red and green Mooney Rivlin terms, and the red and turquoise two term Ogden terms, and simultaneously trained with tension, compression, and shear data.

Neo Hooke, Blatz Ko, and Mooney Rivlin mechanics of artificial and real meat
, dark red and green Mooney Rivlin terms, , and red and turquoise Ogden terms, ψ = 2 2 , and simultaneously trained with tension, compression, and shear data. First, the neo Hooke, Blatz Ko, and Mooney Rivlin models all provide notably similar fits to the data, with R 2 train values of 0.8777 or 0.8778, 0.9286, and 0.7329 or Nominal stress as a function of stretch or shear strain for the Valanis-Landel type network from Figure 3 trained with all three load cases simultaneously. Circles represent the experimental data. The neo Hooke model uses only the +2 term, the Blatz Ko the -2 term, the Mooney Rivlin the -2 and +2 terms, and the two term Ogden the -n and +n terms. Color-coded regions designate the +2, -2, +2/-2, and +n/-n model terms to the stress function according to Fig. 3 multiplied by the weights from Table 4. Coefficients of determination R 2 indicate goodness of fit for train data. Summary of the non-zero weights from Figure 3 and detailed in equation (16), the shear moduli µ, and the goodness of fit R 2 for the training data.
neo Hooke Blatz Ko Mooney Rivlin two term Ogden ten+com+shr ten+com+shr ten+com+shr ten+com+shr n = 5, 5, 5 n = 5, 5, 5 n = 5, 5, 5 n = 5, 5, 5 tofurky tofurky tofurky tofurky 0.7330 in tension, compression, and shear. Second, the two term Ogden model is able to discover a better simultaneous fit with a significant improvement for the shear data. The R 2 train values are 0.8717, 0.9734, and 0.8147. The negative Ogden exponent trains to −α 1 = −17.27, while the positive Ogden exponent trains to +α 2 = +16.76. Table 4 shows the weights, shear moduli µ, and goodness of fit R 2 when the Valanis-Landel type network from Figure 3 is restricted to the neo Hooke, Blatz Ko, Mooney Rivlin,  train values of 0.6175, 0.7540, and 0.8501 in tension, compression, and shear. Third, none of the four models provide an adequate fit for real chicken. Of all four models, the two term Ogden provides the best fit with R 2 train of 0.4013, 0.1676, and 0.0000 in tension, compression, and shear. Taken together, the two term Ogden model outperforms the neo Hooke, Blatz Ko, and Mooney Rivlin models for tofurky, artificial chick'n, and real chicken.

Discussion
Artificial meat is marketed as an eco-friendly, plant-based protein alternative to traditional meat sources with similar taste and texture [15,34]. Without a doubt, the mechanical properties of both real and artificial meat influence our eating experience [10,36]. To quantify the mechanical properties of artificial meat, we investigated Tofurky® deli slices and Daring™ artificial chick'n, and compared them to real chicken. The tofurky was quite thin, only 1-1.5 mm thick on average, and was a rather isotropic, homogeneous material without distinct fiber directions. The artificial chick'n had distinct sheets, and we tested it in tension, compression, and shear parallel to the fiber direction, as shown in Figure 1. For each meat type, we performed five tests per sample in tension, compression, and shear, and found that the standard error of the mean for each testing mode was small. This suggests that the mean of the tension, compression, and shear data is a robust characteristic of the material properties of each product.

Constitutive neural networks automate the process of model discovery
To date, no unified constitutive model exists to characterize the mechanical behavior of artificial meat products. To discover a mathematical model that best describes the behavior of artificial and real meat in tension, compression, and shear, we compared two different neural networks: an Ogden type network, Figure 2, which we have previously used successfully to discover the best model for human brain tissue [40], and a new Valanis-Landel type network that includes some of the Ogden type features, but also exponential and logarithmic terms, Figure 3. Both networks performed well in single-mode fitting, and the Valanis-Landel model slightly outperformed the Ogden type model across all loading modes and all meat products. Both networks performed decently at predicting selected loading modes. The Ogden type network was able to predict shear from compression data for artificial chick'n and tension from compression data for real chicken. The Valanis-Landel type network was able to predict shear from compression data and compression from shear data for artificial chick'n, and compression from tension data and tension from compression data for real chicken. For multi-mode training, the Ogden type network discovered a better overall fit to the data for real chicken, while the Valanis-Landel type network discovered a better model for artificial chick'n. Both models performed equally well in simultaneously fitting the tofurky tension, compression, and shear data, with a difference in cumulative R2 train values of less than 0.0001.
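The discovery principle behind both networks can be caricatured in a few lines of code: express the stress as a weighted sum of candidate terms and fit the weights with L2-regularized least squares, so that the surviving non-zero weights select the model. The sketch below uses a synthetic uniaxial data set and an illustrative candidate basis of four power terms; it is not the actual networks, terms, or data of this study.

```python
import numpy as np

# Toy sketch of constitutive model discovery: nominal stress as a
# weighted sum of candidate Ogden-type power terms, fit by
# L2-regularized least squares. Exponents and data are illustrative.
lam = np.linspace(0.90, 1.10, 21)        # stretch samples, +/-10% regime
P_data = 50.0 * (lam - lam**-2)          # synthetic "measurement", kPa

# candidate nominal-stress terms for incompressible uniaxial loading:
# a strain-energy term lam**a contributes lam**(a-1) - lam**(-a/2-1)
exponents = [2.0, -2.0, 4.0, -4.0]
F = np.stack([lam**(a - 1.0) - lam**(-a / 2.0 - 1.0) for a in exponents],
             axis=1)

alpha = 1e-6                             # L2 penalty parameter
w = np.linalg.solve(F.T @ F + alpha * np.eye(len(exponents)), F.T @ P_data)

P_fit = F @ w
print("discovered weights:", np.round(w, 3))
print("max residual:", np.abs(P_fit - P_data).max())
```

Increasing the penalty parameter shrinks the weights and, as in Figure 11, reduces the number of terms that contribute appreciably to the stress; the real networks additionally train the exponents themselves.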

Discovered models feature similar terms
The Ogden network features 20 different terms and discovers the best model out of a selection of 2^20 − 1 = 1,048,575 possible combinations of terms, meaning out of more than one million models. The Valanis Landel network features 14 different terms and discovers the best model out of a selection of 2^14 − 1 = 16,383 possible combinations of terms, out of more than sixteen thousand models. Yet, when comparing both types of networks, we found that the Valanis Landel type network is generally more versatile: It contains a strong subset of Ogden type terms and spans a richer functional basis overall. Interestingly, the Valanis Landel type network consistently discovers a similar subset of six terms for both artificial and real meat, while the weights of the other terms train to zero. All terms in powers of two, λ_i^-2 and λ_i^+2, and all exponential terms train to zero and are not present in any of the discovered models for combined tension, compression, and shear training. For tofurky, we robustly discover the general two-term Ogden model with flexible exponents, in red and turquoise, with two unit-less negative and positive exponents, −α1 = −21.19 and +α2 = +20.54, and two stiffness-like parameters, µ1 = 30.17 kPa and µ2 = 37.30 kPa. For artificial chick'n, we discover a four-term model in terms of the stretches to the negative and positive fourth power, λ_i^-4 and λ_i^+4, either plain, in yellow and blue, or combined with the classical Valanis Landel logarithm, in green and dark blue, with two unit-less exponents β1 = 0.63 and β2 = 0.63 and four stiffness-like parameters µ1 = 0.11 kPa, µ2 = 0.18 kPa, µ3 = 0.72 kPa, and µ4 = 0.88 kPa. For real chicken, we discover a two-term model with one Ogden term with a negative exponent, in red, and one Valanis Landel logarithm, in dark blue, with two unit-less exponents −α = −42.23 and β = 0.65 and two stiffness-like parameters µ1 = 3.12 kPa and µ2 = 1.02 kPa.
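To make the discovered tofurky model concrete, the following sketch evaluates a two-term Ogden nominal stress in incompressible uniaxial loading with the discovered exponents and stiffness-like parameters. We assume the standard Ogden normalization ψ = Σ_k 2 µ_k/α_k² (λ1^α_k + λ2^α_k + λ3^α_k − 3); the paper's exact scaling of the weights may differ, so the numbers are illustrative rather than an exact reproduction of the reported fits.

```python
# Hedged sketch: two-term Ogden nominal stress for incompressible
# uniaxial loading, assuming the standard normalization
# psi = sum_k 2*mu_k/alpha_k**2 * (l1**a_k + l2**a_k + l3**a_k - 3).
# Parameters are the values discovered for tofurky.
MU = [30.17, 37.30]          # kPa, stiffness-like parameters
ALPHA = [-21.19, +20.54]     # unit-less Ogden exponents

def nominal_stress(lam):
    """Nominal stress P(lam); lateral stretches are lam**(-1/2)."""
    P = 0.0
    for mu, a in zip(MU, ALPHA):
        P += 2.0 * mu / a * (lam**(a - 1.0) - lam**(-a / 2.0 - 1.0))
    return P

for lam in (0.90, 1.00, 1.10):
    print(f"lam = {lam:.2f}: P = {nominal_stress(lam):8.2f} kPa")
```

By construction, the stress vanishes at λ = 1 and is tensile for λ > 1 and compressive for λ < 1, with magnitudes on the order of tens of kilopascals at 10% deformation.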
With the naked eye, and by touch, we immediately notice that artificial chick'n and real chicken are transversely isotropic with a single pronounced fiber direction. This creates an extremely low resistance to shear, about ten times lower than the resistance to axial loading, which both of our isotropic hyperelastic networks had difficulties training on. In retrospect, it seems intuitive that we never discover any of the exponential terms: The concave shear softening nature of all three types of meat is in stark contrast to the convex strain stiffening behavior of collagenous soft tissues [6,9,19,30], and rules out the exponential light red, light green, light blue, and dark blue terms of the Valanis Landel network for real and artificial meat. Strain softening seems to be much better represented by the red and turquoise classical Ogden-type terms with relatively large exponents. At the same time, extreme exponents on the order of ±20 and above will likely not generalize well to stretch and shear ranges beyond the training regime of 10% deformation, where the stress contributions of these terms will simply explode. This implies that we need to be extremely careful when extrapolating beyond the training regime, especially when using Ogden type terms with large exponents.
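The extrapolation risk is easy to quantify: a single power term λ^α with α = 20 remains moderate within the ±10% training regime but grows by orders of magnitude just outside it. A pure illustration, independent of any fitted model:

```python
# Why exponents near +/-20 do not extrapolate: a power term lam**alpha
# that is moderate inside the 10% training regime explodes outside it.
alpha = 20.0
for lam in (1.05, 1.10, 1.25, 1.50):
    print(f"lam = {lam:.2f}: lam**{alpha:.0f} = {lam**alpha:10.1f}")
```

At λ = 1.10 the term is still below 10, but at λ = 1.50 it exceeds 3,000, so the stress prediction outside the training regime is dominated by a term the data never constrained.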

Tofurky is three times stiffer than artificial chick'n and real chicken
When restricting the Valanis-Landel type network to its neo Hooke, Blatz Ko, Mooney Rivlin, and two-term Ogden model building blocks, and training on the tension, compression, and shear data simultaneously, we recover the classical shear modulus of tofurky, artificial chick'n, and real chicken. The neo Hooke, Blatz Ko, and Mooney Rivlin models all discovered identical shear moduli of 94.4 kPa for tofurky, 35.7 kPa for artificial chick'n, and 21.4 kPa for real chicken. Notably, the two-term Ogden model consistently discovered lower values, with shear moduli of 81.2 kPa for tofurky, 31.4 kPa for artificial chick'n, and 14.3 kPa for real chicken. This highlights that the choice of model has a significant impact on the mechanical moduli, even on the simplest of all, the shear modulus, which differed by up to 50% depending on the model. Yet, the general trend remains the same across all four models: Of all three meat types, tofurky is the most isotropic and homogeneous and also displays the stiffest behavior. Both artificial chick'n and real chicken are anisotropic with a pronounced fiber direction, along which they display a similar behavior in tension, compression, and shear, and are only about a third as stiff as tofurky.
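As a minimal illustration of how a single shear modulus is recovered, recall that a neo-Hookean solid predicts a linear simple-shear response τ = µγ, so µ follows from a one-parameter least-squares fit. The data points below are synthetic placeholders generated with µ = 21.4 kPa, the value reported for real chicken, not the actual measurements:

```python
import numpy as np

# Minimal sketch: recovering a neo-Hookean shear modulus from a
# one-parameter least-squares fit of tau = mu * gamma in simple shear.
# The "data" are synthetic placeholders, not the paper's measurements.
gamma = np.linspace(0.0, 0.10, 6)       # shear strains up to 10%
tau = 21.4 * gamma                      # placeholder shear stresses, kPa

mu_fit = gamma @ tau / (gamma @ gamma)  # closed-form least squares
print(f"fitted shear modulus: {mu_fit:.1f} kPa")
```

Because the measured shear response of all three meats is concave, a single linear modulus fit to such data over-predicts the stress at larger strains, which is one plausible reason why the more flexible two-term Ogden model discovers smaller shear moduli.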

Limitations and future work
While we solidly and robustly discovered models and parameters for artificial and real meat products, our pilot study has a few limitations: First, we performed all tests on uncooked samples, but note that tofurky and artificial chick'n simply need to be warmed up, whereas real chicken needs to be cooked. During cooking, the muscle fibers of real meat contract, cause a loss of water, and change the perceived tenderness of the meat [21]. Second, the tofurky deli slices are generally isotropic and homogeneous, but very thin. As a result, we tested the deli slices in the in-plane direction for tension and perpendicular to the in-plane direction for compression and shear, which might explain the varying stiffnesses for the different modes. Third, the artificial chick'n displays significant thickness variations and heterogeneous fiber sheets. When compressing the chick'n samples beyond 10%, the sheets collapsed onto each other and the material response was no longer hyperelastic [22]. Fourth, the real chicken had a distinct fiber direction [46] that made it simple to cut similarly sized samples, but difficult to test in compression, where the chicken displayed a very soft, almost Jello-like consistency. Overall, to capture the anisotropy of artificial chick'n and real chicken, we would like to extend our network to include the fourth and fifth invariants [13,31], similar to a previous network for skin [25]. Ideally, in the future, we would like to perform tension, compression, and shear experiments on the same samples using a single triaxial testing device, rather than using separate samples and separate instruments for separate testing modes [4].
We envision that performing additional tests will more robustly inform our network architecture, and allow us to combine the successful terms of the Ogden type and Valanis-Landel type networks into a single set of terms and a single unified constitutive neural network that reliably and accurately discovers the best model and parameters for artificial and real meat.

Conclusion
Plant-based meat substitutes are an increasingly popular alternative to animal consumption. The mechanical behavior of artificial meat directly influences our perception of taste through texture, consistency, and stiffness, and ultimately shapes our sensory experience and enjoyment of the product. However, the mechanical properties of artificial meat and their comparison to real meat remain incompletely understood. Here we performed mechanical tension, compression, and shear tests on isotropic and anisotropic artificial and real meat and analyzed the data using custom-designed constitutive neural networks. Constitutive neural networks are a new, powerful, and easy-to-use technology to discover the model and parameters that best explain a wide variety of soft matter systems. Especially for newly engineered materials like artificial meat, for which the best model is not yet known, they allow us to rapidly screen thousands of potential model candidates and, entirely without bias and human interaction, select the best model. We compared two different constitutive neural networks, of Ogden type and Valanis-Landel type, and demonstrated that both can robustly fit the experimental data from tofurky, artificial chick'n, and real chicken in tension, compression, and shear. By design, both networks contain the classical neo Hooke, Blatz Ko, and Mooney Rivlin models as special cases, for which they robustly discover shear moduli of 94.4 kPa for tofurky, 35.7 kPa for artificial chick'n, and 21.4 kPa for real chicken. This suggests that artificial chicken successfully reproduces the mechanical properties of real chicken across all loading modes, while tofurky does not and is about three times as stiff. We observed that all three meat products displayed shear softening with concave stress-stretch curves, and, more importantly, that their resistance to shear was about an order of magnitude lower than their resistance to tension and compression.
Our new neural network-based automated model discovery can quantify the mechanics of artificial and real meat and has the potential to inform the design of plant-based meat substitutes. Understanding the mechanics of artificial meat has the potential to revolutionize food production, address environmental concerns, and ensure a sustainable future for our planet.

Data Availability
Our source code, data, and examples are available at https://github.com/LivingMatterLab/CANN.