
concrete modeling in Abaqus

Sat, 2022-01-15 06:27

In reply to Modeling Reinforced Concrete Element in Abaqus Standard

Hi, here you can find a training package for concrete modeling in Abaqus.

I would like to thank all

Fri, 2022-01-14 16:50

In reply to Professor N. Sukumar: Meshfree analysis on complex geometries using physics-informed deep neural networks

I would like to thank all participants! It was a wonderful talk by Professor N. Sukumar and everyone had a chance to ask questions.


strength of the foldable structures

Wed, 2022-01-12 22:38

In reply to Journal Club for January 2022: Cylindrical Origami: From Foldable Structures to Versatile Robots

Dear Hanqing,

Thanks for posting a very interesting topic. These foldable structures are not load-bearing structures, so you mainly focus on their deformation and stiffness rather than strength. I found one sentence related to strength: “In the load-bearing state, the prototype can hold 1,600 times its own weight (Fig. 2m).”

What constrains the strength of the prototype from increasing further? For example, the strength of a composite laminate is mainly determined by its fiber strength.


Periodic Cone cracks during penetration

Mon, 2022-01-10 06:14

In reply to Axisymmetric periodic cone cracks in Hydrogel

Needle insertion, a standard procedure in various minimally invasive surgeries, causes tissue damage that sometimes leads to catastrophic outcomes. The opaqueness and inhomogeneity of tissues make it difficult to observe the underlying damage mechanisms. In this paper, we use a transparent and homogeneous polyacrylamide hydrogel as a tissue mimic to investigate the damage caused during needle insertion. The insertion force shows multiple events, each characterised by a gradual increase in force followed by a sharp fall. Synchronised recording of the needle displacement into the gel shows that each event corresponds to the propagation of a stable cone crack. Though sporadic, uncontrolled cracking has been discussed earlier, this is the first report of nearly periodic, stable, and well-controlled 3-D cone cracks inside a hydrogel during deep penetration. We show that the stress field around the needle tip is responsible for the symmetry and periodicity of the cone cracks. These results provide a better understanding of fracture processes in soft, brittle materials and open a promising perspective on needle design and the control of tissue damage during surgical operations.

Dear Per,

Mon, 2021-12-27 08:35

In reply to Discussion of fracture paper #31 - Toughness of a rigid foam

Dear Per,

The first of the two questions you bring up is whether anyone has found a relation like the Brown and Srawley ASTM convention that the crack length should be larger than 2.5(KIc/yield stress)^2 for linear elastic fracture mechanics to apply. I am not sure whether it applies to materials other than structural steel and possibly other metals, but I know that it has been used with some success for other materials as well. I am a steel guy, but I vaguely know that there are other standards for other materials...

Answer 1: In our studies of the size effect in fracture of PUR materials, we found that for specimens large enough the plane strain condition a, B >= 2.5(KIc/yield stress)^2 holds: Marsavina, L., et al., Refinements on fracture toughness of PUR foams, Engineering Fracture Mechanics, Vol. 129, 2014, pp. 54-66. However, for the smallest specimens this condition did not apply. The plane strain condition is required to validate fracture toughness tests for polymeric materials per ASTM D5045-14. Unfortunately, there is no standard methodology for determining the fracture toughness of cellular materials, so ASTM D5045-14 is often used.
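As a quick illustration, the size condition above is straightforward to evaluate; a minimal sketch follows, where the KIc and yield-stress values are purely illustrative placeholders, not numbers taken from the cited papers:

```python
import math

def plane_strain_min_size(k_ic, sigma_y):
    """Minimum crack length / thickness (m) for plane-strain validity,
    per the ASTM-style condition a, B >= 2.5 * (KIc / yield stress)^2.
    k_ic in Pa*sqrt(m), sigma_y in Pa."""
    return 2.5 * (k_ic / sigma_y) ** 2

# Illustrative (made-up) values for a rigid PUR foam:
k_ic = 0.08e6    # 0.08 MPa*sqrt(m)
sigma_y = 1.2e6  # 1.2 MPa
print(plane_strain_min_size(k_ic, sigma_y))  # minimum a, B in metres
```

For small laboratory specimens the required a and B can easily exceed the specimen dimensions, which is consistent with the observation that the condition fails for the smallest sizes.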

The second question is whether you, or anyone you know of, has modelled PUR foam or similar materials with a plasticity or damage model. Spontaneously I would guess that it is damage rather than plasticity, and perhaps not very close to metal plasticity.

Answer 2: We applied the CRUSHABLE FOAM model to compression of un-notched foam specimens, and the simulations agree well with experiments and with thermographic measurements performed during the tests. More details can be found in

L. Marsavina et al., 2016, IOP Conf. Ser.: Mater. Sci. Eng. 123 012060. However, for the tensile behavior of notched PUR foam specimens, the theory of critical distances was successfully applied, owing to their quasi-brittle behavior in tension in the presence of cracks or notches. References: Voiconi, T., Negru, R., Linul, E., Marsavina, L. and Filipescu, H. (2014) “The notch effect on fracture of polyurethane materials”, Frattura ed Integrità Strutturale, 8(30), pp. 101–108;

R. Negru et al., Application of TCD for brittle fracture of notched PUR materials, Theoretical and Applied Fracture Mechanics, Vol. 80, Part A, 2015, pp 87-95.

Prof. Dr. Eng. Liviu MARSAVINA


Seems Fake

Sun, 2021-12-26 12:03

In reply to PhD scholarship in multi-physics modeling of 3D printing process for sustainable composite structures at Technical University of Denmark, Department of Wind Energy. Deadline: Jan. 9, 2022

What is the purpose of posting these jobs? You don't even look at the CVs. I have submitted my CV and other documents about ten times.

Dear Ying,

Wed, 2021-11-24 17:29

In reply to Timely topic!

Dear Ying,

Thank you for the informative comments! As you mentioned, DeepMD is certainly another useful tool. PINN is also an interesting direction, which I consider a combination of classical empirical potentials and ML potentials. Thanks for sharing your review article on ML potentials for CG systems.



Timely topic!

Tue, 2021-11-23 11:58

In reply to Journal Club for November 2021: Machine Learning Potential for Atomistic Simulation

Hi Wei,

Many thanks for this timely topic and for leading the great discussion.

Added to your summary, there are a few related works that might be useful:

1) The DeepMD package is another excellent ML potential tool, which links with LAMMPS for use. In a recent PRL paper, DeepMD reproduced the temperature-pressure phase diagram of water, an intriguing demonstration for many mechanics problems under extreme environments. It also won the 2020 ACM Gordon Bell Prize at SC20!

2) Physics-informed ML potentials could be particularly useful, since we want to avoid unphysical interactions (energy, force, or stress) during mechanical deformation. Imagine that we train an ML potential only on equilibrium configurations: it could not then be used for large-deformation simulations :) The recent physically informed artificial neural networks for atomistic modeling of materials nicely address this issue, opening another avenue for making ML potentials applicable to large deformations, fracture, etc.

3) ML potentials are useful not only for all-atom molecular simulations but also for coarse-grained ones. When an ML potential is trained on DFT data, it can reproduce quantum accuracy with the efficiency of a typical all-atom molecular simulation. Similarly, an ML coarse-grained model can achieve all-atom accuracy at much lower computational cost, opening another door for modeling many large-scale mechanics problems in less computational time. We have a recent review article on this topic, Machine Learning of Coarse-Grained Models for Organic Molecules and Polymers: Progress, Opportunities, and Challenges.

Again, these are just my two cents.

I look forward to your fascinating works in this area and more discussions :)

Best, Ying

Dear Haoran,

Sun, 2021-11-14 11:21

In reply to Thanks for the excellent review!

Dear Haoran,

Thanks for your interest and questions.

(1) Like reactive potentials, ML potentials trained with DFT data need no fixed bond definitions, so they are able to capture bond breaking/formation. There are some good examples, such as references [1-4] listed in the text.

(2) Molecules can be more conveniently described by a graph, so they were the first systems studied with end-to-end ML potentials. However, descriptor-based ML potentials have also been used for molecules. The performance of a potential depends not only on the ML model but also on many other factors, such as the quality of the dataset, the choice of descriptors (or the quality of feature learning), and the rigor of the training and validation process. Therefore, I have not seen a rigorous comparison between descriptor-based and end-to-end ML potentials in terms of prediction accuracy. If someone wants the machine to learn as much as possible from the data (perhaps better than human-designed descriptors), then the end-to-end model wins; this motivation is driving the development of new methods along with the rapid development of AI technology. However, at the moment, descriptor-based ML potentials are better connected to large-scale simulators such as LAMMPS, so they may be a good starting choice if the final target is large-scale MD simulation.


Wei Gao

Thanks for the excellent review!

Sat, 2021-11-13 14:35

In reply to Journal Club for November 2021: Machine Learning Potential for Atomistic Simulation

Dear Wei,

Thank you for summarizing the recent developments in ML potentials for MD simulations. I'm not working in this specific field, and reading your review satisfied a lot of my curiosity.

I have two questions:

(1) In MD simulations, some interatomic potentials, like ReaxFF, can capture bond breaking/formation; others cannot. You mentioned that ML potentials are learned from DFT. So are the existing ML potentials capable of capturing bond breaking/formation? Are there any successful examples?

(2) You mentioned that the early development of end-to-end graph-based ML potentials focused on molecules. Does that mean end-to-end graph-based ML potentials are a better choice for polymer systems?



Hi Ajit,

Sat, 2021-11-13 10:10

In reply to Impressive!

Hi Ajit,

Thanks for your interest and questions. To your first question: yes, the polarization effect can be described by an ML potential, as long as the potential is trained with charge information. There is a recent work (Nature Communications 12.1 (2021): 1-11) specifically targeting this type of problem. Your second question is about the gain from ML potentials. Their main advantage over classical potentials is that they can be much more accurate, although they are generally still slower than classical potentials.

Wei Gao

metraix is a typo

Wed, 2021-11-10 21:31

In reply to Re: new material?


metraix is a typo. I spent 30 years on polymer matrix composites research. Roy

Re: new material?

Wed, 2021-11-10 19:41

In reply to Fatigue-Resistant Soft Materials

Not sure what you are referring to. Perhaps just "polymer matrix"? I did a seminar on the subject. Here is the video of the seminar.


Wed, 2021-11-10 04:00

In reply to Journal Club for November 2021: Machine Learning Potential for Atomistic Simulation

Dear Wei,

Thanks for a very informative edition of the journal club.

Though I have been in the ML field for some time, when it comes to this particular area (ML potentials for atomistic simulations) I am an absolute newbie. In fact, this journal club edition is my first ever contact with this research area. ... It does look like there has been a great deal of activity in this area in recent times, and people seem to have approached the problems with a lot of creativity too. All in all, very impressive! [Even if you set everything else aside, anything like "accelerating the computational time of the high-throughput search by a factor of 130" just has to be impressive!]

OK, now, allow me a couple of newbie questions...

How do these approaches work for systems / phenomena involving polarization? Any work or notable results in this direction?

How precisely does the gain in the high-throughput search come about? ... If I understand it right, these potentials just get incorporated into MD/atomistic packages like LAMMPS, right? So, speaking simple-mindedly, the run-time computational complexity should stay more or less the same, right? If so, how come there still is a gain?





Sun, 2021-11-07 12:07

In reply to Journal Club for November 2021: Machine Learning Potential for Atomistic Simulation


Dear Rui,

Thank you so much for the kind and encouraging words. To your questions:

(1) You made an important point: the performance of an ML potential is highly dependent on the training data. Only high-quality data that covers the essential physics of interest can produce a reliable ML potential. Therefore, developing an ML potential starts from data generation, usually with DFT calculations. First, a variety of atomic structures have to be carefully prepared. For example, the atomic structures could come from random perturbations of the atom positions and lattice constants of a perfect crystal structure; in addition, typical defect structures can be built into the data. Recently, we have also used atomic structures from nudged elastic band and dimer calculations to inform the data with phase-transition information and to better sample the potential energy surface. Nowadays, more and more researchers share materials datasets publicly, so one can reuse those available datasets and enrich them with the specific physics of interest if needed. After the atomic structures are determined, one just needs to run DFT calculations to get the outputs of interest that will be used for training, such as energy, forces, and stress.

The inputs to an ML model are the well-prepared atomic structures. These structures are converted to descriptors when one uses a descriptor-based method; this conversion is done automatically within the packages described in Section 5, without user intervention. The outputs of an ML model depend on the application; most of the time, the potential energy, atomic forces, and stress are used. An ML model (e.g., a neural network) can be conveniently built with machine learning platforms such as TensorFlow and PyTorch, which provide library functions for training the model. Many hyperparameters can be tuned to achieve good convergence, i.e., until the loss settles to within an error range. After training is done, the ML model can be saved as the ML potential, which can later be used like a classical potential.
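The workflow above (structures in, descriptors, supervised fit to DFT-style outputs) can be caricatured with a deliberately tiny sketch. The linear model and synthetic "descriptor to energy" data below are illustrative stand-ins for a real neural-network potential and real DFT data:

```python
# Toy sketch: fit E = w . d + b from descriptor vectors d to synthetic
# "DFT" energies E by stochastic gradient descent on a squared-error loss.
# All data and parameters here are made up for illustration.
import random

random.seed(0)
true_w, true_b = [1.5, -0.7], 0.3
data = []
for _ in range(200):
    d = [random.uniform(-1, 1), random.uniform(-1, 1)]  # fake descriptor
    E = true_w[0] * d[0] + true_w[1] * d[1] + true_b    # fake DFT energy
    data.append((d, E))

w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(500):
    for d, E in data:
        err = w[0] * d[0] + w[1] * d[1] + b - E  # prediction error
        w[0] -= lr * 2 * err * d[0]              # gradient of err**2
        w[1] -= lr * 2 * err * d[1]
        b    -= lr * 2 * err

print(round(w[0], 3), round(w[1], 3), round(b, 3))
```

A real ML potential replaces the linear map with a neural network and adds forces and stress (descriptor derivatives) to the loss, but the train-then-reuse loop is the same idea.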

(2) The tools described in Section 5 can be used as black boxes, and the products (ML potentials) can be used directly for MD or MS simulations. All of those tools except SchNet can be connected to LAMMPS and used just like classical potentials.


very nice!

Sat, 2021-11-06 13:36

In reply to Journal Club for November 2021: Machine Learning Potential for Atomistic Simulation

Dear Wei,

Thank you for this nice summary on machine learning potentials. I enjoyed reading it and learning the state of the art. I have two questions in mind:

(1) How to train a machine learning model? As you noted, the performance of a ML potential depends on both the choice of descriptor and ML model. In addition, I think it also depends on the training and the data used for training. Can you elaborate on the steps taken to train a ML model?

(2) How to use one of the ML tools (Section 5) as a blackbox for atomistic simulations? I assume that these tools have been trained one way or another and thus are ready to be used directly in place of the standard empirical potentials (e.g. in LAMMPS). Is it as simple as that?

Again, I am impressed by how much you have done in this area, and congratulations on your recent CAREER award!


Hi Zheng,

Tue, 2021-11-02 09:50

In reply to Hi Wei, thanks so much for

Hi Zheng,

Thanks for your interest! Most descriptor-based packages, as listed in Section 5, only ask for atom coordinates as inputs, which are transformed into descriptors internally through descriptor functions (such as ACSF and SOAP, listed in Section 3). Some packages (such as n2p2) run the descriptor generation implicitly, so it is not convenient (or is impossible) for users to see the descriptors. In our code (AtomDNN), we compute the descriptors (ACSF and SOAP are currently supported) through customized LAMMPS Compute commands (can be found here with an example) and make the descriptor generation explicit through a well-defined data pipeline. In this way, descriptors can be computed without training the network. There are other tools, such as DScribe, which is dedicated to computing many types of descriptors; however, we found it slow for computing the derivatives of descriptors (which are needed to compute atomic forces and stress). For SOAP descriptors, you can also use QUIP to get the descriptors directly.
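For a concrete sense of what an ACSF-style descriptor function computes from raw coordinates, here is a minimal sketch of a Behler-Parrinello radial symmetry function (G2); the parameter values and toy coordinates are illustrative, not defaults of any package mentioned above:

```python
import math

def cutoff(r, r_c):
    """Smooth cosine cutoff f_c(r) used in ACSF-type descriptors."""
    return 0.5 * (math.cos(math.pi * r / r_c) + 1.0) if r < r_c else 0.0

def g2(center, neighbors, eta=1.0, r_s=0.0, r_c=6.0):
    """Radial symmetry function G2 = sum_j exp(-eta (r_ij - r_s)^2) f_c(r_ij).
    Invariant under rotation/translation and neighbor permutation."""
    total = 0.0
    for pos in neighbors:
        r = math.dist(center, pos)
        total += math.exp(-eta * (r - r_s) ** 2) * cutoff(r, r_c)
    return total

# Toy configuration: one central atom with two neighbors (coordinates in Angstrom)
atoms = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.0)]
print(g2(atoms[0], atoms[1:]))
```

In practice a package evaluates many such functions (different eta, r_s, plus angular terms) per atom, and that fixed-length vector, not the raw coordinates, is what the network sees.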


Hi Wei, thanks so much for

Tue, 2021-11-02 00:53

In reply to Journal Club for November 2021: Machine Learning Potential for Atomistic Simulation

Hi Wei, thanks so much for offering such an informative tutorial. Just one quick question: For descriptor-based ML potentials, can we directly use the coordinates of atoms as the input for the neural network? If we have to convert the coordinates into descriptors as the inputs, what is the standard procedure to do so? Many thanks!

new material?

Sun, 2021-10-31 04:37

In reply to Fatigue-Resistant Soft Materials



you mentioned polymer "metraix"---a new material?   Roy

Kind Reminder

Fri, 2021-10-29 16:10

In reply to PhD vacancy (4 years) on computational mechanics of thick adhesive joints in large wind turbine blades

Dear Prof. Wim VAN PAEPEGEM,

I am not quite sure whether you have received my email and attachments sent on Oct 22, so I am sending this reminder for your kind consideration. I look forward to hearing from you and to following up on any recommendations.


 Sincerely yours,

Amirhossein Darbandi

