A Brief History of Finite Element Analysis – Part I

The first patent for computer software was granted in 1968 to Applied Data Research for a data sorting system. That same year, MSC Software, in partnership with NASA, released the first version of their now famous “NASA Structural Analysis” software (NASTRAN). 

A lot has changed since then, but the fundamental way we think about structural simulation (as single-scale finite element analysis) has stayed almost exactly the same. This is amazing, and a genuine tribute to the method’s flexibility and robustness, but it is also cause for concern when you consider how complex our material catalog has become. Novel materials such as chopped-fiber composites with nano-particle matrices are becoming commonplace, yet they are still modeled today with the same methods we used when programs were submitted on punch cards. 

The law-of-the-land (Moore’s)

Since the advent of High Performance Computing and the “race to the bottom” mentality of cloud computing providers, computing power has become a relatively cheap commodity. But this wasn’t always the case. 

Early on, the Simulation and Analysis (S&A) industry was no exception. As Moore’s Law went, so did the S&A companies: the latest code was always slightly behind the newest development in computational firepower. However, Moore’s most recent arsenal, cloud computing, has yet to be fully embraced by the big players in the industry, mainly due to concerns over IP protection. 

Industries such as pharmaceuticals and travel, which existed long before computers came around, have only recently had their value-creation lifecycles completely changed by access to data aggregation and cheap computation. S&A has yet to experience the technological disruption that those industries now treat as a core part of their business. 

Pio-nerds (the pre-PC era)

1950s – Finite Element Analysis came into popular use in the 1950s. Around the same time, International Business Machines introduced its first commercial scientific computer, the IBM 701. As those punch-card computers progressed, so too did the software. 

1968 – A decade and a half after the IBM 701, the MacNeal-Schwendler Corporation was working with NASA to complete the first computerized structural Finite Element solver, NASTRAN. A few years later, the company would sell the code commercially as MSC/NASTRAN. 

1970 – Around the same time that NASTRAN was being developed, Dr. John Swanson was working at the Westinghouse Astronuclear Laboratory in Pittsburgh, PA. Swanson believed an integrated, general-purpose FEA code could be used to predict the transient stresses and displacements of nuclear reactor systems. He developed his program using a keypunch and a time-shared mainframe at U.S. Steel, and in 1970 he released the code commercially as ANSYS; his first customer was his former employer, Westinghouse. 

1971 – The Finite Element software package MARC was developed by a group of researchers from Brown University. One of their early hires was a newly minted Ph.D., Dr. Dave Hibbitt. These two vendors, MARC and ANSYS, built a slow and steady business for most of the 1970s, offering their codes on large time-shared machines such as Control Data Corporation (CDC) mainframes. 


Going Nuclear (The PC Era)

1977 – Mike Riddle and John Walker began independently working on what would become AutoCAD. Their initial version consisted of 12,000 lines of source code. This software, while not an FEA solver, would whet engineers’ appetites for powerful GUI-based design tools. 

1978 – Dr. Hibbitt and two other employees of the MARC Analysis Research Corporation developed a new FE software package, Abaqus. Like ANSYS, they found their first customer in a subsidiary of the Westinghouse Corporation, which used the software to analyze nuclear fuel rod assemblies. 

1978 – DYNA3D was developed by John Hallquist, a young engineer at the Lawrence Livermore National Laboratory. The software was initially 5,000 lines of code and was designed to predict the structural response of nuclear bombs dropped at low altitudes. 


The 21st Century

As PCs and their hardware evolved, software companies found unique ways to take advantage of that growth. While the roots of FEA software remained where they were planted in the 1960s, the way computational power is utilized has changed considerably from the 1990s through today. 

Learn more in next week’s post: A Brief History of Finite Element Analysis – Part II

Benefits of Integrating Part, Material and Process

Have you ever had your car break down because of a part failure, with little or no warning? It may be because the part was designed without accounting for material behavior or manufacturing variability. 

Many engineers who work on automobiles, airplanes, rockets, and similar products are responsible for designing both critical and non-critical parts. However, the design process often does not include an analysis of the material the part is made of or the manufacturing process used to produce it. Instead, the engineer assumes ideal, consistent material properties, which is not an accurate representation of most products developed with advanced materials. 

The most common issue that arises from not connecting part, material, and process during the design phase is the accumulation of residual stresses in the material during manufacturing. These stresses are a product of the manufacturing process inducing fiber disorientation, defects, and microcracking in the material. 

Other manufacturing defects that can lead to premature part failure include: 

  • Resin pockets, voids, or fabric wrinkles, leading to non-uniform stresses.
  • Fiber misalignment, which heavily impacts material behavior and thus affects part behavior.
  • Microcracking and fiber-resin debonding, which occur on a length scale not visible to the human eye and thus are often not caught upon visual inspection.

Identifying Issues Early

Companies that manufacture parts spend thousands of dollars and countless man-hours designing parts to withstand failure. If a part is manufactured with flaws or fails to meet quality standards, it is simply thrown away, wasting a large amount of material. 

Companies that use simulation to identify material performance before a part is manufactured, rather than after, can reduce their product development time, reduce the amount of time and money wasted on products that have obvious flaws, and ultimately get products to market faster. 

High Velocity Impact on Composites – Past, Present, and Future

In 1969, Grumman Aerospace became the first company to successfully introduce advanced composites on a production aircraft. The boron-epoxy laminated horizontal stabilizer used on the F-14A was 15% lighter and 18% less costly than its metal counterpart. 


This breakthrough application opened the door to a new era of materials engineering, but it also sparked the need for more robust testing methods. Up to that point, composite testing techniques had usually been adapted versions of existing metal or unreinforced-plastic test methods. However, as more complex parts made their way onto higher-performance aircraft, those methods were no longer sufficient. 

What’s the Impact? 


As these carbon-based composites with high specific strength came into use, the most critical design considerations became damage tolerance and damage resistance under impact loading, because composite parts are routinely exposed to unplanned impacts of many kinds both during manufacturing and in service. 

There are several categories of impact loading, and all carry unique challenges and considerations. 

  1. Low velocity (large mass) – up to 10 m/s (≈22 mph)
  2. Intermediate velocity – 10-50 m/s (≈22-112 mph)
  3. High/ballistic velocity (small mass) – 50-1,000 m/s (≈112-2,240 mph)
  4. Hyper velocity impact – 2,000-5,000 m/s (≈4,500-11,200 mph)

In this blog, we will focus on high velocity impact. 

Not so fast! 

As fourth-generation warplanes pushed past supersonic speeds, high velocity impact characterization became the most important, and the most complicated, of the impact failure categories. 


There are many complexities associated with high velocity ballistic impact, such as high pressures, high temperatures, large strains, and high strain rates. A wide variety of materials can be involved, all of which interact with one another and are subject to failure and fragmentation. Furthermore, these failure events generally occur during a small fraction of a second. 

For these reasons, physical testing can be very expensive and time-consuming, while producing limited data. It can be extremely advantageous to develop computational methods and Computer-Aided Engineering (CAE) programs that can provide a detailed look into the complicated loadings and material response during the course of an impact event. 

Computer programs can be used to examine a wide range of conditions that cannot be readily tested, such as varied impact velocities and different material properties. They also give engineers the ability to rapidly evaluate multiple designs without the costly manufacture and setup of physical test articles. These programs can give insight into a part’s behavior and help engineers design ballistic-resistant structures while optimizing for weight and cost. 

Ultimately, the goal of CAE platforms is to provide design and research engineers with computational tools capable of designing and analyzing ballistic projectiles, armor, and other systems in an accurate and efficient manner. 

Zooming In

The challenge with using CAE tools is that the response of the structural part is governed by the “local” behavior of the material in the neighborhood of the impacted zone. 

For example, several different damage and energy-absorbing mechanisms have been identified for a woven composite armor under ballistic impact. These include a combination of local and global failure mechanisms: 

  • Tensile failure of primary tows
  • Deformation of secondary tows
  • Delamination
  • Matrix cracking
  • Shear plugging
  • Friction during penetration

In other words, the root cause of strength and damage is at a scale that is very difficult to represent within a standard Computer-Aided Design (CAD) model. 


In the example below, we used CAE to capture this complex behavior under ballistic load using a sample multi-component armor plate. These armors, used on tanks and ships, are made of several distinct layers, such as aluminum, rubber, and steel, and researchers have recently started investigating composites as a reinforcing material. Because of the complicated global-scale and local-scale interactions within such a part, it is an ideal test case for a fully coupled multiscale analysis, in which both the individual material behavior and the interaction between layers must be captured. 

We used MultiMech, a commercial FE2 analysis tool, to model two layers of woven composite atop a dense rubber sublayer. In this simulation, the local-scale weaves drive the behavior of the global-scale composite layers: as damage accumulates locally, the stiffness response of the global scale is degraded until the corresponding global-scale element is eventually deleted. The interaction between the rubber sublayer and the composite is also captured correctly, with the rubber beginning to yield before the stronger woven composite. 
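To make that coupling loop more concrete, here is a deliberately simplified, self-contained Python sketch of the FE2 idea: each “global” ply recomputes its stiffness at every load increment from a tiny “local” damage model, and plies whose damage saturates are dropped from the global response. The damage law, the numbers, and the function names below are invented for illustration only and are not MultiMech’s formulation or API.

def local_rve_response(strain, damage):
    # Local (micro) model: elastic response degraded by strain-driven damage.
    damage = max(damage, min(1.0, abs(strain) / 0.05))  # damage grows with strain and never heals
    tangent = 1000.0 * (1.0 - damage)                   # homogenized (degraded) stiffness fed back upward
    return tangent, damage

def global_step(damages, macro_strain, deletion_threshold=0.99):
    # One coupled increment: homogenize each ply, then drop plies whose damage has saturated.
    global_stiffness = 0.0
    for i, d in enumerate(damages):
        tangent, damages[i] = local_rve_response(macro_strain, d)
        if damages[i] < deletion_threshold:             # ply still active ("not deleted")
            global_stiffness += tangent                 # plies act in parallel, so stiffnesses add
    return global_stiffness

damages = [0.0, 0.0]                                    # two composite plies, initially undamaged
for step in range(7):
    macro_strain = 0.01 * step                          # ramp the applied macro strain
    k = global_step(damages, macro_strain)
    print(f"strain = {macro_strain:.2f}   global stiffness = {k:6.1f}   damage = {damages}")

In a real FE2 analysis, the “local model” is a full finite element mesh of the microstructure (the woven tows, the matrix, and any defects) solved at every global integration point, but the feedback loop is the same: local damage degrades the homogenized stiffness the global model sees, until the affected element is removed.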

This is a simplified example, but it captures the combination of local and global phenomena. The ability to accurately predict these interactions has vast implications for transportation, both on and off this planet.