Courage & Conclusions

There’s been an interesting back and forth on here where Miss Outlier discussed the conflicts between the theoretical and the experimental during the week we discussed interdisciplinary engineering and our struggles therein. Miss Outlier expressed her own point of view on working with theoreticians. Cherish then responded with her post “The Model Engineer,” a sort of defense of simulations.

This was on my mind lately, as I had a few separate analyses to complete this week. Last month I asked whether a design can be too robust, and I talked about the issues inherent when an engineer is expected to make predictions about the future, sometimes predictions with critical safety connotations. These can be terrifying, especially to an early-career engineer. In my experience, the analyses I’ve been asked to do fall into two separate categories.

The first is a theoretical prediction of expected results. Sometimes you just don’t have the data or the time to run the test. This can be as simple as a basic FEA in Mechanica or as complicated as a Simulink model that encompasses your whole system and is built on tables and tables of data.

I’ll admit, I sort of hate these. I usually have to start with a drawing or model, because the actual product hasn’t been made or can’t be tested, and go forward from there. GEARS talked about his love/hate relationship with ProE, but in general it’s a lifesaver. Modelling a part, inputting the correct material (or in some cases the correct density) and looking at mass properties can give you a whole host of information that’s difficult to get from the real part. Measuring internal volume seems to be one of its weaknesses, but even here something can be quickly CADed up to approximate the internal shape of your part. This lets you fit parts or use these numbers in the rest of your analysis. I then turn to a combination of Excel, to keep track of all my equations, data and variables, along with hand-drawn sketches and my trusty TI-83. It’s nice to see whether your hand-calculated, fudged-up numbers agree with your spiffy ProE and Excel data.
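That sanity check between hand calculations and CAD mass properties can even be scripted. A minimal sketch, assuming a hollow aluminum cylinder; every number here is a made-up placeholder, not real part data:

```python
import math

# Hand calculation: mass of a hollow cylinder from nominal dimensions.
# All values below are hypothetical placeholders, not real part data.
density = 2700.0                 # kg/m^3, roughly aluminum
outer_d, inner_d = 0.050, 0.040  # m
length = 0.120                   # m

volume = math.pi / 4 * (outer_d**2 - inner_d**2) * length
hand_mass = density * volume

cad_mass = 0.2291  # kg, as a CAD mass-properties tool might report

# Disagreement beyond a few percent usually means a modeling mistake:
# wrong density, wrong units, or a feature missing from one model.
rel_error = abs(hand_mass - cad_mass) / cad_mass
print(f"hand: {hand_mass:.4f} kg, CAD: {cad_mass:.4f} kg, error: {rel_error:.2%}")
```

The point isn’t the arithmetic; it’s having an independent path to the same number so a units or density blunder can’t hide.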

But the drawback of this kind of analysis is that you’re often predicting the behavior of a system where you’ll never get to see whether you were “right,” or you only find out in a worst-case scenario, when your safety factor was too small or your analysis was off and your part failed. This is a necessary path for most development programs, but one filled with hazards.
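The “safety factor too small” failure mode boils down to a margin check. A toy sketch of that check, with entirely illustrative stress values:

```python
# Margin-of-safety check against a predicted worst-case load.
# Both stresses are illustrative only, not from any real analysis.
allowable_stress = 250.0e6   # Pa, material allowable
predicted_stress = 180.0e6   # Pa, from FEA or a hand calc

safety_factor = allowable_stress / predicted_stress
margin = safety_factor - 1.0  # negative margin means predicted failure

print(f"safety factor: {safety_factor:.2f}, margin: {margin:+.2f}")
if margin < 0:
    print("NEGATIVE margin: predicted to fail at worst-case load")
```

The fragile part is never this division; it’s whether the predicted stress and the worst-case load were right in the first place.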

The second option I’m usually faced with is data analysis. Sometimes a specific test was run and the conclusions are obvious. Sometimes you’re poring over hours and hours of sensor data trying to figure out why your part failed and what indicators might predict future failures. Maybe you’re trying to increase or reduce the lifetime of a component.

As with the graph above of my gas and electric usage, once I had enough data it was easy to see trends and patterns emerging. I’ve taken similar looks at mechanical systems and seen where drops in pressure or temperature have been significant leading indicators of potential failure. In these cases you’re still taking a risk that future systems will behave the same way they did before (which certainly isn’t true for the economy), but in real life that’s often a reasonable expectation, so long as you don’t forget other variables like equipment changeouts or seasonal effects of temperature and humidity changes.
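The leading-indicator hunt described above can be sketched as a simple rolling-mean check: flag the moment a signal drifts meaningfully below its early baseline. The window size, threshold, and pressure values here are all arbitrary examples:

```python
# Sketch: flag sustained drops in a sensor signal as potential
# leading indicators of failure. Thresholds are arbitrary examples.
def flag_drops(samples, window=5, drop_threshold=0.10):
    """Return indices where the rolling mean falls more than
    drop_threshold (fractional) below the initial baseline mean."""
    baseline = sum(samples[:window]) / window
    flagged = []
    for i in range(window, len(samples) + 1):
        rolling = sum(samples[i - window:i]) / window
        if rolling < baseline * (1.0 - drop_threshold):
            flagged.append(i - 1)  # last sample index in the window
    return flagged

# Hypothetical pressure readings trending downward near the end.
pressure = [100, 101, 99, 100, 100, 98, 95, 92, 88, 85, 84]
print(flag_drops(pressure))  # → [10]
```

Real monitoring would have to account for the caveats above, like seasonal swings and equipment changeouts, before trusting a flag like this.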

It’s extremely risky, and it requires a lot of courage, to make a prediction about the future or to set the min/max ratings or lifetime hours of a product. I’d be interested in hearing feedback from readers on how their theoretical analyses coexist with their experimental conclusions, how they validate the two against each other, and which kind they prefer for their discipline.
