I have a lot of thoughts running around my head after the AIChE conference. One topic that has had a lot of blogosphere crossover recently is working with limited resources, whether in the lab or in business.
Constantinos Pantelides of Process Systems Enterprise and Imperial College presented "Model-based Optimal Design of Laboratory Experiments: Algorithms and Software," which covered moving beyond standard experimental design to model-based design. The main difference between standard Design of Experiments and this method is the use of a model of the system to guide the selection of experiments. There is a bit of sleight of hand here: the optimal experiments are the ones that improve your model the most. You pick experiments in the regions where your model performs worst, in an effort to get new data that will improve the model's accuracy. This all assumes you are running experiments in order to build a good model of the world, which is frequently the case. The resulting, improved model can then be used in a variety of familiar applications: optimization, process control, and so on. Given that the presentation came from PSE, the method builds on their optimisation routines.
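I haven't seen PSE's implementation, so take this as a minimal sketch of the underlying idea in Python. Everything in it is a stand-in of my own: the data, the quadratic surrogate model, and the leave-one-out prediction spread used as the uncertainty measure. The point is the last two lines: score the candidate experiments by how unsure the current model is about them, then run the one it knows least about.

```python
import numpy as np

# Hypothetical prior data: a response measured at a handful of conditions.
x_obs = np.array([1.0, 2.0, 3.0, 4.0, 8.0, 9.0, 10.0])
y_obs = np.array([2.1, 3.9, 6.2, 7.8, 15.8, 18.1, 19.9])

# Candidate conditions we could run next.
candidates = np.linspace(0.5, 10.5, 21)

# Leave-one-out refits of a quadratic surrogate: where the predictions
# swing the most when a single datum is dropped, the model is least
# trustworthy, so new data there should help the model the most.
preds = np.empty((x_obs.size, candidates.size))
for i in range(x_obs.size):
    mask = np.arange(x_obs.size) != i
    coef = np.polyfit(x_obs[mask], y_obs[mask], 2)
    preds[i] = np.polyval(coef, candidates)

uncertainty = preds.std(axis=0)            # prediction spread per candidate
next_x = candidates[uncertainty.argmax()]  # most informative next experiment
print(f"Next experiment: x = {next_x:.2f}")
```

With the data above, the winner falls in the gap between x = 4 and x = 8, which is exactly where you'd expect: it's the region the existing experiments say the least about.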
The topic of experiments and design has come up in a few recent blog posts, and there are plenty of other references that I am not going to give here.
Jeff Angus talks about the problem of where to focus your energies in two recent entries. First, in Giants' Problem-Solving Technique: Neither The Alamo, Nor Alan Greenspan:
For the moment, let's call this the Sabean Tool: Focus your energy on high-variance areas. Use lowest-variance areas as anchors (that is, either resolve them quickly if you can, or leave them for later if they're not easy right now because they probably won't get much worse because they're low variance). If you look at your host of challenges to solve with this filter, you simplify the problem, creating a set of priorities to evaluate.
And in Part II: Giants' Problem-Solving Technique - Taking on A Piece at a Time:
In choice-rich environments where there are many decisions to be made, and the variance is likely to be high or astronomical (for example, hiring day labor, the beginning of a fantasy draft, picking stocks, staffing a call centre) it makes most sense to deal with the highest-variance components first.
See the connection? When you have decisions to make, make the ones with the biggest impact first. This applies to experiments where resources are limited, just as Angus describes. It also applies to personal effectiveness: set aside time for the important activities that require the biggest commitment. Don't wait until the end of the day to start the project that is going to take four hours to complete. For more on this one, check Covey's Prioritizing Rocks or A List Apart's Pickle Jar Theory.
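For fun, here is Angus's filter reduced to a few lines of Python. The backlog and the variance scores are made up for illustration; in practice the hard part is estimating those scores, not the sort.

```python
# Hypothetical backlog: (decision, rough estimate of outcome variance).
backlog = [
    ("hire day labor crew", 8.5),
    ("restock office supplies", 0.5),
    ("pick opening-round draft choices", 9.5),
    ("renew existing vendor contract", 1.0),
]

# The filter, roughly: spend your energy on the high-variance decisions
# first; the low-variance ones serve as anchors and can wait.
for decision, variance in sorted(backlog, key=lambda d: d[1], reverse=True):
    print(f"{variance:4.1f}  {decision}")
```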
Derek Lowe, in Waiting for the Metaphorical Phone to Ring, talks about setting up experiments that you have no real expectation will succeed. He focuses on the personal side of it, but there can be good science behind this too. When experiments "fail," the scientist should at least think about why they failed. Just as in Pantelides' model-based experimental design, the "failed" experiments frequently give you the most information. Lessons-learned approaches try to take advantage of both successes and failures. Unlike Pantelides, however, Lowe is talking about experiments where he isn't sure what to expect in the first place.