Producers are justified in questioning whether a feed additive will live up to the claims made for it. Too often, those claims are simply accepted on faith.

Producers can attempt an on-farm trial to test a product's effects against their own genetics and nutrition programs, under their own environment, management and herd health status.

However, because of limited resources and poor design, many on-farm trials don't qualify as “good science.” Instead, time, effort and money are spent generating “junk science.”

For trials to yield valid results, certain criteria must be adhered to.

  • First, limit any factors, other than the product being tested, that may affect the performance of the pigs. These factors include starting weight, sex differences, genetics, and any facility effect such as stocking density, pen size, feeder design, water availability and environmental differences within the barn.

  • Commit to the integrity of the data and see the trial through from start to finish.

  • Have a “control” group fed an identical ration (less the product, if you are testing supplements).

  • Objectively measure differences in the pigs and the feed (if you want to measure feed efficiency), using an accurate set of scales to collect accurate weight differences.

  • Finally, provide a sound basis for concluding that the results are valid, i.e., that there are truly statistically significant differences between the tested feed and a control feed.
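One practical way to limit the factors listed above is to block pens by starting weight and then randomize treatment within each block, so that treatment and control groups start out as alike as possible. The sketch below illustrates that idea; the pen IDs, weights, and function name are hypothetical, not from the article.

```python
import random

def assign_pens(pen_weights, seed=42):
    """Pair pens of similar starting weight, then randomly assign one
    pen of each pair to treatment and the other to control.
    A simple randomized-block sketch; inputs are illustrative."""
    random.seed(seed)
    # Sort pen IDs by average starting weight so paired pens are similar.
    pens = sorted(pen_weights, key=pen_weights.get)
    treatment, control = [], []
    for i in range(0, len(pens) - 1, 2):
        pair = [pens[i], pens[i + 1]]
        random.shuffle(pair)           # coin flip within the pair
        treatment.append(pair[0])
        control.append(pair[1])
    return treatment, control

# Hypothetical pen IDs with average starting weights (lb).
weights = {"P1": 55.2, "P2": 54.8, "P3": 56.1, "P4": 55.0,
           "P5": 54.5, "P6": 56.3, "P7": 55.7, "P8": 54.9}
trt, ctl = assign_pens(weights)
```

Blocking by weight before randomizing removes starting weight as a source of noise, so any remaining difference is more likely due to the product being tested.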

Case Study

A producer wanted to know if a growth enhancer was delivering the promised return on investment. He has a 1,600-head finishing barn split in the middle (800 pigs per side) by a load-out room. The two halves of the barn are nearly identical. Both rooms have the same pen design, feeders and drinkers, and are served with identical but separate bulk feed bins.

Rooms are filled sequentially, a week apart, as pigs flow from his 2,000-sow farm. The ratio of barrows to gilts is about equal so accurate counts are not usually made. The producer weighs the pigs as a group only.

He does not have time or labor to weigh each pig or pen separately, or to weigh the feed that goes into each feeder separately.

The producer surmises that if he runs an antimicrobial growth promoter (AGP) in the west end of the barn and none (control) in the east end, he should know if his investment in the product is paying off.

In the trial, control pigs developed diarrhea about a month after placement. This has happened before, and an antimicrobial was added to the drinking water. He also injected several of the worst pigs with an antimicrobial.

About halfway through the test, the pigs began coughing. After consulting with the herd veterinarian, it was decided to use a therapeutic level of a feed antimicrobial on both ends of the barn. The coughing subsided, and after a week of feed medication for the respiratory problem, the feed trial was resumed.

When the pigs finally reached market weight, the producer was careful to keep each side separate so that the kill sheets would reflect the weight and carcass data for each half of the building. He also allowed the feed bins to empty just as the last pig was marketed on both sides.

The pigs in the treatment group outperformed the pigs in the control group by 0.02 lb. in average daily gain and by 0.04 lb. of feed per pound of gain in feed efficiency.

The producer concluded there was an advantage to the treatment. Was he right?


This producer's on-farm trial is an example of “junk science” because of its flawed design.

Statistics help us sort real differences in results from “noise,” the normal variation in a trial. We must know how much normal variation exists between groups before we can conclude that an observed difference was actually created by the item tested.

This trial design lacked the detail needed to be called good science. The producer needed to track enough groups of pigs to establish “normal” variation. In statistical terms, the standard deviation and the estimated effect of the item tested are needed to determine whether the number of replications is adequate to ensure that the differences observed are statistically significant. Sometimes, up to 16 replications are needed to establish significance.
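The link between standard deviation, expected effect, and required replications can be sketched with a standard sample-size formula for comparing two means. The numbers below (a 0.02 lb. daily-gain effect and a 0.02 lb. between-pen standard deviation) are illustrative assumptions, not data from this trial.

```python
import math

def replications_needed(sd, effect, z_alpha=1.96, z_beta=0.84):
    """Replications per treatment to detect `effect` given between-pen
    standard deviation `sd`, using the common two-sample approximation
    n = 2 * (z_alpha + z_beta)^2 * (sd / effect)^2
    (z_alpha = 1.96 for a 5% two-sided test, z_beta = 0.84 for 80% power)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2)

# Hypothetical values: detect a 0.02 lb. ADG difference when pens
# normally vary by 0.02 lb. in ADG.
n = replications_needed(sd=0.02, effect=0.02)
print(n)  # 16 replications per treatment
```

Note how quickly the requirement grows: if pen-to-pen variation is larger than the effect you hope to detect, the required replications rise with the square of that ratio, which is why a single pair of 800-head rooms cannot establish significance.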

In short, do not let “junk science” cloud your decision making. Seek out sound advice about proper trial design, and then elicit input from someone who understands statistics before you waste resources on “junk science.”