Mining useful information from your production records is an ongoing challenge — but opportunities also abound.
Tracking herd performance over multiple quarters and years helps identify performance trends — both good and not so good.
Carried a step further, if the data is submitted to a service provider, comparisons can be made to other herds that collect and report similar information. The key to gaining the most worthwhile information from these services is prioritizing the traits and data that can be effectively managed to maximize herd performance.
John Deen, DVM, and associate professor of swine production systems at the University of Minnesota, addressed data comparison issues in a seminar presented at World Pork Expo in June. He focused on prioritizing benchmarking data in an effort to maximize reproductive performance.
Deen is most familiar with the PigChamp database and benchmarking program; however, his thoughts on data mining and data comparisons can be applied to most recordkeeping system reports.
Deen began with a review of two often-cited terms that sometimes create confusion — the “mean” and “median.”
The “mean” is the average of a data set.
The “median” is the middle value when the numbers in the data set are ranked from lowest to highest.
If the numbers are plotted in a normal, truly bell-shaped curve, the mean and the median are exactly the same, explains Deen.
“But, when we see the mean (average) is higher than the median, it means that there are some large numbers at one side (of the curve's highpoint) that move the mean up or down, while the middle numbers stay in the middle.”
Certain numbers, such as farrowing rate, tend to show the median higher than the mean in almost all cases. This just means that there is a “skew” — a sort of long tail on one side of the bell-shaped distribution.
“Frankly, when the median and the mean don't line up, the distribution often shows there are some farms that really perform surprisingly badly,” he says. “It may be because of an outbreak of porcine reproductive and respiratory syndrome (PRRS) or something that pulls the curve's distribution out.”
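The mean/median gap Deen describes is easy to see with a small calculation. A minimal sketch, using hypothetical, illustrative farrowing rates (not data from the seminar): most herds cluster in the low 80s, while two poorly performing herds form the long low tail, pulling the mean below the median just as he describes.

```python
from statistics import mean, median

# Hypothetical farrowing rates (%) for 10 herds; values are illustrative only.
# Most herds cluster around 80%, but two hard-hit herds form a long low tail.
farrowing_rates = [84, 83, 82, 82, 81, 80, 79, 78, 45, 38]

print(f"mean:   {mean(farrowing_rates):.1f}")    # 73.2 — pulled down by the two outliers
print(f"median: {median(farrowing_rates):.1f}")  # 80.5 — stays with the middle herds
```

The median sitting above the mean is exactly the skew Deen says farrowing rate shows in almost all cases.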
There is a natural tendency to look at the mean (average), because people strive to be above average. Deen acknowledges there is some value in this comparison, but, he warns, “averages lie.”
What is more important is the spread of performance still present in the industry. The spread identifies the challenges and opportunities available to owners and managers of production systems.
“We have a changing industry — especially in reproduction,” he explains. “Reproductive capability of our herd is increasing, and that's a part of the reason we are seeing more and more pork on shelves and available for export — even with lower and lower numbers of sows.”
“Are we converging or diverging?” Deen asks. “Are we becoming more similar in the way we produce pigs, especially reproductively, or are we changing?”
Artificial insemination and managing sow feed intake are technologies more commonly used today (convergence).
Yet, the industry is also responding to consumer demands, such as outside farrowing, so-called “natural” production, or meeting animal well-being specifications for certain markets.
“When we see divergence, it means we have to compare farms in different ways,” he continues. “The opportunity to benchmark those operations is to do so among cohorts — operations using similar technologies.”
Another important aspect of benchmarking is “capability analysis.” Simply put, how much different can an operation be, compared to the general industry, before you lose the ability to compete? An example would be pasture farrowing vs. farrowing crates in confinement systems.
The answer to that question may come from “system analysis” using benchmarking, which tests whether the business or system is capable of delivering what the owner wants — profitability, stable productivity, etc. — and whether that capability delivers what is needed to compete and survive in the industry.
At the outset, it is important to establish what you expect from the benchmarking process. Deen draws on an old English saying to make his point: “You don't fatten a pig by weighing it.”
Logically, measuring something does not improve its performance. “It simply drives a recognition of the opportunities to change or amplify the strength you have,” he notes.
Therefore, it is important to recognize your motivation to benchmark:
Boasting rights — The ability to brag about the trophy for the highest pigs/sow/year or lowest feed efficiency. “This is used less and less, but it is useful in correcting behavior,” he notes.
Risk management — Useful in understanding the likelihood of future performance based on past performance.
Capability measures — This helps clarify how capable the farm is in keeping up the expected performance.
Strategic improvement — This motivation helps identify shortfalls as a starting point for strategic improvements.
Financial planning — Benchmarking can help fine-tune this process.
The value of the benchmarking process is limited by the quality of information placed in the database. Standardization is important. Enter gilts into the breeding herd only when they are mated, for example. Standardize starting weights to get meaningful days-to-market and feed efficiency data.
To make good choices in benchmarking, the first step is identifying key parameters that are drivers on your farm.
“There is a complexity in there that is increasing all of the time,” Deen says. “We are seeing some demands within the swine welfare assurance programs, such as mortality rates. We have to identify which of these demands are useful.”
Reflecting on his early years in veterinary medicine, he relates: “When I started out, there was a lot of discussion on (swine) health and well-being. We used the term ‘herd health.’ If the pigs were healthy, you made money.
“Then we turned to ‘productivity.’ We said, ‘if they are productive, you make money.’ But then we saw guys who tracked pigs/sow/year go under.
“So, we decided ‘cost control’ was important. Cost/lb. of gain was the measure.
“Then more and more we were looking at ‘profit maximization’ — getting more, higher quality pigs out the door. Or, we used ‘utility maximization’ — lowering the variability of productivity, and lowering the risk of unexpected events.
“Finally, we talk about ‘robustness, sustainability, greatest good.’ These are the long-term outcomes,” he says.
Some benchmarks didn't turn out to be as useful as they were made out to be. High culling rates, for example, could result from a depopulation/repopulation effort. Low culling rates are unsustainable, he says.
“There are ‘good culls’ and there are ‘bad culls.’ It depends on age. Bad culls are early parity culls; good culls are 7th-parity culls. Culling isn't always a failure.”
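Deen's good-cull/bad-cull distinction can be sketched as a simple rule based on sow parity. A minimal illustration, noting that the exact parity cutoff below is an assumption, since he names only early-parity culls (bad) and 7th-parity culls (good):

```python
def classify_cull(parity: int, good_from: int = 6) -> str:
    """Label a cull by the sow's parity at removal.

    The good_from cutoff is a hypothetical threshold for illustration;
    Deen cites early-parity culls as "bad" and 7th-parity culls as "good".
    """
    return "good cull" if parity >= good_from else "bad cull"

# A batch of culled sows, identified by parity at removal (illustrative).
print([classify_cull(p) for p in [1, 2, 7, 8, 3]])
```

Splitting the culling rate this way keeps a depop/repop or a wave of late-parity removals from being misread as a management failure.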
Deen's driver of choice to benchmark is ‘farrowing rate,’ for two reasons: “Remember the difference between mean and median. We've got some low-performing herds, 30-40% farrowing rate or less, especially in the summer quarter. We see a number of herds dropping and we get this long tail at the lower end of the distribution curve.
“I consider seasonal infertility to be a bigger disease than PRRS because it drives the market price. That's not a popular statement — especially in veterinary schools — but when you lose pregnancies from the summer matings, those are high-value pigs that you're losing. You end up rebreeding those sows and pushing their pigs into November sales, and that's really expensive. Farrowing rate drives culling decisions and it drives choices as far as number of matings.”
Deen's preference for farrowing rate is driven by two points:
“You have to look more closely at capabilities to improve farrowing rate; and
“Looking backwards, farrowing rate costs you more than is often attributed to it. It costs in lost sows; it costs in lower value pigs out of the barns; it costs in over-farrowing in some weeks, under-farrowing in others.”
Plotting farrowing rate averages on a graph reveals a shotgun pattern. Some large, as well as some small, herds appear above and below the average in the total database. You may have problem weeks that need a closer look. “One way of moving the average is to bring up all of the tail-enders,” he says.
Matings made during the week of Christmas, other holidays, or on weekends can result in low farrowing rates, for example. Summer heat can cause low farrowing rate weeks, too, which may be reflected in the proportion of sows bred within seven days of weaning. More sows exceed the seven-day parameter in the summer, so they are rolled over to the next week, and those sows are less likely to perform when they are successfully bred.
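Spotting those tail-ender weeks in a herd's own records can be sketched as a simple screen: flag any breeding week whose farrowing rate falls well below the herd's average. The weekly figures and the 5-point threshold below are hypothetical, for illustration only.

```python
from statistics import mean

# Hypothetical farrowing rate (%) by week of mating; values are illustrative.
weekly_rates = {
    "week 24": 82,
    "week 25": 81,
    "week 26": 68,  # summer heat
    "week 27": 66,  # summer heat
    "week 28": 80,
    "week 52": 62,  # Christmas-week matings
}

avg = mean(weekly_rates.values())

# Flag "tail-ender" weeks more than 5 points below the herd's own average
# (an assumed threshold) for a closer look at breeding conditions that week.
problem_weeks = [wk for wk, rate in weekly_rates.items() if rate < avg - 5]
print(f"average: {avg:.1f}%  problem weeks: {problem_weeks}")
```

Bringing up the flagged weeks moves the whole-herd average more than polishing the already-good weeks would, which is Deen's point about the tail-enders.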
“The point I want to emphasize is — averages simply don't tell everything,” Deen reinforces. “And benchmarking is not perfect.”
Finally, it is critical to understand who is participating in the database you are comparing your herd performance to, including any recordkeeping biases that may exist. For example, producers who use computerized recordkeeping systems are more likely to have better productivity than those who do not.