When I was a diabetes nurse, the first question I would invariably get from a new client was, “What can I eat and what can’t I eat?” It was also the question I most hated because it was so complicated. We are both blessed and cursed that supermarkets carry forty to fifty thousand different things we might eat, and none of them will either guarantee health or kill us immediately. What we eat only makes one of those outcomes more likely over time. So you can see that a list of what to eat or not to eat could get very long.
When we turn to food research to make sense of the available information, we find a list of roughly 150 nutritional components that are tracked in nutrition research, and in many cases this raises new questions rather than answers. In an attempt to increase the reliability of nutrition research, the well-known complexity scientist Albert-László Barabási, in a recent Nature article, uses machine learning to mine various resources and identify an additional 49,000 bioactive compounds with potential effects on human health, which he calls “nutritional dark matter.”
While admitting the difficulty of including these additional compounds in research, Dr. Barabási is hopeful that this big-data approach will give us a better view of the entire nutritional universe, or “Foodome,” and lead us to more reliable answers. He cites garlic as an example: it contains beneficial biocomponents, not among the 150 currently tracked, that counter the harmful effects of red meat. Dr. Barabási does not speculate on how this information might be used, whether by recommending garlic consumption or by developing a new nutraceutical.
For the sake of argument, let us say that this research is complete and we have a better picture of the relationship between what we eat and our health. I believe that data alone will still not answer the question of what an individual person should eat. We already have a great example of the use and limitations of big data in weather prediction. Continuously improved weather models process massive numbers of data points to produce ever better predictions. But we know from complexity science that there are absolute limits to the predictive capacity of any weather model; in hurricane forecasting, for instance, we are better at knowing where a hurricane won’t go than where it will.
Forecasting the impact of food on the body is even more complicated than predicting the weather, because a good forecast depends not just on a food’s content but on where it was grown, when it was harvested, how it was stored, and, even more importantly, how it was processed and prepared for consumption. And that is just the external environment. We also have an internal environment, the microbiome, the bacteria that help us digest our food, which has a huge impact on the final product of good or poor health. Big data can give us system-level answers about what to eat, but it cannot give us the individual answers we seek with the same predictive reliability. System-level answers about food, though, are quite reliable: the surest way to prevent heart disease caused by red meat is still not to eat it.
Interested in delving into the data? Check out the US government’s FoodData Central site – https://fdc.nal.usda.gov/