Red meat doesn’t kill you (and a problem with nutrition science)

This week is World Iron Week. I’ve talked about iron deficiency on my blog before, so you will be aware of the risk factors and risks associated with it. I know, though, that there are those among us who are wary of consuming one of the best sources of iron in the diet: red meat. Because, well, you know – meat kills. The most recent of these news headlines came from this study published in June of this year.

It is challenging being an advocate for eating red meat, and (in a lot of cases) encouraging clients (particularly young and not-so-young women) to eat MORE red meat, in a climate of meat avoidance. It isn’t a popular message, particularly with the bad press that red meat consumption (and production) has received over the last few years. So I thought it timely to remind you of some of the pitfalls associated with nutritional research, and why it is problematic to rely on population-based research for our nutrition wisdom. This has been well covered by people much smarter than I am (read here) and relates to the above study looking at red meat and all-cause mortality.

The Nurses’ Health Study is an observational study: it didn’t set out to test the effects of a particular dietary intervention, it merely reported on what the population was doing. The food data were collected using food frequency questionnaires (FFQs), a memory-based method, to determine the intake of foods spanning a four-year period. Now, if you’re reading this, you likely think more about food and what you eat than the average person. How difficult, then, would you find it answering questions about your food intake four weeks ago, let alone four years ago? Imagine then being someone who typically doesn’t give it a second thought. A separate analysis of the data collected in this study revealed that 67% of women and 59% of men participating reported a caloric intake so low that a frail 70-year-old woman couldn’t live on it, much less people in the prime of their lives. It has been described as ‘physiologically implausible’. Further, the reported caloric intake of people categorised as obese or overweight was described as ‘incompatible with life’. As every nutrient we eat comes attached to calories, this makes all of the nutrient information unreliable.

Secondly, any of the findings are, by virtue of coming from an observational study, correlational in nature and not cause and effect. Given a large enough data set, enough dietary variables and a number of statistical methods at your disposal, you are likely to see significant correlations if you go looking for them. An example I saw on Chris Kresser’s blog was a study reporting that eating 12 hazelnuts a day increased lifespan by 12 years, or that two slices of bacon shortened lifespan by 10 years. Yet all the headlines reporting on the study we are talking about here, and indeed the language used by the study authors, suggest causality – something that cannot be determined by observation alone. Quite possibly one of the only robust findings from correlational research is that on smoking and lung cancer, where a roughly 2,000 per cent (20-fold) increase in the risk of a lung cancer diagnosis was found in those who smoked. The increased risk in the study regarding red meat consumption? 10%. In most fields of science, it takes an increase in risk of at least 200% to garner interest. In nutrition, most relative risk changes are to the tune of 10–50% in either a positive or negative direction – almost not worth writing about. Remember, too, that this is relative risk. Absolute risk (when these numbers are reported) looks quite a bit different (see the infographic here for a great description).
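
To make the relative-versus-absolute distinction concrete, here is a minimal sketch in Python. The 10% relative increase is the figure discussed above; the 2% baseline risk is a made-up illustrative number, not a value from the study.

```python
# Illustrative only: the baseline risk is a hypothetical example, not a figure from the study.
def absolute_from_relative(baseline_risk: float, relative_increase: float) -> tuple:
    """Convert a relative risk increase into absolute terms."""
    new_risk = baseline_risk * (1 + relative_increase)
    absolute_increase = new_risk - baseline_risk
    return new_risk, absolute_increase

baseline = 0.02  # assume a 2% baseline risk (hypothetical)
new_risk, abs_increase = absolute_from_relative(baseline, 0.10)  # the reported ~10% relative increase

print(f"Baseline risk:      {baseline:.1%}")
print(f"Risk with exposure: {new_risk:.1%}")
print(f"Absolute increase:  {abs_increase:.2%}")
# A '10% increased risk' here means going from 2.0% to 2.2% -
# a 0.2 percentage-point change in absolute terms.
```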

Thirdly, the prevailing message in the last 30 years is that red meat is bad for us and we should be minimising our intake of it, something that health conscious people will make a concerted effort to do. Therefore (as the research shows) those people who tend to consume the most red meat aren’t generally those that follow public health messages. They are more likely to smoke more, drink more, do less physical activity and eat less fruit and vegetables – all things which place an individual at greater health risk. While the research statistician ‘adjusts’ for these factors by way of an algorithm, it is well acknowledged that no amount of statistics will account for these unhealthy lifestyle behaviours. This is the inverse (if you like) of a ‘healthy user bias’.

And what about clinical trials looking at the harmful effects of meat? We must put them into context. A hamburger patty served with cheese and aioli between two slabs of bread, along with a large side of fries and a soft drink, is clearly quite different to a medium-rare steak with garlic butter and a side of broccolini. The overall nutrient quality and the context of the diet matter whenever we are determining the healthfulness (or otherwise) of a food choice. Dietary patterns matter. In line with that, there is no good evidence to suggest that meat causes inflammation, and one trial in particular (out of Australia) compared the effects of a 100 g serving of wild game meat (kangaroo) against standard feedlot beef on inflammatory markers, finding no increase in inflammation after eating the kangaroo meat. The authors suggested that the fatty acid profile of the beef (higher in pro-inflammatory omega-6 fatty acids) compared to the wild game meat was the potential mechanism, but noted that more research was required to establish this. What would be great is to see whether differences exist in a clinical trial of a whole-food diet that incorporates red meat, rather than making no differentiation between sources of red meat. Grass-fed meat (the majority of our meat supply in New Zealand) is higher in omega-3 fatty acids and antioxidants as a result of the way the animals are raised – both of which reduce inflammation.

Finally, the trimethylamine N-oxide (TMAO) story. An increase in this metabolite (generated when gut bacteria metabolise choline, carnitine and betaine) is associated with cardiovascular disease, and there is a suggestion that red meat intake is responsible for higher levels of TMAO. However, it needs to be pointed out that fish (consistently found to be a feature of healthy diets, however you look at it) raises TMAO levels well above what is found with meat. In addition, TMAO production begins with the gut bacteria, and we know how important the health of your microbiome is for overall health. Therefore, if someone has sub-optimal gut health due in part to a poor diet, they are likely to be at increased risk of health concerns.

There is a lot to unpack here, and this isn’t an attempt to convince anyone to eat meat if they don’t want to. It is more a reminder that nutrition science is a challenging field. Regardless of the assertions made by headlines, health professionals (including me!) or your next-door neighbour, studying what people eat is rife with problems and we need to take everything with a grain of salt. Which, as you probably know, also will not (in isolation) kill you.

Iron deficiency and the athlete

A friend of mine alerted me to this super useful review on iron deficiency in athletes; it will be worth a read for anyone who has struggled with low iron status, athlete or not. I’ve summarised the main points and recommendations below.

How prevalent is iron deficiency in the athletic population?

Although iron deficiency is most common in female athletes (~15–35% of female athlete cohorts are deficient), approximately 5–11% of male athlete cohorts also present with this issue. The higher prevalence in women is, for fairly obvious reasons, potentially a result of the increased iron demand created by menstrual losses. However, low energy intake, vegetarian diets and endurance exercise have also been proposed as factors affecting both male and female athletes’ iron stores.

How would you know? 

The symptoms of compromised iron status include lethargy, fatigue and negative mood states with more severe cases (i.e. iron deficiency anaemia; IDA) also compromising work capacity. Such symptoms may impact the athlete’s ability to train appropriately and to produce competitive performances.

Getting blood tests – what to look for?

Stage 1 – iron deficiency (ID): ferritin < 35 μg/L, Hb > 115 g/L, transferrin saturation > 16%

Stage 2 – iron-deficient non-anaemia (IDNA): ferritin < 20 μg/L, Hb > 115 g/L, transferrin saturation < 16%

Stage 3 – iron-deficient anaemia (IDA): haemoglobin (Hb) production falls, resulting in anaemia (ferritin < 12 μg/L, Hb < 115 g/L, transferrin saturation < 16%). Additionally, serum soluble transferrin receptor (sTfR) levels above 2.5 mg/L could be considered a reasonable threshold for identifying IDA.
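
If it helps to see how those cut-offs fit together, here is a minimal Python sketch that maps a set of blood results onto the three stages described above. The thresholds are simply the ones quoted in the staging; the function name and example values are made up, and this is an illustration, not a diagnostic tool.

```python
def iron_status(ferritin_ug_l: float, hb_g_l: float, tsat_pct: float) -> str:
    """Rough mapping of blood results onto the staging quoted above.
    Illustrative only - not a diagnostic tool."""
    if ferritin_ug_l < 12 and hb_g_l < 115 and tsat_pct < 16:
        return "Stage 3: iron-deficient anaemia (IDA)"
    if ferritin_ug_l < 20 and tsat_pct < 16:
        return "Stage 2: iron-deficient non-anaemia (IDNA)"
    if ferritin_ug_l < 35 and tsat_pct > 16:
        return "Stage 1: iron deficiency (ID)"
    return "No deficiency indicated by these markers"

# Example (hypothetical results): ferritin 18 ug/L, Hb 128 g/L, transferrin saturation 14%
print(iron_status(ferritin_ug_l=18, hb_g_l=128, tsat_pct=14))  # -> Stage 2 (IDNA)
```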

How does training affect what you see in blood test results? 

Well, it can. Using ferritin alone as a marker of iron status may not be a good idea because it is an acute-phase protein, meaning ferritin levels rise during periods of inflammation and after intensive exercise. Furthermore, measures of Hb are affected by shifts in plasma volume which, when unaccounted for, can present as ‘pseudo-anaemia’ or ‘sports anaemia’ – something that does not appear to have any negative effect on performance. Considering that training and/or heat adaptations can induce hypervolaemia, these factors should be taken into account to avoid diagnosing a deficiency that isn’t there.
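
To see why plasma volume shifts matter, here is a small worked sketch in Python with hypothetical numbers: total haemoglobin mass stays the same, but expanding the plasma volume dilutes the concentration that a blood test reports.

```python
# Hypothetical numbers for illustration: total Hb mass is unchanged,
# but a training/heat-induced expansion of plasma volume dilutes the measured concentration.
hb_mass_g = 800.0        # total circulating Hb mass (g)
red_cell_volume_l = 2.2  # litres
plasma_volume_l = 3.0    # litres, before adaptation

def hb_concentration(hb_mass_g, red_cell_volume_l, plasma_volume_l):
    """Hb concentration in g/L of whole blood."""
    return hb_mass_g / (red_cell_volume_l + plasma_volume_l)

before = hb_concentration(hb_mass_g, red_cell_volume_l, plasma_volume_l)
after = hb_concentration(hb_mass_g, red_cell_volume_l, plasma_volume_l * 1.12)  # ~12% hypervolaemia

print(f"Measured Hb before adaptation: {before:.0f} g/L")
print(f"Measured Hb after adaptation:  {after:.0f} g/L")
# The apparent 'drop' is dilution (pseudo-anaemia), not a loss of haemoglobin.
```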

What causes iron deficiency?

  • haemolysis, exacerbated by ground impact forces (e.g. foot strike) and muscle contraction (e.g. eccentric, muscle-damaging exercise)
  • haematuria
  • gastro-intestinal (GI) bleeding
  • sweating
  • heavy menstrual losses
  • inflammatory/iron-regulatory hormone (hepcidin) responses. Exercise transiently increases levels of the master iron-regulatory hormone, hepcidin (for 3–6 h post-exercise), likely as a result of the well-documented exercise-induced inflammatory response and the associated rise in the cytokine interleukin-6 (IL-6). Increases in hepcidin activity reduce iron absorption from the gut and iron recycling from scavenging macrophages. As such, there is likely a transient window of altered iron metabolism after exercise that nutrition strategies could exploit (i.e. strategic feeding times to avoid the window of decreased iron absorption). This is tricky given how often athletes train, combined with the fact that hepcidin is lower in the morning and rises during the day – and if you eat cereal in the morning, that will impair absorption anyway. There appears to be a window of about an hour post-training in which you can maximise iron absorption, before hepcidin rises too much; this increase happens in all athletes, IDA or not. So the best time for supplementation could be ~60 minutes post-exercise in the morning, after breakfast, alongside absorption enhancers (vitamin C, acetic acid – e.g. apple cider vinegar) and without coffee, tea or too much calcium, all of which compete with or inhibit iron uptake into the cells (see the timing sketch after this list).
  • Both testosterone and oestrogen can influence iron metabolism via their suppressive effects on the hepcidin–ferroportin axis. For athletes, it is possible that high training loads may alter an individual’s hormonal profile by suppressing gonadotropin-releasing hormone (GnRH), the upstream driver of sex hormone production. In women, this can lead to suppressed luteinising hormone (LH), follicle-stimulating hormone (FSH; to a lesser extent) and, consequently, oestrogen. Less oestrogen means less suppression of hepcidin, so hepcidin levels stay higher and iron absorption suffers.
  • Consequently, chronic suppression of testosterone may be linked to higher hepcidin levels in male athletes, potentially impairing iron regulation and thereby helping to explain the incidence of ID in this sex, especially as endurance athletes often have lower testosterone. In women, although testosterone levels are lower, its importance in iron metabolism should not be ignored: for example, higher testosterone levels have been associated with a lower risk of anaemia in both healthy older women (n = 509) and men (n = 396).
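
To illustrate the timing idea from the hepcidin bullet above, here is a minimal Python sketch that, for a given training finish time, flags the rough ~60-minute post-exercise window and the 3–6 hour stretch of elevated hepcidin described above. The window lengths come straight from the summary; the function name and the example session time are made up for illustration.

```python
from datetime import datetime, timedelta

def iron_timing_windows(session_end: datetime):
    """Rough post-exercise windows based on the summary above:
    ~60 min before hepcidin peaks, then ~3-6 h of likely reduced absorption."""
    eat_iron_by = session_end + timedelta(minutes=60)
    reduced_absorption_until = session_end + timedelta(hours=6)
    return {
        "take iron / iron-rich meal before": eat_iron_by,
        "expect reduced absorption until roughly": reduced_absorption_until,
    }

# Example: a morning session finishing at 7:30 am (hypothetical)
for label, t in iron_timing_windows(datetime(2021, 10, 4, 7, 30)).items():
    print(f"{label}: {t:%H:%M}")
# Pair the early window with enhancers (vitamin C) and hold off on
# coffee, tea and large calcium doses around that dose.
```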

How effective is supplementation?

Supplementation doesn’t give a performance boost if you are not deficient. However, athletes’ requirements may be greater than standard recommendations: studies have shown 30–40% decreases in ferritin over a harder training block, so the typical 13–18 mg/day recommendation is probably not enough. Some researchers think that the adaptations typically associated with endurance training may only be maximised in the presence of adequate iron stores.

Does diet matter?

Outside of the obvious limitation that vegetarian or vegan diets present (less available iron, which may require supplementation), lower carbohydrate (CHO) availability (i.e. depleted glycogen stores) may increase inflammatory markers such as IL-6 and hepcidin, which can impair absorption. This has been found transiently, but may not be a problem over the long term in the presence of higher muscle glycogen and a higher protein intake. More research is needed in the area of CHO availability and iron status.

How to ensure adequate iron status? 

Strategy: check at-risk athletes every three months, taking bloods in a fasted, rested state with no hard workout in the 48 hours prior (and ideally no injury present). Also, monitor diet via an app to check overall intake (red meat, organ meat and mussels are the best dietary sources; we are able to absorb around 20% of the available iron from these, compared with around 5% from vegetarian sources).
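
As a rough illustration of what those absorption figures mean in practice, here is a small Python sketch comparing the iron actually absorbed from a haem-rich source versus a vegetarian source, using the ~20% and ~5% figures above. The serve sizes and iron contents are illustrative assumptions, not values from the review.

```python
# Illustrative comparison using the ~20% (haem-rich) vs ~5% (vegetarian) absorption
# figures quoted above. Iron contents per serve are rough assumptions for the example.
foods = [
    # (name, iron per serve in mg, assumed absorption fraction)
    ("beef steak (150 g)",     4.0, 0.20),
    ("cooked lentils (1 cup)", 6.6, 0.05),
]

for name, iron_mg, absorption in foods:
    absorbed = iron_mg * absorption
    print(f"{name}: {iron_mg} mg iron, ~{absorbed:.2f} mg absorbed")
# Despite the lentils containing more total iron per serve in this example,
# the haem source delivers more absorbed iron.
```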

Take supplements – ones that are easy on the gut; the commonly prescribed Ferrograd is not. Thorne iron bisglycinate, Spatone and carbonyl iron are three that are better tolerated. Generally, the overall response to oral iron supplementation in athlete cohorts appears positive (40–80% increases in ferritin) when supplements are consumed over an 8- to 12-week time frame. In addition, alternate-day supplementation may increase efficacy by improving the absorption of iron from a given dose, which, over time, results in a greater cumulative response (Stoffel et al. 2017). Such regimens are best combined with iron absorption enhancers such as vitamin C.
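
If it helps to picture what an alternate-day regimen looks like over the 8- to 12-week time frame mentioned above, here is a trivial Python sketch that lays out the dosing days; the start date and the 8-week block length are arbitrary examples.

```python
from datetime import date, timedelta

def alternate_day_schedule(start: date, weeks: int = 8):
    """List the dosing days for an alternate-day regimen (every second day)."""
    return [start + timedelta(days=i) for i in range(0, weeks * 7, 2)]

# Example: an 8-week alternate-day block starting on an arbitrary Monday
schedule = alternate_day_schedule(date(2021, 10, 4), weeks=8)
print(f"{len(schedule)} supplement days over 8 weeks")
print("First week:", [d.strftime('%a %d %b') for d in schedule[:4]])
```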

With this in mind, IV infusion may be a better option when levels are very low and the situation requires a rapid improvement in iron stores, or when gut issues appear to render oral supplementation ineffective.

Also worth considering is the concept of maximising iron stores through supplementation during periods of lower activity (e.g. the off-season). Inevitably, as training load and iron demands increase during the competitive season, higher iron reserves may limit the negative influence that exercise training has on the bioavailability of iron.

[Infographic: 10 signs of iron deficiency – red blood cell image via yourfamily.co.za]