The Truth About Vitamin Drips and IV Therapy


I would check out Dr. Gaby. Here is a bit about his views.
Blessings

Alan R. Gaby, MD, received his undergraduate degree from Yale University, his master of science in biochemistry from Emory University, and his doctor of medicine from the University of Maryland. He is past president of the American Holistic Medical Association and gave expert testimony to the White House Commission on Complementary and Alternative Medicine on the cost-effectiveness of nutritional supplements. He is the author of numerous books and scientific papers in the field of nutritional medicine. He was professor of nutrition and a member of the clinical faculty at Bastyr University in Kenmore, Washington, from 1995 to 2002. In 2011, he completed a 30-year project, the textbook Nutritional Medicine,1 and has recently completed the updated second edition of the book. For more information, please visit https://doctorgaby.com/.

Integrative Medicine: A Clinician’s Journal (IMCJ): The body of literature on nutritional research can include some very contradictory results. Has this discrepancy been addressed by any formal research?

Dr Gaby: Well, the research is contradictory. Everybody knows that. The question is: What do you do about that? How do you come to a conclusion? Obviously a lot of times it’s an interim conclusion. One of the things that has been done is meta-analyses where research is pooled and researchers come out with a final number. Then, they come up with a conclusion. Some people use meta-analysis to say, “The totality of the research says so and so, and therefore the individual studies are less important.” However, many scientists—and I totally agree with this—have pointed out that meta-analyses can lead you in the wrong direction. They assume that the studies are homogeneous: that the designs are the same, that the patient populations are the same, and that the dosages are the same. They are not. So in my opinion, the proper way to analyze any body of research is to look at differences in design, population, dosage, et cetera, between studies. You have to read each study—it takes a lot of time, and it takes a lot of effort.

Sometimes, it’s possible to come up with explanations of why the results are conflicting. For example, some of the studies using fish oil to treat rheumatoid arthritis used olive oil as a placebo. Now, olive oil is not a placebo, because it has anti-inflammatory activity. So, the use of an active placebo weakened the result that one would have obtained with fish oil. You have to go back and look at each individual study and try to decide in your own mind which ones are the most reliable. From that, you can come up with a more reliable conclusion than you can by pooling the lousy studies with the good studies.

You also have to look at who funded the study, because the presence of conflicts of interest can alert you to the possibility of bias in the design of a study or in the interpretation of results. For example, about 10 or 15 years ago, a couple of studies examined St John’s wort as a treatment for depression. The results of 2 negative studies made the cover of one of the major news magazines: “St. John’s Wort Ineffective.” Many people were using the herb successfully, and there were over 20 double-blind, placebo-controlled trials showing that St John’s wort was effective for depression. Then one of these studies came out, and in the tiny 6-point type, the acknowledgements stated that it was funded by Pfizer. Pfizer sells Zoloft, which at the time was a $2-billion-a-year antidepressant drug.

Then if you continued to read the small print, it said the funding source played a role in the design of the study. So now you start wondering whether some conflict of interest existed in the way they designed it. A positive response is generally defined as a 50% improvement in a certain depression rating scale. Looking at the study, 15% of the people who received St John’s wort had a positive response. Only 5% of the people in the placebo group had a positive response, and that difference was statistically significant.
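
To make the arithmetic concrete, here is a minimal sketch of the kind of two-proportion significance test that sits behind a claim like “15% versus 5% was statistically significant.” The group sizes (100 per arm) are hypothetical, chosen only to illustrate the calculation, and are not taken from the actual trial.

```python
import math

# Hypothetical group sizes -- not the actual trial's numbers.
n_sjw, responders_sjw = 100, 15      # St John's wort arm: 15% response
n_pla, responders_pla = 100, 5       # placebo arm: 5% response

p1, p2 = responders_sjw / n_sjw, responders_pla / n_pla
p_pool = (responders_sjw + responders_pla) / (n_sjw + n_pla)

# Two-proportion z-test (normal approximation)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_sjw + 1 / n_pla))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")     # p < 0.05 for these example counts
```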

So if you’re coming down from Mars and you don’t know anything about bias, you would say, “Okay. St John’s wort showed a statistically significant advantage over placebo.” But the authors of the study, some of whom also had potential conflicts of interest—they received funding support from various drug companies—concluded that because the normal placebo response rate in a depression trial is 30% and the St John’s wort response rate was only 15% in this study, it was ineffective.

The researchers basically ignored their own data and came up with a conclusion contrary to what they actually found. Probably what happened is that they chose patients who had previously failed to respond to antidepressant medications and were therefore less likely to improve with any treatment. The conclusion that the St John’s wort response rate was lower than the normal placebo response rate was irrelevant, because the placebo response rate in this study was only 5%.

I’m explaining this in detail just to show that authors sometimes either misrepresent their findings or don’t understand them. Therefore, one has to really dig into this. We all have biases. I have biases, too. My belief, my bias, is that nutritional therapy is a safe, effective, low-cost alternative to a lot of what’s being done in conventional medicine. Despite my bias, I do my best to view the evidence objectively. In my book, Nutritional Medicine,1 if I don’t think something works, I say so.

IMCJ: So then do factors exist in nutritional research that require differences in design compared with standard trials?

Dr Gaby: Yes, some do require differences. The standard model in medicine is a pharmaceutical drug, so you compare a single pill to a placebo. That is relatively straightforward—1 variable—and it either produces an effect or it doesn’t. In nutrition studies, you can use that design if you are only investigating the effect of a single nutrient, but the problem is that nutrients work as a team in the body, and a combination of nutrients is usually more effective than individual nutrients. If you’re trying to get your best result, you have to do multiple interventions at the same time. To bring the design back to a single pill or a single regimen of pills, sometimes you can devise a formula that you might think is the most effective and you’d compare that formula to a placebo. In that respect, the design is fairly similar to that of pharmaceutical research.

That approach was used in a trial a couple years ago with heart patients. Investigators compared a 28-component, high-dose multivitamin and mineral supplement to a placebo and found that the composite endpoint of heart attacks and heart disease-related mortality occurred 37.5% less often in the multivitamin group than in the placebo group. This benefit was seen in the subset of patients who were not taking statin drugs. The details of the study are not so important, except to illustrate that you can use the standard design in some cases.
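
For readers unfamiliar with how a figure like “37.5% less often” is derived, the short sketch below shows the arithmetic of a relative risk reduction. The event rates are hypothetical, chosen only to make the calculation come out to 37.5%; they are not the trial’s actual data.

```python
# Hypothetical event rates, chosen only to illustrate the arithmetic.
placebo_rate      = 0.048   # composite endpoint rate in the placebo arm
multivitamin_rate = 0.030   # composite endpoint rate in the supplement arm

rrr = (placebo_rate - multivitamin_rate) / placebo_rate
print(f"relative risk reduction: {rrr:.1%}")   # 37.5% for these example rates
```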

On the other hand, nutritional therapy also involves diet, and diet is much harder to study because most of the time you can’t do a placebo control. Sometimes you can; for example, studying a gluten-free diet, you can give people muffins that contain or don’t contain gluten. In that sense, you can do a placebo-controlled trial, but when you’re looking at the Mediterranean diet or the DASH diet—the one for hypertension—they have so many different variables and the compliance rate varies from person to person, so it gets pretty muddy. Many times, it’s harder to prove a nutritional intervention is effective or ineffective than it is for a single drug or nutrient. So we have to recognize the limitations in what we’re studying.

IMCJ: In the course of this discussion, we’ve described a number of issues to look out for, but what other advice can you offer practitioners for evaluating nutrition research?

Dr Gaby: The first things I look at are who funded the study and where it was published. If there’s a potential conflict of interest, let’s say they’re looking at a probiotic and the study was funded by the company that sells the probiotic, that doesn’t mean the study is invalid, but it indicates one should study the paper in greater detail and with greater scrutiny. One should regard it with a little more skepticism because people can twist things around in order to make the evidence look better than it really is. I’ll go over a couple of the ways that can be done.

Where was it published? That question has become much more important in recent years because there are thousands of open-access journals in publication, where the person submitting the article pays to have the article published. It’s a per-page fee. People have many reasons to get their research published, but in open-access, pay-per-page journals, the peer-review process in some cases appears to be pretty sloppy. The financial model for these journals is that they make money when articles are published. If they don’t accept an article for publication, they don’t make money. Some research has looked at the peer-review process in these open-access journals. One can conclude from that research that if an article is published in one of these journals, you need to be more alert to the possibility that the study was weak or that there’s bias.

The next thing I look at is what type of study it is. Was it an observational study, was it a randomized, controlled trial, or was it a case report? Observational studies do not prove causation. Let’s say you find that people who do A are more likely to experience B. That does not prove that A causes B. This fact is pretty well known, but it’s often forgotten.

One of the common examples I see concerns people with lower levels of vitamin D, which is measured as 25-hydroxyvitamin D. People with lower 25-hydroxyvitamin D levels have a higher incidence of many diseases. Researchers and practitioners often conclude that if you give a vitamin D supplement to people with low 25-hydroxyvitamin D levels, you will prevent various diseases. However, that conclusion does not follow at all from an observational study. Observational studies prove associations, but they do not prove that intervening to change the variable in question—in this case, increasing the 25-hydroxyvitamin D level—would be useful.

One of the confounding factors is that 25-hydroxyvitamin D levels decline in response to inflammation. If you have a chronic inflammatory disease—and many diseases have an inflammatory component—your vitamin D level is going to be lower than if you don’t have such a disease. Therefore, the association between 25-hydroxyvitamin D and various diseases may simply mean that people with inflammation have more health problems than people without inflammation, and it may have nothing to do with vitamin D itself.
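
The confounding described here can be made concrete with a small simulation. In the hypothetical model below, inflammation both lowers the measured 25-hydroxyvitamin D level and raises disease risk, while vitamin D itself has no causal effect; the low-vitamin-D group still ends up with more disease, reproducing the misleading association. All of the numbers (prevalence, cutoff, risks) are invented for illustration.

```python
import random

random.seed(0)
n = 100_000
low_d_disease = low_d_total = 0
ok_d_disease = ok_d_total = 0

for _ in range(n):
    inflamed = random.random() < 0.3              # 30% have chronic inflammation
    # 25-hydroxyvitamin D (ng/mL): inflammation pushes the measured level down
    vit_d = random.gauss(32, 8) - (10 if inflamed else 0)
    # Disease risk depends ONLY on inflammation here, not on vitamin D
    disease = random.random() < (0.30 if inflamed else 0.10)
    if vit_d < 20:
        low_d_total += 1
        low_d_disease += disease
    else:
        ok_d_total += 1
        ok_d_disease += disease

print("disease rate, low 25(OH)D     :", round(low_d_disease / low_d_total, 3))
print("disease rate, adequate 25(OH)D:", round(ok_d_disease / ok_d_total, 3))
# The low-D group shows a markedly higher disease rate even though vitamin D
# has no causal role in this model -- the association is driven by inflammation.
```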

In order to find out if the vitamin D will prevent or reverse the condition, you have to do randomized controlled trials. You give half the people vitamin D and half the people a placebo and you see what happens. Unfortunately, the vast majority of vitamin D intervention trials (randomized controlled trials) show very little benefit, if any, so the observational studies are probably confounded by other factors and do not demonstrate that vitamin D is useful. Again, you have to understand the difference between observational studies and randomized controlled trials.

Now, when you get into randomized controlled trials, you have to know a little bit about statistics. There’s something called the beta error. An example of a possible beta error was in a study where people received vitamin C or placebo for a number of months. The people in the vitamin C group had 22% fewer days ill than the people who got the placebo. That’s pretty good—a 22% reduction. But then when you do the statistics, you find that it was not statistically significant. Very frequently, the authors of a study conclude that since it was not statistically significant, it didn’t work. That’s not the correct conclusion. We see that all the time, though: “Not statistically significant, therefore ineffective.”

What people don’t understand is that the failure to demonstrate that something was statistically significant is not the same as demonstrating that it was ineffective. The correct conclusion from that study would be: There was a 22% reduction in the number of days ill, but since it was not statistically significant we are less than 95% certain that that improvement was real. In other words, there was more than a 5% probability that the 22% improvement was due to chance. What you need to do is use statistics to test additional hypotheses. For example, what is the probability that there is less than a 10% improvement? What is the probability of less than a 20% improvement? There are ways to make those calculations, but the main point here is that failure to find statistical significance is not the same as proving that something didn’t work.
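
Here is a minimal sketch of the point about beta error, using made-up summary statistics (the group size, means, and standard deviations are hypothetical, not the vitamin C trial’s actual numbers). The observed reduction is about 22%, the p-value is well above 0.05, and yet the confidence interval is wide enough to include both “no effect” and a large benefit, which is why “not statistically significant” does not mean “ineffective.”

```python
import math

# Hypothetical summary statistics: mean days ill, SD, and size per arm.
n = 30
placebo_mean, placebo_sd = 9.0, 6.0
vitc_mean,    vitc_sd    = 7.0, 6.0      # ~22% fewer days ill

diff = placebo_mean - vitc_mean
se   = math.sqrt(placebo_sd**2 / n + vitc_sd**2 / n)
z    = diff / se
p    = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal p-value

lo, hi = diff - 1.96 * se, diff + 1.96 * se      # 95% CI for the difference
print(f"observed reduction: {diff/placebo_mean:.0%}  p = {p:.2f}")
print(f"95% CI for the reduction: {lo/placebo_mean:+.0%} to {hi/placebo_mean:+.0%}")
# p > 0.05, so the result is "not statistically significant" -- but the interval
# runs from roughly no effect (or slight harm) up to a large benefit, so the data
# do not show the treatment was ineffective; they are simply inconclusive.
```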

If I see a study that shows a 20% to 30% improvement and the treatment is safe and low-cost and a reasonable possibility exists that the 20% to 30% improvement is real, I very well might try that treatment with my patients, even though it’s not statistically significant.

There’s also something called regression to the mean. I’ll give you an example. Let’s say you give a treatment—it doesn’t matter what the treatment is—and for people with high cholesterol, the cholesterol comes down. For people with low cholesterol, the cholesterol comes up. So a researcher might conclude, “This treatment is an adaptogen because when the level is high, it comes down and when the level is low, it comes up.”
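
Regression to the mean is easy to demonstrate with a simulation. In the hypothetical model below, each person’s cholesterol is measured twice with ordinary measurement and biological noise, and no treatment is given at all; the group that was high at baseline comes down at follow-up and the group that was low goes up, which is exactly the pattern that can be mistaken for an adaptogenic effect. The numbers are invented for illustration.

```python
import random

random.seed(1)
n = 50_000
high_first = high_second = n_high = 0
low_first = low_second = n_low = 0

for _ in range(n):
    true_chol = random.gauss(200, 25)          # each person's "true" level (mg/dL)
    first = true_chol + random.gauss(0, 20)    # baseline measurement, with noise
    second = true_chol + random.gauss(0, 20)   # follow-up measurement, NO treatment
    if first > 240:                            # flagged "high" at baseline
        high_first += first
        high_second += second
        n_high += 1
    elif first < 160:                          # flagged "low" at baseline
        low_first += first
        low_second += second
        n_low += 1

print(f"high group: baseline {high_first/n_high:.0f} -> follow-up {high_second/n_high:.0f}")
print(f"low  group: baseline {low_first/n_low:.0f} -> follow-up {low_second/n_low:.0f}")
# With no treatment at all, the high group drifts down and the low group drifts
# up at follow-up: pure regression to the mean.
```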


