It is my opinion that evidence-based practice (EBP) in the world of physical therapy, in all its forms, is flawed. That is not to say that there is no good research out there, just that I am always highly sceptical of it because of the inherent problems of research design in this area, and those shortcomings make evidence-based practice limited at best. I come from a science background: I have studied to MSc level, I interviewed for PhD positions, and I am very thankful that I did not go down that route. I say this not in the hope that it gives my opinion more credence; in fact I am overtly against people who start sentences with things like 'well, I have a...' or 'I'm a...' thinking that it gives their opinion more value, rather than allowing the reader to make up their own mind. I simply wish to make clear that I am well aware of how study design works, how to choose the most appropriate statistical tests, and so on. Nor is it my hope to convince you to agree with me; I simply hope to make you think, to evaluate the rationale behind my argument, and then to make your own mind up, realising of course that your view is no more or less valid than mine.
It is funny when people quote studies and say 'this person proved this', because that is not how research works. Researchers test a hypothesis to find whether it is supported or not, and that judgement is made against a pre-determined statistical analysis (though this is not always the case) and a threshold of 'significance'. If their results give a significance figure below that threshold, typically p < 0.05, they can argue, to a degree, that their hypothesis has been supported or refuted. But their results are only true until proven false, or false until proven true, which often happens as methodologies and thought processes change. And while every researcher hopefully makes the effort to minimise errors, they would never claim that their experiments are perfect.
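To make that p < 0.05 logic concrete, here is a minimal sketch of a significance test in Python. It uses a permutation test on made-up pain-score data; the group values, group sizes and the 0.05 threshold are all illustrative assumptions, not figures from any real study:

```python
import random
import statistics

def permutation_test(control, treatment, n_permutations=10_000, seed=42):
    """Estimate a p-value for the difference in group means by
    repeatedly re-shuffling the pooled scores between the two groups."""
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(control) + list(treatment)
    n_treat = len(treatment)
    # Count how often a random relabelling of patients produces a mean
    # difference at least as extreme as the one observed (two-tailed).
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_treat]) - statistics.mean(pooled[n_treat:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical improvement in pain scores (0-10 scale) for two groups.
control = [1.0, 0.5, 1.5, 0.0, 1.0, 0.5, 1.0, 1.5]
treatment = [2.5, 3.0, 2.0, 3.5, 2.0, 2.5, 3.0, 2.0]

p = permutation_test(control, treatment)
print(f"p = {p:.4f} -> {'significant' if p < 0.05 else 'not significant'} at 0.05")
```

Note what the result does and does not say: a small p-value only means the observed difference would be unlikely if the groups were interchangeable; it proves nothing about the mechanism, and it says nothing about whether the sample itself was valid, which is the point taken up next.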
I find the validity and application of the research hard to marry with clinical practice, in part because, in a world where we are dealing with subjective pain, it is incredibly difficult to create a study that adequately allows us to evaluate an intervention. I do not mean that the results have been interpreted incorrectly, just that I question the validity of the sample that the researchers have tested. For example, in clinic I may see 10 different people with low back pain (LBP), and it would not be unusual for me to find 10 different causes of that back pain even though they may present similarly. Unless the researchers go so far as to evaluate the cause of the LBP before assigning people to a control or intervention group, and make their study very, very specific (which would demand a lot of expertise, increase the man-hours required and, without wanting to bore you with statistical requirements, dramatically reduce the potential sample size, often to the point of being impractical), I would argue that the outcome of such a study is about as relevant as giving pancreatic cancer medication to someone with lung cancer and seeing if it helped: yes, you are treating the same broad condition, but the intervention is not relevant to the specific problem. (A poor and crude example, but I think you get my point.) If researchers were this thorough when selecting their sample I would potentially be less sceptical of their results; unfortunately, they are not.
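The recruitment arithmetic behind that 'impractical sample size' point can be sketched with some purely hypothetical numbers; the recruitment total, the 10 distinct causes, and the rule-of-thumb target of roughly 64 patients per arm (for detecting a medium effect, d = 0.5, at 80% power and alpha = 0.05) are all illustrative assumptions:

```python
# Hypothetical recruitment arithmetic: why cause-specific subgrouping
# quickly makes an RCT impractical. All numbers are illustrative.
recruited = 200        # total LBP patients a study manages to recruit
distinct_causes = 10   # distinct underlying causes found on assessment
arms = 2               # control arm vs intervention arm

# Splitting the recruits by cause and then by arm leaves tiny cells.
per_cell = recruited // (distinct_causes * arms)
print(f"patients per cause per arm: {per_cell}")

# A common rule of thumb asks for ~64 patients per arm to detect a
# medium effect (d = 0.5) at 80% power and alpha = 0.05.
needed_per_arm = 64
print(f"shortfall per cell: {needed_per_arm - per_cell}")
```

On these made-up numbers each cause-specific comparison is left with 10 patients per arm against a target of about 64, which is the sense in which properly specific studies become impractical without far larger recruitment.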
Studies into physical therapy interventions typically adopt a quantitative approach, which is inherently difficult when you are collecting subjective data. (If you are taking objective data then this 'objection' is obviously irrelevant, but for clinical application I struggle to think of many occasions where purely quantitative data is applicable.) Typically, people come into clinic because they are in pain or they want to move better, neither of which I can measure quantitatively; occasionally someone comes in with a severe restriction of a joint, but this is often a symptom of something else. I do have some interest in a few quantitative measures, such as the ranges of motion (ROMs) available at an affected area, but my approach is rarely to increase that ROM directly. If you can remove the problem that is causing the pain then, provided there is no structural issue, you will typically find that the ROM at the affected joint improves by itself.
For those who are EBP-based I have put together a very brief, and certainly not exhaustive, literature review. Please note the age of some of the articles I am citing: I am not citing such old publications to prop up my position; there simply appears to be a lack of newer reviews in that particular area of the research, which is a problem in itself. Hopefully the newer research has taken the lessons from these articles on board and simply forgotten to cite them, which is another problem that we will not go into.
It would seem fairly obvious to me that the research with the most relevance to a clinical setting is going to be qualitative in nature, a view supported by Jensen (1989) and Petty et al. (2012). However, there appears to be a distinct lack of studies adopting this design. Instead there appears to be an adoption of a standardised approach, the 'gold standard' randomised controlled trial (RCT) (Kumar et al., 2013), along with indications that the quality (Snell et al., 2013) and number (Kumar et al., 2013) of such studies are improving. There are inherent problems with the RCT in physical therapy research: it is very difficult to set up an adequately controlled control group; any form of blinding is problematic (remember that the double-blinded RCT was developed for clinical drug trials, so its adoption by any other type of research is inherently questionable); and the continuity of the intervention is hard to guarantee. Kumar et al. (2013) and Schreiber & Stern (2005) outline numerous problems with EBP in physical therapy and provide plenty of references for further reading.
If we can set aside for a moment the difficulties of study design and the problems that ensue, then for a study's results to have any relevance they must carry over into industry. Sim & Arnell (2003) highlighted that a number of researchers sacrificed the validity of their experiments in the hope of gaining greater reliability. This is problematic not only for carry-over into industry, but perhaps also highlights a form of bias on the part of researchers attempting to control their results; I can only hope that a similar, more recent review would not find the same thing now.
Another problem for me is the recent neglect of published articles advising researchers on how best to conduct and control their studies. This may be because there is nothing left to say on the subject and the earlier reviews outlined all the problems (though if that were the case, why do these problems persist, unless they are unsolvable?); because no one has any solutions; or perhaps because such articles are being submitted and simply not published. Manske & Lehecka (2012) and Page (2012) published articles pertaining to EBP and to study design in physical therapy research respectively. Page discusses at length the numerous study designs available to the researcher, including their potential problems; however, the article neglects the inherent nature of those problems and so offers no guidance on how future studies might address them. Similarly, Manske & Lehecka focus on helping the discerning clinician interpret the wealth of research; whilst admitting that some published research is 'less than ideal', they choose to ignore the problems causing this. Just like a therapist treating the symptoms rather than the cause, or the police arresting the drug user rather than the drug dealer, both articles have neglected the actual problem plaguing the research. This, in my mind, is either a shocking oversight or simply irresponsible.
While it may be appropriate, to a degree, to use certain studies as guides of what to do, or rather what not to do, there is nothing more powerful than a case study. Everyone is weirdly and wonderfully different and needs to be treated as such. Treating everyone who comes through your door with the same generic 'this is what the latest study tells us to do for LBP' approach is not providing the best service that you can. The latest research you are following is already one or two years old by publication (some will be older still), and any book you are quoting from has been years in the writing, editing and publishing. So keeping 'current' with research is, to my mind, an oxymoron. Instead, the way to keep current is to take courses that challenge the boundaries of what you may be willing to accept or what seems possible; it is those very courses whose approaches will be in the literature in a couple of years' time. Furthermore, if you are unwilling to accept something as valid until you see it in print, you are just allowing someone else to make your mind up for you.
I'm not suggesting that there aren't gimmicks out there to be wary of, just that there are some very powerful techniques that are as yet 'unproven', or rather not yet disproven, by research. I could find thousands of clients and patients of these types of therapies who have been amazed by their effects after all the stuff that had been 'proven' had not worked. And let's face it, if you can fix someone's pain, that person does not give two hoots whether there have been zero or ten research papers on it. Another argument often levelled against practitioners of newer or as yet 'unproven' techniques is that it is exploitative or irresponsible of the therapist, and that they can take advantage of the client. This is flawed logic: if a therapist has integrity and an ethical practice, then they are no more likely to take advantage of a client with one technique than with any other. Furthermore, when I attend courses I do not take anything as gospel; I try it and evaluate it, I put it into context, I judge it on its merits and then make my own mind up, and I would hope that everyone else is able to do the same.
Now, I am by no means trashing all research in this subject area; as I mentioned earlier, some has merit. All I wish to do is challenge the thought that EBP is providing the best service you can give to a client or patient. The newer methodologies that many are so quick to dismiss out of hand, even though they are not fully informed of how they work or of their scope (how can you be fully informed unless you have taken the course for yourself?), are the ones that the research of the next few years will be describing. The question is whether you are going to wait for a group of people to make your mind up for you, or whether you are going to make up your own mind.
Kumar, S., Sisodia, V. & Kumar, A. 2013. Evidence base for physiotherapy/physical therapy: A specialty-based quantitative trend analysis of articles. Saudi J Health Sci., 2, 23-30.
Manske, R. & Lehecka, B. 2012. Evidence-based medicine/practice in sports physical therapy. Int J Sports Phys Ther., 7(5), 461-473.
Miller, P., McKibbon, K. & Haynes, R. 2003. A quantitative analysis of research publications in physical therapy journals. Phys Ther., 83(2), 123-131.
Page, P. 2012. Research designs in sports physical therapy. Int J Sports Phys Ther., 7(5), 482-492.
Petty, N., Thomson, O. & Stew, G. 2012. Ready for a paradigm shift? Part 1: Introducing the philosophy of qualitative research. Manual Therapy, 17(4), 267-274.
Schreiber, J. & Stern, P. 2005. A review of the literature on evidence-based practice in physical therapy. Internet J Appl Health Sci & Practice, 3(4), Article 9.
Shepard, K. 1987. Qualitative and quantitative research in clinical practice. Phys Ther., 67(12), 1891-1894.
Snell, K., Hassan, A., Sutherland, L., Chau, L., Senior, T., Janaudis-Ferreira, T. & Brooks, D. 2013. Types and quality of physical therapy research publications: has there been a change in the past decade? Physiotherapy Canada, 66(4), 67.