Mother’s Day Brunch is over, and I am feeling appreciated and loved, though a bit bored and lonely now that the “kids” have left for home. So I’m catching up on stuff.
I went back to finish reviewing a draft practice guideline via an on-line questionnaire. The authors wrote that I was selected to participate because of my particular field of expertise. I started but didn’t finish the questionnaire last month. When I logged in again today, the software jumped me straight to a question that asked whether I agreed with a statement that begins: “There is OCEBM Grade C supporting evidence that ……”
This terminology was only semi-familiar to me, so I Googled OCEBM (the Oxford Centre for Evidence-Based Medicine), which wasn’t much help. I still haven’t found the criteria for Grade C (which implies there exist criteria for Grades A and B, and maybe other grades beyond that too). More Googling would probably help, but to be truthful, I decided to give up on this effort. When the researchers asked me to participate, they didn’t tell me the survey has SIXTY-FIVE questions, each of which seems to require a whole lotta cognitive work, including evaluating a set of brief descriptions of the scientific literature pertaining to the subject of each question. I am NEITHER an academic type nor a methodologist (a person who critiques the experimental design and analytical methods used in research).
HOWEVER, while Googling around I did find a document entitled “Understanding GRADE: An Introduction” that MAY BE USEFUL to you. It was for me. It succinctly describes (in LESS than 1,500 words) the steps that developers of systematic reviews and practice guidelines should use to assess and rank the quality and strength of the evidence supporting a particular practice or treatment. The GRADE method (Grading of Recommendations Assessment, Development and Evaluation) is widely used in appraising scientific studies. Worth a read – particularly if you are confused by competing guidelines.
Methodology is KING in evidence-based medicine — if the scientific quality of a study sucks, you shouldn’t view the results as reliable/valid/believable — no matter how much you love them. So if you’re like me and not a methodologist yourself, you need to make sure that the committee that produces “evidence-based guidelines” has members who really do know that stuff.
In theory, I like the idea of wide participation by experts in the development and review of practice guidelines. But based on my (limited) experience with development of the ACOEM practice guidelines for occupational medicine, the work is so hard, detailed, and time-consuming that it tends to be done by a small group of committed experts who put in a TON of hours and then send their finished product out for comment. Part of the reason I felt so uncomfortable reviewing the practice guideline today is that so little background information and context was provided to me as a reviewer. Like: (a) what were the 65 questions going to be like; (b) how long would it take me; (c) where are they in the development process; and (d) what use is going to be made of the on-line reviewers’ input?