I try to write blogs that can be accessed by anyone, at any time, with minimal prior knowledge. In this case, however, you probably do need an understanding of what #FOAM is, and it would be useful to also read @boringEM's thought-provoking commentary on methods to evaluate #FOAMed sites. Essentially he proposes a number of metrics to evaluate, and potentially rank, #FOAMed resources. A number of tweets about this got me thinking of a potential conceptual challenge that may inhibit the debate:
@precordialthump is this the inevitable journalizing of FOAM? Impact factors and ranking smacks of medical journals.
— Bob Stuntz (@BobStuntz) June 23, 2013
@njoshi8 @precordialthump @Damian_Roland @BobStuntz @BoringEM Interesting stuff. Does not look at quality. Is it open to gamesmanship? (yes)
— Simon Carley (@EMManchester) June 23, 2013
@EMManchester as you know, I think these metrics are merely an indicator or exposure / influence – NOT quality or scholarship unfortunately
— Chris Nickson (@precordialthump) June 23, 2013
https://twitter.com/njoshi8/status/348861858985414657
I have spent the last three years looking at the evaluation of practice-changing interventions, in particular educational ones, as part of my PhD (see summary here). Part of this involved an analysis of the term evaluation, which is different from assessment and effectiveness. One of the things that happens when medics start evaluating things is that they often apply the same measures across a variety of different environments. As soon as discussions started on judging #FOAMed content, inevitable comparisons arose with the process of evaluating academic literature (some of my previous comments on this here). The problem with that is twofold:
i) #FOAM sites are, by definition, designed to share learning in an OPEN access fashion, and
ii) the methodology of engagement with #FOAMed was always going to be different from that of an academic paper.
To set some context, the naysayers and skeptics have always stated that #FOAMed has no quality control of resources. How do you know if the content holds up to current evidence? What if the authors are not credible or have a conflict of interest? Well, think of the last journal you read. Did you go away and immediately practice what it told you? I am fairly sure you didn't, probably for a variety of reasons, but ultimately because critical evaluation has been ingrained in most clinicians from early in their training. This criticism is a particular bugbear of mine and puts people off receiving information via Social Media (see here for previous thoughts). The lack of peer review of #FOAM material makes it more vital that the reader is aware of potential error (if I were to change one thing, it would be that a universal alert statement be placed on sites highlighting this, which would also act as a very useful #FOAM brand), but the reader can still make their own judgement. As an example, this paper on Early Warning Scores in Emergency Departments has been cited on a number of occasions but is neither peer reviewed nor commissioned; ultimately it should have no more value than anything lifeinthefastlane.com or St.Emlyn's have produced. Why does being in a journal give it more value?
But I suppose I digress slightly; what is different about the evaluation? Well, academic literature is spread by publication in journals, promoted by citations and only recently encouraged by social media. #FOAM has always been essentially reliant on word of mouth. The route to #FOAM is rarely discussed. Think of the last #FOAMed site you went to – why did you go there? Did you just stumble across it? I suspect (and please comment and say if I have got this wrong) it's because it is from a source you already follow or someone has directed you there. And who was that person? My guess is it's someone you trust, follow or regard as a leader in #FOAM. I am not really sure how you define a leader in #FOAM, but I stake trust in the sites that key #FOAM supporters recommend. So if @sandnsurf, @emmanchester, @_nmay, @precordialthump, @boringem, @jvrbntz or @tessardavis mention a site, I take a look. Others may have a completely different list – but it probably doesn't matter who they are. There is a different form of peer review in process here: that of trusted followership.
Could there be mistakes in the process? Well, yes, there could. But the process of academia and publication has not been risk-free either. So when it comes to evaluation, the metric at stake is the spread of information: recommendations leading to website hits act as a proxy measure of word-of-mouth assessment of the perceived quality of a site. Problems still exist if you want to be pedantic – hits to sites can be manipulated (though this can be controlled for), and the "quality", in terms of readability and evidence, has still not been formally assessed if you are determined to measure that as well. But if you are evaluating the primary purpose of FOAM, then it is metrics like hits which have value. How this reflects the sharing ability of some of the FOAM leaders is open to question. This also prompts the question of what the ultimate aim of #FOAM is, and whether it wishes to be constrained by old paradigms of evaluation or to create new ones.
Hey,
Just of note, maybe you should make #FOAMed the consistent hashtag so as not to create semantic confusion. Remember FOAM is the movement; #FOAMed is (one of the) hashtag(s).
Thanks for a thoughtful discussion.
Teresa
Great observation – rather embarrassingly, I actually changed the title of this post only 24 hours ago! The reason I did this was that #FOAMed in the title means it stands out in Twitter links etc. Agree FOAM and #FOAMed are different and will rectify 🙂
Also, of note, I just submitted a systematic review re: quality indicators of online resources. FINGERS CROSSED!! 😀 Will update you when I hear more… I think impact and eminence (i.e. following the #FOAMed leadership like you suggest) need to be mirrored by quality. So working on quality markers too.
Thoughts on the ALiEM AIR series? Or the ALiEM expert peer review process?
Brilliant…. looking forward to it!