As well as Anna Kearney and Carol Gamble, I'm presenting an introduction to statistical analysis plans. The three of us are all based in the Department of Health Data Science, in the Faculty of Health and Life Sciences, so inevitably the vast majority of our examples are taken from health research, but we're going to try and discuss principles that apply more generally. The key message from us for this talk is that prevention is better than cure. Prevention of what? Prevention of bias. It's better than cure, and it's also better than criticism. Those of you who work in health research will, I'm sure, be familiar with The Lancet series on the topic of research waste, which stretches from whether the research questions are high priority right through to whether what you actually see reported in the research literature is biased and unusable. Although that series covers elements across the whole research pathway, what we're focusing on today is the biased reporting of data within studies.

Although this next example comes from a health research study, I think it could be generalizable, and this will be interesting to discuss with you later on. This was a study where we interviewed clinical trialists for whom we had the trial protocols and the trial reports, so we knew whether they had followed what they said they were going to do in their protocol. Here's a quote from one of those interviewees: "Where there were discrepancies, when I take a look at the data, I see what best advances the story, and if you include too much data the reader doesn't get the actual important message. So sometimes you get data that is either not significant or doesn't show anything, and we just didn't include that." So here the decision about which results to include in the report was based on the results, and that will introduce bias.

Focusing on clinical trials, the first example I'm going to talk about, where there's most evidence of bias in reporting, is around the outcomes that are measured on participants in randomized clinical trials. In a randomized clinical trial, and many of you will be familiar with the current example, the RECOVERY trial, you recruit patients into the trial, you randomize them to different treatments, and you follow them up for a set of health outcomes. Those health outcomes are specified in a protocol, and obviously the outcomes you choose matter because they determine the credibility of the trial. What can happen in a trial, and it didn't happen in the RECOVERY trial, thank goodness, is shown in this flow diagram. Once you've got approval, you can end up at the bottom right-hand corner, which is the ideal: the trial is fully published no matter what happened in it, all the outcomes that were planned were measured, and all of the analyses were reported. But the box in the bottom left-hand corner, with the red writing, is where things can go wrong. The whole trial may not be published, either because it was submitted and not accepted or because it was never submitted. That's typically referred to as the problem of publication bias, and it usually means the whole study is not seen. But there is another issue, which is that of outcome reporting bias, and that's where the study itself is published.
Only some of the outcome results are shown, and if those results are selected based on the results, then you have a problem of outcome reporting bias. We looked at the empirical evidence for this problem of outcome reporting bias. This study covered results found up to 2012, and what we found was that outcome data would be fully reported more often if the results were statistically significant, with an odds ratio of between 2 and nearly 5. So you are much more likely to see the full results of an analysis if those results were statistically significant: clearly a bias. When we looked in more detail at trial reports compared to protocols across the studies in the review, we found that for between 40% and 60% of trials, an outcome listed in the protocol was either changed from being a primary outcome, i.e. the one that was most important, to a secondary outcome of lesser importance, or it wasn't reported at all, or new outcomes appeared in the trial report that weren't listed in the original protocol.

But does that matter? What impact does it have on health research? In particular, we looked at a cohort of 283 Cochrane systematic reviews, each review containing one or more studies. Across those reviews we found that over a third included at least one trial where this problem of outcome reporting bias was suspected, from what you could see in the trial report and, when you had a protocol, in the protocol. And what impact will that have? The diagrams show the way you would display results from trials within a meta-analysis in a systematic review. The diagram on the left, a simulated example, is where you have complete reporting: all of the trials are presented, all of the results for the particular outcome are fully reported, and you can undertake a meta-analysis. What you find there is a treatment effect, an odds ratio of about 1.41, and it is statistically significant: the confidence interval around 1.41 excludes one. What happens if you have either publication bias or outcome reporting bias for the outcome is that you get a missing part of the complete picture. In this example, because the bias is induced when you don't have statistically significant results, you get a missing part of the diagram, typically in the bottom left, depending on which way round your treatment comparison is. You can see then that if you don't know that's what's happening and you pool the data, you get an odds ratio that is inflated. What we did was look across all of the meta-analyses in that cohort of Cochrane systematic reviews; there were 42 that were significant, but when we adjusted for the outcome reporting bias that was evident, we found that a fifth would not have remained significant. And I don't want to focus too much on statistical significance; more importantly, a quarter would have overestimated the treatment effect by more than 20%, and that can have serious implications for research recommendations.
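To make the mechanics of that concrete, here is a minimal Python sketch of a fixed-effect, inverse-variance meta-analysis of log odds ratios, pooled once with every trial included and once after dropping the trials whose own results were not statistically significant, which is the selection mechanism behind outcome reporting bias. The trial results in it are invented for illustration; they are not the Cochrane data discussed above.

import math

# Hypothetical trial results as (log odds ratio, standard error) pairs.
# Purely illustrative numbers, not taken from the review described in the talk.
trials = [
    (0.60, 0.20),   # individually significant, favours treatment
    (0.45, 0.18),   # individually significant
    (0.35, 0.15),   # individually significant
    (0.10, 0.25),   # not significant
    (0.05, 0.30),   # not significant
    (-0.15, 0.28),  # not significant
]

def pooled_odds_ratio(results):
    """Fixed-effect inverse-variance pooled odds ratio with a 95% confidence interval."""
    weights = [1 / se ** 2 for _, se in results]
    log_or = sum(w * lor for (lor, _), w in zip(results, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return math.exp(log_or), (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

def significant(lor, se):
    """Two-sided 5% test on the log odds ratio scale."""
    return abs(lor / se) > 1.96

full = pooled_odds_ratio(trials)                                       # complete reporting
selective = pooled_odds_ratio([t for t in trials if significant(*t)])  # outcome reporting bias

print("All trials reported:     OR %.2f (95%% CI %.2f to %.2f)" % (full[0], *full[1]))
print("Significant trials only: OR %.2f (95%% CI %.2f to %.2f)" % (selective[0], *selective[1]))

Dropping the non-significant trials pulls the pooled odds ratio upwards, which is exactly the kind of inflation described above.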
So whose fault is this poor reporting? Well, it's a collective failure of the community of authors, peer reviewers, and editors. Authors might not know what information to include in the report of their research, and editors might not know what information should be included. So we're keen to look at what help can be given to authors and what help can be given to editors. There is an increasing number of reporting guidelines for various types of research studies, and not just in health research; they exist outside of health research now as well. In terms of what help can be given to editors, we did a study to look at the impact of peer review for trials submitted to the BMJ between September 2013 and July 2014. Trials going to the BMJ are required to have been registered prospectively, they are required to have a protocol, and there need to be transparency declarations within the reports. Once we'd screened them, there were 275 trials that had been submitted, 21 accepted and the rest not. Across all of those trials we had the protocol, the trial registration, the initial manuscript submitted and, for those 21 accepted studies, the final accepted manuscript. We found that overall 20% of the trials submitted were missing pre-specified outcomes in the initial manuscript, and 10% had introduced new outcomes into the initial manuscript beyond what was in the trial registration and the protocol. If you add those up, that's 30%, which is a bit better than the 40 to 60% statistic I mentioned before. That's possibly not a surprise, because the BMJ is considered a good journal and perhaps the people applying to it were higher-quality researchers with higher-quality trials; we don't know. However, the reasons authors gave, when challenged, for not including all of the outcomes were space limitations, that the outcomes hadn't been analyzed yet, that they had been reported elsewhere, and that there were errors in the trial registration entry.

What we were really interested in was the impact of the peer review process, where the reviewers were asked to compare the protocol and the registration entry with the initial manuscript. What we found was that of those 21 accepted, there were only four where no discrepancy issues were identified; of the 17 where we identified issues in our own screening, the peer reviewers picked up issues in 14 of the trials, so not all of them were caught. Overall, once issues were picked up and fed back to the authors and the revised manuscript came back, some of the missing pre-specified outcomes were added in: the amount of missing outcomes was reduced by 22%, so not as much as we might have hoped. For outcomes that were newly introduced into the manuscript, and weren't in the protocol or the registry entry, twelve of the 21 trials had that as an issue, and in five of them the peer reviewers requested labelling as a post hoc analysis, which Carol will say more about. So it's arguable that relying on peer reviewers to do this comparison may not be as impactful as hoped and may not address these reporting bias issues well enough.

I've mentioned outcome reporting bias as one source of bias in research. There are many others. A systematic review that we were involved in looked at studies of biases due to statistical analyses being changed between what was planned and what was finally reported, and found varying degrees across studies: the prevalence of such changes was anything between 7% and 88% of health research projects.
There was bias in composite outcomes: the components of a composite outcome changed in 33% of cases. Subgroup analyses that were either not specified upfront but added later, or were specified upfront and then not reported, varied from 61% to 100% of studies. Dichotomization of a continuous outcome, possibly because a dichotomy gives you a suggested significant result when the continuous outcome doesn't, was another problem of bias. Changes between how you planned to handle missing data and what you actually did occurred between 12% and 80% of the time. And adjusting for covariates in a statistical regression model, or not adjusting, changed in a high percentage of cases as well. So there are many opportunities for researchers to end up, not necessarily purposefully, and sometimes in a seemingly legitimate way, with a biased result. And we believe, as a group, that the development of proper statistical analysis plans may be one way to help reduce the problem.
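As a rough, back-of-the-envelope illustration of why that flexibility matters, here is a short Python sketch of how quickly unplanned analysis options multiply into chances of finding a "significant" result. The choices listed are hypothetical, and the calculation assumes the candidate analyses are independent, which they are not in practice, so the final figure should be read as an upper bound rather than a real error rate.

from itertools import product

# Hypothetical analysis decisions that are often left unspecified in advance.
choices = {
    "analysis population":  ["intention-to-treat", "per-protocol"],
    "missing data":         ["complete case", "single imputation", "multiple imputation"],
    "covariate adjustment": ["unadjusted", "adjusted"],
    "outcome form":         ["continuous", "dichotomised"],
}

candidate_analyses = list(product(*choices.values()))
n = len(candidate_analyses)  # 2 x 3 x 2 x 2 = 24 possible analyses of the same question

# If each analysis were an independent test at the 5% level and there were truly
# no treatment effect, the chance that at least one comes out "significant" is:
chance_of_false_positive = 1 - 0.95 ** n

print(f"{n} candidate analyses")
print(f"Chance that at least one is spuriously significant: {chance_of_false_positive:.0%}")

Pre-specifying a single combination in a statistical analysis plan is what takes that multiplicity, and the temptation it creates, off the table.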
I'll now pass over to my colleague, Carol Gamble, who's going to tell you about the development of a statistical analysis plan. OK Carol, it should be with you now.

So Paula has described the problems of selective reporting and analysis, and the evidence that this is happening. But how can we determine whether it's something we should be concerned about when we're reading the results of a study, or perhaps undertaking peer review for a journal? And, as researchers, how can we best protect ourselves from accusations of selective reporting, so that greater weight can be given to the credibility of the research we've probably spent years conducting? One way is to compare what has been reported against the pre-specification of the intended analysis for each outcome. This work was undertaken more recently, in 2018, by Greenberg, who carried out a review of trial protocols published in one month in 2016, so all those published just in that November, to assess how well statistical analysis approaches were being pre-specified within them. They assessed four areas of analysis: the specification of the analysis population, the analysis model, the covariates, and the approach to missing data. All of these areas were required by the SPIRIT guidelines for protocol content to have been specified, so we could reasonably have expected them to be quite well documented. But what you can see is that over a quarter did not mention the analysis population or the covariates, and two thirds didn't mention the handling of missing data. When they looked at this by the number of aspects that were adequately defined, they found that of the 99 protocols they assessed, none adequately defined all four aspects. So it really shouldn't be much of a surprise, given poor standards of pre-defining analyses in protocols, that when you compare the conducted analysis against what was pre-planned there are a number of discrepancies. In this paper by Suzie Cro and colleagues, they conducted a review of published randomized trials to evaluate how often a pre-specified analysis approach was publicly available, and then looked at how often there were discrepancies between the primary outcome analysis plan and its conduct; so this is just focusing on the primary outcome analysis plan. They reviewed randomized controlled trials published within a four-month period in 2018 across six leading general medical journals. What they found was that a pre-specified analysis approach was publicly available for 88% of them, but of these, only a quarter had no unexplained discrepancies, and in 15% the analysis was so poorly described it was impossible to tell. When we remember that this was just focusing on the primary outcome, we can only imagine that things are much worse when we start thinking about secondary or exploratory outcomes. So the conclusions they reached were that unexplained discrepancies in the statistical methods of randomized trials are common, and that we do need increased transparency to enable a proper evaluation of results.

To achieve this, we need to make sure we're clearly pre-specifying our analysis plans. But the question is, is the protocol always the right place to do this? The guidance on statistical principles for clinical trials, commonly called ICH E9, and when I talk about ICH E9 this is what I am referring to, states that the principal features of the eventual statistical analysis of the data should be described in the statistical section of the clinical trial protocol. The words I want to emphasize here are "the principal features of the eventual analysis". Elsewhere within ICH E9, and also within the SPIRIT guidelines on protocol content and within the NHS HRA protocol template, there is a reference to something called a separate statistical analysis plan, so we need to look in more detail at what a statistical analysis plan is. A statistical analysis plan is defined within ICH E9 as a document containing a more technical and detailed elaboration of the principal features stated in the protocol, and it includes detailed procedures for executing the statistical analysis of the primary and secondary variables and any other data. Now, as with protocols, the ability of a SAP to increase transparency and help with selective reporting is going to depend on its content. One question we should ask is why we should bother developing a separate statistical analysis plan at all, given that there's a legal requirement to comply with the trial protocol: rather than just improving what goes into SAPs, one solution could be to improve what goes into our protocols. And really, if it's reasonable to include all the relevant information in the protocol, then we should do that. But the type of information we want to specify in a statistical analysis plan is such that if the raw data set and the analysis plan were passed to another statistician, he or she would be able to independently replicate the analysis exactly. So a statistical analysis plan doesn't just contain a list of statistical tests to be applied to each variable; it needs to document data manipulations, calculations and derivations as well. It needs to cover how we're going to handle outliers and missing data, processes for checking model assumptions, and a whole number of other aspects. And this level of detail can lead to quite a large document on its own, particularly when you've got a complex clinical trial.
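To give a feel for what "another statistician could replicate it exactly" means in practice, here is a minimal, hypothetical Python sketch of the sort of detail a SAP pins down for a single outcome. The variable names, the derivation rule, the missing-data handling and the model are all invented for illustration; they are not from any real trial's SAP, and the point is the explicitness rather than the particular choices.

import pandas as pd
import statsmodels.formula.api as smf

def primary_analysis(df: pd.DataFrame):
    """Hypothetical pre-specified primary analysis of a binary response.

    Each decision below is the kind of thing a SAP writes down explicitly:
      - analysis population: all randomised participants with a baseline score
      - derived outcome: response = 1 if the week-12 score improved by >= 15 points
      - missing data: participants with no week-12 score count as non-responders
      - model: logistic regression of response on treatment (coded 0/1),
        adjusted for baseline score and centre
    """
    df = df[df["baseline_score"].notna()].copy()          # analysis population
    change = df["week12_score"] - df["baseline_score"]    # derived variable
    df["response"] = (change >= 15).astype(int)           # missing week-12 compares False, so counted as non-responder
    fit = smf.logit("response ~ treatment + baseline_score + C(centre)", data=df).fit()
    return fit.params["treatment"], fit.conf_int().loc["treatment"]

Writing the analysis down at this level, before anyone has seen the unblinded data, is what makes it possible for someone else to reproduce it and to see afterwards whether it was followed.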
So you can imagine that embedding something that could be 50 or 200 pages long on its own within a clinical trial protocol doesn't work very well, and can lead to a volume of protocol amendments, in response to changes that may become necessary, that otherwise wouldn't be needed. And if anybody has ever put through protocol amendments, they will know that that is something you generally want to avoid. So there is general agreement that a separate SAP is required because of the level of detail we need. SAPs are being produced, and there are a number of published examples, but often they're not publicly available, and we do need to address that. What we know from experience is that when we look at statistical analysis plans, there is huge variation in the level of detail. But that can only be expected given the absence of guidance on what should be included within a SAP, and so we aimed to determine what the content should actually be. The results of this work, which was funded by the MRC Trials Methodology Research Network, were published in JAMA in 2017, alongside an editorial written by David DeMets, and in that paper we report the methods used to develop the guidance for the content of statistical analysis plans, as well as the content itself. In brief, I just want to highlight that this was very much a collaboration. It included a survey of current practice across the UKCRC registered CTU network; it also included a two-stage Delphi survey and an expert consensus meeting involving statisticians from the UKCRC registered CTU network, statisticians from the pharmaceutical industry, journal editors, and representation from the UK regulatory authority, the MHRA. We followed this up with a critical review meeting of 51 senior statisticians across the UKCRC registered CTU network, and then we piloted its application in five trials.

The guidance itself is there within JAMA; it's also on the EQUATOR Network, it's been placed within the Clinical Trials Toolkit, and it's on the Global Health Network website as well. The guidance was developed for the requirements of regulated later-phase randomized trials, and in doing so we made the following assumptions. The SAP is not a standalone document and should be read in conjunction with the clinical trial protocol; the reason for this is that we want to prevent duplication but also reduce the risk of discrepant documents. We wanted to be able to assume that the protocol was consistent with the SPIRIT guidelines. And we also made the assumption that the SAP was going to be applied to a clean or validated data set for analysis; by that we meant that the statistical programming for cleaning the data would be specified elsewhere, outside of the SAP. Now, the SAP content guidance document covers six sections with 32 items plus sub-items, giving a total of 55 items. A couple of points to mention: the aim is to put the emphasis on confidence intervals rather than solely on P values, and some components were more controversial than others, namely whether or not we should duplicate the sample size calculation or just reference the protocol;
whether alternative analyses should be specified in case assumptions were not met, or whether this was necessary given what is called a blind review and update of the analysis plan; and whether we should be advocating a two-stage process where assumptions are checked and the methods are then changed accordingly. Particularly these last two items, around specification of alternative analyses and a two-stage process of checking assumptions, were more controversial, and so what we aimed to do was not be prescriptive but allow flexibility, rather than enforce one view or the other. Now, I'm not actually going to spend any time working through each of the items in the guidance, because we developed a detailed elaboration document which is available in the supplementary material of the published paper and also on the LCTC website. We also developed a checklist. Originally we weren't planning to do that, but because of the volume of requests we were receiving to provide one, we have developed it and posted it on the website as well.

What I want to spend a little bit of time on now is when we should be writing SAPs. We've agreed that if you can't specify, in the protocol, all the detail necessary to replicate the analysis across everything being undertaken, then we should have a separate analysis plan. But when can we write it? Well, essentially any time after finalizing your protocol. For regulated studies, in an open study we need to have it in place before the first participant is recruited, and in a blinded study before the first interim analysis. It's important that your SAP is version controlled and has a revision history that details what changes have been made across versions and provides a justification, and also that we report the differences. An example, from a paper published in The Lancet, was that the model had initially been expected to include centre, but centre couldn't actually be included in the Cox model due to lack of convergence, so that detail was reported in the paper, so that anybody comparing the paper with the plan and expecting to see centre included in the model would understand why this wasn't followed.

It's also important to make it clear that we're not trying to put researchers into a straitjacket by asking them to pre-specify their analysis. We can still undertake post hoc analyses; we're not trying to prevent those at all. Such analyses, which are described as analyses performed in the light of the data that were collected rather than being of interest before data collection began, can be requested for a number of reasons. Sometimes it's by the research team, who have seen the initial analysis and think, oh, I really want to know the answer to this question. Sometimes it's in response to peer review: a paper has been submitted to a journal and we're trying to address peer-review comments. The issue is that we need to be transparent and identify them as post hoc, together with the rationale for why they were undertaken. This transparency is also required as part of the CONSORT statement. I've given an example here where we've highlighted that a post hoc analysis was undertaken and what it was undertaken for; in this case it was looking at the underlying reasons for
the further clinical management of a seizure. We've specified that the assessment was done without knowledge of the allocated intervention. So we've clearly identified it as post hoc, but we've built in some extra validity around it by letting people know that, yes, it wasn't pre-specified, but we didn't use the allocated intervention information when we were making those assessments.

As I mentioned earlier, the SAP guidance was developed for regulated later-phase randomized trials, and there is an extension currently being developed for early-phase non-randomized trials. This is underway and hopefully will be submitted for publication soon; again, it is a collaboration across a number of the UKCRC registered CTUs. There have also been discussions, or desires raised, for extensions to other areas, including adaptive designs, Bayesian studies, and also observational studies. Now, observational studies have actually already been addressed: what was done was to take our guidance on statistical analysis plans for regulated later-phase trials and assess which of those items apply to observational studies, and what they found is that of the 32 items recommended for a SAP for a clinical trial, 30 were equally applicable to a SAP for an observational study. So although nothing specific has been developed for observational studies, the guidance we've already developed does apply quite well. Similarly, another study did something like this but split the assessment of which items applied to observational studies by whether the observational studies were retrospective or prospective. And to highlight, the former publication actually gives an elaboration document as well, similar to the guidance document we've developed, which means you have context-specific examples of how each of those items may apply in practice. And now I'm going to pass over to Anna Kearney to talk about the impact of the project.

OK, so I want to talk just briefly about the impact that this guidance is having within clinical trials. As Carol mentioned, it was developed with the UKCRC registered CTUs, and we know that they are actively using the document and embedding it within their standard operating procedures. We've also been tracking the documents, the checklist and the elaboration document, hosted on the EQUATOR Network and the Liverpool Clinical Trials Centre website, and we know that in just under the last year there have been 141 downloads of the checklist and 77 downloads of the elaboration document. Obviously those are not referenced in the JAMA publication, so perhaps of more interest is the number of times the JAMA article has been viewed, and we can see that as of today it has been viewed over 80,000 times, with 22,000 downloads of the PDF. We've also undertaken a citation analysis of the JAMA paper. We did this originally in February and updated it this week, and we've identified 124 citations, either from Web of Science or from Google Scholar.
And we can see that the most frequent reason the JAMA paper is being cited is that the guidance is being used as a template for published or publicly available statistical analysis plans. It is also frequently cited in articles about the need for transparent reporting, and in articles outlining trial conduct where use of the guidance is recommended. If we look at the top row, at the articles citing the guidance because it's being used as a template, and look at the country of the first author, we can see that a lot of these are written by people from the UK. However, the guidance is being used internationally, and every time we update this analysis the list of countries increases, which is really encouraging. Just a couple of things to note. We're obviously very keen to make the point that SAPs should be made publicly available, and that doesn't always necessarily mean they have to be published in a journal article. That's one of the reasons we're using Google Scholar in this analysis, because it picks up other types of documents. For example, of the articles by authors from Denmark, four out of those six were publicly available documents, potentially from their university website, and they weren't articles published in a journal. Similarly, those with an unknown first author are ClinicalTrials.gov entries, where Google Scholar has picked up citations of SAPs made publicly available on the ClinicalTrials.gov website. And this leads me nicely on to the fact that one of the key things encouraging SAPs to be made publicly available is the US NIH Final Rule, the rule for clinical trial registration and results information submission. This requires that, alongside clinical trial results, the SAP should be made available on ClinicalTrials.gov if it is not contained within the protocol, so they are mandating that SAPs are publicly available.

So although we've said it's important that SAPs are publicly available, and that might not necessarily mean publishing in a journal, we were keen to see what journals are doing about making SAPs publicly available. We know that the ICMJE guidance in 2014 says that, as part of peer review, editors are encouraged to review the research protocols and the plans for statistical analysis if they are separate from the protocol, and that they should encourage authors to make these documents publicly available. So we were keen to see whether this is happening, and we looked at the randomized controlled trial publishing policies of the major medical journals. We found that they did broadly align with that guidance: the majority of them request SAPs as part of the manuscript submission process. But interestingly, only the BMJ referenced the SAP guidance in the JAMA paper in their publication policy. One thing that is apparent is that it is unclear how those SAPs are used once they are submitted. We have not been able to identify any guidance on how peer reviewers should be using them or how they might compare the SAP with the manuscript being submitted.
And similarly, we're not always seeing those SAPs published in those journals. A piece of work by Spence, published this year, looked at the availability of statistical analysis plans for randomized controlled trials published in the major medical journals in 2016, so between the ICMJE guidance and the publication of the JAMA paper. It found that less than 10% of the trials published in Annals of Internal Medicine, the BMJ and The Lancet had SAPs publicly available, defined either as the SAP being in the supplementary material of the publication or as the publication containing a link to a separately published or otherwise publicly available SAP. This was better for JAMA, and better again for the New England Journal of Medicine, but there's still a lot that could be done in this area. So we have been trying to contact journals to talk about this issue. We've had some mixed responses, but one area where we've had a positive response is with the journal Trials, and recently we published an editorial with them looking at the different routes they offer for publishing statistical analysis plans. I'd encourage you to look at that editorial, because there are a number of different ways you can publish a statistical analysis plan with them. The only other thing to note is that when we go back to our citation analysis and look at all of those 42 citations where the guidance was used as a template, over half of these were statistical analysis plans published within Trials. So Trials has already been one of the main journals that will publish statistical analysis plans, which is encouraging, and it's encouraging that they want to continue to do that and to make it easy for people to publish. Other journals are publishing statistical analysis plans, but nothing like as frequently as Trials.

And then finally, on to the impact within funders. Again, we looked at the randomized controlled trial funding policies of UK funders, and we didn't find any references to the SAP guidance in them at the time. We've since been in contact, and references to the JAMA paper and the statistical analysis plan guidance will now be added to the Wellcome Trust funding policy and the NIHR data sharing policy. So this is really starting to get embedded within key funder policies, and following on from our discussions with the NIHR, the guidance was highlighted in the 2020 online EViR Funders' Conference. So again, it is gaining traction amongst funders and becoming embedded as part of practice, which is really encouraging. I'm going to hand back to Paula now for the last little bit.

OK, thanks very much, Anna. I just wanted to finish, in the last five minutes or so, by giving you some examples from areas outside of clinical trials. The first one comes from preclinical studies, where we know there is very often a failure to translate the effects of apparently effective treatments from preclinical studies into clinical studies. This may be due to differences in the underlying biology, but it could also be due to bias in the study design, conduct, analysis or reporting. There was a lovely study done where they looked at nearly four and a half thousand statistical comparisons and found that 1,719 of the observed analyses were statistically significant, when only 919 would have been expected to be.
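As a rough sketch of the sort of calculation behind that observed-versus-expected comparison, the Python snippet below uses a simple normal approximation and treats the comparisons as independent, which they are not (comparisons within the same study are correlated); the total number of comparisons is assumed here rather than quoted exactly, and the expected count of 919 is simply the figure given in the talk, not re-derived.

import math

n_comparisons = 4_450   # assumed: "nearly four and a half thousand" comparisons
observed_sig  = 1_719   # analyses observed to be statistically significant
expected_sig  = 919     # number of significant analyses the study's authors expected

# Treat each comparison as an independent Bernoulli trial with success probability
# expected_sig / n_comparisons (a simplification) and ask how far the observed
# count sits above that expectation.
p = expected_sig / n_comparisons
mean = n_comparisons * p
sd = math.sqrt(n_comparisons * p * (1 - p))
z = (observed_sig - mean) / sd

print(f"Expected roughly {mean:.0f} significant results, observed {observed_sig}")
print(f"Excess in standard deviations (normal approximation): z = {z:.1f}")

An excess of that size is very hard to explain by chance alone, which is what points towards selective analysis and selective reporting.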
Their conclusion was that selective analysis and outcome reporting biases were plausible explanations for this, and their recommendations for preclinical work are pre-registration of studies; access to a protocol, the data and the analysis plan; and reporting via the ARRIVE guidelines. They haven't developed a guideline for the statistical analysis plan itself, but it is under discussion. The other interesting paper, thanks to Andrew Jones for this one, was from psychology, where the authors stated that it is unacceptably easy to publish statistically significant evidence consistent with any hypothesis, and the culprit behind that is something they refer to as researcher degrees of freedom. There is a really interesting example of how you can make data seemingly legitimately tell whatever story you want it to, in what they call the chronological rejuvenation experiment: the conclusion from the statistical analysis in that experiment was that people were nearly a year and a half younger after listening to "When I'm Sixty-Four". I'm not going to go into the detail of that experiment, but it is a great read. The authors recommended a disclosure-based solution, though, rather than a statistical analysis plan upfront, and what they meant by that was an explanation by the researchers, in the paper, as to why they did what they did. So that was an interesting example. I mentioned systematic reviews earlier as an area where we can demonstrate the impact of bias in the individual studies within a review, but this paper shows that systematic reviewers are also prone to inducing bias in what they do. When you do a systematic review, you need to state upfront in your protocol which outcomes you're going to collect data on from the individual studies, and in fact systematic reviewers had either added new outcomes, omitted ones that were in the protocol, or upgraded or downgraded the importance of the outcomes they were interested in, in 38% of the reviews, so it's widespread. And I came across an article the other day looking at artificial-intelligence-assisted creation: in the arts field there is evidence of bias in the creative industries, including art and design, in terms of what is reported and what is shown in that world of AI-assisted creation.

So, in conclusion, a couple of lovely quotes. From Simmons: we believe our goal as scientists is not to publish as many articles as we can, but to discover and disseminate truth. And Altman said many years ago, and we believe it's still true, probably more so now: we need less research, better research, and research done for the right reasons. Therapeutic statistical solutions do exist for these problems of bias, if you know how the biases arose, but there will always be a question mark over them, so prophylactic solutions, such as using statistical analysis plans, are better. What's important, not just here but everywhere in research, is transparency. Transparency is key to overcoming some of the bias that we see in research. We have gone a little over the time we intended, but we do have some time left for questions.