Healthcare experts demand transparency in AI use
Concerns about misuse underscore the need for careful, transparent AI integration in healthcare to build trust and ensure reliable, responsible use.
Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.
The survey of 3,000 participants across 123 countries indicates that 87% of respondents think AI will enhance the overall quality of their work, and 85% believe it will free up time for higher-value projects. Despite this optimism, there are significant concerns about AI's potential misuse: 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions such as unemployment.
To address these concerns, 81% of researchers and clinicians expect to be told when the tools they use depend on generative AI, and 71% want assurance that AI-dependent tools are built on high-quality, trusted data sources. Transparency in peer review is also seen as crucial: 78% of researchers and 80% of clinicians expect to know whether AI influences manuscript recommendations. Together, these findings highlight that transparency and trust are preconditions for AI adoption in healthcare.