Dear Editor,
We would like to respond to a comment on the published article entitled “The use of ChatGPT in occupational medicine: opportunities and threats.”1
ChatGPT’s ability to enhance communication, efficiency, and data analysis could transform the field of occupational medicine. It is capable of pattern recognition, task automation, and virtual assistant functions. However, caution is required because of confidentiality issues, ethical concerns, and the possibility of inconsistent or inaccurate results. Occupational health specialists remain essential for medical data analysis and surveillance; ChatGPT cannot take their place.
As an artificial intelligence (AI) model, ChatGPT may raise ethical problems involving bias, privacy, and reliability. Although it is crucial to protect the privacy and confidentiality of employees’ health information, using ChatGPT may give rise to concerns about data security and privacy violations. Because ChatGPT relies heavily on patterns and trends in its training data, it can produce inconsistent or erroneous results, which could undermine the quality of health assessments and decisions.
To encourage responsible AI use in occupational medicine, further research and development should concentrate on minimizing biases, protecting privacy, and creating guidelines.2
To minimize biases in AI systems used in occupational medicine, researchers can take several steps. One approach is to ensure that the algorithms used are trained on diverse and representative datasets. By doing so, AI systems can be developed that do not unfairly favor certain groups based on race, gender, or other protected characteristics. Regular audits and evaluations of the algorithms can also be conducted to identify and address any potential biases that may arise.
Protecting privacy is another key consideration in AI systems for occupational medicine. Developers can design systems that adhere to strict data privacy standards. This can involve implementing robust encryption protocols, anonymizing patient data, and obtaining explicit consent from individuals before collecting and processing their personal information. Organizations can also establish stringent access control measures to restrict unauthorized use or disclosure of sensitive data, ensuring the utmost privacy protection for patients.
Creating guidelines is crucial for the responsible use of AI systems in occupational medicine. One example is the formation of a professional consortium comprising occupational medicine practitioners, AI experts, and ethicists. Such a collaboration enables the establishment of industry-wide standards that encompass best practices for the development, deployment, and monitoring of AI systems. These guidelines may address issues such as transparency, accountability, auditability, and the responsible sharing of data. By following them, the use of AI in occupational medicine can be guided by ethical considerations while ensuring its effectiveness and reliability.3
The analysis and evaluation of workplace health and safety can be made more accurate and more consistent by continuously improving ChatGPT’s algorithms and training data. Rather than replacing occupational health professionals’ existing tools, ChatGPT should be viewed as a complement to them. Working together, human experts and AI systems such as ChatGPT can ensure critical analysis of workers’ health data and produce better outcomes. Finally, if a practitioner does use ChatGPT, the practitioner, not the AI, remains fully responsible for any clinical patient management decisions.4
Competing interests: The authors declare that they have no competing interests.
Authors’ contributions:
Conceptualization: Daungsupawong H, Wiwanitkit V.
Formal analysis: Daungsupawong H.
Supervision: Wiwanitkit V.
Writing - original draft: Daungsupawong H.
References
1. Sridi C, Brigui S. The use of ChatGPT in occupational medicine: opportunities and threats. Ann Occup Environ Med 2023;35(1):e42. PMID: 38029273.
2. Khan MS, Umer H. ChatGPT in finance: applications, challenges, and solutions. Heliyon 2024;10(2):e24890. PMID: 38304767.
3. Guleria A, Krishan K, Sharma V, Kanchan T. ChatGPT: ethical concerns and challenges in academics and research. J Infect Dev Ctries 2023;17(9):1292–1299. PMID: 37824352.
4. Kleebayoon A, Wiwanitkit V. ChatGPT, critical thing and ethical practice. Clin Chem Lab Med 2023;61(11):e221. PMID: 37247851.