Machine Learning and Artificial Intelligence Redefining Diagnosis and Treatment for Mental Health Illnesses

Abstract

The emergence of COVID-19 set off a domino effect within the mental health field. While the pandemic raised awareness of mental health, the isolation of those years also drove a rise in mental health issues and in the demand for diagnosis and treatment. Nations worldwide have struggled to meet this demand due to a shortage of qualified physicians in the field. We therefore discuss the possibility of further integrating machine learning (ML) and artificial intelligence (AI) based support systems into mental health care. This paper aims to synthesize developments in ML and AI for mental health, highlighting progress in the diagnosis and treatment stages, and also discusses the ethical and legal implications that stem from these developments. We examined several sources to present the most relevant data for this paper, identifying and exploring a select sample of diagnosis and treatment tools for mental health illnesses. These tools focus on the diagnosis of a single illness, such as Major Depressive Disorder, Attention Deficit Hyperactivity Disorder (ADHD), or Social Anxiety Disorder, or they aim to diagnose a specific set of illnesses. They have been built with artificial neural network (ANN) and support vector machine (SVM) models, which train systems to adapt to a variety of patients and form efficient diagnoses. The data covered in this paper is drawn from the respective studies, specifically each tool's reported success rates and limitations. Data on treatment is based on rising trends in the use of chatbots, which are not yet considered formal tools in the healthcare field; we nevertheless survey articles and research on their rise and possible future. Through these tools, we explore the relevance and application of ML and AI in the mental health field by weighing each tool's fundamental limitations and benefits. While AI in mental health has proved helpful in advancing diagnosis and treatment for many, ethical and legal concerns continue to slow the advancement of decision support systems (DSS) in healthcare, specifically risks to data privacy and algorithmic bias in diagnosis. However, the push for guidelines in AI development also indicates that, as guardrails are put in place, the field will continue to open up to technological advancement. For now, research and trials on new DSS models are integral to ensuring that well-formed tools reach practice.

Introduction

Following COVID-19, the demand for mental health diagnosis and treatment has outpaced professional capacity1. A shortage of qualified doctors has left a gap in the healthcare sector.

These issues predate the pandemic and are projected to continue. Political intervention has followed in many nations, such as the United States, where government figures foresee a need for up to 100,000 doctors in the field2. Young children have shown increases in reclusion and anxiety due to rapid changes in routine during COVID-19, and adults have faced recurring patterns of depression stemming from isolation during the period. In one self-reported study, 1 in 10 respondents felt that their needs were severely mismatched with what healthcare professionals could provide3.

In parallel, development in the technology sphere has skyrocketed, with more resources and tools available to a wide variety of industries. As such, machine learning (ML) and artificial intelligence (AI) tools have recently debuted in the diagnosis of mental illnesses.

Currently, the United States aims to meet rising demand by expanding access to medical education and increasing the number of physicians. This paper, however, discusses the possibility of introducing more technology-based tools into the field to alleviate the strain on professionals4. The broader medical field has become more open to digital forms of aid, and we discuss the possibility of the same happening in mental health.

We have also seen a variety of self-help tools, such as chatbots built on ML and AI, become more mainstream in the self-help space for mental illnesses. This paper discusses the precedents, future, and ethics of these tools in the mental health sector. It begins with a discussion of existing tools within the mental health field and then covers the development of ML and AI-driven tools, including how these tools calculate their success rates and where their limitations lie. Lastly, it considers future research and the implications of up-and-coming ML and AI tools.

Literature Review

Digital tools aiding in the Diagnosis of Mental illnesses

While the ML and AI tools integrated into the mental health sector have seen success thus far, they remain specialized to individual illnesses. Specific tools, such as the SCL-90-R, act as the baseline against which these AI tools are measured. The SCL-90-R remains the most well-known instrument in the mental health field and is not driven by AI or ML; instead, the test can be administered on paper or by computer. In using the SCL-90-R, clinics have seen increases in productivity and in their ability to identify common illnesses, which has fueled interest in technology in healthcare, specifically in mental health5.

Given this, research to develop ML and AI methods of diagnosis has centered chiefly on identifying whether a patient does or does not have a single mental illness. The tools that have integrated ML and AI have nevertheless shown promising results, with most indicating a future for adaptive technology in the mental health sector, especially considering the intense demand for professional help6. To lay out the development in the industry, the following sections are segmented into traditional support tools that have been explored in the mental health field and newer ML and AI tools.

The SCL-90-R: The baseline test in Traditional Diagnosis

One of the most well-known traditional tools for diagnosis is the SCL-90-R4. This instrument poses 90 questions rated on a 5-point scale from 0 to 4 to capture symptom occurrence (with 0 indicating no symptoms and 4 indicating that the symptoms occur regularly)7.

The system reserves a set of questions for each of the ten illnesses it can diagnose. For example, the instrument holds 13 questions to aid in the diagnosis of depression and 9 for anxiety. The SCL-90-R therefore scores data linearly, diagnosing each illness from its own block of questions rather than letting responses across blocks interact8. This is a lasting limitation of the instrument, as it does not use machine learning or artificial intelligence; better decision support systems (DSS) aim to address it.
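
To illustrate what linear scoring means in practice, the minimal sketch below averages the reserved items for each subscale independently. The item groupings and scores are illustrative placeholders, not the SCL-90-R's actual scoring key.

```python
# Minimal sketch of linear (non-interactive) scoring, as used by traditional
# instruments like the SCL-90-R. Item indices below are illustrative
# placeholders, not the instrument's real scoring key.

# Hypothetical item groupings: which of the 90 questions belong to each scale.
SUBSCALE_ITEMS = {
    "depression": [5, 14, 15, 20, 22, 26, 29, 30, 31, 32, 54, 71, 79],  # 13 items
    "anxiety": [2, 17, 23, 33, 39, 57, 72, 78, 80],                     # 9 items
}

def score_subscale(responses, subscale):
    """Average the 0-4 responses for the items reserved for one subscale.

    Each item contributes independently; answers on one scale never
    influence the interpretation of another, which is the limitation
    discussed above.
    """
    items = SUBSCALE_ITEMS[subscale]
    return sum(responses[i] for i in items) / len(items)

# Example: a patient's 90 answers, elevated only on the depression items.
responses = [0] * 90
for item in SUBSCALE_ITEMS["depression"]:
    responses[item] = 3

print(score_subscale(responses, "depression"))  # elevated
print(score_subscale(responses, "anxiety"))     # unaffected
```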

Due to the in-depth nature of the instrument, researchers and professionals have indicated the need for a shorter system. However, when the SCL-90-R is shortened, its effectiveness at diagnosing all ten illnesses diminishes significantly8. To form accurate diagnoses, traditional tools must gather far more data per patient because they cannot learn recurring patterns across multiple patients. Given the vast demand, lengthy and inefficient diagnostic tests are no longer sufficient.

Hence, the need for a shorter model with more interactive means of arriving at conclusions has arisen. Currently, such tools are being developed with the help of ML and AI. However, they have yet to force an industry-wide shift from traditional diagnosis tools.

The BSI-18: A condensed test in Traditional Diagnosis

Like the SCL-90-R, the BSI-18 is a traditional tool that uses linear scoring to categorize data into a diagnosis. While significantly shorter than the SCL-90-R, with only 18 questions, it is limited in the number of illnesses it can assess9. The BSI-18 covers only three conditions: depression, anxiety, and somatization.

While the test is used in several clinics globally, it is less effective than the SCL-90-R, which makes it the more niche tool9. It also poses the same challenge as the SCL-90-R: because it still formulates diagnoses linearly, it cannot reach a broad range of conclusions from a smaller set of questions. This underscores the persistent need for an ML- or AI-based tool that can replace the SCL-90-R while being more efficient and significantly shorter.

Introduction to DSS Tools

In response to the need for more efficient tools in the mental health field, several researchers have explored developing decision support systems (DSS) that integrate learning models of different forms. This has involved using data collected from instruments such as the SCL-90-R and BSI-18 to build a base against which the machine can compare new response sets with precedent.

Several tools were developed to aid in the diagnosis of ADHD, depression, and more. Some of these DSS employed support vector machine (SVM) models, and others employed an artificial neural network (ANN) model, all to merge several data sets from traditional tools that have been in play for years. 

We note that both models achieve similar ends in different ways: an SVM maps data into a space where it can be categorized and compared against future cases, whereas an ANN builds layered representations of the data to refer back to10. SVM models work particularly well on smaller, more complex datasets because they categorize efficiently, while ANN models shine when dealing with a breadth of information and large datasets, as their layered organization scales well to many data points11.
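
As a rough illustration of the two model families, the sketch below fits an SVM and a small neural network (scikit-learn's MLPClassifier standing in for an ANN) on the same synthetic screening data. The dataset and hyperparameters are arbitrary and only meant to show how the two approaches are trained and compared, not to reproduce any tool discussed here.

```python
# Illustrative comparison of the two model families discussed above.
# The synthetic data and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "questionnaire" data: 300 respondents, 20 item scores, binary label.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM: maps responses into a feature space and finds a separating boundary.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# ANN: stacked layers of units that learn intermediate representations.
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16),
                                  max_iter=1000, random_state=0))

for name, model in [("SVM", svm), ("ANN", ann)]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```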

While these tools have been developing steadily, there has yet to be widespread change; clinics continue to administer paper-based diagnostic tests. The leading cause of the limited support for DSS tools is that none of them yet spans several mental illnesses. Though some researchers are working on tools intended for broad use, no primary DSS tool has emerged, with most remaining niche and confined to their respective illnesses.

An ADHD Hybrid DSS

Employing a mix of ML techniques, the NHS developed an instrument that aids in diagnosing ADHD1. The tool reaches its conclusions by referencing past data stored in the NHS catalog. What is unique about this tool is that it acknowledges the limitations of its data and produces one of three responses to the data fed into the system: it can diagnose the patient with ADHD, rule ADHD out, or refer the patient to an expert. In turn, the instrument can relieve professionals facing growing workloads.

This instrument uses a hybrid approach to steer the healthcare sector toward a steady shift to digital diagnostic tools1. By addressing its internal limitations, the tool is more successful overall and can help absorb the demand for diagnosis. While still a niche DSS tool, it presents excellent results for the diagnosis of ADHD, for which demand remains high globally.

This ADHD tool has been put into large-scale testing within eight clinics under the umbrella of the NHS1. Based on this initial testing, the instrument shows promise for changing the direction of the mental health field.

A Social Anxiety Disorder DSS

The Social Anxiety Disorder (SAD) DSS was developed using data collected from 214 patients and participants12. The study followed a multi-stage procedure: preprocessing, classification, and evaluation.

The first stage handled anomaly detection using a Self-Organizing Map (SOM), followed by normalization and feature selection. The final classification of SAD then used the Adaptive Neuro-Fuzzy Inference System (ANFIS), a more refined relative of artificial neural networks (ANN) that forms highly interconnected networks and tends to produce fewer errors, evaluated with 5-fold cross-validation13. This algorithm has been integrated into several other ML-based tools.

The data collected from the patients were preprocessed and fed into a modified ANFIS model, trained with a hybrid optimization learning algorithm over 41 epochs12. The epoch count indicates 41 complete passes through the dataset used to train the system and produce accurate outputs.
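
The sketch below mirrors that training protocol in outline only: normalization, a network trained for 41 passes over the data, and 5-fold cross-validation. Since ANFIS has no standard Python implementation, an MLPClassifier stands in for the neuro-fuzzy model, and the data is synthetic, so this is a structural sketch of the procedure rather than a reproduction of the published system.

```python
# Structural sketch of the SAD DSS training protocol: normalization,
# 41 training epochs, and 5-fold cross-validation. An MLP stands in for
# the ANFIS model, which has no standard scikit-learn implementation.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(214, 16))     # placeholder for 214 preprocessed records
y = rng.integers(0, 2, size=214)   # placeholder SAD / no-SAD labels

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    scaler = MinMaxScaler().fit(X[train_idx])        # normalization stage
    clf = MLPClassifier(hidden_layer_sizes=(20,),
                        max_iter=41,                  # 41 passes over the data
                        random_state=0)
    clf.fit(scaler.transform(X[train_idx]), y[train_idx])
    scores.append(clf.score(scaler.transform(X[test_idx]), y[test_idx]))

print("mean 5-fold accuracy:", round(float(np.mean(scores)), 3))
```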

The SAD DSS model learned how to diagnose and recognize SAD accurately through this learning process. This model was a step forward in addressing niche disorders through Computer-DSS tools.

A Depression DSS

The Computerized Adaptive Diagnostic Test for Major Depressive Disorder (CAD-MDD) was developed to aid in the diagnosis of Major Depressive Disorder (MDD), a specific form of depression14. CAD-MDD was built on information from 656 individuals with a prior history of mental health illnesses ranging from minor and major depression to schizophrenia. Using ML in the form of decision trees and random forests, the developers wove together a large data bank drawn from these participants.

Users of the CAD-MDD can be diagnosed at any point while taking the screener: the algorithm announces a result the moment the estimated odds flip from negative to positive14. The model is based on decision-theoretic methods such as decision trees and random forests.
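
A hedged sketch of that stopping rule follows: after each answered item, a tree-based model re-estimates the probability of MDD, and the screening ends as soon as the estimate crosses a confidence threshold in either direction. The forest, item bank, labels, and thresholds here are invented for illustration and are not CAD-MDD's actual components.

```python
# Illustrative sketch of adaptive termination in a tree-based screener:
# re-score after every answered item and stop once the estimate is decisive.
# The forest, item bank, labels, and thresholds are placeholders, not CAD-MDD.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_ITEMS = 20
rng = np.random.default_rng(0)

# Train a stand-in forest on synthetic historical screenings (items scored 0-3).
X_hist = rng.integers(0, 4, size=(500, N_ITEMS)).astype(float)
y_hist = (X_hist.mean(axis=1) > 1.5).astype(int)   # fake MDD labels
item_means = X_hist.mean(axis=0)                   # used to impute unanswered items
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_hist, y_hist)

def adaptive_screen(answer_item, hi=0.9, lo=0.1, min_items=5):
    """Ask items one at a time; stop once P(MDD) is confidently high or low."""
    answers = item_means.copy()        # unanswered items imputed with historical means
    for i in range(N_ITEMS):
        answers[i] = answer_item(i)    # record the next response
        p = forest.predict_proba(answers.reshape(1, -1))[0, 1]
        if i + 1 >= min_items and (p >= hi or p <= lo):
            return ("positive" if p >= hi else "negative"), i + 1, p
    return ("positive" if p >= 0.5 else "negative"), N_ITEMS, p

# Example respondent reporting moderately severe symptoms on every item.
result, items_used, prob = adaptive_screen(lambda i: 3)
print(result, "after", items_used, "items, P(MDD) =", round(prob, 2))
```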

The model was then applied to the same individuals to generate diagnoses and determine its success rates14. The resulting data suggest it is a viable and valuable screening tool in healthcare, as it does not take as long to administer as alternative models.

The Rise of Chatbots in Treatment for Mental Health illnesses

We then have to consider the second component of tackling mental health struggles: long-term plans for patients who are diagnosed with illnesses. In most cases, patients are referred to licensed professionals who treat illnesses through medication or counseling services. Research on medication has been increasing rapidly, and patients who take part in counseling regard it as highly useful. However, for those battling stigma or lacking financial means, treatment remains largely inaccessible.

With the growth of digital platforms and mobile users, there has been an observable shift toward ML and AI-based chatbots. These tools have become increasingly popular, sharply raising the number of individuals who turn to self-directed care for their illnesses15. This form of treatment is geared toward those who explicitly seek out external help tools, and engagement levels make clear that such individuals exist in large numbers. These chatbots often come in the form of websites, applications, and online communities, and such web-based services offer access to peer support, self-diagnosis, and therapy sessions16.

Individuals seeking out these resources use them either independently of any form of official treatment or in conjunction with professional therapy.

Exploring the value of existing chatbots

Existing research on the use of chatbots in mental health treatment points to a promising future for this field and suggests that, with time, widespread adoption is possible. However, compared to the diagnosis stage, the treatment stage has received less attention with respect to digital tools.

Traditional tools such as counseling have succeeded due to the in-person nature of the interactions, with 75% of the value stemming from the relationship built between patient and counselor15. However, this does not eliminate the possibility of ML and AI tools in treating mental illnesses.

Chatbots are increasingly popular due to their practicality, as many patients want instant access to information without ongoing in-person interactions. Chatbots based on ML and AI can reference existing data from across the internet to formulate an instant response to a question, providing immediate feedback to users. For example, Gabby, a conversational artificial intelligence tool, was developed to increase access to knowledge about personal problems in impoverished areas17. This tool has made mental health care feasible for populations without access to self-care tips and guidance.
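
As a rough picture of how such a system surfaces an instant answer, the sketch below retrieves the closest match from a small, hand-written bank of self-care tips using TF-IDF similarity. The tip bank and matching approach are invented for illustration and are far simpler than production systems such as Gabby.

```python
# Minimal retrieval-style chatbot: answer a user's question with the most
# similar entry from a small bank of self-care tips. The tips and matching
# method are illustrative only; real systems are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

TIPS = {
    "I can't sleep and my thoughts keep racing":
        "Try a wind-down routine: dim lights, no screens, and slow breathing.",
    "I feel anxious before social events":
        "Gradual exposure and grounding exercises can reduce anticipatory anxiety.",
    "I have no energy and nothing feels enjoyable":
        "Small scheduled activities and reaching out to someone you trust can help.",
}

vectorizer = TfidfVectorizer()
prompts = list(TIPS)
matrix = vectorizer.fit_transform(prompts)

def respond(message: str) -> str:
    """Return the tip whose prompt is most similar to the user's message."""
    sims = cosine_similarity(vectorizer.transform([message]), matrix)[0]
    return TIPS[prompts[int(sims.argmax())]]

print(respond("I feel anxious about a social gathering"))
```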

Another benefit of this resource is that it encourages professionals to engage with other digitally distributed content, driven by the growing desire to consume information from readily available digital resources18.

Online resources are also perceived as safer for many individuals who fear reaching out to physical resources due to stigma. Users of chatbots tend to respond more honestly to questions than individuals put in front of professionals. Chatbots have also developed rapidly to foster a safer environment on digital platforms, increasing the perceived safety net19. A break from judgment allows more people to seek remedies for their mental health illnesses. 

While chatbots that respond to personal experiences with internet-backed advice benefit those seeking accessible tools, several ethical concerns arise from them. Questions about data privacy remain important, but the more significant concern in mental health is the possibility of harmful advice generated by these systems. Though chatbots are still early in their useful life, limitations and questions such as these should be addressed now20.

Later in the paper, we discuss the ethical and legal issues that stem from using computer-based models in healthcare and how development in these areas is likely to take shape in the coming years.

Data

ADHD DSS Success Rate

Within the NHS, a published dataset of 69 patients with general demographic information was used to develop the hybrid model1. The dataset consisted of clinical and self-reported screening questionnaires, including validated instruments. Table 1 of that study summarizes the general demographics of participants1.

The model used in this study was a hybrid of a knowledge-based algorithm (KR) and a machine learning (ML) algorithm1, which together determine the best course of action for diagnosis. A clear benefit of combining the two models is that it allows double verification of a diagnosis. In cases where the result is unclear and the patient is referred to a specialist, the data tracked by the machine learning model can be relayed to the specialist to help with their decision.

Success rates were calculated by inputting participant information into the algorithm and comparing results to pre-existing diagnoses. Errors in the diagnosis reduced accuracy rates. Accuracies were calculated independently for all three models and then compared to determine which was the most successful. 

In the hybrid model, the algorithm directs the diagnosis to an expert when the knowledge-based and machine learning algorithms disagree; given the conflicting evidence, the model passes the case forward rather than deciding on its own1.
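
This routing rule can be expressed compactly: when the knowledge-based rules and the ML classifier agree, their shared answer is returned, and when they conflict the case is escalated. The sketch below captures only this logic; the two underlying components are stubs with made-up cutoffs, not the NHS system's actual rules or model.

```python
# Sketch of the hybrid routing logic described above: agreement between the
# knowledge-based (KR) and machine-learning (ML) components yields a
# diagnosis, disagreement refers the case to an expert. Both components are
# placeholder stubs, not the NHS tool's actual rules or model.
from typing import Callable, Dict

def hybrid_decision(patient: Dict[str, float],
                    kr_rule: Callable[[Dict[str, float]], bool],
                    ml_model: Callable[[Dict[str, float]], bool]) -> str:
    kr_says = kr_rule(patient)
    ml_says = ml_model(patient)
    if kr_says == ml_says:
        return "ADHD" if kr_says else "no ADHD"
    return "refer to expert"    # conflicting evidence: escalate to a clinician

# Placeholder components with invented cutoffs, purely for illustration.
kr_rule = lambda p: p["screening_score"] >= 14
ml_model = lambda p: p["inattention"] + p["hyperactivity"] > 1.2

print(hybrid_decision({"screening_score": 17, "inattention": 0.9,
                       "hyperactivity": 0.7}, kr_rule, ml_model))  # agreement -> ADHD
print(hybrid_decision({"screening_score": 17, "inattention": 0.2,
                       "hyperactivity": 0.1}, kr_rule, ml_model))  # conflict -> refer to expert
```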

This model, developed with the initial pool, was then tested separately on both primary and risk assessment data for both algorithms. Results for all three models considered (ML, KR, and Hybrid) are below. 

Table 1

ADHD Knowledge Based, Machine Learning, and Hybrid Accuracies Breakdown

Model  | Yes/No (%)   | Yes/No/Expert (%)
KR     | 35/38 (92.1) | 66/69 (95.7)
ML     | 61/69 (88.4) | 61/69 (88.4)
Hybrid | 32/35 (91.4) | 66/69 (95.7)

The data for the second column shows that the knowledge-based model has the highest accuracy rate. However, this only contains two options that the tool outputs: yes or no in response to a diagnosis of ADHD. In order to account for possible expert intervention, should the tool not be able to accurately detect the diagnosis, an expert output is integrated into all three models. With this change, we can see that the knowledge-based and Hybrid Models have equal success rates in diagnosing patients, at 95.7%. 

The ADHD study did not address demographic differences and their effect on algorithm bias, which indicates that while the success rates are promising, they may be accurate predictors only for some backgrounds. It is essential to consider the variety of responses and backgrounds for mental health tools, as their diagnoses are not as clear-cut as those for other health issues21.

Anxiety Disorder DSS Success Rate

Surveying 214 patients, researchers developed a Social Anxiety Disorder (SAD) detection tool that cross-verifies at five stages to ensure that the returned results have been considered from several angles12. The data collected from this survey can be found in Table 1 of that study, which displays all of the answers12.

To make the success rates visible, we have organized the results of the Anxiety Disorder DSS into the table below. It lays out how the DSS tool compares to standard industry classifications, with higher numbers indicating greater success12. Breaking the data into three metrics allows us to analyze it from three angles and determine whether it is adequate for the diagnosis process.

The collected data was then compared to the Generalized Anxiety Disorder 7-item scale (GAD-7), a traditional anxiety diagnosis tool. The accuracy reported is specific to the GAD-7's ability to diagnose social anxiety22.

Table 2

Social Anxiety Disorder DSS Accuracies Breakdown

Tool  | Accuracy (%) | Sensitivity (%) | Specificity (%)
SAD   | 98.67        | 97.14           | 100
GAD-7 | 85           | 95              | 95

In this table, we can see that the accuracy of the SAD diagnosis tool reaches 98.67% when data from a range of possible patients is fed in. Sensitivity here is the true positive rate: the proportion of patients who have SAD that the tool correctly identifies, so a high value means fewer missed cases. Specificity is the true negative rate: the proportion of patients without SAD that the tool correctly rules out, so the perfect score indicates the tool produced no false positives in this sample. Compared to the GAD-7, the tool can evidently diagnose most patients more accurately than the diagnostic tool currently in place.
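
To make these definitions concrete, the short sketch below computes accuracy, sensitivity, and specificity from a confusion matrix. The counts are invented solely to demonstrate the formulas and are not taken from the SAD study.

```python
# Computing the metrics reported in Table 2 from a confusion matrix.
# The counts below are invented solely to demonstrate the formulas.
tp, fn = 45, 5      # patients with SAD: correctly flagged vs. missed
tn, fp = 120, 10    # patients without SAD: correctly cleared vs. false alarms

accuracy = (tp + tn) / (tp + tn + fp + fn)   # all correct calls / all cases
sensitivity = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)                 # true negative rate

print(f"accuracy={accuracy:.2%} sensitivity={sensitivity:.2%} "
      f"specificity={specificity:.2%}")
```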

The study behind this tool did not discuss demographics in its results section; differing backgrounds were therefore not considered when calculating the DSS's success rate12. A lack of representation across groups can lead to inaccurate diagnoses, a limitation explored later.

Major Depressive Disorder DSS Success Rate

The depression DSS is based on decision trees and random forests, making the model adept at delivering an immediate diagnosis for participating patients14. A group of 656 patients with a range of mental health conditions was used in the creation and testing of CAD-MDD; their general information can be found in Table 1 of that study14.

Predictive values were calculated by comparing the CAD-MDD to the PHQ-9, the traditional depression questionnaire in use since 200123. Sensitivity and specificity were also compared to ensure the ML-based tool keeps pace with the established instrument. The predictive values are derived by comparing CAD-MDD diagnoses to actual diagnoses and tallying the errors14.

As before, we have organized the model's success rates and results into the table below. The numbers show high success and potential, especially considering the model's ease of use.

Table 3

Major Depressive Disorder DSS Accuracies Breakdown

Tool    | Positive Predictive Value (%) | Negative Predictive Value (%) | Sensitivity | Specificity
CAD-MDD | 95                            | 95                            | 0.95        | 0.87
PHQ-9   | 95                            | 95                            | 0.86        | 0.86

In this case, the data breaks down the tool's ability to detect true positives and true negatives among patients. True positives are people who have MDD and are diagnosed as such by the tool; true negatives are people without MDD whom the tool correctly leaves undiagnosed. Sensitivity is the proportion of true MDD cases the tool identifies, and specificity is the proportion of people without MDD that it correctly rules out. The positive and negative predictive values instead describe how trustworthy the tool's calls are: the share of positive calls that are truly MDD and the share of negative calls that are truly not. With every category except specificity at or around 95%, the tool is consistent in its diagnoses.
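
Predictive values come from the same style of confusion counts, read by the tool's calls rather than by the patients' true status. The numbers below are again invented purely to demonstrate the formulas.

```python
# Predictive values from illustrative confusion counts; numbers are invented.
tp, fp = 80, 5      # positive calls: truly depressed vs. false alarms
tn, fn = 90, 10     # negative calls: truly well vs. missed cases

ppv = tp / (tp + fp)   # of those the tool flags, how many truly have MDD
npv = tn / (tn + fn)   # of those the tool clears, how many are truly well

print(f"PPV={ppv:.2%} NPV={npv:.2%}")
```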

No specific mention of demography was made in discussing predictive values and success rates, indicating that the experiment did not calculate separate predictive values for all demographics. This may be a limitation of the DSS tool when considering algorithm bias, a concept discussed later in the paper14.

Analysis

Across the board, the DSS tools discussed in this report show accuracy or success rates ranging from 95% to 98.67%, indicating viability going forward. Rounds of testing on different datasets show that these models can adapt to and learn from the data patients provide.

Diversity within the preliminary groups used to develop these models gives the algorithms a broad base to draw from in day-to-day operation. However, there is not enough variety in the success rate testing to rule out algorithmic bias, which we discuss further below24.

As a whole, for the diagnosis of mental health issues, it is clear that there is a future for ML and AI-based tools. However, several limitations and discussions surrounding the topic could hold development back in the coming years.

Limitations

DSS Diagnosis Limitations

Currently, the available DSS tools focus mainly on diagnosing patients from the data provided. While they largely succeed, a small share of individuals are still misdiagnosed, which raises concerns about the algorithms.

Some tools, such as the NHS instrument, use a hybrid design that introduces breakpoints where professionals intervene in place of the machine. Most available DSS tools have yet to adopt a similar structure1, which poses a significant limitation to their widespread integration.

The second risk is algorithmic bias. Data collection often fails to capture enough variation in subjects' backgrounds and make-up. Culture, identity, and family significantly influence how people respond to questions and what data is collected25, and in the context of mental illness they shape responses to trauma, stress, anxiety, and more. A DSS that is not advanced enough fails to account for these nuances, resulting in biased algorithms that diagnose patients using historical data skewed toward the most common backgrounds24.

When exploring empirical data on ML-based tools, one review found that 90% of 114 publications failed to discuss the ethnicity and background of participants26. This indicates that potential biases are not actively explored or reported for ML and AI-based tools. Further research shows that existing disparities between social groups make training sets unrepresentative of the population, creating a negative feedback loop. This bias leads to minorities being misdiagnosed or mistreated, since signs of mental illness vary from person to person. Because mental health diagnosis is not as structured as that of other health issues, such as heart failure, a DSS tool that fails to account for the variety of paths leading to a diagnosis is not accurate enough21.

In the case of mental health illnesses, there is no doubt that a person's identity and general background significantly shape their individual experience of illness and how it manifests25. One of the major contributors to potential bias is socioeconomic difference, as answers may deviate from the patterns ML and AI tools were trained on. This continues to keep lower-income individuals from benefiting from cheaper, digital diagnosis, since their responses may converge on diagnoses different from their actual condition when tools do not account for a variety of backgrounds and reactions to mental health issues27. This limitation persists in DSS tools whose models are not nuanced enough to capture these differences.

Limits to Chatbots in Treatment

Earlier in the report, we briefly discussed the possibility of chatbots taking over the role of traditional therapy. While increasingly plausible, this prospect comes with various limitations, the most significant being the loss of the personal connection that is integral to the treatment process and the risk of misinformation.

As mentioned above, a large portion of the success in treating mental health issues comes from the relationship developed with the professional to whom the patient is referred15. Connections built on trust and freedom from judgment help patients remain vulnerable and open to aid, which is integral to ensuring that patients get diagnosed and treated.

A secondary limitation of this form of treatment is the possibility of internet chatbots gathering and learning from potentially harmful data28. The impact is that individuals could be exposed to chatbot advice that puts them in dangerous situations. In a review of 12 widely cited studies on chatbots, only two deemed chatbots safe for use; the rest stated that further research is needed before digital tools can replace human connections and provide accurate information29.

The more widespread and heavily promoted chatbots become, the more likely such harmful circumstances are to occur, since more patients risk receiving adverse advice; at the same time, this must be weighed against the many benefits of easily accessible treatment options30. For now, these tools still lack focus in their advice and tend to fall back on the 'closest' answer, which is not always the most accurate.

At this point, the prospect of professionally endorsed chatbots remains unclear, yet the tools' limitations are well known to professionals and researchers.

Discussion

Mental health will remain a rampant issue, especially as awareness and advocacy about the topic pick up momentum. As such, ML and AI tools that aim to lessen the load on existing professionals will become imperative, as their ability to diagnose patients accurately and quickly keeps the healthcare system on top of increased demand for diagnosis. However, there is no doubt that several barriers exist before the complete integration of Decision Support Systems (DSS) tools into the mental health field. 

We note that a primary concern remains ethics, with most questions centering on confidentiality and how to maintain it. So far, while there are some rules for safeguarding data gathered through AI, most of the field lacks a clear code of conduct31. Legal and ethical concerns will continue to arise if the industry is not given clarity, and rapid development in technology has yet to be matched by equally adept policy32. Four principles that currently appear only in fragments across ML and AI-based tools are beneficence, non-maleficence, autonomy, and justice33; these principles are gradually being translated into legal policies that protect user privacy and ensure ethical data gathering.

Specific legal instruments, such as the California Privacy Rights Act, have been amended to include regulations on the use of ML and AI34. This restricts companies' ability to extract private information and use it with harmful intent. Further acts along similar lines are needed to ensure a smooth transition of ML and AI into healthcare, and corporations are adopting strict internal ML and AI policies to maintain user privacy35.

Another primary concern over time is the impact on people currently employed in healthcare roles. With more robust ML and AI tools, professionals can focus their time on the exceptional cases that fall outside what DSS systems can handle. This streamlines workloads, allowing existing professionals to optimize their time.

However, several observers see the further development of DSS tools as a potential harm to healthcare workers, as psychiatrists specializing in diagnosis and consultation could be displaced36. Should DSS tools be refined to address all backgrounds and ethical concerns, some job losses are seen as inevitable, especially in high-skill roles such as physicians37. In the realm of treatment, however, where human connection is integral, experts in the field are expected to retain most of their jobs15.

Lastly, longevity is a concern specific to the treatment of many mental health issues, as patients rely heavily on forms of therapy that build secure connections to work through their struggles38. While chatbots are adequate for short-term problems, the long-term future of these tools is unknown because we do not yet understand the impact of this form of treatment on an individual's development20. If we can harness these tools well, however, there are several positive outcomes, such as reduced inequality in access to healthcare.

A significant benefit of integrating ML and AI tools into mental health diagnosis and treatment is accessibility. Healthcare becomes far more accessible because ML and AI tools can handle large datasets at once, allowing the system to accommodate more users39. For most individuals, the cost of diagnosis therefore drops significantly. The economic impacts are vast, as mental health tools become more affordable and accessible and ML and AI become easier to integrate into many communities39. However, this depends on tools accounting for a variety of backgrounds; without that, healthcare cannot accurately diagnose and treat all individuals27.

The increasing demand for diagnosis and treatment has not made integrating new tools any easier, as more people seek personal connections with mental health professionals while too few professionals exist to meet the demand1.

As development continues on the ethical and legal fronts alongside the digital one, a balance must be identified so that everyone can make the most of the resources developed without harm to intrinsic human rights40.

Conclusion

For now, complicated webs of information surround the topic of ML and AI in the mental health field. With several opinions clashing over their application to diagnosis and treatment, we hope to see movement in some form as clarity emerges around the ethical and legal barriers. Current DSS tools promise a strong future for mental health diagnosis; however, further development is needed to account for different groups and minorities within these models. With better representation within ML and AI tools, healthcare can become accessible and widespread. We predict that as this happens, there will be a decline in jobs within the field of mental health diagnosis; however, we are confident that the remaining professionals will not be overwhelmed by numbers of patients that outpace their time and abilities.

Similarly, treatment options in the form of chatbots require further development. Given the risks of misinformation and threats to data privacy, clearer legal boundaries are needed before clinics can embrace digital therapy and treatment. As the number of patients continues to grow, we predict steadily growing attention to the mental health field. With time, we hope that further development of DSS tools will reduce the strain on the existing population of professionals. Further research on the success rates of DSS tools across different backgrounds is also needed, followed by solid legal frameworks for ML- and AI-based models in healthcare.

References

  1. Tachmazidis, I., Chen, T., Adamou, M., & Antoniou, G. (2020). A hybrid AI approach for supporting clinical diagnosis of attention deficit hyperactivity disorder (ADHD) in adults. Health Information Science and Systems, 9(1).
  2. Gates, A., & Mohiuddin, S. (2022). Addressing the mental health workforce shortage through the Resident Physician Shortage Reduction Act of 2021. Academic Psychiatry, 46(4), 540–541.
  3. U.S. Department of Health and Human Services. (2023, September 28). COVID-19 mental health information and resources. National Institutes of Health.
  4. Tutun, S., Johnson, M. E., Ahmed, A., Albizri, A., Irgil, S., Yesilkaya, I., Ucar, E. N., Sengun, T., & Harfouche, A. (2022). An AI-based decision support system for predicting mental health disorders. Information Systems Frontiers, 25(3), 1261–1276.
  5. Curto, M., Pompili, E., Silvestrini, C., Bellizzi, P., Navari, S., Pompili, P., Manzi, A., Bianchini, V., Carlone, C., Ferracuti, S., Nicolò, G., & Baldessarini, R. J. (2018). A novel SCL-90-R six-item factor identifies subjects at risk of early adverse outcomes in public mental health settings. Psychiatry Research, 267, 376–381.
  6. Shatte, A., Hutchinson, D., & Teague, S. (2018). Machine learning in mental health: A systematic scoping review of methods and applications.
  7. Abiri, F. A., & Shairi, M. R. (2020). Short forms of Symptom Checklist (SCL): Investigation of validity & reliability. Biannual Journal of Clinical Psychology & Personality, 18(1).
  8. Abiri, F. A., & Shairi, M. R. (2020). Short forms of Symptom Checklist (SCL): Investigation of validity & reliability. Biannual Journal of Clinical Psychology & Personality, 18(1).
  9. Franke, G. H., Jaeger, S., Glaesmer, H., Barkmann, C., Petrowski, K., & Braehler, E. (2017). Psychometric analysis of the Brief Symptom Inventory 18 (BSI-18) in a representative German sample. BMC Medical Research Methodology, 17(1).
  10. Moraes, R., Valiati, J. F., & Gavião Neto, W. P. (2013). Document-level sentiment classification: An empirical comparison between SVM and ANN. Expert Systems with Applications, 40(2), 621–633.
  11. Ren, J. (2012). ANN vs. SVM: Which one performs better in classification of MCCs in mammogram imaging. Knowledge-Based Systems, 26, 144–153.
  12. Fathi, S., Ahmadi, M., Birashk, B., & Dehnad, A. (2020). Development and use of a clinical decision support system for the diagnosis of social anxiety disorder. Computer Methods and Programs in Biomedicine, 190, 105354.
  13. Jang, J.-S. R. (1993). ANFIS: Adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics, 23(3), 665–685.
  14. Gibbons, R. D., Hooker, G., Finkelman, M. D., Weiss, D. J., Pilkonis, P. A., Frank, E., Moore, T., & Kupfer, D. J. (2013). The computerized adaptive diagnostic test for major depressive disorder (CAD-MDD). The Journal of Clinical Psychiatry, 74(7), 669–674.
  15. Cameron, G., Cameron, D., Megaw, G., Bond, R., Mulvenna, M., O’Neill, S., Armour, C., & McTear, M. (2017). Towards a chatbot for digital counseling. Electronic Workshops in Computing.
  16. Koulouri, T., Macredie, R. D., & Olakitan, D. (2022). Chatbots to support young adults’ mental health: An exploratory study of acceptability. ACM Transactions on Interactive Intelligent Systems, 12(2), 1–39.
  17. Miner, A. S., Milstein, A., & Hancock, J. T. (2017). Talking to machines about personal mental health problems. JAMA, 318(13), 1217.
  18. Denecke, K., Abd-Alrazaq, A., & Househ, M. (2021). Artificial intelligence for chatbots in mental health: Opportunities and challenges. Multiple Perspectives on Artificial Intelligence in Healthcare, 115–128.
  19. Lucas, G. M., Gratch, J., King, A., & Morency, L.-P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94–100.
  20. Torous, J., Nicholas, J., Larsen, M. E., Firth, J., & Christensen, H. (2018). Clinical review of user engagement with mental health smartphone apps: Evidence, theory and improvements. Evidence Based Mental Health, 21(3), 116–119.
  21. Walsh, C. G., Chaudhry, B., Dua, P., Goodman, K. W., Kaplan, B., Kavuluru, R., Solomonides, A., & Subbian, V. (2020). Stigma, biomarkers, and algorithmic bias: Recommendations for precision behavioral health with artificial intelligence. JAMIA Open, 3(1), 9–15.
  22. O’Connor, E. A., Henninger, M. L., Perdue, L. A., Coppola, E. L., Thomas, R. G., & Gaynes, B. N. (2023). Anxiety screening. JAMA, 329(24), 2171.
  23. Kroenke, K., Spitzer, R. L., & Williams, J. B. (2001). The PHQ-9. Journal of General Internal Medicine, 16(9), 606–613.
  24. Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387.
  25. Barker-Collo, S. L. (2003). Culture and validity of the Symptom Checklist-90-Revised and Profile of Mood States in a New Zealand student sample. Cultural Diversity and Ethnic Minority Psychology, 9(2), 185–196.
  26. Crowley, R. J., Tan, Y. J., & Ioannidis, J. P. (2020). Empirical assessment of bias in machine learning diagnostic test accuracy studies. Journal of the American Medical Informatics Association, 27(7), 1092–1101.
  27. Can AI help reduce disparities in general medical and mental health care? (2019). AMA Journal of Ethics, 21(2).
  28. Cameron, G., Cameron, D., Megaw, G., Bond, R., Mulvenna, M., O’Neill, S., Armour, C., & McTear, M. (2019). Assessing the usability of a chatbot for mental health care. Internet Science, 121–132.
  29. Abd-Alrazaq, A. A., Rababeh, A., Alajlani, M., Bewick, B. M., & Househ, M. (2020). Effectiveness and safety of using chatbots to improve mental health: Systematic review and meta-analysis. Journal of Medical Internet Research, 22(7).
  30. Cameron, G., Cameron, D., Megaw, G., Bond, R., Mulvenna, M., O’Neill, S., Armour, C., & McTear, M. (2019). Assessing the usability of a chatbot for mental health care. Internet Science, 121–132. Note: chatbots have yet to be formally integrated within healthcare systems, indicating room for improvement in the tool. Limitations such as data privacy and harmful advice are at the forefront of clinicians’ concerns, as is a lack of pointed advice; users report that applications designed to help with self-treatment do not help solve the problems they raise (see reference 20).
  31. Panch, T., Mattie, H., & Celi, L. A. (2019). The “inconvenient truth” about AI in healthcare. npj Digital Medicine, 2(1).
  32. Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare, 295–336.
  33. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. SSRN Electronic Journal.
  34. ElBaih, M. (2023). The role of privacy regulations in AI development (a discussion of the ways in which privacy regulations can shape the development of AI). SSRN Electronic Journal.
  35. AI regulation is coming. (2021, August 30). Harvard Business Review.
  36. Minor, A. L., Hansen, A. A. J., Yanny, A. A. M., & Erickson, A. M. (2018, September 12). Will doctors be replaced by algorithms? Scope.
  37. Satya, S. (2021, May 5). Healthcare industry estimating the impact of artificial intelligence … Economics, University of California Berkeley.
  38. Cameron, G., Cameron, D., Megaw, G., Bond, R., Mulvenna, M., O’Neill, S., Armour, C., & McTear, M. (2017). Towards a chatbot for digital counseling. Electronic Workshops in Computing.
  39. Javaid, M., Haleem, A., Pratap Singh, R., Suman, R., & Rab, S. (2022). Significance of machine learning in healthcare: Features, pillars and applications. International Journal of Intelligent Networks, 3, 58–73.
  40. Graham, S., Depp, C., Lee, E. E., Nebeker, C., Tu, X., Kim, H.-C., & Jeste, D. V. (2019). Artificial intelligence for mental health and mental illnesses: An overview. Current Psychiatry Reports, 21(11).
