In recent news, researchers from MIT unveiled a brand new AI model which, they say, can detect indicators of depression by listening to how people speak. Listening? Yes, listening.
It may sound a bit weird, especially if you’re familiar with the symptoms of depression, but MIT claims the model is quite successful. By analyzing speech patterns, writing style, and choice of words, MIT’s AI can detect whether a person is suffering from depression. Notably, this is done without the need for specific questions: the model picks it up from a person’s general conversation.
The new model is being called ‘context-free’, because it analyzes how a person communicates rather than what is actually being said. In other words, it picks up on the style of communication to detect depression.
Among the cues the model looks at to assess whether someone has depression is whether a person’s speech hints at happiness, excitement, or sadness. All of this is done purely by listening to intonation and word choice. Isn’t that amazing?
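To get an intuition for what “style, not content” features might look like, here is a minimal, purely illustrative sketch. To be clear: this is not MIT’s model (which is a neural network trained on audio and text); every function and feature name below is invented, and the `<pause>` tokens merely stand in for the prosodic cues (intonation, timing) a real system would extract from audio.

```python
# Hypothetical sketch of "context-free" style features: we ignore what is
# said and measure only how it is said. All names here are invented for
# illustration and do not come from the MIT work.

def style_features(transcript: str) -> dict:
    """Extract content-agnostic style features from a transcript.

    '...' markers in the transcript stand in for pauses, a crude proxy
    for the prosodic signals a real model would take from raw audio.
    """
    words = transcript.replace("...", " <pause> ").split()
    pauses = sum(1 for w in words if w == "<pause>")
    spoken = [w for w in words if w != "<pause>"]
    n = max(len(spoken), 1)  # avoid division by zero on empty input
    return {
        "word_count": len(spoken),            # how much was said
        "pause_rate": pauses / n,             # hesitation per word
        "avg_word_len": sum(len(w) for w in spoken) / n,  # lexical style
    }

feats = style_features("I went to the store... it was... fine I guess")
print(feats)  # e.g. {'word_count': 10, 'pause_rate': 0.2, 'avg_word_len': 3.0}
```

Note that nothing here depends on the topic of the sentence: swap “store” for any other word and the features barely move. That is the spirit of a context-free approach, though the real model learns such patterns from data rather than using hand-picked features.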
Commenting on the new model, lead researcher Tuka Alhanai said that anyone who wants to deploy ‘depression detection models’ in a scalable way should minimize the constraints on the data being used, adding: “If you want to deploy it in a normal, regular conversation and have the AI model pick up from it, it will pick up from the natural interaction as to what the state of the individual is”.
In initial tests, the AI achieved a high success rate of 77% (!), outperforming other models that rely on the orthodox Q&A approach.
The prospects of the new model seem great, but also a bit scary. James Glass, a co-researcher on the project, believes the model could even be used in mobile apps to assess signs of distress picked up in speech and, if needed, send alerts to doctors. Not an idea I’m very fond of.
In a nutshell, the model looks at any cognitive dissonance a person might show. Lead researcher Alhanai feels, however, that its spectrum may broaden further to pick up on additional cues. Which cues those will be, and in what domain it might operate, is left for us to guess.
While the new model is being celebrated by some, there are concerns. Like Tristan Greene at The Next Web: “Passive automated monitoring of human communication sounds like the dystopian future of our nightmares. There are numerous conceivable social and professional environments where a person could be subjected to a question and answer sequence without knowing their mental health would later be evaluated by a machine.” And I understand his point of view. In a dark scenario, big companies could use the model at any time without consent. Predicting or detecting depression should not be something corporations are able to do, by any means.
There are also reservations about the model’s efficacy. During its test runs, Alhanai noted that it might need a lot more data to accurately predict whether a person has depression.
Yes, it is scary. The model will be both predicting and detecting, and there is a difference between the two. That distinction is subtle, and that is precisely why the model has its critics.
Mental health is no joke, and should never be a checkmark on someone’s application form or assessment interview. That being said, there’s no holding back AI and other developments from changing healthcare. For the better. But we have a responsibility to do this with caution and care, and there’s only one way of doing that: debating the possibilities, creating regulations, and being transparent from every perspective.