Seminars
The Laboratory of Artificial Intelligence for Cognitive Science (AICS) announces the start of its academic seminars on Wednesday, 15 May 2024. The seminars provide a friendly and constructive environment for sharing our recent research results and exchanging knowledge and experience in both theoretical and applied AI research. Examples of the former include projects such as (i-a) Gradient descent clustering and (i-b) A filtered gradient descent clustering method to recover communities in attributed networks; examples of the latter include projects such as (ii-a) Deep learning models to meet eye fixation and dyslexia and (ii-b) Identifying the severity and type of aphasia using machine learning methods.
The seminars target a broad audience from different backgrounds, including but not limited to Computer Science, Linguistics, and Neuroscience, and of various professional levels. More precisely, while the technical details of the seminars devoted to applied AI projects should interest students, their results and the theoretical projects may be of more interest to professional researchers and academics.
Each seminar will usually take about an hour and a half, of which 30-40 minutes are devoted to the presentation and the rest to discussing the subject.
We will announce more information a few days prior to each seminar.
The seminars will be held in a hybrid format: online and in person at the Center for Language and Brain's meeting room at 3 Krivokolenny Pereulok, room 302.
To learn more about the AICS, please refer to our website using the link below:
https://www.hse.ru/en/neuroling/vml/
In addition, you can view a short presentation about the laboratory with a brief description of our projects and information about our staff:
Information about the lab (PDF, 874 Kb)
For further inquiries, please, contact the head of the laboratory, Dr. Soroosh Shalileh, via email: sshalileh@hse.ru
"Deep Learning models to meet eye fixation and dyslexia"
Date: 15.05.2024 at 14:00
Subject: Deep Learning models to meet eye fixation and dyslexia
About the lecturers: Maria Krylova (research assistant), Soroosh Shalileh (Ph.D. in Computer Science, Laboratory Head)
Annotation: The current research investigates the following three questions:
How accurately can artificial intelligence (AI) models predict dyslexia using only eye fixation data? What is the most effective way of representing eye fixations for training AI models? Which family of AI models obtains the best results? To this end, we scrutinized four data representations and used two families of AI models, namely ensemble learning and deep learning models. Our experiments showed that treating eye fixations as a time series and applying a Long Short-Term Memory neural network leads to a nearly perfect diagnosis of dyslexia.
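For illustration, below is a minimal sketch (in PyTorch; not the authors' code) of the approach the annotation describes: treating a recording's eye fixations as a time series and classifying it with a Long Short-Term Memory (LSTM) network. The feature set, sequence length, and model sizes are illustrative assumptions.

```python
# Minimal sketch: eye fixations as a time series, classified by an LSTM.
# Feature names and shapes are illustrative assumptions, not the study's setup.
import torch
import torch.nn as nn

class FixationLSTM(nn.Module):
    def __init__(self, n_features=4, hidden_size=64):
        super().__init__()
        # Each fixation is a feature vector, e.g. (x, y, duration, saccade length).
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # binary output: dyslexia vs. control

    def forward(self, x):            # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)     # h: (num_layers, batch, hidden_size)
        return self.head(h[-1])      # logits, shape (batch, 1)

model = FixationLSTM()
batch = torch.randn(8, 120, 4)       # dummy batch: 8 recordings, 120 fixations each
probs = torch.sigmoid(model(batch))  # per-recording probability of dyslexia
```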
Materials: Presentation from the seminar (PDF, 12.41 Mb)
"Artificial intelligence to identify depression from audio information"
Date: 29.05.2024 at 14:00
Subject: Artificial intelligence to identify depression from audio information
About the lecturer: Anna Kazachkova (HSE MS student)
Supervisor: Soroosh Shalileh (Ph.D. in Computer Science, Laboratory Head)
Annotation: Depression is a widespread psychiatric disorder that can significantly deteriorate quality of life. Automatic depression detection could be an accessible and reliable diagnostic tool, addressing current issues in the mental health domain. The purpose of this work is to study how accurately depression can be predicted on a given dataset, and which models and data representations are the most suitable. The study focuses on problem formulations such as binary classification and abnormality detection. The models included convolutional neural networks and a transformer, which were either trained only on our dataset or employed in the form of models pre-trained for image classification. Additionally, a benchmark of classical machine learning algorithms on the Geneva Minimalistic Acoustic Parameter Set features was computed. In total, we obtained a best average ROC-AUC value of 0.72 on the test set, compared to the benchmark of 0.55. This best result was achieved by fine-tuning the InceptionV3 architecture under the one-plus-epsilon optimization algorithm.
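As a rough illustration of the fine-tuning setup mentioned above, the sketch below (PyTorch/torchvision) adapts an ImageNet-pretrained InceptionV3 to binary classification over spectrogram "images" of the audio. The input preparation, optimizer, and loss weighting are illustrative assumptions, and the "one-plus-epsilon" optimization algorithm from the abstract is not reproduced here.

```python
# Hedged sketch: reusing an ImageNet-pretrained InceptionV3 as a binary
# depression classifier over spectrograms. Not the study's exact pipeline.
import torch
import torch.nn as nn
from torchvision import models

# InceptionV3 expects 299x299 3-channel inputs, so spectrograms would first
# be resized and replicated across channels.
net = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 1)                      # binary head
net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, 1)  # auxiliary head

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)  # optimizer is an assumption
criterion = nn.BCEWithLogitsLoss()

spectrograms = torch.randn(4, 3, 299, 299)               # dummy batch
labels = torch.tensor([[0.], [1.], [1.], [0.]])

net.train()
optimizer.zero_grad()
out, aux = net(spectrograms)          # Inception returns main + auxiliary logits
loss = criterion(out, labels) + 0.4 * criterion(aux, labels)
loss.backward()
optimizer.step()
```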
Materials: Presentation from the seminar (PDF, 1.60 Mb)
"Predicting Aphasia Type and Severity Using Machine Learning"
Date: 19.06.2024 at 14:00
Subject: Predicting Aphasia Type and Severity Using Machine Learning
About the lecturer: Matvey Kairov (HSE BSc student and research assistant at the AICS)
Supervisor: Soroosh Shalileh (Ph.D. in Computer Science, Laboratory Head)
Annotation: Aphasia is a language disorder that can result from brain damage, often caused by a stroke or traumatic brain injury. The type and severity of aphasia can vary widely among individuals, making accurate diagnosis and treatment challenging in real-life conditions. In this work, we propose an approach to determining the type and severity of aphasia by applying machine and deep learning to brain MRI (Magnetic Resonance Imaging) scans. By leveraging the power of deep neural networks to augment and analyze brain imaging data, we aim to develop a reliable and automated method for classifying the different types of aphasia and assessing their severity. Our study involves training multiple machine and deep learning models, both to generate synthetic data that augments the existing dataset and to accurately classify and quantify the characteristics of aphasia. The results of this research have the potential to improve the diagnosis and management of aphasia, leading to better outcomes for individuals affected by this debilitating condition.
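As an illustration only (assumptions throughout; this is not the authors' model), here is a compact sketch of the kind of architecture the annotation suggests: a small 3D convolutional network over MRI volumes with two heads, one classifying the aphasia type and one regressing its severity.

```python
# Illustrative sketch: a tiny 3D CNN over MRI volumes with a classification
# head (aphasia type) and a regression head (severity). All sizes are assumptions.
import torch
import torch.nn as nn

class AphasiaNet(nn.Module):
    def __init__(self, n_types=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.type_head = nn.Linear(32, n_types)   # aphasia type (classification)
        self.severity_head = nn.Linear(32, 1)     # severity score (regression)

    def forward(self, volume):                    # volume: (batch, 1, D, H, W)
        feats = self.backbone(volume)
        return self.type_head(feats), self.severity_head(feats)

model = AphasiaNet()
mri = torch.randn(2, 1, 64, 64, 64)               # dummy MRI volumes
type_logits, severity = model(mri)
```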
Materials: Presentation from the seminar (PDF, 5.34 Mb)
"AI Models to Diagnose Depression Using Acoustic Features"
Date: 26.06.2024 at 14:00
Subject: AI Models to Diagnose Depression Using Acoustic Features
About the lecturer: Alexandra Kovaleva (HSE master's student and research assistant at the AICS)
Supervisor: Soroosh Shalileh (Ph.D. in Computer Science, Laboratory Head)
Annotation: Depression is one of the most widespread mental health issues in the world today, affecting an individual’s quality of life to a considerable extent. Many people practice self-diagnosis and try to heal themselves on their own, avoiding a doctor's consultation, because a hospital appointment takes a considerable amount of time and intrudes on the individual's privacy. In this study, we examined various Artificial Intelligence (AI) methods to detect whether a person is suffering from depression, using acoustic features (such as pitch, tone, rhythm, etc.) extracted from their voice.
Assuming that acoustic features are promising indicators of depression, we took a dataset of 346 patients from the Mental Health Research Center in Moscow, Russia, who were asked to record their voices while completing one of the following tasks: describing a picture, reading an IKEA instruction, and telling a personal story. To assess the severity of depression, doctors used two scales: the Hamilton Depression Rating Scale (HDRS) and the Quick Inventory of Depressive Symptomatology (QIDS). We extracted features from the audio recordings of the patients and trained several models, ranging from conventional Machine Learning (ML) models, such as ensemble learning algorithms and k-nearest neighbors, to more advanced deep learning architectures, such as the TabNet and Wide&Deep methods. The results of our study show that several models can achieve high accuracy in predicting depression levels, with a ROC-AUC of approximately 0.62 and an F1-score of approximately 0.7, using picture descriptions as the stimulus for patients. In addition, of the two scales, QIDS yielded the more accurate predictions.
Overall, our results demonstrated that deep learning models have great potential for depression detection using extracted acoustic features; however, further research is required to improve the quality of the obtained results.
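For readers curious about such a pipeline, here is a hedged sketch of a feature-extraction-plus-classifier setup in the spirit of the study. The specific features (computed with librosa) and the gradient-boosting classifier are illustrative stand-ins, not the study's exact configuration; the file paths and labels are placeholders.

```python
# Hedged sketch: summarize each recording's acoustic features into one fixed
# vector, then train a classical ML classifier. Feature choices are assumptions.
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def acoustic_features(path):
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre / "tone"
    rms = librosa.feature.rms(y=y)                       # loudness dynamics
    # Summarize each time series so every recording maps to one fixed vector.
    return np.concatenate([
        [f0.mean(), f0.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [rms.mean(), rms.std()],
    ])

# Hypothetical usage: X is one feature vector per recording; y holds labels
# derived from HDRS/QIDS thresholds for the 346-patient dataset described above.
# X = np.stack([acoustic_features(p) for p in paths])
# clf = GradientBoostingClassifier().fit(X_train, y_train)
# print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```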
Materials: Presentation from the seminar (PDF, 2.36 Mb)