Artificial Intelligence Meets Dementia

Magnetic resonance (MR) imaging is one of the most powerful noninvasive medical testing tools, and when we combine it with the emerging power of artificial intelligence, great things are possible. Today we look specifically at applying this technology to the brain in order to help understand dementia, with one of the leaders in the field.

Wiro Niessen is a professor at both Erasmus MC, University Medical Center Rotterdam, and Delft University of Technology. He is the founder and scientific lead of Quantib, an AI company in medical imaging. He received a master’s degree in physics and a PhD in medical imaging from Utrecht University.

-2012 ImageNet competition revolutionized machine learning with deep convolutional neural networks

-Data science approach to medical imaging

-How risk factors (genetic and lifestyle) are associated with outcomes

-Rotterdam study for population imaging

-Increased white matter lesions (hyperintensities) associated with increased risk of stroke and dementia

-DTI (diffusion tensor imaging) allows white matter tracts to be seen; AI can automate processing, saving orders of magnitude in processing time

-Automated biomarker extraction from 15,000 brain MRIs

-Hippocampal volume is associated with risk of later developing Alzheimer's disease

-MICCAI Society: Medical Image Computing and Computer Assisted Intervention

-ESR: European Society of Radiology

-FAIR: Findable, Accessible, Interoperable, Reusable data

https://www.erasmusmc.nl/en/research/researchers/niessen-wiro

https://www.quantib.com/about/team/wiro-niessen

*** SUBSCRIBE TO ROBERT LUFKIN MD YOUTUBE CHANNEL HERE *** https://www.youtube.com/channel/UC2w2

*** CONNECT WITH ROBERT LUFKIN MD ON SOCIAL MEDIA ***
Web:
https://robertlufkinmd.com/
Twitter:
https://twitter.com/robertlufkinmd
LinkedIn:
https://www.linkedin.com/in/robertluf

*** THINGS I ACTUALLY USE FOR MY OWN HEALTH AND LONGEVITY ***

[ROBERT LUFKIN MD AMAZON INFLUENCER STOREFRONT] https://www.amazon.com/shop/robertluf

*** GOT A SUGGESTION FOR A SHOW? ***
Contact us at: https://robertlufkinmd.com/contact

*** SPONSORSHIPS & BRANDS ***
We do work with sponsors and brands. If you are interested in working with us and you have a product or service that is of value to the health industry, please contact us at: https://robertlufkinmd.com/contact

NOTE: This is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have. Never disregard professional medical advice or delay in seeking it because of something you have seen here. Robert Lufkin MD may at any time and at its sole discretion change or replace the information available on this channel. To the extent permitted by mandatory law, Robert Lufkin MD shall not be liable for any direct, incidental, consequential, indirect or punitive damages arising out of access to or use of any content available on this channel, including viruses, regardless of the accuracy or completeness of any such content.

Disclaimer: We are ambassadors or affiliates for many of the brands we reference on the channel. ————————————————————————

#longevity #wellness #antiaging #MR #lifestylemedicine #younger #artificialintelligence #biohacking #RobertLufkinMD #wironiessen #Quantib #Brainvolume

TRANSCRIPT:

Robert Lufkin 0:01
Welcome back to the Health Longevity Secrets show. I’m Dr. Robert Lufkin. Magnetic resonance, or MR, imaging is one of the most powerful noninvasive medical testing tools, and when we combine it with the emerging power of artificial intelligence, truly great things are possible. Today, we are looking specifically at applying this technology to the brain in order to help understand dementia, from one of the leaders in the field. Wiro Niessen is a professor at both Erasmus University Medical Center in Rotterdam and Delft University of Technology. He is the founder and scientific lead at Quantib, an AI company in medical imaging. He received a master’s degree in physics and a PhD in medical imaging from Utrecht University. In this episode, we are experimenting with a new format of having a presentation rather than the interview style that we usually do. Please let us know how you feel about it. Dr. Niessen starts with a review of the 2012 ImageNet breakthroughs in machine learning that drive the current progress in artificial intelligence. He also mentions the type of magnetic resonance imaging pulse sequence known as DTI, an abbreviation for diffusion tensor imaging, which measures the diffusion components across images; in the brain these turn out to be particularly useful for defining brain fiber tracts. He mentions the hippocampus, which we will learn about in other episodes; it is a paired structure in our brain that is active in memory, and measuring its volume can be a biomarker for Alzheimer’s disease risk. Two other abbreviations he mentions are the MICCAI Society, the Medical Image Computing and Computer Assisted Intervention Society, and ESR, the European Society of Radiology. This presentation is fairly technical, but it covers state-of-the-art challenges in applying artificial intelligence to MR of the brain to help with our understanding of dementia. Now, please enjoy this presentation with Dr. Wiro Niessen.

Wiro Niessen 2:27
Good afternoon. It’s a pleasure today to present on the topic of when artificial intelligence meets dementia. My name is Wiro Niessen. I’m a professor of biomedical image analysis at Erasmus MC in Rotterdam, a large university medical center in the Netherlands, and at Delft University of Technology. I’m also linked to Quantib, and as a disclosure, I’m also the founder, scientific lead, and shareholder of Quantib. If you work these days in data science and artificial intelligence, these are quite exciting times; the enormous possibilities of machine learning and artificial intelligence are on the cover of many magazines. And even though this technology dates back over half a century, it was really a landmark development in 2012 that brought all this excitement. It was linked to a challenge, the ImageNet challenge, which was organized at Stanford, in which teams were competing to recognize objects in images. At the 2012 competition, a neural network which takes the raw image as input and outputs the classification of whether a certain object is present in the image all of a sudden really outperformed many of the other techniques. Because of that, a lot of people jumped towards the use of neural networks, and now it is not only for image recognition but in many fields that this technology is being employed. It is good to think about why this ImageNet competition, or this algorithm, was so successful. This didn’t come out of the blue. The competition had been organized for many years, and the fact that a large set of imaging data had been annotated and made available for algorithm developers to train and optimize their algorithms was actually key to making progress in the field. So I think there are two aspects that are really very important: this concept of open data to train from, and the challenge aspect, that you really organize a competition and that you can objectively evaluate how well an algorithm performs. These two aspects, I think, have ensured that substantial progress has been made in that field. Since then, we see an enormous uptake of machine learning, not only in the medical domain but basically in all domains of society. And you see, in the field where at least I am working, where machine learning meets medical imaging, that the number of scientific papers on machine learning is now even surpassing the number of papers on medical imaging. But it is actually the merger of the two, medical imaging and machine learning, that is a very exciting area. There have also been some debates about the risks of artificial intelligence, for example the risk that artificial intelligence will replace the jobs of clinicians. I think by now we know that the current state of the art in machine learning and AI is very useful, and we can build very strong algorithms, but we really are complementing human intelligence rather than replacing it, because there are distinct things that current state-of-the-art AI algorithms are good at, and there are certain things human experts are very good at. At least for the coming period, it is about how we can optimally utilize AI in the hands of experts, like radiologists and neurologists, to do the best for our patients and to make the best diagnosis.
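
To make the idea concrete, here is a minimal sketch in PyTorch of a convolutional classifier of the general kind described above: a network that takes a raw image as input and outputs class scores. The layer sizes, class count, and input size are illustrative assumptions, not the actual 2012 ImageNet-winning model.

```python
# Minimal sketch of a convolutional image classifier (illustrative only;
# not the actual 2012 ImageNet-winning architecture).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):  # num_classes is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # raw RGB image in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # collapse spatial dims
        )
        self.classifier = nn.Linear(32, num_classes)      # class scores out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: one 224x224 RGB image in, a vector of class scores out.
scores = TinyConvNet()(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 10])
```

Training such a network on a large annotated dataset is what the open-data and challenge aspects mentioned above make possible.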

So this is really the main topic of this lecture: how can we ensure that the enormous potential of these AI techniques, these neural networks that can learn from examples, is utilized in clinical practice? Now, if I first look at my own domain, the field of medical image processing, we see that AI has already made a large impact. We are developing many algorithms, and these algorithms are aimed at making the field of radiology more objective and quantitative; we want to turn medical imaging data into numbers that somehow are linked to the state or presence of disease, or that can be used to characterize human anatomy or human function. And because of these deep learning techniques, and the possibility to link imaging data directly to outcome, we see a second movement: not only are we going to use AI in order to quantify and describe image data more objectively, we are also interested in directly linking imaging characteristics to outcomes that are clinically relevant. For example, we could try to predict, based on imaging data with a neural network, what the subtype of a tumor is, or we could try to predict, based on MRI scans of the brain, if someone is at increased risk of cognitive decline or dementia, and we could, for example, look at the differential diagnosis of dementia. Now, this is not all straightforward, and people perhaps have underestimated how it can be more difficult in the medical domain compared to other domains to really make an impact. First, we have to realize that in the health domain we need to do more than image perception to eventually make a diagnosis; we combine multiple sources of information. That means that if we train an algorithm, often we need more than image data alone: we need all types of other data, clinical data, genetic data. And it becomes more difficult to collect all this data, especially if we work with larger datasets over multiple hospitals. Then we need a lot of data, because human biology and pathology are highly variable, and we want a representative dataset that includes all the variation that is present in clinical practice; often data bias may also be an issue. If we train our algorithms on different data than the data we apply them to, they may not perform as expected. So there are some challenges, but it is really worthwhile to face these challenges, to address them, and to develop algorithms based on this technology. And why? Because it really helps us to build what we call a learning healthcare system. This technology gives us the power to learn from previous patients to diagnose the next patient better, to make a better prognosis, and to do better treatment selection.
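
As an illustration of combining image-derived measurements with clinical risk factors to predict an outcome, here is a minimal sketch using scikit-learn. All feature names, values, and the outcome label are hypothetical placeholders, not data from the studies discussed here.

```python
# Sketch: combining image-derived measurements with clinical risk factors
# to predict an outcome label. All feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical image-derived features and clinical covariates.
X = np.column_stack([
    rng.normal(3.5, 0.4, n),   # "hippocampal_volume_ml" (illustrative)
    rng.normal(5.0, 3.0, n),   # "white_matter_lesion_volume_ml" (illustrative)
    rng.integers(55, 90, n),   # "age_years"
    rng.integers(0, 2, n),     # "smoker" (0/1)
])
y = rng.integers(0, 2, n)      # placeholder outcome (e.g. later cognitive decline)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```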

So I will illustrate that with the research we have been doing, and I will start with the research we did in the context of a population imaging study. In a population study, we are interested in understanding why certain people are healthy and why other people develop diseases. In order to really get good insight into this, we need to collect a large amount of data, and for that reason these kinds of studies are very good candidates for developing and training AI algorithms, because of the quality and the magnitude of the data that are available. So what is the design of such a population imaging study? We include as many people as we can afford, and we collect risk factors. This may be your genetic liability for disease, so we collect genetic data, but we can also look at lifestyle, whether people smoke or not, diet, and environmental factors. Eventually we are interested in which risk factors really lead to neurological outcomes such as cognitive decline, dementia, or stroke. Normally this relationship between risk factors and outcome is a black box. But if, in such a population imaging study, you start to image frequently, you can start to open the black box. In this way, you can visualize changes that happen to the brain that are, for example, associated with neurodegenerative processes, such as atrophy of the brain or the occurrence of brain lesions. Then you can start to investigate the relationship between risk factors and changes in the appearance of the brain, or between changes in the appearance of the brain and relevant clinical outcomes. Or you can combine risk factors with an MRI brain image in order to predict outcomes. So this really calls for a sort of data science approach, in which you link a lot of data on subjects, including imaging data, to relevant clinical outcomes. Now, a good question would be: is this really a sensible approach? Is there information in the image data that we can use, especially to identify disease at an early stage? Here you see a very nice study, conducted about eight years ago, in which we tried to address specifically that question. We took about 1,000 subjects of the Rotterdam Study and we looked at white matter pathologies, white matter lesions, white matter hyperintensities, on MR brain images. We know that an increased occurrence of these white matter hyperintensities is associated with an increased risk of stroke and an increased risk of dementia. We then used image processing to spatially register the baseline scan to the follow-up scan, we automatically segmented the white matter hyperintensities in both scans, and then we divided the white matter into four regions: regions in which the white matter appeared normal at both baseline and follow-up, regions in which there was already a white matter hyperintensity in the baseline image, regions where white matter lesions had grown, and regions where new white matter lesions appeared. What we did then is we went back to the baseline scan, and we started to compare these four regions across all the MR sequences that we had: was there any statistically significant difference between these regions? Here I show you one example, for fractional anisotropy, which we can derive from diffusion tensor MRI. This measure is related to the microstructural integrity of the white matter pathways, so if it is affected, it is a marker for neurodegeneration.
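
A toy sketch of the kind of region comparison described above: comparing baseline fractional anisotropy (FA) inside white matter that stays normal versus white matter that later develops lesions. The FA map and region masks below are synthetic stand-ins, not Rotterdam Study data.

```python
# Toy sketch: is baseline FA already lower in white matter that will later
# develop lesions than in white matter that stays normal? Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shape = (64, 64, 40)
fa_baseline = rng.uniform(0.1, 0.8, shape)   # stand-in for a real baseline FA map

# Stand-ins for two of the four region masks derived from baseline/follow-up
# white matter hyperintensity segmentations.
normal_both = rng.random(shape) < 0.6        # normal at baseline and follow-up
new_lesions = rng.random(shape) < 0.05       # lesion appears only at follow-up

fa_normal = fa_baseline[normal_both]
fa_future_lesion = fa_baseline[new_lesions]

t, p = stats.ttest_ind(fa_normal, fa_future_lesion, equal_var=False)
print(f"mean FA normal={fa_normal.mean():.3f}, "
      f"future lesion={fa_future_lesion.mean():.3f}, p={p:.3g}")
```

In the real study, the masks come from registered baseline and follow-up segmentations rather than random arrays, but the comparison step looks much like this.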

What we saw here is that this marker was already different at the baseline scan, in scans that we would normally interpret as looking normal. This marker was already different in regions that only years later became visible as white matter lesions. So this indicates that somehow a subclinical disease process is already going on; there are changes happening in the white matter before white matter lesions appear. And that information we are currently not using in order to predict, for example, cognitive decline or dementia. So this actually tells us that it makes sense to think of these AI approaches and data science approaches, in which you use all of the information in the MRI images to better predict relevant outcomes. In our research group, and also in our collaboration with the Rotterdam Study, we started to address this problem as a data science problem. We started to fully automate the biomarker extraction for all the subjects included in the Rotterdam Study, such that per individual we have a descriptor with information about the brain tissue, the presence of atrophy, the localization and volume of white matter lesions, how different regions of the brain are connected, and so on. We utilize all this information to describe every individual in the Rotterdam Study. Now I would like to show you that this processing really has been impacted greatly by artificial intelligence, and I will show you one example. This is an example of an algorithm that we developed to segment the white matter pathways from diffusion tensor MRI. We used to have a method for this that matches an atlas, a sort of average person, to an individual, in order to have a prior on where the different white matter pathways are located. Then we used tracking on the diffusion tensor images to determine a segmentation of the white matter pathways, and this full procedure would take multiple hours to process one single image. What we have done now is develop a novel algorithm based on a neural network. It is a different type of neural network, but the principle is the same as in the ImageNet challenge. This network takes diffusion tensor images as input, and its output provides a segmentation of the white matter pathways. The network is trained on thousands of examples, and after it has been trained, it proves to be a very accurate and robust algorithm for fully automatic segmentation of tracts. A single tract can now be analyzed in less than a second. So you see an enormously disruptive development, in which the image processing becomes much faster because of artificial intelligence. We have brought these kinds of technologies and integrated them into a clinical workstation. This is work with Quantib, now integrated in an FDA-approved workstation, Quantib ND, and the idea of this workstation is really to make the next step in radiology, to make it more objective and quantitative.
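
For illustration, here is a minimal sketch of a voxel-wise segmentation network that takes diffusion-tensor-derived volumes as input and outputs per-voxel tract labels, in the spirit of the approach described above. It is not the actual algorithm from the talk or the Quantib product; the channel counts, tract count, and input shape are assumptions.

```python
# Minimal sketch of a voxel-wise white matter tract segmentation network.
# NOT the actual algorithm described in the talk; sizes are assumptions.
import torch
import torch.nn as nn

class TinyTractSegNet(nn.Module):
    def __init__(self, in_channels: int = 6, num_tracts: int = 25):
        # in_channels: e.g. the 6 unique diffusion tensor components (assumption)
        # num_tracts: number of tract labels to predict (placeholder value)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, num_tracts, kernel_size=1),  # per-voxel tract scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (batch, num_tracts, D, H, W)

# One forward pass over a diffusion-derived volume, yielding per-voxel
# probabilities for each tract label.
volume = torch.randn(1, 6, 32, 64, 64)
tract_probs = torch.sigmoid(TinyTractSegNet()(volume))
print(tract_probs.shape)  # torch.Size([1, 25, 32, 64, 64])
```

A single forward pass like this is why a trained network can segment a tract in well under the hours needed by the atlas-plus-tracking pipeline described above.
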
What you see here is that imaging biomarkers related to the volume of the brain, of the different lobes, and of the hippocampus, the presence of white matter lesions, and also, if you have multiple scans, the change of the volume of the brain over time, are not only calculated with these tools, but we also provide reference data from the population-based Rotterdam Study, such that you can interpret these markers and see how an individual person deviates from their peer population.
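
Here is a small sketch of that normative-comparison idea: expressing an individual's imaging biomarker as a z-score and percentile relative to a reference (peer) population. The reference distribution and the individual measurement are made-up numbers, not Rotterdam Study reference data.

```python
# Sketch of comparing an individual's imaging biomarker against reference data
# from a peer population. All values below are made-up placeholders.
import numpy as np
from scipy import stats

# Hypothetical reference distribution of hippocampal volume (ml) for age-matched peers.
reference_volumes = np.random.default_rng(0).normal(loc=3.4, scale=0.35, size=2000)

individual_volume = 2.8  # made-up measurement for one person

z = (individual_volume - reference_volumes.mean()) / reference_volumes.std()
percentile = stats.percentileofscore(reference_volumes, individual_volume)
print(f"z-score = {z:.2f}, percentile = {percentile:.1f} (relative to peer population)")
```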

So I think there is an enormous potential and promise in these technologies, but we really need to work together, academia, industry, and clinical end users, in order to ensure that these algorithms make a positive impact on clinical practice. First and foremost, in order to have good-quality algorithms, we need to train algorithms with state-of-the-art methods on high-quality and large numbers of data that are representative of clinical practice. In that sense, it is important that we collaborate to ensure that algorithms are optimized in a multicenter setting. Then we need proper validation strategies: you need to be able to evaluate the performance of algorithms in order to know how you can use them in clinical practice. And finally, it is important that these kinds of algorithms are seamlessly integrated into the workflow. One of the things that I think is important is that we work towards better reuse of health data to optimize algorithms. You see everywhere that where large data resources become available, the whole field of artificial intelligence gets a boost, and a lot of people start to develop algorithms that are more informative, more accurate, and have more impact. So we should globally start to build infrastructures and promote the reuse of data, of course in a responsible manner, taking care to adhere to privacy guidelines and consent for reuse of data. Utilizing data to train algorithms is also in the public interest. In the Netherlands, we are for that reason working towards an infrastructure that enables distributed learning. The idea of distributed learning is that you do not have to bring all your data to a central location, but rather that an algorithm can be trained even if the data reside at multiple locations: you bring the network to all the sites, you optimize it locally, but you share the results of the optimization. In this way, you can optimize an algorithm in a distributed fashion. At this moment, these kinds of infrastructures do not really exist yet, so it is good that we build them. For a company like ours, Quantib, we really have to invest in networks of partners. Here you see an example of a recent network that we built; this is not in the neuro domain but in the prostate domain, in which we work with multiple clinical centers in order to get training data to train and optimize our algorithms, because this is the only way to get algorithms that are sufficiently accurate to be used in clinical practice. The next step after training algorithms is validation, and here a warning is in place for a lot of what happens in the literature: it is easy to claim a certain performance for an algorithm, but if you do it on a retrospective dataset, then often this can be a biased dataset or a dataset that is similar to the one that was used when you trained the algorithm. For AI algorithms, it is really important to have an indication of how well they work in a clinically realistic setting. So the approach we always take is that, next to trying to build a training dataset that is of high quality, we also estimate the accuracy on a totally independent dataset, so that we can provide numbers on that and estimate generalizability.
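
A toy sketch of the distributed learning idea described above: each site runs a few local training steps on its own data, and only the model parameters, never the patient data, are shared and averaged. The model (a simple logistic regression) and the per-site data are placeholders.

```python
# Toy sketch of distributed (federated) learning: sites train locally,
# only parameters are shared and averaged. Data and model are placeholders.
import numpy as np

rng = np.random.default_rng(0)
sites = [
    (rng.normal(size=(100, 4)), rng.integers(0, 2, 100)),  # site A: (features, labels)
    (rng.normal(size=(80, 4)),  rng.integers(0, 2, 80)),   # site B
    (rng.normal(size=(120, 4)), rng.integers(0, 2, 120)),  # site C
]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, lr=0.1, steps=20):
    # A few steps of logistic-regression gradient descent on one site's data.
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(4)
for _round in range(10):
    # Each site updates a copy of the current global model on local data only.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in sites]
    # The server averages the site models, weighted by each site's sample size.
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("aggregated model weights:", np.round(w_global, 3))
```

Only the weight vectors travel between sites and the coordinating server, which is what makes this approach compatible with keeping the data at each hospital.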

Here, we really think that we can learn from the ImageNet challenge in the medical imaging domain. Since 2007, in medical image analysis conferences, we also have the concept of challenges: the fact that we objectively compare multiple algorithms on a certain task. So we also collaborate there between the machine learning and medical image analysis communities and the different clinical communities. An important collaboration is with the MICCAI Society. The MICCAI Society is the largest scientific society in the field of medical image computing and computer assisted interventions, so there you have a lot of data scientists and people in machine learning and AI who develop algorithms for interpreting image data, and MICCAI has a long tradition of organizing challenges. They are now teaming up with the American College of Radiology, RSNA, and ESR, in order to ensure that these challenges are initiated from a relevant clinical question, a relevant clinical use case. And here it is quite important to mention as well that one of the initiatives of the American College of Radiology has been to generate a directory of relevant AI use cases, with information on what kind of input data are needed, what kind of analysis should be done, and what kind of metrics should be used. This is an important guideline for developers of algorithms. I think this is something we have to work out in multiple domains, also in the field of dementia: ensuring that we create clear use cases, and also collecting data around them, will help to further improve the quality and the impact of AI tools for dementia. So this is really a collaborative effort. The startups in AI often have the best technology, but you need to work with, for example, university medical centers and clinics in order to get the data, and for distribution the smaller companies sometimes also have to work with larger companies. So it is a joint effort by multiple parties to ensure that we bring the enormous potential of AI for health and for medical imaging to clinical practice. And there is a nice article by Rector that describes the steps that need to be taken for the introduction of these technologies. Summarizing, I think that not only in the field of dementia but also in other fields, we will see that we move towards better use of previously seen patients in order to diagnose and treat the next patient better. We need to learn from our previous patients. An important step for our field, learning from the field of computer vision in which the ImageNet challenge was so successful, is that we have to make our data better accessible to learn from. And we also have to do a good job of validating our algorithms. Essentially, this means we need to co-create: we need to bring the people that know AI, that know how to develop it, how to turn it into tools with a good user interface, and how to integrate it in the workflow, together with clinicians that have the clinical question and want to advance their field of practice, to jointly develop the next generation of tools. It is very exciting for us to be in that field and to make a contribution there. So with this, I would like to thank you for attending.

Robert Lufkin 28:52
Now, this is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have. Never disregard professional medical advice or delay in seeking it because of something you have seen here. If you find this to be of value to you, please hit that like button and subscribe, and support the work we do on this channel. Also, we take your suggestions and advice very seriously; please let us know what you would like to see on this channel. Thanks for watching, and we hope to see you next time.