<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Artificial Medical Intelligence Group (AMIGO) at King's College London (KCL) | AMIGO</title><link>https://amigolab.github.io/</link><atom:link href="https://amigolab.github.io/index.xml" rel="self" type="application/rss+xml"/><description>Artificial Medical Intelligence Group (AMIGO) at King's College London (KCL)</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-gb</language><lastBuildDate>Mon, 24 Oct 2022 00:00:00 +0000</lastBuildDate><image><url>https://amigolab.github.io/media/logo_hu16934333430608336738.png</url><title>Artificial Medical Intelligence Group (AMIGO) at King's College London (KCL)</title><link>https://amigolab.github.io/</link></image><item><title>Quality Control with Foundation Models for Radiology</title><link>https://amigolab.github.io/positions/foundation-models-radiology/</link><pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/positions/foundation-models-radiology/</guid><description>&lt;h2 id="internship--student-project-opportunity">Internship / Student Project Opportunity&lt;/h2>
&lt;p>We are developing &lt;strong>foundation models for large-scale radiology data&lt;/strong> with a focus on robust &lt;strong>representation learning&lt;/strong> and &lt;strong>quality control&lt;/strong>.&lt;/p>
&lt;p>The project explores how to learn general-purpose imaging representations that transfer across tasks such as scan-level quality assessment. We are particularly interested in self-supervised and weakly supervised learning strategies that can leverage large, heterogeneous radiology datasets.&lt;/p>
&lt;p>During the project, students may work on:&lt;/p>
&lt;ul>
&lt;li>pretraining and adapting radiology foundation models on large-scale imaging data;&lt;/li>
&lt;li>learning transferable visual representations for downstream diagnostic tasks;&lt;/li>
&lt;li>designing quality control pipelines to detect low-quality scans and out-of-distribution inputs;&lt;/li>
&lt;li>evaluating robustness, calibration, and generalization across sites and cohorts.&lt;/li>
&lt;/ul>
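&lt;p>As a toy, purely illustrative sketch of the quality-control idea (this is not project code; the function name, threshold, and data below are all hypothetical), a scan can be flagged when a per-scan summary statistic deviates strongly from the rest of the cohort:&lt;/p>

```python
import statistics

def flag_outlier_scans(scan_stats, z_threshold=3.0):
    """Flag scans whose summary statistic deviates strongly from the cohort.

    scan_stats: dict mapping scan id -> one scalar summary (e.g. mean intensity).
    Returns the ids whose absolute z-score exceeds the threshold.
    """
    values = list(scan_stats.values())
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [
        scan_id
        for scan_id, value in scan_stats.items()
        if sigma > 0 and abs(value - mu) / sigma > z_threshold
    ]

# Example: one scan with an implausibly high mean intensity.
stats = {f"scan_{i}": 100.0 + i * 0.1 for i in range(20)}
stats["scan_bad"] = 500.0
print(flag_outlier_scans(stats))  # -> ['scan_bad']
```

&lt;p>Real quality-control pipelines would of course score learned representations rather than raw intensity summaries, but the thresholding logic is analogous.&lt;/p>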
&lt;p>We welcome applications from candidates with a &lt;strong>strong machine learning and coding background&lt;/strong> (for example, Python/PyTorch, deep learning, and practical experience handling medical imaging data).&lt;/p>
&lt;p>This is suitable as an internship or student research project for candidates interested in clinically relevant AI and translational machine learning.&lt;/p>
&lt;h3 id="how-to-apply">How to apply&lt;/h3>
&lt;p>Please email the following documents:&lt;/p>
&lt;ul>
&lt;li>CV&lt;/li>
&lt;li>Academic transcript&lt;/li>
&lt;li>A short motivation letter&lt;/li>
&lt;/ul>
&lt;p>Send your application to: &lt;a href="mailto:yigitavci@kcl.ac.uk">yigit.avci@kcl.ac.uk&lt;/a>&lt;/p></description></item><item><title>Contact</title><link>https://amigolab.github.io/contact/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/contact/</guid><description/></item><item><title>People</title><link>https://amigolab.github.io/people/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/people/</guid><description/></item><item><title>Federated Learning Interoperability Platform (FLIP)</title><link>https://amigolab.github.io/currentresearch/flip/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/currentresearch/flip/</guid><description>&lt;p>FLIP is an open-source platform that links data from multiple NHS Trusts to enable federated training and evaluation of medical imaging AI models, while ensuring data privacy and security. Developed by the London AI Centre in collaboration with Guy&amp;rsquo;s and St Thomas&amp;rsquo; NHS Foundation Trust and King&amp;rsquo;s College London, FLIP comprises three main components.&lt;/p>
&lt;p>&lt;strong>Secure Enclaves&lt;/strong> - Dedicated secure data storage within each partner NHS Trust&amp;rsquo;s firewall keeps sensitive patient data inside the Trust. Data from across the Trusts&amp;rsquo; patient records systems is transferred into the secure enclave for curation and aggregation, unifying medical imaging scans from PACS and other electronic health data.&lt;/p>
&lt;p>&lt;strong>Interoperability and Data Harmonisation&lt;/strong> - Electronic healthcare records are complex and heterogeneous. FLIP uses ontological and data interoperability standards to structure and harmonise data across multiple hospitals and clinical systems, enabling AI algorithms to query, learn from, and act on data via an open standards-based interface.&lt;/p>
&lt;p>&lt;strong>Federated Learning and Evaluation&lt;/strong> - FLIP brings algorithms to the data within each NHS Trust&amp;rsquo;s secure enclave, without sharing information outside the secure firewall or breaking local governance rules. Algorithmic models are sent to multiple Trusts and trained on local data before being securely combined to achieve consensus. The platform supports both NVIDIA FLARE and Flower federated learning frameworks.&lt;/p></description></item><item><title>MONAI</title><link>https://amigolab.github.io/project/monai/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/project/monai/</guid><description>&lt;p>MONAI is a &lt;a href="https://pytorch.org/" target="_blank" rel="noopener">PyTorch&lt;/a>-based, &lt;a href="https://github.com/Project-MONAI/MONAI/blob/master/LICENSE" target="_blank" rel="noopener">open-source&lt;/a> framework for deep learning in healthcare imaging. Its ambitions are:&lt;/p>
&lt;ul>
&lt;li>developing a community of academic, industrial and clinical researchers collaborating on a common foundation;&lt;/li>
&lt;li>creating state-of-the-art, end-to-end training workflows for healthcare imaging;&lt;/li>
&lt;li>providing researchers with an optimized and standardized way to create and evaluate deep learning models.&lt;/li>
&lt;/ul>
&lt;h2 id="features">Features&lt;/h2>
&lt;p>The codebase is currently under active development.&lt;/p>
&lt;ul>
&lt;li>flexible pre-processing for multi-dimensional medical imaging data;&lt;/li>
&lt;li>compositional &amp;amp; portable APIs for ease of integration in existing workflows;&lt;/li>
&lt;li>domain-specific implementations for networks, losses, evaluation metrics and more;&lt;/li>
&lt;li>customizable design for varying user expertise;&lt;/li>
&lt;li>multi-GPU data parallelism support.&lt;/li>
&lt;/ul>
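&lt;p>As a plain-Python sketch of the kind of domain-specific evaluation metric listed above (MONAI&amp;rsquo;s own implementations are tensor-based and batched; this simplified version is only illustrative), the Dice score for two binary segmentation masks:&lt;/p>

```python
def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two flat binary masks (0/1 values).

    A small epsilon keeps the score defined when both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

pred   = [0, 1, 1, 1, 0, 0]
target = [0, 1, 1, 0, 0, 0]
print(round(dice_score(pred, target), 3))  # -> 0.8
```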
&lt;h2 id="installation">Installation&lt;/h2>
&lt;p>To install &lt;a href="https://pypi.org/project/monai/" target="_blank" rel="noopener">the current release&lt;/a>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-fallback" data-lang="fallback">&lt;span class="line">&lt;span class="cl">pip install monai
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>To install from the source code repository:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-fallback" data-lang="fallback">&lt;span class="line">&lt;span class="cl">pip install git+https://github.com/Project-MONAI/MONAI#egg=MONAI
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Alternatively, a pre-built Docker image is available via &lt;a href="https://hub.docker.com/r/projectmonai/monai" target="_blank" rel="noopener">DockerHub&lt;/a>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-fallback" data-lang="fallback">&lt;span class="line">&lt;span class="cl"># with docker v19.03+
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">docker run --gpus all --rm -ti --ipc=host projectmonai/monai:latest
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="getting-started">Getting Started&lt;/h2>
&lt;p>Tutorials &amp;amp; examples are located at &lt;a href="https://github.com/Project-MONAI/MONAI/tree/master/examples" target="_blank" rel="noopener">monai/examples&lt;/a>.&lt;/p>
&lt;p>Technical documentation is available via &lt;a href="https://monai.readthedocs.io/en/latest/" target="_blank" rel="noopener">Read the Docs&lt;/a>.&lt;/p>
&lt;h2 id="contributing">Contributing&lt;/h2>
&lt;p>For guidance on making a contribution to MONAI, see the &lt;a href="https://github.com/Project-MONAI/MONAI/blob/master/CONTRIBUTING.md" target="_blank" rel="noopener">contributing guidelines&lt;/a>.&lt;/p>
&lt;h2 id="links">Links&lt;/h2>
&lt;ul>
&lt;li>Website: &lt;a href="https://monai.io/" target="_blank" rel="noopener">https://monai.io/&lt;/a>&lt;/li>
&lt;li>API documentation: &lt;a href="https://monai.readthedocs.io/en/latest/" target="_blank" rel="noopener">https://monai.readthedocs.io/en/latest/&lt;/a>&lt;/li>
&lt;li>Code: &lt;a href="https://github.com/Project-MONAI/MONAI" target="_blank" rel="noopener">https://github.com/Project-MONAI/MONAI&lt;/a>&lt;/li>
&lt;li>Project tracker: &lt;a href="https://github.com/Project-MONAI/MONAI/projects" target="_blank" rel="noopener">https://github.com/Project-MONAI/MONAI/projects&lt;/a>&lt;/li>
&lt;li>Issue tracker: &lt;a href="https://github.com/Project-MONAI/MONAI/issues" target="_blank" rel="noopener">https://github.com/Project-MONAI/MONAI/issues&lt;/a>&lt;/li>
&lt;li>Wiki: &lt;a href="https://github.com/Project-MONAI/MONAI/wiki" target="_blank" rel="noopener">https://github.com/Project-MONAI/MONAI/wiki&lt;/a>&lt;/li>
&lt;li>Test status: &lt;a href="https://github.com/Project-MONAI/MONAI/actions" target="_blank" rel="noopener">https://github.com/Project-MONAI/MONAI/actions&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>NiftyNet</title><link>https://amigolab.github.io/project/niftynet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/project/niftynet/</guid><description>&lt;p>NiftyNet is a &lt;a href="https://www.tensorflow.org/" target="_blank" rel="noopener">TensorFlow&lt;/a>-based open-source convolutional neural networks (CNN) platform for research in medical image analysis and image-guided therapy. NiftyNet&amp;rsquo;s modular structure is designed for sharing networks and pre-trained models. Using this modular structure you can:&lt;/p>
&lt;ul>
&lt;li>Get started with established pre-trained networks using built-in tools&lt;/li>
&lt;li>Adapt existing networks to your imaging data&lt;/li>
&lt;li>Quickly build new solutions to your own image analysis problems&lt;/li>
&lt;/ul>
&lt;p>NiftyNet is developed by a consortium of research organisations (BMEIS – &lt;a href="https://www.kcl.ac.uk/lsm/research/divisions/imaging/index.aspx" target="_blank" rel="noopener">School of Biomedical Engineering and Imaging Sciences, King&amp;rsquo;s College London&lt;/a>; WEISS – &lt;a href="http://www.ucl.ac.uk/weiss" target="_blank" rel="noopener">Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL&lt;/a>; CMIC – &lt;a href="http://cmic.cs.ucl.ac.uk/" target="_blank" rel="noopener">Centre for Medical Image Computing, UCL&lt;/a>; HIG – High-dimensional Imaging Group, UCL), with BMEIS acting as the consortium lead.&lt;/p>
&lt;h3 id="features">Features&lt;/h3>
&lt;ul>
&lt;li>Easy-to-customise interfaces of network components&lt;/li>
&lt;li>Sharing networks and pretrained models&lt;/li>
&lt;li>Support for 2-D, 2.5-D, 3-D, 4-D inputs*&lt;/li>
&lt;li>Efficient training with multiple-GPU support&lt;/li>
&lt;li>Implementation of recent networks (HighRes3DNet, 3D U-net, V-net, DeepMedic)&lt;/li>
&lt;li>Comprehensive evaluation metrics for medical image segmentation&lt;/li>
&lt;/ul>
&lt;p>NiftyNet is not intended for clinical use.&lt;/p>
&lt;p>*2.5-D: volumetric images processed as a stack of 2D slices; 4-D: co-registered multi-modal 3D volumes&lt;/p>
&lt;h3 id="installation">Installation&lt;/h3>
&lt;ol>
&lt;li>Please install the appropriate &lt;a href="https://www.tensorflow.org/" target="_blank" rel="noopener">TensorFlow&lt;/a> package:
&lt;ul>
&lt;li>&lt;code>pip install &amp;quot;tensorflow==1.15.*&amp;quot;&lt;/code>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;code>pip install niftynet&lt;/code>&lt;/li>
&lt;/ol>
&lt;p>All other NiftyNet dependencies are installed automatically as part of the pip installation process.&lt;/p>
&lt;p>To install from the source repository, please see &lt;a href="http://niftynet.readthedocs.io/en/dev/installation.html" target="_blank" rel="noopener">the instructions&lt;/a>.&lt;/p>
&lt;h3 id="documentation">Documentation&lt;/h3>
&lt;p>The API reference and how-to guides are available on &lt;a href="http://niftynet.rtfd.io/" target="_blank" rel="noopener">Read the Docs&lt;/a>.&lt;/p>
&lt;h3 id="useful-links">Useful links&lt;/h3>
&lt;ul>
&lt;li>&lt;a href="http://niftynet.io/" target="_blank" rel="noopener">NiftyNet website&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/NifTK/NiftyNet" target="_blank" rel="noopener">NiftyNet source code on GitHub&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/NifTK/NiftyNetModelZoo/blob/master/README.md" target="_blank" rel="noopener">NiftyNet Model zoo repository&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://groups.google.com/forum/#!forum/niftynet" target="_blank" rel="noopener">NiftyNet Google Group / Mailing List&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://stackoverflow.com/questions/tagged/niftynet" target="_blank" rel="noopener">Stack Overflow&lt;/a> for general questions&lt;/li>
&lt;/ul>
&lt;h3 id="citing-niftynet">Citing NiftyNet&lt;/h3>
&lt;p>If you use NiftyNet in your work, please cite &lt;a href="https://doi.org/10.1016/j.cmpb.2018.01.025" target="_blank" rel="noopener">Gibson and Li, et al. 2018&lt;/a>:&lt;/p>
&lt;p>E. Gibson*, W. Li*, C. Sudre, L. Fidon, D. I. Shakir, G. Wang, Z. Eaton-Rosen, R. Gray, T. Doel, Y. Hu, T. Whyntie, P. Nachev, M. Modat, D. C. Barratt, S. Ourselin, M. J. Cardoso† and T. Vercauteren† (2018) &lt;a href="https://doi.org/10.1016/j.cmpb.2018.01.025" target="_blank" rel="noopener">NiftyNet: a deep-learning platform for medical imaging&lt;/a>, Computer Methods and Programs in Biomedicine. DOI: &lt;a href="https://doi.org/10.1016/j.cmpb.2018.01.025" target="_blank" rel="noopener">10.1016/j.cmpb.2018.01.025&lt;/a>&lt;/p>
&lt;p>BibTeX entry:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-bibtex" data-lang="bibtex">&lt;span class="line">&lt;span class="cl">&lt;span class="nc">@article&lt;/span>&lt;span class="p">{&lt;/span>&lt;span class="nl">Gibson2018&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">title&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;NiftyNet: a deep-learning platform for medical imaging&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">journal&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;Computer Methods and Programs in Biomedicine&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">year&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;2018&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">issn&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;0169-2607&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">doi&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;https://doi.org/10.1016/j.cmpb.2018.01.025&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">url&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;https://www.sciencedirect.com/science/article/pii/S0169260717311823&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">author&lt;/span> &lt;span class="p">=&lt;/span> &lt;span class="s">&amp;#34;Eli Gibson and Wenqi Li and Carole Sudre and Lucas Fidon and
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="s"> Dzhoshkun I. Shakir and Guotai Wang and Zach Eaton-Rosen and
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="s"> Robert Gray and Tom Doel and Yipeng Hu and Tom Whyntie and
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="s"> Parashkev Nachev and Marc Modat and Dean C. Barratt and
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="s"> S\&amp;#39;{e}bastien Ourselin and M. Jorge Cardoso and Tom Vercauteren&amp;#34;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The NiftyNet platform originated in software developed for &lt;a href="https://doi.org/10.1007/978-3-319-59050-9_28" target="_blank" rel="noopener">Li, et al. 2017&lt;/a>:&lt;/p>
&lt;p>Li W., Wang G., Fidon L., Ourselin S., Cardoso M.J., Vercauteren T. (2017) &lt;a href="https://doi.org/10.1007/978-3-319-59050-9_28" target="_blank" rel="noopener">On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task.&lt;/a> In: Niethammer M. et al. (eds) Information Processing in Medical Imaging. IPMI 2017. Lecture Notes in Computer Science, vol 10265. Springer, Cham. DOI: &lt;a href="https://doi.org/10.1007/978-3-319-59050-9_28" target="_blank" rel="noopener">10.1007/978-3-319-59050-9_28&lt;/a>&lt;/p>
&lt;h3 id="licensing-and-copyright">Licensing and Copyright&lt;/h3>
&lt;p>NiftyNet is released under &lt;a href="https://github.com/NifTK/NiftyNet/blob/dev/LICENSE" target="_blank" rel="noopener">the Apache License, Version 2.0&lt;/a>.&lt;/p>
&lt;p>Copyright 2018 the NiftyNet Consortium.&lt;/p>
&lt;h3 id="acknowledgements">Acknowledgements&lt;/h3>
&lt;p>This project is grateful for the support from the &lt;a href="https://wellcome.ac.uk/" target="_blank" rel="noopener">Wellcome Trust&lt;/a>, the &lt;a href="https://www.epsrc.ac.uk/" target="_blank" rel="noopener">Engineering and Physical Sciences Research Council (EPSRC)&lt;/a>, the &lt;a href="https://www.nihr.ac.uk/" target="_blank" rel="noopener">National Institute for Health Research (NIHR)&lt;/a>, the &lt;a href="https://www.gov.uk/government/organisations/department-of-health" target="_blank" rel="noopener">Department of Health (DoH)&lt;/a>, &lt;a href="https://www.cancerresearchuk.org/" target="_blank" rel="noopener">Cancer Research UK&lt;/a>, &lt;a href="http://www.kcl.ac.uk/" target="_blank" rel="noopener">King&amp;rsquo;s College London (KCL)&lt;/a>, &lt;a href="http://www.ucl.ac.uk/" target="_blank" rel="noopener">University College London (UCL)&lt;/a>, the &lt;a href="https://www.ses.ac.uk/" target="_blank" rel="noopener">Science and Engineering South Consortium (SES)&lt;/a>, the &lt;a href="http://www.stfc.ac.uk/about-us/where-we-work/rutherford-appleton-laboratory/" target="_blank" rel="noopener">STFC Rutherford-Appleton Laboratory&lt;/a>, and &lt;a href="http://www.nvidia.com/" target="_blank" rel="noopener">NVIDIA&lt;/a>.&lt;/p></description></item><item><title>VTrails</title><link>https://amigolab.github.io/project/vtrails/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://amigolab.github.io/project/vtrails/</guid><description>&lt;p>A vectorial representation of the vascular network, which embodies quantitative features such as location, direction, scale, and bifurcations, has many potential cardio- and neuro-vascular applications.&lt;/p>
&lt;p>VTrails is an end-to-end approach that extracts geodesic vascular minimum spanning trees from angiographic data by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature.&lt;/p>
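&lt;p>The final minimum-spanning-tree step can be illustrated in isolation. Note that the actual VTrails pipeline derives its edge weights from the geodesic level-set solution; the toy graph, weights, and function name below are purely hypothetical:&lt;/p>

```python
import heapq

def minimum_spanning_tree(num_nodes, edges):
    """Prim's algorithm on an undirected weighted graph.

    edges: iterable of (u, v, weight) with nodes numbered 0..num_nodes-1.
    Returns (total_weight, tree_edges).
    """
    adj = {n: [] for n in range(num_nodes)}
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    visited = {0}
    frontier = list(adj[0])
    heapq.heapify(frontier)
    total, tree = 0.0, []
    while frontier and len(visited) < num_nodes:
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        total += w
        tree.append((u, v, w))
        for edge in adj[v]:
            if edge[2] not in visited:
                heapq.heappush(frontier, edge)
    return total, tree

# Toy "vessel" graph: a short trunk that bifurcates, plus one redundant edge.
total, tree = minimum_spanning_tree(
    4, [(0, 1, 1.0), (1, 2, 2.0), (1, 3, 2.5), (0, 2, 4.0)]
)
print(total)  # -> 5.5
```

&lt;p>The redundant edge (0, 2) is discarded, leaving a tree that follows the cheapest connected paths, which is analogous to how a vascular tree keeps only the most plausible centreline connections.&lt;/p>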
&lt;p>VTrails was presented at the biennial Information Processing in Medical Imaging conference (IPMI 2017) &lt;a href="https://arxiv.org/abs/1806.03111" target="_blank" rel="noopener">[1] [Full-Text]&lt;/a>, and subsequently published in the journal IEEE Transactions on Medical Imaging &lt;a href="https://ieeexplore.ieee.org/document/8421255/" target="_blank" rel="noopener">[2] [Full-Text]&lt;/a>.&lt;/p></description></item></channel></rss>