Meghal Dani
I'm a Ph.D. student in Computer Science at the
International Max Planck Research School for Intelligent Systems (IMPRS-IS), University of Tübingen, Germany.
I am part of ClinBrAIn, a clinical neuroscience initiative in Tübingen, and am advised by
Dr. rer. nat. Stefanie Liebe and
Prof. Dr. Jakob Macke.
Previously, I worked with
Prof. Dr. Zeynep Akata on vision–language models and explainable AI.
My training places me at a natural intersection of computer science and the life sciences. I received my Bachelor of Engineering in Computer Science from
BIT Mesra in 2016 and my Master's in Computational Biology (2017–2019) from
IIIT Delhi.
I gained industry research experience (2019–2021) at Tata Research and Innovation Labs as a full-time researcher in the Deep Learning and AI lab, where I worked on a variety of problems in 3D vision with
Ramya Hebbalaguppe and on medical image analysis with Dr. Lovekesh Vig.
My research interests broadly include understanding and adapting foundation models, including vision models (VMs), large language models (LLMs), and vision-language models (VLMs), for safe deployment in critical healthcare settings. Additionally, I work on multimodal learning (text, images, video, and EEG signals), explainable AI, and robustness under distribution shift, with a current focus on clinical neuroscience data.
I am actively looking for research internship / visiting researcher positions for summer 2026 (industry or academic labs).
Email /
CV /
Scholar /
Twitter /
Github
SemioLLM: Evaluating Large Language Models for Diagnostic Reasoning from Unstructured Clinical Narratives in Epilepsy
Meghal Dani,
Muthu Jeyanthi Prakash, Filip Rosa, Zeynep Akata, Stefanie Liebe
Nature Communications Medicine, 2025
(Also presented at ICML-W AI for Science, 2024)
Paper
/
Code
/
OpenReview
SemioLLM is a domain-adaptable evaluation framework that benchmarks eight state-of-the-art LLMs on a core diagnostic task in epilepsy. Leveraging a database of 1,269 seizure descriptions, we show that most LLMs can accurately and confidently generate probabilistic predictions of seizure onset zones in the brain. Most models approach clinician-level performance after prompt engineering, with expert-guided chain-of-thought reasoning leading to the most consistent improvements. Performance was also strongly modulated by clinical in-context impersonation, narrative length, and language context (13.7%, 32.7%, and 14.2% performance variation, respectively). However, expert analysis of reasoning outputs revealed that correct predictions can be based on hallucinated knowledge and deficient source-citation accuracy, underscoring the need to improve the interpretability of LLMs in clinical use.
DeViL: Decoding Vision Features into Language
Meghal Dani*, Isabel Rio Torto*, Stephan Alaniz, Zeynep Akata
DAGM GCPR, 2023 (Oral)
(Also presented at ICCV-W CLVL, 2023)
Project Page
/
Paper
/
Code
Post-hoc explanation methods have often been criticised for abstracting away the decision-making process of deep neural networks. In this work, we provide natural language descriptions of what different layers of a vision backbone have learned. Our DeViL method decodes vision features into language, not only highlighting attribution locations but also generating textual descriptions of visual features at different layers of the network.
An efficient anchor-free universal lesion detection in CT-scans
Manu Sheoran*, Meghal Dani*, Monika Sharma, Lovekesh Vig
ISBI, 2022
Paper
This work proposes a one-stage, anchor-free universal lesion detector (ULD) for CT that improves robustness across lesion sizes and datasets compared to traditional anchor-based detectors. Instead of relying on predefined anchor boxes and IoU-based matching, the method ranks box predictions by center-based relevance, alleviating the sensitivity to anchor design and improving detection of small and mid-sized lesions. The model further incorporates domain-specific priors by feeding multi-intensity inputs generated from multiple HU windows, and fuses these via self-attention-based feature fusion.
DKMA-ULD: Domain Knowledge augmented Multi-head Attention based Robust Universal Lesion Detection
Manu Sheoran*, Meghal Dani*, Monika Sharma, Lovekesh Vig
BMVC, 2022
Paper
[Medical imaging, attention] Injects anatomical priors into multi-head attention for robust universal lesion detection; basis of a granted US patent.
3DPoseLite: A Compact 3D Pose Estimation Using Node Embeddings
Meghal Dani,
Karan Narain, Ramya Hebbalaguppe
WACV, 2021
Paper
[3D vision] Lightweight, graph-based 3D human pose estimator designed for deployment on resource-constrained devices.
PoseFromGraph: Compact 3-D Pose Estimation using Graphs
Meghal Dani, Additya Popli, Ramya Hebbalaguppe
SIGGRAPH Asia, 2020
Paper
/
Video
[3D vision] Graph-based encoder that recovers accurate 3D poses from sparse 2D keypoints with a compact model.
Mid-air fingertip-based user interaction in mixed reality
Meghal Dani,
Gaurav Garg,
Ramakrishna Perla, Ramya Hebbalaguppe
ISMAR, 2018
Paper
[HCI, AR] Fingertip-based interaction techniques for mixed reality with monocular RGB cameras.
Selected Projects
-
EEG foundation models for BCI (2025–)
Investigating how pretraining strategies, data scale and signal augmentations affect downstream performance of EEG foundation models for BCI and seizure-related tasks.
-
SzCORE Seizure Detection Challenge, EPFL (2025)
Developed S4/Hyena-based state-space models for long-context EEG seizure detection and segmentation,
achieving 3rd place out of 30 teams on the SzCORE epilepsy benchmark.
-
VEEG seizure behaviour analysis (2024–)
Built a TAPIR-based pipeline that tracks 21 body joints from single-camera, low-light VEEG recordings with strong occlusions,
using pose trajectories to detect seizure vs. non-seizure events and classify seizure types.
-
Handwritten epilepsy records – Africa collaboration (2025–)
Designing OCR + layout pipelines using vision-language models to extract seizure types and onset zones from handwritten epilepsy clinic notes in South Africa,
and to study cross-cultural differences in symptoms, patient narratives and clinical workflows.
-
BRIDGE: Brain Research International Data Governance & Exchange (2025)
Preprocessed and quality-controlled multi-country MRI datasets (e.g. SUDMEX CONN, Nigerian Brains),
and contributed to understanding legal and technical barriers for cross-border neuroimaging data sharing.
News
- Nov–Dec 2025 – Visiting researcher at the Neuroscience Institute, Groote Schuur Hospital (Cape Town), working on OCR and ML pipelines for handwritten epilepsy records.
- Oct 2025 – SemioLLM accepted at Nature Communications Medicine (LLMs for diagnostic reasoning in epilepsy).
- Sep 2025 – Our team ranked 3rd of 30 in the EPFL SzCORE EEG seizure detection & segmentation challenge.
- 2025 – Contributor on BMBF DeepSync and co-lead contributor on DFG I2I international collaboration grant with University of Cape Town – both accepted.
- 2022– – Awarded ClinBrAIn PhD Fellowship (EKFS, Tübingen).
Awards & Patents
Awards and distinctions
- 3rd place, EPFL SzCORE EEG seizure detection & segmentation challenge (2025)
- ClinBrAIn PhD Fellowship, EKFS, Tübingen (2022–)
- Best Thesis Award nominee, M.Tech. Computational Biology, IIIT-Delhi (2019)
- Travel award from IIIT-Delhi to present at ISMAR (2018)
- Postgraduate fellowship from the Government of India (2017)
Patents
- U.S. Patent 12,131,466: Domain knowledge augmented multi-head attention based robust universal lesion detection (with M. Sheoran, M. Sharma, L. Vig), 2024.
- U.S. Patent 12,033,352: End-to-end model to estimate 3D pose of arbitrary objects (with R. Hebbalaguppe, A. Popli), 2024.
Community service
Reviewer:
- CVPR (2025)
- ICCV (2023–2025)
- ICML (2024)
- MICCAI (2024)
- ACCV (2024)
- ICML Workshop on LLMs and Cognition (2024)
Other:
- Soft Skills Seminar Series (S4) workshop organizer at IMPRS-IS (2024–Present)
- Ph.D. representative in the search committee for a Tenure-Track Professor of Machine Learning and Intelligent Systems, University of Tübingen (2024)
- IMPRS-IS Interview Symposium helper, recording and moderating candidate talks
- Volunteer at Explainability in Machine Learning (Tübingen, 2023)
Last updated: December 2025. This website is adapted from a template:
source code.