
🌟 ImageCLEFmed-MEDVQA-GI-2025 🌟

Registration

The ImageCLEFmed-MEDVQA-GI (3rd edition) challenge 🔬 focuses on integrating Visual Question Answering (VQA) with synthetic gastrointestinal (GI) data 🏥 to enhance diagnostic accuracy 💡 and advance AI learning algorithms 🤖.

This year's challenge includes two exciting subtasks 🚀 designed to push the boundaries of image analysis 🖼️ and synthetic medical image generation 🧬, aiming to improve diagnostic processes 🏨 and patient outcomes 💖.


🎯 Task Descriptions

🔍 Subtask 1: Algorithm Development for Question Interpretation and Response

💡 Goal: Develop algorithms 🤖 that can accurately interpret and answer 🗣️ questions based on GI images 🏥. These questions may involve identifying abnormalities ⚠️, counting objects 🔢, or describing image content 📝.

Focus: Create robust systems that combine image 🖼️ and text understanding 🗨️ to assist medical diagnostics 🏨 (a minimal baseline sketch follows the example questions below).

💬 Example Questions:

  • 🔢 How many polyps are in the image?
  • ⚠️ Are there any abnormalities in the image?
  • 🏷️ What disease is visible in the image?
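
As a starting point, questions like these can be answered with an off-the-shelf VQA model. The sketch below uses Hugging Face's BLIP VQA checkpoint purely as an illustrative baseline 🤖; it is not the challenge's official baseline, and the image path is hypothetical.

```python
# Minimal VQA baseline sketch (illustrative only, NOT the official baseline).
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("example_gi_image.jpg").convert("RGB")  # hypothetical path
question = "How many polyps are in the image?"

# Encode the image/question pair and generate a free-text answer.
inputs = processor(image, question, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
```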

🎨 Subtask 2: Creation of High-Fidelity Synthetic GI Images

🖌️ Goal: Generate synthetic GI images 🧬 that are indistinguishable from real medical images 🏥, rich in detail and variability.

🌱 Why? Provide privacy-preserving alternatives 🔒 to real patient data and support diagnostic systems 💡.
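
One common route to such images is sampling from a generative diffusion model. The sketch below shows the Hugging Face diffusers sampling API with a generic pretrained checkpoint 🧬; a competitive entry would first fine-tune such a model on real GI data, and both the checkpoint name and the prompt here are assumptions.

```python
# Hedged sketch: sampling from a pretrained latent diffusion model.
# Checkpoint and prompt are illustrative assumptions, not challenge artifacts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# A real submission would condition on clinically meaningful text or masks.
image = pipe("endoscopic image of a colon polyp").images[0]
image.save("synthetic_gi_sample.png")
```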


📂 Data

The 2025 dataset 🗃️ is an extended version of the HyperKvasir dataset 🔗 (datasets.simula.no/hyper-kvasir) and includes:

  • 🏥 GI images with detailed VQA annotations 📝 (see the loading sketch after this list)
  • 🌟 New synthetic image data simulating realistic diagnostic scenarios
  • 🎯 Segmentation masks for key elements like polyps 🩺 and instruments 🛠️
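
The exact annotation schema ships with the data release; the loading sketch below assumes a simple JSON layout (a list of entries with `image` and `qa_pairs` keys) purely for illustration, so verify it against the actual files before use.

```python
# Loading sketch under an ASSUMED annotation layout; the real schema may differ.
import json
from pathlib import Path
from PIL import Image

data_dir = Path("medvqa-gi-2025/dev")  # hypothetical local path
annotations = json.loads((data_dir / "vqa_annotations.json").read_text())

for entry in annotations:
    image = Image.open(data_dir / "images" / entry["image"])  # assumed keys
    for qa in entry["qa_pairs"]:
        print(qa["question"], "->", qa["answer"])
```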

📥 Datasets

  • 🏃 Development Dataset: Download Here
  • 🕑 Test Dataset: Coming Soon

🧪 Evaluation Methodology

🏃 Subtask 1: Question Interpretation and Response

  • 📊 Metrics: 🎯 Accuracy, 🔍 Precision, ♻️ Recall, and 🏆 F1 Score (see the sketch after this list).
  • 📜 Evaluation: Based on correctness ✅ and relevance 📝 of answers using the provided questions 💬 and images 🖼️.
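
The official scoring script is not reproduced here; the sketch below simply shows how the four listed metrics can be computed with scikit-learn over hypothetical predicted and reference answers, with macro-averaging as one plausible choice for multi-class answers.

```python
# Hedged metric sketch with scikit-learn; the official script may differ.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical predicted vs. reference answers.
predictions = ["2", "yes", "polyp", "no"]
references = ["2", "yes", "ulcerative colitis", "no"]

accuracy = accuracy_score(references, predictions)
precision, recall, f1, _ = precision_recall_fscore_support(
    references, predictions, average="macro", zero_division=0
)
print(f"Accuracy={accuracy:.2f}  Precision={precision:.2f}  "
      f"Recall={recall:.2f}  F1={f1:.2f}")
```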

🖼️ Subtask 2: Synthetic Image Quality

  • 👀 Subjective Evaluation: 🩺 Expert reviewers will assess realism 🌟 and diagnostic utility 🏥.
  • 🎯 Objective Evaluation (a metric sketch follows this list):
    • 📉 Fréchet Inception Distance (FID): Similarity between synthetic and real images.
    • 🏗️ Structural Similarity Index Measure (SSIM): Resemblance in structure 🏛️.
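
Both objective metrics are available off the shelf. The sketch below computes them with torchmetrics on random placeholder tensors standing in for real and synthetic image batches; the official setup (image counts, preprocessing) is an assumption here.

```python
# Hedged sketch of FID and SSIM via torchmetrics; placeholder tensors only.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image import StructuralSimilarityIndexMeasure

# Placeholder batches: 8 RGB images, uint8 as FID expects (real runs use many more).
real = torch.randint(0, 256, (8, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (8, 3, 299, 299), dtype=torch.uint8)

# FID: distance between Inception feature distributions (lower is better).
fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

# SSIM: structural resemblance between paired images (higher is better).
# Expects float tensors; scale the uint8 batches to [0, 1].
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
print("SSIM:", ssim(fake.float() / 255, real.float() / 255).item())
```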

🏆 Online Leaderboard

🚀 Compete in real-time with a dynamic leaderboard 📈 showing participants' performance!
💡 Iterate, Improve & Win! 🏅


🗓️ Preliminary Schedule

  • 📅 20 December 2024: 📝 Registration opens
  • 📅 14 February 2025: 🏃 Release of training & validation datasets
  • 📅 14 March 2025: 🧪 Test datasets released
  • 📅 25 April 2025: 🚪 Registration closes
  • 📅 10 May 2025: ⏲️ Run submission deadline
  • 📅 17 May 2025: 🏆 Processed results released
  • 📅 30 May 2025: ✍️ Participant papers submission [CEUR-WS]
  • 📅 27 June 2025: 💌 Notification of acceptance
  • 📅 7 July 2025: 🖨️ Camera-ready paper submission [CEUR-WS]
  • 📅 9-12 September 2025: 🏛️ CLEF 2025, Madrid, Spain 🇪🇸

💼 Organizers

✨ For any queries, feel free to reach out to our amazing team.


🔗 For More Details & Registration

🌐 Visit: 👉 imageclef.org/2025

💥 Join the challenge, push the boundaries, and make a difference in medical AI! 🚀🧬