
eRaksha Hackathon 2026
Registration deadline: December 31st, 2025
Event locations: IIT Delhi (Grand Finale) and Bharat Mandapam (Award Function)
REGISTRATIONS OPEN
Overview
5th Edition
eRaksha Hackathon 2026
eDC IIT Delhi, in collaboration with CyberPeace, is organizing a hackathon focused on defence-AI and digital-safety challenges. The event encourages creativity, problem-solving, and industry-level exposure through expert mentorship. Participants will work across domains such as AI/ML, threat detection, blockchain, and secure software development. We aim to develop practical, scalable solutions that address defence-related problems and enhance national security. The vision is to build impactful technologies aligned with CyberPeace's mission of a safer, more resilient digital ecosystem.
Competition Format: Innovation / Prototype Development
The Hackathon is an innovation-driven, prototype-development challenge designed to assess participants’ ability to conceptualize and build cutting-edge technological solutions that address real-world security challenges.
This 36-hour intensive event will revolve around two critical problem statements: Agentic AI for Securing Home IoT Devices Using Deception & Autonomous Defence, where participants will design intelligent, self-defending systems capable of detecting and mitigating threats in home IoT ecosystems; and Agentic AI for Deepfake Detection & Authenticity Verification, which calls for innovative AI-powered prototypes to identify manipulated media and validate content integrity. The Hackathon aims to foster creativity, rapid prototyping, and practical innovation in areas vital to national security and digital trust.

The CCTNS Scheme was conceptualized as a comprehensive, integrated system for enhancing efficient and effective policing at all levels, especially at the police-station level, in order to achieve the following key objectives:
Creating Centralized Databases
Creating State- and Central-level databases on crime and criminals, starting from FIRs.
Sharing Real-time Information
Enabling easy sharing of real-time information and intelligence across police stations, districts, and States.
Prevention
Improving investigation and crime prevention.
Citizen Portals
Improving service delivery to the public and stakeholders through Citizen Portals.
Registration
Registration for eRaksha Hackathon 2026 begins on Dec 19, 2025. Participants can register individually or in teams of up to 3 members and submit an abstract explaining their proposed approach to the challenge in PPT/PDF format. They should outline the problem they aim to solve, the methodology and technology stack they intend to use, and any relevant experience or past work in the specific domain.
Shortlisting
Participants/teams will be evaluated on the quality of the idea and write-up in the document/brief and the PowerPoint presentation submitted during registration, against the set judging criteria.
Top 20 (maximum 20x3=60 head counts) Teams/Participants will be shortlisted for the Final Round
Grand Finale (IIT Delhi)
The Inauguration of the Hackathon will take place on 19 December 2025 at the IIT Delhi Auditorium. The Grand Finale is scheduled from 16 January to 18 January 2026, also at IIT Delhi.
The Award Function will be held on 10th February 2026 at Bharat Mandapam, exclusively for the winners.
Important Dates
● 19-31 December 2025: Registration Period
● 1-10 January 2026: Shortlisting
● 16-18 January 2026: Finale
● 10 February 2026: Award Function
Problem Statement
Agentic AI for Securing Home IoT Devices Using Deception & Autonomous Defence
Objectives
Design an agentic AI–powered IoT security system that protects home users from cyber attacks using:
- Autonomous threat detection
- Deception technologies (honeypots/honeytokens)
- Real-time response
- Edge-based firewall and monitoring capabilities
The goal is to build a consumer-friendly, plug-and-play security appliance that continuously monitors, analyses, and protects every device connected to the home network.
Key Requirements
1. IoT Device Discovery & Risk Profiling
Build an AI agent that can:
● Automatically detect all devices on the home network (TVs, cameras, bulbs, appliances, wearables)
● Profile device behaviour using baseline modelling
● Identify outdated firmware, weak configurations, open ports
● Predict which devices are most likely to be exploited
Outcome:
A unified, real-time risk dashboard for home IoT devices.
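As a rough illustration of the risk-profiling step, the sketch below ranks discovered devices by a naive exploitability score. The device fields, port list, and weights are all assumptions for illustration only; a real agent would learn behavioural baselines from live network traffic.

```python
# Toy risk-profiling sketch for home IoT devices.
# All field names and weights are hypothetical, not part of the hackathon brief.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    open_ports: list
    firmware_age_days: int
    default_password: bool

# Ports commonly abused on consumer IoT gear (telnet, TR-069, RTSP).
RISKY_PORTS = {23, 2323, 7547, 554}

def risk_score(dev: Device) -> int:
    """Higher score = more likely to be exploited (toy heuristic)."""
    score = 3 * len(RISKY_PORTS.intersection(dev.open_ports))
    if dev.firmware_age_days > 365:   # firmware older than a year
        score += 2
    if dev.default_password:          # factory credentials still in use
        score += 5
    return score

devices = [
    Device("smart-bulb", [], 30, False),
    Device("ip-camera", [23, 554], 700, True),
]
# Feed a dashboard with devices sorted from riskiest to safest.
for d in sorted(devices, key=risk_score, reverse=True):
    print(d.name, risk_score(d))
```

In a full prototype, this scoring would sit behind the discovery agent (ARP/mDNS scans, port probes, firmware fingerprinting) and drive the dashboard's prioritization.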
Timeline:
● Registration Opens: 19 December 2025
● Registration Closes: 31 December 2025
● Shortlisting Period: 1 January 2026 – 10 January 2026
● Finale: 16 January 2026 – 18 January 2026
● Award Ceremony: 10 February 2026
Agentic AI Problem Statement: Deepfake Detection & Authenticity Verification
Objectives
Develop an agentic AI system capable of autonomously identifying, analysing, and responding to deepfake audio/video content in real time across operational, tactical, and open-source environments.
Key Requirements
On-Device, Field-Operative Agent
Create a lightweight edge agent that can run on:
- Smartphones
- Body-cams
- Handheld tactical devices
It should support:
- Offline detection without cloud dependency
- Low-power operation
- Immediate authentication results during field missions
- Secure operation (RBAC, MFA, secure firmware)
- Standard tools and frameworks
- Sample test environments (mock attacks)
Agent Capabilities:
✔ Edge inference
✔ Compression-aware detection
✔ Cognitive assistance for operatives
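To make the offline, edge-only requirement concrete, here is a minimal sketch of a verification loop that needs no cloud connection. The scoring function is a stand-in stub (a deployed agent would run a quantized on-device model instead); the function name and threshold are purely illustrative.

```python
# Offline edge-verification sketch. stub_deepfake_score is a placeholder:
# real systems analyze blending artifacts, codec traces, rPPG signals, etc.
def stub_deepfake_score(frame: bytes) -> float:
    """Return a fake manipulation score in [0, 1) for a raw frame."""
    return (sum(frame) % 100) / 100.0

def verify_clip(frames, threshold=0.5):
    """Average per-frame scores and return an immediate field verdict."""
    scores = [stub_deepfake_score(f) for f in frames]
    avg = sum(scores) / len(scores)
    return {"authentic": avg < threshold, "avg_score": avg}

# Toy "frames" standing in for decoded video from a body-cam or phone.
clip = [bytes([7] * 10)] * 3
print(verify_clip(clip))
```

Everything above runs locally, which is the point: the same loop could be wrapped around a compressed model for low-power, disconnected field use.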
Expected Outcome
A reliable, fast, autonomous deepfake detection agent that:
● Operates across cloud, edge, and field devices
● Continuously learns from new threats
● Assists security agencies, social platforms and operatives
● Strengthens national security and digital media integrity
Timeline:
● Registration Opens: 19 December 2025
● Registration Closes: 31 December 2025
● Shortlisting Period: 1 January 2026 – 10 January 2026
● Finale: 16 January 2026 – 18 January 2026
● Award Ceremony: 10 February 2026
Judging Criteria:
Objectives:
● Accuracy: Maintain a high detection accuracy rate across various media formats (video, audio, and images).
● Speed: Ensure that the system can process and analyze media content rapidly and with low latency.
● User Interface: Create an easy-to-use interface for cybersecurity professionals, content moderators, and law enforcement.
● Reporting: When a deepfake is found, send detailed reports or alerts that include confidence scores and the nature of the manipulation.
● Detection: Use advanced AI and machine-learning methods to detect deepfakes with high accuracy while reducing false positives and negatives.
● Real-time Analysis: Process and analyze media in real or near-real time to ensure early detection of deepfakes, particularly in key settings such as live news broadcasts or social media platforms.
● Mitigation: Create ways to reduce the impact of recognized deepfakes, such as watermarking legitimate content, notifying users or platforms, and providing verifiable evidence to dispute false information.
● Scalability and Efficiency: Ensure the solution is scalable and can be integrated into several platforms (social media, news outlets, and government organizations) without sacrificing performance.
● Ethical Considerations: Address ethical concerns about privacy, data usage, and the possible misuse of the detection system. The solution should also follow legal guidelines and adapt to changing requirements.
Criteria:
Innovation & Creativity:
Assessing originality, inventiveness, and the potential to disrupt traditional approaches. The solution must align well with the competition's theme and directly address the selected problem statement. The idea should demonstrate uniqueness, creativity, or innovation in its approach.
Technical Implementation and Feasibility:
Evaluating the feasibility, scalability, and robustness of the design and prototype. The solution should be practical and realistic to implement in real-world scenarios. It must be technically and economically viable, with resources or infrastructure being reasonably accessible. The potential challenges of implementing the solution should be identified, and appropriate strategies to address them must be outlined effectively.
Technical Depth and Illustration of the Idea:
The idea must be clearly presented, highlighting its key features, workflows, and functionality through effective use of visual aids such as diagrams, flowcharts, or lifecycle visualizations. Additionally, participants should demonstrate the technical depth of their solution by detailing the coding, technologies, frameworks, or libraries used. Any code samples provided should be well-commented and easy to understand, clearly explaining the purpose and usability of each component.
Impact on Security:
Measuring how effectively the solution enhances surveillance capabilities and fortifies cybersecurity measures.
Presentation & Clarity:
Examining the clarity of the project presentation, the ability to articulate the problem-solving approach, and the overall persuasive power of the demonstration. The PowerPoint presentation should be clear, organized, and comprehensive. It must effectively convey the idea using a balanced mix of visuals, text, and explanations to ensure the audience understands the solution.
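The Reporting objective above calls for alerts carrying confidence scores and the nature of the manipulation. One hypothetical shape for such an alert is sketched below; the schema and field names are illustrative, not mandated by the organizers.

```python
import json
from datetime import datetime, timezone

# Illustrative deepfake-alert payload (hypothetical schema).
def build_alert(media_id, confidence, manipulations):
    return {
        "media_id": media_id,
        "verdict": "deepfake" if confidence >= 0.5 else "authentic",
        "confidence": confidence,        # model score in [0, 1]
        "manipulations": manipulations,  # e.g. ["face_swap", "voice_clone"]
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

alert = build_alert("clip-0042", 0.93, ["face_swap"])
print(json.dumps(alert, indent=2))
```

A structured payload like this keeps the alert both machine-readable (for platform integrations) and easy to render in a moderator-facing interface.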
Hall of Fame
In the CyberChallenge Hall of Fame, we honour the achievements of top cybersecurity champions. Join us in celebrating their accomplishments and commitment to securing our nation's cyber infrastructure.
Explore the profiles of our winners from previous years.
Past Winners >







