
Seeking Feedback: Real-Time Screen + Keystroke Monitoring for AI-Aware Anti-Cheating System (FYP)
I'm a CS undergrad working on my Final Year Project, and I'd really appreciate some constructive critique from the developer, ML, and privacy-conscious communities.
Problem:
With remote learning and online exams becoming common, academic dishonesty is increasingly hard to detect, especially with the rise of LLMs, copy-paste coding, and browser switching during assessments.
Current proctoring tools focus mostly on webcams and raise serious privacy concerns, while still being easy to bypass.
Our MVP Proposal:
We're building a real-time, privacy-conscious anti-cheating system focused on:
- Live screen-stream monitoring (1-2 FPS sampling for efficiency)
- Real-time keystroke analysis (flagging Ctrl+C, Ctrl+V, AI keywords like "ChatGPT", etc.); a rough flag-logic sketch follows this section
- Tamper detection (VM detection, sandbox evasion, plugin/modification flags)
- Automated flagging via lightweight ML; only the partial logs that triggered the alert are shown
- Auto self-destruct after the exam to eliminate data-persistence and tracking concerns
We deliberately avoid webcams and microphones, and we do not store full keylogs or screen recordings. Only flagged behavior is logged.
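
To make the keystroke-analysis idea concrete, here's a rough Python sketch of the flag logic only (the OS-level hook itself would live in the native agent and just forward events to something like this). The event format, keyword watchlist, and buffer size are placeholder assumptions, not final design decisions:

```python
# Rough sketch of the client-side flag logic (placeholder event format).
# Only flagged events ever leave this class; the full keystroke stream does not.

from collections import deque
from dataclasses import dataclass
import time

AI_KEYWORDS = {"chatgpt", "gpt-4", "copilot", "claude"}  # assumed watchlist
HOTKEYS_OF_INTEREST = {("ctrl", "c"), ("ctrl", "v")}

@dataclass
class FlagEvent:
    kind: str        # e.g. "hotkey" or "ai_keyword"
    detail: str      # only the snippet that triggered the flag
    timestamp: float

class KeystrokeFlagger:
    """Keeps a small rolling buffer of typed characters; only flagged
    snippets are recorded, never the raw keystroke history."""

    def __init__(self, buffer_size: int = 64):
        self.buffer = deque(maxlen=buffer_size)
        self.flags: list[FlagEvent] = []

    def on_hotkey(self, modifier: str, key: str) -> None:
        if (modifier, key) in HOTKEYS_OF_INTEREST:
            self.flags.append(FlagEvent("hotkey", f"{modifier}+{key}", time.time()))

    def on_char(self, ch: str) -> None:
        self.buffer.append(ch.lower())
        text = "".join(self.buffer)
        for kw in AI_KEYWORDS:
            if kw in text:
                self.flags.append(FlagEvent("ai_keyword", kw, time.time()))
                self.buffer.clear()  # drop the buffer so raw text never accumulates
                break

# Example: simulate a few events the native agent might forward.
if __name__ == "__main__":
    flagger = KeystrokeFlagger()
    flagger.on_hotkey("ctrl", "v")
    for ch in "asked chatgpt for help":
        flagger.on_char(ch)
    for f in flagger.flags:
        print(f)
```

The intent is that only FlagEvent records are ever transmitted; the rolling buffer is never persisted or sent anywhere.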
Privacy Policy Safeguards:
- App runs only during the exam and self-uninstalls afterward
- No webcam/audio access, no biometric tracking
- Students agree via EULA + pre-exam consent
- Source code will be partially open for transparency
Architecture (Draft):
- Frontend: Electron-based cross-platform exam app
- Monitoring layer: native C++/Rust agent for screen and process monitoring
- Backend: Python API with flag logic, hosted on a secure VPS (10-1000 concurrent streams); a minimal ingestion sketch follows this list
- ML: lightweight detection models for anomaly and AI-usage flags (not deep surveillance)
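
To illustrate how the backend might ingest flags without ever receiving raw keylogs or screen frames, here's a minimal sketch. Flask, the route paths, and the payload fields are assumptions for the sake of example; the real question is whether something like this holds up at 1K concurrent sessions (async workers, a queue in front, etc.):

```python
# Minimal sketch of the backend flag-ingestion endpoint.
# Flask, route names, and the payload schema are placeholders for illustration.

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store keyed by exam session; a real deployment would use a
# short-lived datastore that is wiped when the exam session ends.
FLAGS: dict[str, list[dict]] = {}

@app.post("/api/flags")
def ingest_flag():
    payload = request.get_json(force=True)
    session_id = payload.get("session_id")
    event = payload.get("event")  # e.g. {"kind": "ai_keyword", "detail": "chatgpt"}
    if not session_id or not event:
        return jsonify({"error": "session_id and event are required"}), 400
    # Store only the flagged snippet, never raw screen frames or keylogs.
    FLAGS.setdefault(session_id, []).append(event)
    return jsonify({"stored": len(FLAGS[session_id])}), 201

@app.get("/api/flags/<session_id>")
def list_flags(session_id: str):
    # Invigilators see only the partial logs that triggered alerts.
    return jsonify(FLAGS.get(session_id, []))

if __name__ == "__main__":
    app.run(port=8000)
```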
My Ask:
- Is this technically viable at scale (1K students)?
- What are the most critical flaws in this design?
- How can I maintain control without violating ethical boundaries?
- Would you (as a developer or educator) trust a system like this?
Why This Matters:
If we can strike the right balance between cheating detection and privacy protection, we might be able to offer a legitimate solution to universities struggling with online examination integrity, without turning every student's room into a surveillance state.
All feedback, critical or supportive, is welcome.
Thanks in advance.