This project implements a deepfake detection system that combines the Xception neural network architecture with LSTM (Long Short-Term Memory) networks. The application provides a user-friendly Streamlit interface for real-time deepfake image detection.
- Advanced Deep Learning Model: Utilizes Xception as the base feature extractor
- Temporal Analysis: Incorporates LSTM for enhanced temporal feature recognition
- High Accuracy Deepfake Detection: Robust model trained on diverse deepfake datasets
- Deep Learning: TensorFlow, Keras
- Web Interface: Streamlit
- Data Processing: NumPy, Pandas
- Image Processing: OpenCV
- Visualization: Matplotlib, Seaborn
I used 600 videos per class (real and fake) from Celeb-DF v2. You can access the dataset here: https://github.com/yuezunli/celeb-deepfakeforensics
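To feed videos into a frame-based model, each clip has to be reduced to a fixed-length frame sequence. The sketch below shows one common way to do this: evenly spaced sampling. The sequence length and the padding strategy for short clips are illustrative assumptions, not the project's exact preprocessing.

```python
import numpy as np

def sample_frame_indices(total_frames: int, seq_len: int = 10) -> np.ndarray:
    """Pick seq_len evenly spaced frame indices from a video.

    seq_len=10 is an assumed value; the project may use a different length.
    """
    if total_frames < seq_len:
        # Clip is too short: keep all frames and repeat the last index.
        idx = np.arange(total_frames)
        pad = np.full(seq_len - total_frames, max(total_frames - 1, 0))
        return np.concatenate([idx, pad])
    return np.linspace(0, total_frames - 1, seq_len).astype(int)
```

With OpenCV, one would open each video via `cv2.VideoCapture`, query the frame count, and read only the frames at these indices.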
1. Clone the repository:

```bash
git clone https://github.com/farhanharvito/DeepfakeDetection
cd DeepfakeDetection
```

2. Create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
```

3. Install dependencies:

```bash
pip install -r requirements.txt
```
To launch the Streamlit application:

```bash
streamlit run app.py
```

You can download the pre-trained model here.
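Before the model sees an uploaded image, the app presumably resizes it to Xception's native 299×299 input and rescales the pixels. A minimal sketch of that rescaling step, assuming the standard Xception convention of mapping uint8 pixels to [-1, 1] (the function name and batch handling are illustrative, not taken from `app.py`):

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale an HxWx3 uint8 image to [-1, 1] and add a batch axis.

    Matches the Xception preprocess_input convention: x / 127.5 - 1.
    """
    x = image.astype("float32")
    x = x / 127.5 - 1.0
    return x[np.newaxis, ...]  # shape (1, H, W, 3) for model.predict
```

Resizing itself could be done beforehand with `cv2.resize(image, (299, 299))`; the helper is kept NumPy-only so it stays self-contained.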
- Accuracy: 91.67%
- Precision: 89.06%
- Recall: 95%
- F1 Score: 91.64%
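All four reported metrics derive from the binary confusion matrix. The helper below shows those derivations; the example counts used in testing are made up for illustration, not the project's actual results.

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute standard binary-classification metrics from raw counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)           # of predicted fakes, how many were fake
    recall = tp / (tp + fn)              # of real fakes, how many were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```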
1. Feature Extraction:
   - The Xception network extracts deep visual features from input images
   - Handles complex spatial patterns in potential deepfake images
2. Temporal Analysis:
   - LSTM layers process the extracted features
   - Captures temporal dependencies and sequence-level information
3. Classification:
   - Final dense layers perform binary classification (Deepfake vs. Real)
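The three stages above can be sketched as a single Keras model: Xception applied per frame via `TimeDistributed`, an LSTM over the frame features, and dense layers for the binary decision. The sequence length, LSTM units, and dense sizes are illustrative assumptions, not the project's exact configuration, and `weights=None` keeps the sketch offline where the real model would load ImageNet weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 10               # assumed number of frames per clip
IMG_SHAPE = (299, 299, 3)  # Xception's native input size

# Frame-level feature extractor: Xception without its classifier head.
backbone = tf.keras.applications.Xception(
    weights=None, include_top=False, pooling="avg", input_shape=IMG_SHAPE)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, *IMG_SHAPE)),
    layers.TimeDistributed(backbone),       # per-frame 2048-d features
    layers.LSTM(128),                       # temporal aggregation (assumed units)
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # deepfake vs. real probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```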
See requirements.txt for the complete list of dependencies.
Distributed under the MIT License. See LICENSE for more information.
- Quick Classification of Xception And Resnet-50 Models on Deepfake Video Using Local Binary Pattern
- A Comparative Study of Deepfake Video Detection Method
- Celeb-DF: A Large-scale Challenging Dataset for DeepFake Forensics
Disclaimer: This project is for educational and research purposes. Always use AI responsibly.