Abstract:
The rise of digital media manipulation has made image forgery detection essential in digital
forensics. Traditional approaches that rely solely on handcrafted or deep-learned features
often suffer from limited scope and poor generalization across forgery types. This research addresses
these challenges by developing a hybrid detection system capable of identifying both splicing
and copy-move forgeries using a combination of statistical and deep features.
The system extracts handcrafted features (Discrete Cosine Transform coefficients, Zernike
Moments, and color histograms) and fuses them with deep semantic features from MobileNetV2.
The concatenated feature vector is classified with a Random Forest model. The system was
implemented in Python using OpenCV, TensorFlow/Keras, and scikit-learn, and evaluated on
the CASIA v2 and CoMoFoD datasets; a CLI interface supports real-time user input and
result display.
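The fusion step described above can be sketched as follows. This is a minimal, self-contained illustration only: the simplified DCT, histogram, and "deep" extractors here are hypothetical stand-ins for the system's actual OpenCV feature extraction and pretrained MobileNetV2 embedding, and the synthetic images stand in for the CASIA v2 / CoMoFoD data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dct2(block):
    """Unnormalized 2-D DCT-II of a square grayscale block (stand-in for cv2.dct)."""
    n = block.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ block @ basis.T

def handcrafted_features(img):
    """Low-frequency DCT coefficients plus a per-channel color histogram."""
    gray = img.mean(axis=2)
    coeffs = dct2(gray)[:8, :8].ravel()               # 64 low-frequency terms
    hist = np.concatenate([np.histogram(img[..., c], bins=16, range=(0, 256))[0]
                           for c in range(3)])        # 48 histogram bins
    return np.concatenate([coeffs, hist / gray.size])

# Fixed random projection standing in for a MobileNetV2 embedding (32-D).
PROJ = np.random.default_rng(1).standard_normal((32 * 32 * 3, 32)) / 55.4

def deep_features(img):
    return img.ravel() @ PROJ

# Synthetic "authentic" (0) vs "forged" (1) images; forgeries get a flat
# pasted patch as a crude splice artifact.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        img = rng.uniform(0, 256, size=(32, 32, 3))
        if label:
            img[8:16, 8:16] = 255.0
        # Fusion: concatenate statistical and deep feature vectors.
        X.append(np.concatenate([handcrafted_features(img), deep_features(img)]))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
preds = clf.predict(X[:2])  # sanity check on training samples
```

The design point this sketch captures is that the handcrafted block (112 dimensions here) and the deep block (32 dimensions) are simply concatenated before classification, so the Random Forest can weigh both feature families jointly.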
The system achieved 66% accuracy, 67% precision, 60% recall, a 63% F1-score, and an AUC
of 0.77. These results support the feasibility of hybrid feature fusion for forgery
detection. Expert feedback confirmed the project's novelty and relevance, though
improvements such as forgery localization and a GUI-based interface are recommended for
future development.