CVEForecast

Technical Details

Project Overview

The CVE Forecast project is an automated system designed to predict the number of Common Vulnerabilities and Exposures (CVEs) that will be published in upcoming months. By leveraging a suite of time series forecasting models, the project provides insights into future trends in vulnerability disclosures.

System Architecture

The project is built with a modular architecture, separating concerns into distinct Python modules. This design enhances maintainability, scalability, and testability.

Core Modules

Data Processing Pipeline

The data processing pipeline is designed to be efficient and robust, handling large volumes of CVE data.

  1. Data Ingestion: The system clones the official cvelistV5 repository from GitHub, which contains all CVE data in JSON format.
  2. Data Parsing: Each JSON file is parsed to extract the `publishedDate`. The system is designed to handle malformed JSON files gracefully.
  3. Time Series Aggregation: The extracted publication dates are aggregated into a monthly time series, counting the number of CVEs published each month.
  4. Data Validation: The time series data is validated to ensure completeness and correctness before being passed to the forecasting models.
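The ingestion-to-aggregation steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: the helper name is hypothetical, and the JSON field path (`cveMetadata.datePublished` in the cvelistV5 record format) is an assumption about where the publication date lives.

```python
import json
from collections import Counter
from pathlib import Path

def monthly_cve_counts(repo_dir: str) -> Counter:
    """Aggregate CVE publication dates into monthly counts (hypothetical helper)."""
    counts = Counter()
    for path in Path(repo_dir).rglob("CVE-*.json"):
        try:
            record = json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip malformed files gracefully, as the pipeline describes
        # Assumed field path; cvelistV5 records carry the date in cveMetadata
        published = record.get("cveMetadata", {}).get("datePublished")
        if published:
            counts[published[:7]] += 1  # bucket by "YYYY-MM"
    return counts
```

The resulting monthly counts can then be validated (e.g. for gaps in the month sequence) before being handed to the forecasting models.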

Forecasting Models

The project employs a diverse set of over 25 time series forecasting models from the Darts library. This allows for a comprehensive analysis and the selection of the best-performing model for the given data.

Model Categories

Note: Models are automatically selected based on performance and stability. Deep learning models are available but disabled by default for CPU environments.

Model Evaluation

All models are evaluated using a rigorous and automated process defined within the system's configuration:

  1. Centralized Configuration: Key parameters for the evaluation, such as the train/test split ratio and forecast horizon, are managed in config.json.
  2. Optimized Hyperparameters: Model hyperparameters are systematically managed within the forecasting engine, leveraging best-known configurations for each model type.
  3. Performance Metrics: Models are ranked based on Mean Absolute Percentage Error (MAPE), with other metrics like MAE, RMSE, and MASE also calculated for a comprehensive assessment.
  4. Performance History: The performance of each model run, including metrics and hyperparameters, is logged in performance_history.json (path defined in config.json) to track performance and improvements over time.
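The MAPE-based ranking described above can be sketched in a few lines. This is an illustrative implementation only: the `CONFIG` keys mirror what config.json is described as holding but are hypothetical names, and the ranking helper is not the project's actual forecasting engine.

```python
from math import fsum

# Hypothetical mirror of the kind of keys config.json holds; actual names may differ.
CONFIG = {"train_split": 0.8, "forecast_horizon": 12}

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent, skipping zero actuals."""
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * fsum(terms) / len(terms)

def rank_models(actual, forecasts):
    """Rank candidate models by MAPE, best (lowest) first.

    `forecasts` maps a model name to its predictions aligned with `actual`.
    """
    scored = {name: mape(actual, preds) for name, preds in forecasts.items()}
    return sorted(scored.items(), key=lambda kv: kv[1])
```

The same scored dictionary is the natural thing to append, together with the hyperparameters used, to a performance-history log such as the one kept in performance_history.json.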

Change Log

v.05 - Adolfo Suárez Madrid-Barajas 🇪🇸

  • Fixed a critical bug that prevented the cumulative graph from rendering due to an incorrect data structure in data.json.
  • Restored frontend compatibility by correcting the data generation logic, ensuring all charts now load correctly.

v.04 ORD ✈️ MAD

  • Enhanced model stability with improved error handling.
  • Added input validation and scaling for better numerical stability.
  • Optimized for CPU-only environments.
  • Implemented dynamic forecast period calculation.
  • Improved model selection based on MAPE scores.

Deployment and Automation

The CVE Forecast dashboard is deployed as a static website, with the data being updated daily through a fully automated CI/CD pipeline using GitHub Actions.

GitHub Actions Workflow

  1. Scheduled Trigger: The workflow is triggered daily at midnight UTC.
  2. Data Fetching: The latest CVE data is downloaded.
  3. Forecasting: The entire forecasting pipeline is executed, generating new predictions.
  4. Data Commit: The updated `data.json` is committed back to the repository.
  5. Deployment: The static site is automatically deployed, making the new forecasts available to users.
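A workflow implementing the five steps above could look roughly like the following. This is a hedged sketch, not the repository's actual workflow file: the workflow name, script name (`forecast.py`), and commit message are assumptions, and only the schedule (midnight UTC) and the `data.json` commit step come from the description above.

```yaml
# Hypothetical GitHub Actions workflow; actual file and script names may differ.
name: Daily CVE Forecast
on:
  schedule:
    - cron: "0 0 * * *"    # daily at midnight UTC
  workflow_dispatch: {}     # allow manual runs

jobs:
  forecast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python forecast.py   # fetch CVE data, run the pipeline, write data.json
      - run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git add data.json
          git commit -m "Update forecast data" || echo "No changes to commit"
          git push
```

With the updated data.json committed, the static site deployment (e.g. GitHub Pages) picks up the new forecasts without any further action.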