Metadata-Version: 2.4
Name: gatv2-ns3-ids
Version: 0.1.0
Summary: GATv2-NS3 Hybrid Intrusion Detection System with Self-Focusing Simulations
Author: AI Research Team
License: MIT
Project-URL: Homepage, https://github.com/your-org/gatv2-ns3-ids
Project-URL: Documentation, https://github.com/your-org/gatv2-ns3-ids/blob/main/README.md
Project-URL: Repository, https://github.com/your-org/gatv2-ns3-ids
Project-URL: Issues, https://github.com/your-org/gatv2-ns3-ids/issues
Keywords: graph-neural-networks,intrusion-detection,network-simulation,cybersecurity,explainable-ai
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Security
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: torch>=1.12.0
Requires-Dist: numpy>=1.21.0
Requires-Dist: pandas>=1.3.0
Requires-Dist: scikit-learn>=1.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: tqdm>=4.62.0
Requires-Dist: matplotlib>=3.5.0
Requires-Dist: seaborn>=0.11.0
Requires-Dist: networkx>=2.6.0
Requires-Dist: xgboost>=1.6.0
Requires-Dist: pygccxml==2.1.0
Provides-Extra: pyg
Requires-Dist: torch-geometric; extra == "pyg"
Requires-Dist: torch-scatter; extra == "pyg"
Requires-Dist: torch-sparse; extra == "pyg"
Provides-Extra: ns3
Requires-Dist: ns3-gym; extra == "ns3"
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: flake8; extra == "dev"
Requires-Dist: wandb; extra == "dev"
Provides-Extra: all
Requires-Dist: gatv2-ns3-ids[dev,ns3,pyg]; extra == "all"

# GATv2-NS3 Hybrid IDS: Self-Focusing Simulations for Network Intrusion Detection

## Abstract

This repository implements a novel hybrid Intrusion Detection System (IDS) that combines Graph Attention Networks v2 (GATv2) with network simulation feedback from NS-3. Our key innovation, **Self-Focusing Simulations**, uses the model's attention uncertainty to intelligently guide simulation resources toward the most informative network states, achieving adaptive fidelity control and improved attack detection performance.

The system addresses the challenge of realistic network intrusion detection by:
- Building graph representations from actual network flow data
- Injecting realistic multi-stage attack patterns 
- Using attention entropy to trigger focused NS-3 simulations
- Providing comprehensive baseline comparisons across multiple model architectures
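The attention-entropy trigger described above can be sketched in a few lines. The function names, tensor shapes, and threshold below are illustrative assumptions for this sketch, not the repository's actual API:

```python
import torch

def attention_entropy(attn_weights: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Shannon entropy of per-node attention distributions.

    attn_weights: (num_nodes, num_neighbors), rows summing to 1.
    Returns a (num_nodes,) entropy vector: high entropy means diffuse
    attention, i.e. an uncertain node worth simulating in detail.
    """
    p = attn_weights.clamp_min(eps)
    return -(p * p.log()).sum(dim=-1)

def select_nodes_to_simulate(attn_weights: torch.Tensor,
                             threshold: float = 1.0) -> torch.Tensor:
    """Indices of nodes whose attention entropy exceeds the threshold."""
    ent = attention_entropy(attn_weights)
    return torch.nonzero(ent > threshold).flatten()
```

A node attending uniformly to four neighbors has entropy ln 4 ≈ 1.39 and would be flagged for a focused NS-3 run, while a node with sharply peaked attention would not.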

**Key Results**: Our comprehensive evaluation shows that MLP and GraphSAGE achieve the best performance on NSL-KDD (F1 ≈ 0.75), while RandomForest dominates on the Cisco dataset (F1 = 0.869). The GATv2-NS3 hybrid demonstrates competitive performance on NSL-KDD (F1 = 0.711) with potential for further optimization.

## Setup

### Prerequisites
- Python 3.9+ (matching the package's `Requires-Python` constraint)
- Docker (for NS-3 simulation)
- Git

### Installation

1. **Clone the repository**
```bash
git clone <repository-url>
cd gatv2_ns3_ids
```

2. **Create Python environment**
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
export PYTHONPATH=src
```

3. **Build NS-3 Docker image**
```bash
docker build -f Dockerfile.marshallasch -t gatv2_ns3_ids:marshallasch .
```

4. **Verify installation**
```bash
# Test NS-3 Docker integration
docker run --rm gatv2_ns3_ids:marshallasch python3 -c "import ns3; print('NS-3 ready!')"

# Test data loading
python -c "from src.gatv2_ns3_ids.data.nsl_kdd import NSLKDDDatasetLoader; print('Data loaders ready!')"
```

## Repository Structure

```
gatv2_ns3_ids/
├── README.md                          # Project documentation
├── requirements.txt                   # Python dependencies
├── Dockerfile.marshallasch           # NS-3 Docker environment
├── configs/                          # Model configurations
│   ├── gatv2_cisco.yaml             # GATv2 Cisco config
│   ├── gatv2_nsl_kdd.yaml           # GATv2 NSL-KDD config
│   └── *_cisco.yaml, *_nsl_kdd.yaml # Baseline model configs
├── data/                             # Datasets
│   ├── nsl_kdd/                      # NSL-KDD dataset
│   ├── cisco_small/                  # Cisco network graphs (small)
│   ├── cisco_processed/              # Processed Cisco graphs
│   └── cisco_subgraphs/              # Cisco subgraphs
├── src/gatv2_ns3_ids/               # Source code
│   ├── models/                       # Model implementations
│   │   ├── gatv2_ids.py             # GATv2-NS3 hybrid model
│   │   ├── gin_ids.py               # GIN model
│   │   ├── graphsage_ids.py         # GraphSAGE model
│   │   ├── mlp_ids.py               # MLP model
│   │   ├── random_forest_ids.py     # Random Forest model
│   │   ├── logistic_regression_ids.py # Logistic Regression model
│   │   └── xgboost_ids.py           # XGBoost model
│   ├── data/                        # Data processing
│   │   ├── nsl_kdd.py              # NSL-KDD loader
│   │   ├── cisco_dataset.py        # Cisco dataset loader
│   │   ├── network_graph_builder.py # Graph construction
│   │   └── attack_pattern_generator.py # Attack injection
│   ├── simulation/                  # NS-3 integration
│   │   ├── base.py                 # Base simulation classes
│   │   ├── ns3_client.py           # Docker NS-3 client
│   │   ├── curiosity_loop.py       # Self-focusing mechanism
│   │   └── sim_cache.py            # Simulation caching
│   ├── scripts/                    # Training and analysis scripts
│   │   ├── train_nsl_kdd.py       # NSL-KDD training
│   │   ├── train_cisco_synthetic.py # Cisco training
│   │   ├── train_baselines.py     # Baseline training
│   │   ├── visualize_results.py   # Results visualization
│   │   ├── create_cisco_subgraphs.py # Cisco preprocessing
│   │   ├── create_small_cisco_graphs.py # Cisco data reduction
│   │   └── preprocess_cisco_dataset.py # Cisco preprocessing
│   ├── training/                   # Training utilities
│   │   ├── comprehensive_evaluation.py # Evaluation framework
│   │   └── multi_objective.py     # Multi-objective training
│   └── utils/                      # Utilities
│       └── common.py              # Common utilities
├── outputs/                        # Training results
│   ├── *_cisco/                   # Cisco model results
│   └── *_nsl_kdd/                 # NSL-KDD model results
└── leaderboards/                  # Visualization results
    ├── README.md                  # Leaderboard documentation
    ├── *_leaderboard.png         # Performance rankings
    ├── *_class_*.png             # Per-class analysis
    └── all_results.csv           # Raw results data
```

## Methods and Comparison

### Implemented Models

**Graph Neural Networks:**
- **GATv2-NS3 Hybrid** (Our method): Graph Attention Networks v2 with Self-Focusing Simulations
- **GraphSAGE**: Graph Sample and Aggregate
- **GIN**: Graph Isomorphism Network
- **MLP**: Multi-Layer Perceptron on node features

**Traditional Machine Learning:**
- **Random Forest**: Ensemble decision trees
- **XGBoost**: Gradient boosting
- **Logistic Regression**: Linear classification

### Why Our Method Stands Out

1. **Self-Focusing Simulations**: Unlike static simulation approaches, our method dynamically adjusts NS-3 simulation fidelity based on model attention uncertainty, focusing computational resources where they matter most.

2. **Attention-Driven Resource Allocation**: The Curiosity Loop mechanism uses attention entropy to identify uncertain predictions and triggers detailed network simulations only for the most informative cases.

3. **Multi-Objective Training**: Combines classification loss with simulation fidelity loss and attention sparsity loss for robust learning.

4. **Realistic Network Modeling**: Builds graph representations from actual network flow data rather than artificial topologies.

5. **Scalable Architecture**: Docker-based NS-3 integration ensures reproducible simulation environments across different systems.
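As a sketch of point 3, the three objectives might be combined as a weighted sum. The weights `lambda_fid` and `lambda_sparse` and all names below are assumptions for illustration, not the actual interface of `multi_objective.py`:

```python
import torch
import torch.nn.functional as F

def multi_objective_loss(logits: torch.Tensor,
                         labels: torch.Tensor,
                         attn_weights: torch.Tensor,
                         sim_fidelity_error: torch.Tensor,
                         lambda_fid: float = 0.1,
                         lambda_sparse: float = 0.01) -> torch.Tensor:
    """Combine the three objectives with hypothetical weights.

    - classification: cross-entropy on attack labels
    - fidelity: penalty for disagreement with NS-3 simulation feedback
    - sparsity: mean attention entropy, encouraging peaked attention
    """
    cls_loss = F.cross_entropy(logits, labels)
    fid_loss = sim_fidelity_error.mean()
    p = attn_weights.clamp_min(1e-12)
    sparsity_loss = -(p * p.log()).sum(dim=-1).mean()
    return cls_loss + lambda_fid * fid_loss + lambda_sparse * sparsity_loss
```

Keeping the sparsity weight small relative to the classification term prevents the entropy penalty from collapsing attention before the classifier has converged.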

## NSL-KDD Dataset and Experiments

### Purpose
The NSL-KDD experiments evaluate multi-class intrusion detection performance across five traffic categories: normal traffic plus DoS, probe, R2L, and U2R attacks. This dataset tests the model's ability to distinguish between different attack types in a realistic network environment.

### How to Run

**Single Model Training:**
```bash
# Train GATv2-NS3 hybrid on NSL-KDD
source .venv/bin/activate && export PYTHONPATH=src
python -m gatv2_ns3_ids.scripts.train_nsl_kdd \
  --config configs/config_nsl_kdd_curiosity_loop.yaml \
  --nsl_kdd_path data/nsl_kdd \
  --output_dir outputs/gatv2_nsl_kdd \
  --epochs 20

# Train baseline models
python -m gatv2_ns3_ids.scripts.train_baselines \
  --config configs/config_baseline_graphsage.yaml \
  --model graphsage \
  --dataset nsl_kdd \
  --nsl_kdd_path data/nsl_kdd \
  --output_dir outputs/baseline_graphsage_nsl_kdd \
  --epochs 20
```

**Results Visualization:**
```bash
# Generate comprehensive leaderboards
python -m gatv2_ns3_ids.scripts.visualize_results
```

### Key Features
- **Multi-class classification** (5 categories)
- **Graph construction** via k-nearest neighbors on feature similarity
- **Class imbalance handling** with weighted loss functions
- **Attention uncertainty** triggers focused simulations
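The k-nearest-neighbor graph construction can be sketched with scikit-learn (already a declared dependency). The helper name and the normalization choice are illustrative, not the interface of `network_graph_builder.py`:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_knn_edge_index(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Connect each flow record to its k nearest neighbors in feature
    space, returning a (2, num_edges) edge index.

    Features are L2-normalized first so that Euclidean distance tracks
    cosine similarity between flow records.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)
    adj = kneighbors_graph(normed, n_neighbors=k, mode="connectivity")
    src, dst = adj.nonzero()
    return np.stack([src, dst])
```

The resulting edge index has exactly `num_nodes * k` directed edges and can be fed to any of the GNN baselines.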

## Cisco Dataset and Experiments

### Purpose
The Cisco experiments focus on binary classification (normal vs. attack) using real enterprise network topologies. This dataset tests the model's performance on realistic network structures with synthetic attack injection.

### Differences from NSL-KDD
- **Binary classification** instead of multi-class
- **Real network topologies** from Cisco enterprise networks
- **Synthetic attack injection** with realistic attack patterns
- **Graph structure** based on actual network connections
- **Smaller scale** but more realistic network relationships
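Synthetic attack injection at a given `--attack_ratio` might look like the following networkx sketch. The node/edge labeling scheme here is illustrative only; the repository's `attack_pattern_generator.py` injects multi-stage patterns rather than independent random labels:

```python
import random
import networkx as nx

def inject_attack_labels(graph: nx.Graph,
                         attack_ratio: float = 0.10,
                         seed: int = 42) -> nx.Graph:
    """Mark a random subset of nodes as attack sources (label 1) and
    flag every edge touching an attacker as malicious."""
    rng = random.Random(seed)
    nodes = list(graph.nodes)
    n_attack = max(1, int(attack_ratio * len(nodes)))
    attackers = set(rng.sample(nodes, n_attack))
    for node in graph.nodes:
        graph.nodes[node]["label"] = 1 if node in attackers else 0
    for u, v in graph.edges:
        graph.edges[u, v]["malicious"] = int(u in attackers or v in attackers)
    return graph
```

Fixing the seed keeps the injected labels reproducible across training runs, which matters when comparing baselines on the same graphs.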

### How to Run

**Single Model Training:**
```bash
# Train GATv2-NS3 hybrid on Cisco
source .venv/bin/activate && export PYTHONPATH=src
python -m gatv2_ns3_ids.scripts.train_cisco_synthetic \
  --config configs/config_cisco_improved.yaml \
  --cisco_data_path data/cisco_small \
  --output_dir outputs/gatv2_cisco \
  --epochs 10 \
  --attack_ratio 0.10

# Train baseline models
python -m gatv2_ns3_ids.scripts.train_baselines \
  --config configs/config_baseline_randomforest.yaml \
  --model randomforest \
  --dataset cisco \
  --cisco_data_path data/cisco_small \
  --output_dir outputs/baseline_rf_cisco \
  --epochs 10 \
  --attack_ratio 0.10
```

**Results Visualization:**
```bash
# Generate comprehensive leaderboards
python -m gatv2_ns3_ids.scripts.visualize_results
```

## Evaluation Results and Comparison

### NSL-KDD Dataset Results (Multi-class Classification)

| Model | F1 Score | Accuracy | Training Time |
|-------|----------|----------|---------------|
| **MLP** | **0.7522** | **0.7528** | 0.55s |
| **GraphSAGE** | **0.7485** | **0.7513** | 2.29s |
| **XGBoost** | 0.7156 | 0.7231 | 1.49s |
| **GATv2** | 0.7111 | 0.7443 | 0.00s |
| **Logistic** | 0.7091 | 0.7294 | 7.76s |
| **GIN** | 0.6934 | 0.6632 | 3.47s |
| RandomForest | 0.4836 | 0.5499 | 3.63s |

### Cisco Dataset Results (Binary Classification)

| Model | F1 Score | Accuracy | Training Time |
|-------|----------|----------|---------------|
| **RandomForest** | **0.8686** | **0.8889** | 0.08s |
| **XGBoost** | 0.7801 | 0.7593 | 0.23s |
| **Logistic** | 0.7612 | 0.7407 | 0.01s |
| **GIN** | 0.7137 | 0.7037 | 0.06s |
| **MLP** | 0.6038 | 0.5556 | 0.04s |
| **GATv2** | 0.3125 | 0.1852 | 0.00s |
| GraphSAGE | 0.0579 | 0.1852 | 0.05s |

### Key Findings

1. **Dataset-Specific Performance**:
   - **NSL-KDD**: MLP and GraphSAGE lead with F1 scores of about 0.75, showing that both simple neural networks and graph-based approaches work well for multi-class intrusion detection.
   - **Cisco**: RandomForest dominates with F1 = 0.869, and the traditional ML models (XGBoost, Logistic Regression) significantly outperform the graph models on this binary classification task.

2. **Graph Model Performance**: The graph models show mixed results: strong on NSL-KDD but weak on the Cisco dataset, suggesting that the graph structure may be less beneficial for the specific Cisco network topology used.

3. **Efficiency vs. Performance**:
   - **Fastest**: Logistic Regression (0.01s) and MLP (0.04-0.55s) offer good speed-performance trade-offs
   - **Most Accurate**: MLP (NSL-KDD) and RandomForest (Cisco) achieve the best F1 scores on their respective datasets

4. **GATv2 Hybrid Results**: The GATv2-NS3 hybrid shows competitive performance on NSL-KDD (F1 = 0.711) but needs improvement on the Cisco dataset, indicating room for further optimization of the attention mechanism and simulation integration.

### Analysis Tools

```bash
# Generate comprehensive leaderboards and visualizations
python -m gatv2_ns3_ids.scripts.visualize_results

# View generated visualizations
ls leaderboards/
```

## Future Research Directions

### Technical Improvements
1. **Advanced Graph Architectures**: Explore Graph Transformers and Heterogeneous Graph Neural Networks for better representation learning.

2. **Multi-Modal Fusion**: Combine graph, sequence, and tabular data for comprehensive network analysis.

3. **Real-Time Processing**: Develop streaming algorithms for online intrusion detection with continuous learning.

4. **Federated Learning**: Implement privacy-preserving collaborative training across multiple network domains.

### Simulation Enhancements
1. **Advanced NS-3 Integration**: Implement full network protocol simulation with detailed packet-level analysis.

2. **Dynamic Topology**: Support time-varying network structures and adaptive attack scenarios.

3. **Multi-Fidelity Optimization**: Develop more sophisticated fidelity selection algorithms based on uncertainty quantification.

### Dataset Extensions
1. **Modern Datasets**: Extend evaluation to CIC-IDS-2017, UNSW-NB15, and other contemporary datasets.

2. **Industrial Networks**: Evaluate on SCADA, IoT, and industrial control system networks.

3. **Adversarial Robustness**: Test against adversarial attacks and evasion techniques.

### Applications
1. **Network Forensics**: Use attention mechanisms for post-incident analysis and attack attribution.

2. **Proactive Defense**: Develop predictive models for attack prevention rather than just detection.

3. **Explainable AI**: Enhance interpretability for security analysts through better visualization and explanation techniques.

---

## Citation

If you use this work in your research, please cite:

```bibtex
@misc{gatv2_ns3_hybrid_ids,
  title={GATv2-NS3 Hybrid IDS: Self-Focusing Simulations for Network Intrusion Detection},
  author={[Your Name]},
  year={2024},
  note={Graph attention networks with adaptive network simulation feedback}
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.
