Metadata-Version: 2.1
Name: alignment-handbook
Version: 0.4.0.dev0
Summary: The Alignment Handbook
Home-page: https://github.com/huggingface/alignment-handbook
Author: The Hugging Face team (past and future)
Author-email: lewis@huggingface.co
License: Apache
Keywords: nlp deep learning rlhf llm
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: accelerate==0.23.0
Requires-Dist: bitsandbytes==0.41.2.post2
Requires-Dist: einops>=0.6.1
Requires-Dist: evaluate==0.4.0
Requires-Dist: datasets==2.14.6
Requires-Dist: deepspeed==0.12.2
Requires-Dist: hf_transfer>=0.1.4
Requires-Dist: huggingface-hub<1.0,>=0.19.2
Requires-Dist: jinja2>=3.0.0
Requires-Dist: ninja>=1.11.1
Requires-Dist: numpy>=1.24.2
Requires-Dist: packaging>=23.0
Requires-Dist: peft==0.7.1
Requires-Dist: protobuf<=3.20.2
Requires-Dist: safetensors>=0.3.3
Requires-Dist: sentencepiece>=0.1.99
Requires-Dist: scipy
Requires-Dist: tensorboard
Requires-Dist: tqdm>=4.64.1
Requires-Dist: transformers==4.36.2
Requires-Dist: trl==0.7.10
Requires-Dist: matplotlib>=3.9.0
Requires-Dist: scikit-learn>=1.5.0
Provides-Extra: tests
Requires-Dist: pytest; extra == "tests"
Requires-Dist: parameterized>=0.9.0; extra == "tests"
Provides-Extra: torch
Requires-Dist: torch==2.1.2; extra == "torch"
Provides-Extra: quality
Requires-Dist: black==23.1.0; extra == "quality"
Requires-Dist: isort>=5.12.0; extra == "quality"
Requires-Dist: flake8>=6.0.0; extra == "quality"
Provides-Extra: docs
Requires-Dist: hf-doc-builder>=0.4.0; extra == "docs"
Provides-Extra: dev
Requires-Dist: hf-doc-builder>=0.4.0; extra == "dev"
Requires-Dist: black==23.1.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: flake8>=6.0.0; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: parameterized>=0.9.0; extra == "dev"

# LLM RLHF Benchmark

## Install

```shell
# create conda environment
conda create -n if_rlhf python=3.10 absl-py pyparsing pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
conda activate if_rlhf

# check that the GPU is visible
python -c "import torch; print(torch.cuda.is_available())"

# install the remaining package dependencies
python -m pip install -e .
pip install -r requirements.txt

# install flash attention
MAX_JOBS=4 pip install flash-attn --no-build-isolation

# for deepspeed
conda install -c conda-forge mpi4py mpich
```

## Datasets
First, create the train and eval datasets:
```
cd $HRI_RLHF_HOME
mkdir dataset
python src/reward_modeling/make_dataset_hh.py # makes train_dataset and eval_dataset
```
Second, split the eval dataset into subsets such as `eval_dataset_shorter`, `eval_dataset_longer`, etc. Refer to `split_eval.ipynb`.
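The logic in `split_eval.ipynb` is not reproduced here, but a minimal sketch of splitting an eval set by response length might look like the following. The `chosen` field name and the median cutoff are illustrative assumptions, not the notebook's actual criteria:

```python
# Hypothetical sketch: partition eval examples into "shorter" and "longer"
# subsets around the median length of the chosen response.
def split_by_length(examples, field="chosen"):
    """Return (shorter, longer) halves of `examples` split at the median length."""
    lengths = sorted(len(ex[field]) for ex in examples)
    median = lengths[len(lengths) // 2]
    shorter = [ex for ex in examples if len(ex[field]) < median]
    longer = [ex for ex in examples if len(ex[field]) >= median]
    return shorter, longer

# Toy usage with synthetic responses of lengths 3, 5, 8, 13:
toy = [{"chosen": "a" * n} for n in (3, 5, 8, 13)]
shorter, longer = split_by_length(toy)
```

Each resulting subset would then be saved under its own name (e.g. `eval_dataset_shorter`) for the influence scripts below.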

### Reward Modeling
#### llama3
Example script for training a reward model on the hh dataset, using 20% noise and a length threshold of 10 on the hh subset:
```
CUDA_VISIBLE_DEVICES=4 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero2.yaml --num_processes=1 --main_process_port=1231 src/reward_modeling/reward_modeling.py recipes/reward_modeling/llama3/config_Llama-3-8B_lora_r4_hh_subset_length_ratio0.2_threshold10.yaml
```

Using 40% noise on the hh sycophancy subset:
```
CUDA_VISIBLE_DEVICES=6 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero2.yaml --num_processes=1 --main_process_port=1233 src/reward_modeling/reward_modeling.py recipes/reward_modeling/llama3/config_Llama-3-8B_lora_r4_hh_subset_sycophancy_ratio0.4.yaml
```
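For orientation, a pairwise reward model of this kind is typically trained with the standard Bradley-Terry objective: given a prompt $x$ with chosen response $y_w$ and rejected response $y_l$, the model $r_\theta$ is fit to prefer $y_w$. This is the usual formulation, not necessarily the exact loss in `reward_modeling.py`:

$$
\mathcal{L}(\theta) = -\mathbb{E}_{(x,\, y_w,\, y_l)} \left[ \log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big) \right]
$$

where $\sigma$ is the logistic sigmoid. Label noise (the `ratio` in the config names) corresponds to flipping $y_w$ and $y_l$ for that fraction of training pairs.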


### Influence computation
After training the reward model, influence computation requires four steps run sequentially, though steps (1) and (2) can be done in parallel:
(1) compute `val_grad_avg`
(2) compute `lambda`
(3) compute `r_l`
(4) compute `influence`

Below is example code for each step, using the length bias with ratio 0.2.
(1) val_grad_avg
```
CUDA_VISIBLE_DEVICES=4 python src/influence/compute_val_grad_avg.py \
    --model_path "logs/reward_Llama-3-8B_lora_all_score_r4_hh_subset_length_ratio0.2_threshold10" \
    --tokenizer_path "meta-llama/Meta-Llama-3-8B" \
    --eval_dataset_dir "dataset/hh-rlhf_subset/llama3" \
    --val_names "shorter,longer,shorter_threshold10" # eval_dataset_shorter, eval_dataset_longer, eval_dataset_shorter_threshold10
```
(2) lambda 
```
CUDA_VISIBLE_DEVICES=4 python src/influence/compute_lambda.py \
    --model_path "logs/reward_Llama-3-8B_lora_all_score_r4_hh_subset_length_ratio0.2_threshold10" \
    --tokenizer_path "meta-llama/Meta-Llama-3-8B" \
    --train_data_path "dataset/hh-rlhf_subset/llama3/train_dataset_length_ratio0.2_threshold10"
```
(3) r_l
```
CUDA_VISIBLE_DEVICES=4 python src/influence/compute_r_l.py \
    --model_path "logs/reward_Llama-3-8B_lora_all_score_r4_hh_subset_length_ratio0.2_threshold10" \
    --tokenizer_path "meta-llama/Meta-Llama-3-8B" \
    --train_data_path "dataset/hh-rlhf_subset/llama3/train_dataset_length_ratio0.2_threshold10" \
    --val_names "shorter,longer,shorter_threshold10"
```
(4) influence
```
CUDA_VISIBLE_DEVICES=4 python src/influence/compute_influence.py \
    --model_path "logs/reward_Llama-3-8B_lora_all_score_r4_hh_subset_length_ratio0.2_threshold10" \
    --tokenizer_path "meta-llama/Meta-Llama-3-8B" \
    --train_data_path "dataset/hh-rlhf_subset/llama3/train_dataset_length_ratio0.2_threshold10" \
    --val_names "shorter,longer,shorter_threshold10"
```
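For reference, the quantity these steps approximate is the classic influence of a training example $z$ on the loss at a validation example $z_{\text{val}}$. This is the standard influence-function formulation; the scripts above use their own approximation, with `lambda` and `r_l` as intermediate quantities:

$$
\mathcal{I}(z, z_{\text{val}}) = -\nabla_\theta \mathcal{L}(z_{\text{val}}, \hat\theta)^{\top} H_{\hat\theta}^{-1}\, \nabla_\theta \mathcal{L}(z, \hat\theta)
$$

where $\hat\theta$ are the trained reward-model parameters and $H_{\hat\theta}$ is the Hessian of the training loss. Step (1) supplies the averaged validation gradient, and steps (2)-(3) build the inverse-Hessian approximation applied to each training gradient in step (4).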


# The Alignment Handbook

Robust recipes to align language models with human and AI preferences.

## What is this?

Just one year ago, chatbots were out of fashion and most people hadn't heard about techniques like Reinforcement Learning from Human Feedback (RLHF) to align language models with human preferences. Then, OpenAI broke the internet with ChatGPT and Meta followed suit by releasing the Llama series of language models which enabled the ML community to build their very own capable chatbots. This has led to a rich ecosystem of datasets and models that have mostly focused on teaching language models to follow instructions through supervised fine-tuning (SFT).

However, we know from the [InstructGPT](https://huggingface.co/papers/2203.02155) and [Llama2](https://huggingface.co/papers/2307.09288) papers that significant gains in helpfulness and safety can be had by augmenting SFT with human (or AI) preferences. At the same time, aligning language models to a set of preferences is a fairly novel idea and there are few public resources available on how to train these models, what data to collect, and what metrics to measure for best downstream performance.

The Alignment Handbook aims to fill that gap by providing the community with a series of robust training recipes that span the whole pipeline.

## News 🗞️

* November 10, 2023: We release all the training code to replicate Zephyr-7b-β 🪁!

## Links 🔗

* [Zephyr 7B models, datasets, and demos](https://huggingface.co/collections/HuggingFaceH4/zephyr-7b-6538c6d6d5ddd1cbb1744a66)


## Contents

The initial release of the handbook will focus on the following techniques:

* **Supervised fine-tuning:** teach language models to follow instructions, with tips on how to collect and curate your own training dataset.
* **Reward modeling:** teach language models to distinguish model responses according to human or AI preferences.
* **Rejection sampling:** a simple, but powerful technique to boost the performance of your SFT model.
* **Direct preference optimisation (DPO):** a powerful and promising alternative to PPO.
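For reference, DPO optimises the policy directly on preference pairs, without a separate reward model. Given a prompt $x$ with chosen and rejected responses $y_w$ and $y_l$, a reference policy $\pi_{\text{ref}}$, and temperature $\beta$, the objective is:

$$
\mathcal{L}_{\text{DPO}} = -\mathbb{E}_{(x,\, y_w,\, y_l)} \left[ \log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right) \right]
$$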

## Getting started

To run the code in this project, first create a Python virtual environment using e.g. Conda:

```shell
conda create -n handbook python=3.10 && conda activate handbook
```

Next, install PyTorch `v2.1.2` - the precise version is important for reproducibility! Since this is hardware-dependent, we
direct you to the [PyTorch Installation Page](https://pytorch.org/get-started/locally/).

You can then install the remaining package dependencies as follows:

```shell
python -m pip install .
```

You will also need Flash Attention 2 installed, which can be done by running:

_Note: If your machine has less than 96GB of RAM and many CPU cores, reduce `MAX_JOBS`, e.g. `MAX_JOBS=4 pip install flash-attn --no-build-isolation`._

```shell
python -m pip install flash-attn --no-build-isolation
```

Next, log into your Hugging Face account as follows:

```shell
huggingface-cli login
```

Finally, install Git LFS so that you can push models to the Hugging Face Hub:

```shell
sudo apt-get install git-lfs
```

You can now checkout the `scripts` and `recipes` directories for instructions on how to train some models 🪁!

## Project structure

```
├── LICENSE
├── Makefile                    <- Makefile with commands like `make style`
├── README.md                   <- The top-level README for developers using this project
├── chapters                    <- Educational content to render on hf.co/learn
├── recipes                     <- Recipe configs, accelerate configs, slurm scripts
├── scripts                     <- Scripts to train and evaluate chat models
├── setup.cfg                   <- Installation config (mostly used for configuring code quality & tests)
├── setup.py                    <- Makes project pip installable (pip install -e .) so `alignment` can be imported
├── src                         <- Source code for use in this project
└── tests                       <- Unit tests
```

## Citation

If you find the content of this repo useful in your work, please cite it as follows:

```bibtex
@misc{alignment_handbook2023,
  author = {Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Alexander M. Rush and Thomas Wolf},
  title = {The Alignment Handbook},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/alignment-handbook}}
}
```
