One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training

We propose a training-assisted bit-flip attack, in which the adversary participates in the training stage to build a high-risk model for release.

GitHub Link

The GitHub link is https://github.com/jianshuod/tba

Introduction

The GitHub repository "jianshuod/TBA" contains the official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training". The project studies the intersection of bit-flip attacks and model training. The code, developed with Python 3 and PyTorch, implements the main pipeline of the method and provides instructions for installation and usage. The repository includes details about task specifications, hyperparameters, and results, in particular for the attack on an 8-bit quantized ResNet-18. The work is licensed under the Apache License 2.0.
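The attack's premise, that flipping a single bit of an 8-bit quantized weight can change its value drastically, can be illustrated with a minimal pure-Python sketch (the helper below is hypothetical and not taken from the repository):

```python
def flip_bit(w, bit):
    """Flip one bit of an 8-bit two's-complement weight (bit 0 = LSB).
    Illustrative only; not code from the TBA repository."""
    u = w & 0xFF                        # view the weight as an unsigned byte
    u ^= 1 << bit                       # flip the chosen bit
    return u - 256 if u >= 128 else u   # map back to the signed int8 range

print(flip_bit(53, 7))  # flipping the sign bit turns 53 into -75
```

Flipping the most significant (sign) bit moves a weight across the entire int8 range, which is why a single well-chosen flip can be enough to compromise a model.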

Content

This is the official implementation of our paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training", accepted by ICCV 2023. This research project is developed with Python 3 and PyTorch. If you find this work or our code useful for your research, please cite our paper.

Install the dependencies by running the following cmd in the work directory, then:

Step 1: Download the model checkpoint and place it in the directory "checkpoint/resnet18".
Step 2: Fill in the path to this work directory on your server.
Step 3: Configure the path to the CIFAR-10 dataset in config.py.

The log for attacking the 8-bit quantized ResNet-18 is provided; please refer to log_resnet18_8.txt for our results. This project is licensed under the terms of the Apache License 2.0. See the LICENSE file for the full text.
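Steps 1-3 above amount to filling in a few local paths. A sketch of what such settings in config.py might look like (all variable names and paths here are placeholders, not taken from the repository):

```python
import os

# Step 2: absolute path to the cloned work directory on your server
# (placeholder value; replace with your own path)
WORK_DIR = "/path/to/TBA"

# Step 3: path to a local copy of the CIFAR-10 dataset
CIFAR10_ROOT = "/path/to/cifar10"

# Step 1: the downloaded checkpoint is expected under checkpoint/resnet18
CHECKPOINT_DIR = os.path.join(WORK_DIR, "checkpoint", "resnet18")
```

Consult the repository's own config.py for the actual variable names it expects.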

Alternatives & Similar Tools

LongLLaMA: handle very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.