Trojan Attack Prediction
Scripts
Technical Details | Python dependencies are provided in the environment.yml and requirements.txt files within the Scripts component file. |
Input file
Scope And Content | Sample model input. |
- Cite This Work
-
Armstrong, Christopher; Hartley, Daniel; Hutton, Spencer; Quach, Shirley (2023). Trojan Attack Prediction. In Data Science & Engineering Master of Advanced Study (DSE MAS) Capstone Projects. UC San Diego Library Digital Collections. https://doi.org/10.6075/J0B56JX8
- Description
-
As machine learning (ML) has gained prominence in the business world, deep neural networks (DNNs) have seen increasingly widespread deployment. The security of DNN models has recently come under scrutiny, as they are at risk of adversarial attacks such as backdoor Trojan attacks. These attacks depend on a trigger to activate malicious behavior; due to the lack of transparency in DNNs, a Trojan's effects may remain undetected until activated by an attacker. This project demonstrates a significant reduction in the time and resources needed to detect a poisoned model through the use of dimensionality reduction techniques. The detector uses Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to reduce the dimensionality of model weights, which are then used to train a classification model. This work builds on previous research, integrating these reduction techniques to cut inference time significantly while maintaining model accuracy at 85%. Are you protected from malicious AI?
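The pipeline described above could be sketched roughly as follows. This is an illustrative assumption only, not the project's actual code (which is in the GitHub repository listed under Related Resources): the random stand-in data, the component counts, and the choice of a random-forest classifier are all placeholders, and real features would come from the flattened weights of the NIST TrojAI models.

```python
# Hypothetical sketch: flatten each model's weights into a feature
# vector, reduce dimensionality with PCA then ICA, and train a
# binary classifier to separate clean from trojaned models.
# All shapes, component counts, and the classifier are assumptions.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: one flattened weight vector per model, with a
# label of 0 (clean) or 1 (trojaned). Real vectors would be
# extracted from the TrojAI image-classification models.
X = rng.normal(size=(200, 1000))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA first compresses the high-dimensional weights; ICA then
# extracts statistically independent components; the classifier
# is trained on the reduced features.
detector = make_pipeline(
    PCA(n_components=50, random_state=0),
    FastICA(n_components=20, random_state=0, max_iter=1000),
    RandomForestClassifier(random_state=0),
)
detector.fit(X_train, y_train)
print(detector.score(X_test, y_test))
```

On random stand-in data the score is meaningless; the point is the shape of the pipeline, where the dimensionality reduction step is what shrinks detection time relative to scanning the full weight space.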
Jacobs School of Engineering, Data Science & Engineering Master of Advanced Study (DSE MAS) program, DSE 260 Capstone Project.
- Date Collected
- 2023-01-07 to 2023-06-09
- Date Issued
- 2023
- Note
-
This project relies on external software packages, modules/libraries, or programs, use of which may carry specific license requirements. Users should comply with any licenses specified within the contents of this project.
- Language
- English
- Related Resources
- National Institute of Standards and Technology (NIST) (image-classification-jun2020): https://pages.nist.gov/trojai/docs/data.html#image-classification-jun2020
- Project code on GitHub: https://github.com/shirleyquach/dse260/tree/model_optimizer
- Liu, Yingqi, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. 2019. "ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation." Association for Computing Machinery, 1265–1282. https://doi.org/10.1145/3319535.3363216
- Wang, Bolun, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. 2019. "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks." 2019 IEEE Symposium on Security and Privacy, 707–723. https://sites.cs.ucsb.edu/~bolunwang/assets/docs/backdoor-sp19.pdf
- License
-
Creative Commons Attribution 4.0 International Public License
- Rights Holder
- Armstrong, Chris; Hartley, Daniel; Hutton, Spencer; Quach, Shirley
- Copyright
-
Under copyright (US)
Use: This work is available from the UC San Diego Library. This digital copy of the work is intended to support research, teaching, and private study.
Constraint(s) on Use: This work is protected by the U.S. Copyright Law (Title 17, U.S.C.). Use of this work beyond that allowed by "fair use" or any license applied to this work requires written permission of the copyright holder(s). Responsibility for obtaining permissions and any use and distribution of this work rests exclusively with the user and not the UC San Diego Library. Inquiries can be made to the UC San Diego Library program having custody of the work.
- Digital Object Made Available By
-
Research Data Curation Program, UC San Diego, La Jolla, 92093-0175 (https://lib.ucsd.edu/rdcp)
- Last Modified
- 2024-07-18