In recent years, knowledge distillation has become a cornerstone of
efficient model deployment, with research labs and industry alike using it to
train inexpensive, resource-optimized models.
Trojan attacks have gained prominence over the same period, revealing
fundamental vulnerabilities in deep learning models. Given the widespread use
of knowledge distillation, in this work we exploit the unlabelled-data
distillation process to embed a Trojan in the student model without
introducing conspicuous behavior in the teacher. We ultimately devise a Trojan
attack that substantially reduces student accuracy, leaves teacher
performance unaltered, and is efficient to construct in practice.
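
For context, the setting assumed here is standard soft-label distillation over unlabelled data: the student is trained to match the teacher's output distribution, with no ground-truth labels involved, which is the channel such an attack exploits. The sketch below illustrates that setup only, not the attack itself; `distill_step`, the temperature `T`, and the model handles are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, unlabelled_batch, T=4.0):
    """One unlabelled-data distillation step: the student mimics the
    teacher's temperature-softened predictions; no ground-truth labels
    are used, so the teacher's outputs fully determine the supervision."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(unlabelled_batch)  # soft targets
    student_logits = student(unlabelled_batch)
    # KL divergence between softened student and teacher distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```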