S3 Lab - Software & Systems Security Laboratory

Vessels: Efficient and Scalable Deep Learning Prediction on Trusted Processors

Kyungtae Kim, Chung Hwan Kim, Junghwan Rhee, Xiao Yu, Haifeng Chen, Dave (Jing) Tian, and Byoungyoung Lee

Proceedings of the 11th ACM Symposium on Cloud Computing (SoCC), 2020.

Areas
Program Analysis, Security, Trusted Computing

Abstract

Deep learning systems on the cloud are increasingly targeted by attacks that attempt to steal sensitive data. Intel SGX has proven effective in protecting the confidentiality and integrity of such data during computation. However, state-of-the-art SGX systems still suffer from substantial performance overhead induced by the limited physical memory of SGX. This limitation significantly undermines the usability of deep learning systems because of their memory-intensive characteristics.

In this paper, we provide a systematic study of the inefficiency of existing SGX systems for deep learning prediction, with a focus on their memory usage. Our study has revealed two causes of the inefficiency in the current memory usage paradigm: large memory allocation and low memory reusability. Based on these insights, we present Vessels, a new system that addresses the inefficiency and overcomes the limitation on SGX memory through memory usage optimization techniques. Vessels identifies the memory allocation and usage patterns of a deep learning program through model analysis and creates a trusted execution environment with an optimized memory pool, which minimizes the memory footprint with high memory reusability. Our experiments demonstrate that, by significantly reducing the memory footprint and carefully scheduling the workloads, Vessels can achieve highly efficient and scalable deep learning prediction while providing strong data confidentiality and integrity with SGX.
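The memory-pool idea at the heart of Vessels can be sketched in a few lines of Python. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes that model analysis has already produced each tensor's size and the interval of layers during which it is live, and it greedily assigns offsets in a single pre-allocated pool so that tensors with disjoint lifetimes reuse the same region. All names (Tensor, plan_pool) and the example network are invented for illustration.

    # Minimal sketch (not the Vessels code) of lifetime-based memory-pool
    # planning: tensors whose lifetimes do not overlap may share one region
    # of a single pool, shrinking the enclave's memory footprint.

    from dataclasses import dataclass

    @dataclass
    class Tensor:
        name: str
        size: int        # bytes
        first_use: int   # index of the layer that produces the tensor
        last_use: int    # index of the last layer that reads it

    def plan_pool(tensors):
        """Greedily place each tensor at the lowest pool offset that does
        not collide with an already-placed tensor whose lifetime overlaps.
        Returns (name -> offset, total pool size in bytes)."""
        placed = []                      # list of (tensor, offset)
        offsets = {}
        # Placing larger tensors first tends to pack better.
        for t in sorted(tensors, key=lambda t: -t.size):
            offset = 0
            while True:
                conflict = None
                for other, o in placed:
                    lifetimes_overlap = not (t.last_use < other.first_use
                                             or other.last_use < t.first_use)
                    regions_overlap = not (offset + t.size <= o
                                           or o + other.size <= offset)
                    if lifetimes_overlap and regions_overlap:
                        conflict = o + other.size
                        break
                if conflict is None:
                    break
                offset = conflict        # retry just past the conflict
            placed.append((t, offset))
            offsets[t.name] = offset
        pool_size = max(o + t.size for t, o in placed)
        return offsets, pool_size

    # Hypothetical 4-layer network: adjacent layers' activations are live
    # together, but layer 0's output is dead by the time layer 2 runs, so
    # its pool region can be reused.
    tensors = [
        Tensor("act0", 4 << 20, first_use=0, last_use=1),
        Tensor("act1", 8 << 20, first_use=1, last_use=2),
        Tensor("act2", 4 << 20, first_use=2, last_use=3),
        Tensor("act3", 1 << 20, first_use=3, last_use=3),
    ]
    offsets, pool = plan_pool(tensors)
    print(offsets, f"pool = {pool / (1 << 20):.0f} MiB")

In this toy example, lifetime-aware packing yields a 12 MiB pool where naive per-tensor allocation would require 17 MiB; Vessels applies this kind of reuse inside the enclave, together with workload scheduling, to keep the working set within SGX's limited physical memory.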