Deep
learning (DL) has achieved great success in many application domains such as
image processing, speech recognition, and autonomous vehicles. However, how to
ensure the reliability and security of DL systems remains an open problem. For
example, an attacker can add adversarial perturbations, often imperceptible to
human eyes, to an image to cause a deep neural network (DNN) to misclassify the
perturbed image. In this project, we attempt to address the reliability and
security problems of DL systems from the perspective of software
engineering. Traditional software represents its logic as control flows crafted
by human knowledge, while a DNN characterizes its behaviors by the weights of
neuron edges and the nonlinear activation functions (determined by the training
data). Therefore, detecting erroneous behaviors in DNNs differs in nature from
detecting them in traditional software, which necessitates effective analysis,
testing, and verification approaches. We plan to take a multi-pronged approach
to develop a deeper understanding of defects (bugs) and adversarial examples in
DL systems and to devise methods that guarantee their reliability and security.
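To make the threat concrete, the sketch below implements the fast gradient sign
method (FGSM) of Goodfellow et al., one standard way to craft such
perturbations. The pretrained `model`, the `image` and `label` inputs, and the
epsilon value are placeholder assumptions for illustration; this is a minimal
sketch, not a method of this project.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015); illustrative only.
# Assumes a pretrained image classifier `model` returning logits and a
# correctly labeled input batch -- both are placeholders here.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    `epsilon` bounds the per-pixel change, keeping the perturbation
    small enough to be hard for human eyes to notice.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage (hypothetical model and data):
#   adv = fgsm_perturb(model, image, label)
#   model(adv).argmax(dim=1)  # often differs from the true label
```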
Deep
learning can provide new capabilities and approaches for addressing software
engineering problems. In this project, we will explore different software
engineering activities where deep learning provides promising solutions,
including software testing and debugging, program analysis and verification,
and software mining and analytics.
Just as huge amounts of data on the web enabled Big Data applications, large
repositories of programs (e.g., open source code on GitHub) now enable a new
class of applications that leverage these repositories of "Big Code". Using
Big Code means automatically learning from existing code to solve
software engineering tasks such as predicting software bugs, predicting program
behavior, or automatically generating new code.
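As a toy illustration of learning from existing code, the sketch below builds a
bigram model over Python tokens and uses it to suggest the next token, in the
spirit of statistical n-gram models of source code. The two-snippet corpus and
the bigram choice are simplifying assumptions; real Big Code applications mine
far larger repositories with richer models.

```python
# Minimal "Big Code" sketch: a bigram model over source tokens that
# suggests the next token. The tiny in-line corpus is a placeholder;
# real systems mine millions of files (e.g., from GitHub).
import io
import tokenize
from collections import Counter, defaultdict

corpus = [
    "for i in range(10):\n    print(i)\n",
    "for x in range(len(items)):\n    print(items[x])\n",
]

def tokens(source):
    """Yield non-whitespace token strings from Python source code."""
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.string.strip():
            yield tok.string

# Count how often each token follows each other token in the corpus.
follows = defaultdict(Counter)
for src in corpus:
    toks = list(tokens(src))
    for prev, nxt in zip(toks, toks[1:]):
        follows[prev][nxt] += 1

def suggest(prev_token):
    """Suggest the most frequent next token after `prev_token`."""
    counts = follows.get(prev_token)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("for"))  # e.g., 'i' or 'x', whichever the counts favor
print(suggest("in"))   # -> 'range'
```

The same counting idea, scaled up to large corpora and longer contexts or
learned models, underlies code completion and the other Big Code tasks above.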