Su Jiawei


I am a doctoral student at Kyushu University, Department of Informatics, Faculty of Information Technology and Electrical Engineering, Fukuoka, Japan. I am currently working on two topics:

  • Using machine learning to solve IoT security problems.
  • Security of machine learning itself (adversarial machine learning).
    • The security of machine learning classifiers themselves is also questionable: for many different purposes, potential adversaries may want to force classifiers to produce badly wrong classification results. For example, much research has revealed that deep neural networks are highly sensitive to small adversarial perturbations of their inputs. Moreover, a universal perturbation can be generated that distorts many images at the same time. We focus specifically on the security problems of deep neural networks, proposing and evaluating novel and effective attack methods to reveal the weaknesses of these networks. Based on the results, we also investigate the underlying causes of these weaknesses and the corresponding defenses. (A minimal code sketch of such a perturbation follows below.)
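To make the idea of a single-pixel adversarial perturbation concrete, here is a minimal sketch in PyTorch. It is only an illustration, not the attack method studied in this work: the tiny untrained CNN, the pixel coordinates, and the replacement colour are placeholder assumptions. A real experiment would use a trained CIFAR-10 classifier and search for the pixel and colour that flip the prediction (for example with an evolutionary search).

```python
# Sketch: change one pixel of a CIFAR-10-sized image and compare predictions.
# The untrained TinyCNN is only a stand-in for a real trained classifier.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):  # placeholder classifier (assumption, not the real model)
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def perturb_one_pixel(image, x, y, rgb):
    """Return a copy of `image` (C, H, W) with the pixel at (x, y) set to `rgb`."""
    out = image.clone()
    out[:, y, x] = torch.tensor(rgb, dtype=image.dtype)
    return out

model = TinyCNN().eval()
image = torch.rand(3, 32, 32)                       # stand-in for a CIFAR-10 image
adv = perturb_one_pixel(image, x=5, y=20, rgb=[1.0, 0.0, 0.0])  # arbitrary example pixel

with torch.no_grad():
    before = model(image.unsqueeze(0)).argmax(1).item()
    after = model(adv.unsqueeze(0)).argmax(1).item()
print(f"predicted class before: {before}, after one-pixel change: {after}")
```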

Figure 1. Adversarial perturbation:

A dog image from the CIFAR-10 data set can be misclassified as a cat by modifying merely one pixel.



Figure 2. Universal perturbation:

A single universal perturbation can simultaneously distort many images and cause the classifier to misclassify them.
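The sketch below illustrates only the mechanics of a universal perturbation: one fixed perturbation is added to every image in a batch and the number of changed predictions is counted. The random perturbation and the untrained placeholder classifier are assumptions for illustration; an actual universal perturbation is computed iteratively over a data set rather than drawn at random.

```python
# Sketch: apply one shared perturbation to a whole batch and count flipped predictions.
import torch
import torch.nn as nn

# Placeholder classifier: an untrained stand-in, only to show the mechanics.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 10),
).eval()

images = torch.rand(8, 3, 32, 32)           # stand-in batch of CIFAR-10 images
universal = 0.1 * torch.randn(3, 32, 32)    # one perturbation broadcast over the batch

perturbed = (images + universal).clamp(0.0, 1.0)

with torch.no_grad():
    clean_pred = model(images).argmax(1)
    adv_pred = model(perturbed).argmax(1)

flipped = (clean_pred != adv_pred).sum().item()
print(f"{flipped} of {len(images)} predictions changed by the same perturbation")
```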