RDSC Scientist Edgar Bernal Speaks at RIT


Edgar Bernal, Associate Director and Senior Research Scientist at the Rochester Data Science Consortium, recently (April 15) presented at the Rochester Institute of Technology (RIT) as part of its Move78* Artificial Intelligence Seminar series, which spotlights ongoing research in Upstate New York’s burgeoning artificial intelligence (AI) community.

Dr. Bernal’s talk, “Towards Robust Neural Networks,” addressed sensitivity and robustness in neural networks. As is well known in the AI community, training a neural network amounts to optimizing a loss function. Dr. Bernal presented evidence that networks corresponding to lower-lying minima in the optimization landscape tend to be more robust.
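
For readers outside the field, that objective can be written in its minimal standard form; the notation below is the generic empirical-risk formulation, not notation taken from the talk:

    \min_{\theta} \; \mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \ell\big(f_{\theta}(x_i),\, y_i\big)

Here f_θ is the network with parameters θ, ℓ is a per-example loss, and the (x_i, y_i) are training pairs; “lower-lying minima” are parameter settings at which L(θ) settles at smaller values.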

Dr. Bernal also introduced a novel loss function which, in tests on standard machine learning and computer vision datasets, reliably produced robust neural networks.
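
Going by the description in the abstract below, the new objective augments the usual terms with a sensitivity penalty; the weights λ and μ here are illustrative placeholders, not notation from the talk:

    \min_{\theta} \; \mathcal{L}_{\text{task}}(\theta) + \lambda \, R(\theta) + \mu \, S(\theta)

where L_task is the task-oriented loss, R(θ) the regularization term, and S(θ) the sensitivity measure whose inverse serves as the robustness estimate.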

In his talk, Dr. Bernal also touched on the topic of adversarial attacks, which often originate from external sources and have the potential to degrade a neural network’s performance. Although the new learning framework Dr. Bernal proposes does not explicitly leverage adversarial data, it nevertheless achieves overall performance and robustness competitive with methods that do.
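
As concrete background on what such an attack looks like, the sketch below implements the fast gradient sign method (FGSM), one widely known ℓ∞-bounded attack; the talk does not specify which attacks were evaluated, so this particular attack, along with the toy model in the usage lines, is an illustrative assumption.

    # Hedged sketch of one well-known adversarial attack (FGSM); the attacks
    # used in the reported experiments may differ.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        """One FGSM step: move x within an l-infinity ball of radius eps in
        the direction that increases the classification loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    # Illustrative usage with a toy linear classifier.
    model = torch.nn.Linear(4, 3)
    x, y = torch.randn(2, 4), torch.tensor([0, 2])
    x_adv = fgsm(model, x, y, eps=0.1)  # perturbed inputs that raise the loss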

A full abstract of Dr. Bernal’s lecture can be found below:

*Move 78 refers to the 78th move in Game 4 of the now-legendary best-of-five Go match between Lee Sedol, a top Go player, and AlphaGo, the computing system developed by DeepMind (now owned by Google).

Abstract:


In this talk, I will discuss the topics of sensitivity and robustness in feedforward and convolutional neural networks. Combining energy landscape techniques developed in computational chemistry with tools drawn from formal methods, I’ll introduce empirical evidence indicating that networks corresponding to lower-lying minima in the optimization landscape of the learning objective tend to be more robust. The robustness estimate used is the inverse of a proposed sensitivity measure, which is defined as the volume of an over-approximation of the reachable set of network outputs under all additive ℓ∞-bounded perturbations on the input data. I’ll introduce a novel loss function which includes a sensitivity term in addition to the traditional task-oriented and regularization terms. In our experiments on standard machine learning and computer vision datasets, the results show that the proposed loss function leads to networks which reliably optimize the robustness measure as well as other related metrics of adversarial robustness without significant degradation in the classification error. The results also indicate that the proposed method outperforms state-of-the-art sensitivity-based learning approaches with regard to robustness to adversarial attacks. Although the introduced framework does not explicitly enforce an adversarial loss, it achieves competitive overall performance relative to methods that do.
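
The sensitivity measure above depends on over-approximating the set of outputs a network can produce when its input is perturbed within an ℓ∞ ball. As background, the sketch below uses interval bound propagation (IBP), one standard over-approximation technique from the formal-methods literature, to bound that reachable set for a small fully connected ReLU network and to score the log-volume of the resulting output box. The abstract does not specify which over-approximation the work uses, so IBP, and every function name here, is an assumption for illustration only.

    # Hedged sketch: IBP is one common way to over-approximate the reachable
    # output set under l-infinity-bounded input perturbations; the talk's
    # actual over-approximation may differ.
    import numpy as np

    def ibp_bounds(weights, biases, x, eps):
        """Propagate the input interval [x - eps, x + eps] through an
        affine + ReLU network, returning sound elementwise output bounds."""
        lo, hi = x - eps, x + eps
        for i, (W, b) in enumerate(zip(weights, biases)):
            # Split W into positive and negative parts so the interval image
            # of the affine map W @ z + b is over-approximated soundly.
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
            if i < len(weights) - 1:  # ReLU on hidden layers only
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return lo, hi

    def log_box_volume(lo, hi):
        # Log-volume of the bounding box of reachable outputs: a stand-in for
        # the sensitivity measure; its inverse plays the role of robustness.
        return np.sum(np.log(np.maximum(hi - lo, 1e-12)))

    # Tiny usage example with random weights (illustrative only).
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
    biases = [rng.standard_normal(8), rng.standard_normal(3)]
    lo, hi = ibp_bounds(weights, biases, x=rng.standard_normal(4), eps=0.1)
    print("log box volume (sensitivity proxy):", log_box_volume(lo, hi))

Under this reading, of two networks subjected to the same input perturbation, the one whose output box has smaller volume is the less sensitive, and hence more robust, one.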
