Title: Weight-Covariance Alignment for Adversarially Robust Neural Networks
Abstract: Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated and further rely on adversarial training, which is computationally costly. We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and that enjoys solid theoretical justification. Specifically, while existing SNNs inject learned or hand-tuned isotropic noise, our SNN learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness. We evaluate our method on a number of popular benchmarks and show that it can be applied to different architectures and provides robustness to a variety of white-box and black-box attacks, while remaining simple and fast to train compared to existing alternatives.
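As a rough illustrative sketch (not the authors' implementation), the contrast the abstract draws between isotropic and learned anisotropic noise can be shown by sampling hidden-layer noise from a full-covariance Gaussian parameterized by a factor `L` (so the covariance is `L @ L.T`); both `L` and the toy activation below are hypothetical stand-ins for whatever the paper actually learns:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_isotropic(h, sigma):
    # Isotropic noise: the same variance in every direction of activation space.
    return h + sigma * rng.standard_normal(h.shape)

def inject_anisotropic(h, L):
    # Anisotropic noise: covariance L @ L.T, so the variance differs per direction.
    eps = rng.standard_normal(h.shape[-1])
    return h + L @ eps

h = np.ones(3)                   # toy hidden activation (hypothetical)
L = np.diag([0.1, 0.5, 1.0])     # hypothetical learned covariance factor
noisy = inject_anisotropic(h, L)
print(noisy.shape)               # (3,)
```

With a diagonal `L` like this, the anisotropic sampler simply assigns each activation dimension its own noise scale; a full (non-diagonal) `L` would additionally correlate noise across dimensions.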
Abstract: Large-scale topological changes play a key role in capturing the fine debris of fracturing virtual brittle material. Real-world brittle fractures exhibit dynamic branching behaviour, but numerical simulation of this phenomenon is notoriously challenging. In order to robustly capture these visual characteristics, we present a new approach to simulating brittle fracture that combines elastostatic continuum mechanical models with rigid-body methods. When an object simulated by the rigid-body system collides, a continuum damage mechanics solver is launched that simulates crack propagation in a quasi-static fashion by tracking a damage field. We combine the result of this elastostatic model with a novel technique that approximates cracks as non-manifold, infinitely thin mid-surfaces, which enables accurate modelling of material fragment volumes to complement fast-and-rigid shatter effects. For enhanced realism, we also introduce a method to add fracture detail, incorporating dynamic effects such as stress waves, without imposing stringent time step restrictions on the elastostatic simulation. We evaluate our method with numerous examples and comparisons, showing that it produces a breadth of brittle material fracture effects with high visual fidelity while requiring much less computation than fully elastodynamic simulations.
Title: Injecting Prior Knowledge into Image Caption Generation
Abstract: Automatically generating natural language descriptions from an image is a challenging problem in artificial intelligence that requires a good understanding of the visual and textual signals and the correlations between them. The state-of-the-art methods in image captioning struggle to approach human-level performance, especially when data is limited. In this paper, we propose to improve the performance of state-of-the-art image captioning models by incorporating two sources of prior knowledge: (i) a conditional latent topic attention that uses a set of latent variables (topics) as an anchor to generate highly probable words, and (ii) a regularization technique that exploits the inductive biases in the syntactic and semantic structure of captions and improves the generalization of image captioning models. Our experiments validate that our method produces more human-interpretable captions and also leads to significant improvements on the MS-COCO dataset in both the full and low data regimes.