Deep neural architectures

Since the breakthrough of the deep convolutional network AlexNet in 2012, deep learning has made great progress in computer vision and natural language processing. Many of these breakthroughs have come along with novel architecture designs for deep neural networks. We are interested in pushing the boundary of deep learning performance by advancing …

Generative adversarial networks

Generative Adversarial Networks (GANs) were described by Turing Award recipient Yann LeCun as the most interesting idea in machine learning in the last 10 years. Their most significant impact has been seen in many challenging problems, such as plausible image generation, image-to-image translation, facial attribute manipulation and related domains. However, there are still many research …
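
As a rough illustration of the adversarial idea behind GANs, the sketch below trains a tiny generator and discriminator against each other; the network sizes, the toy "real" distribution and all hyperparameters are assumptions made for illustration only, not any particular published setup.

```python
# Minimal sketch of GAN training on toy data (illustrative only; sizes,
# learning rates and the toy "real" distribution are assumed).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator maps noise z to fake samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, data_dim) * 0.5 + 3.0   # toy "real" distribution
    z = torch.randn(128, latent_dim)
    fake = G(z)

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    loss_D = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool the discriminator, i.e. push D(G(z)) towards 1.
    loss_G = bce(D(fake), torch.ones(128, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

The key point is the alternating objectives: the discriminator learns to separate real from generated samples, while the generator learns to make its samples indistinguishable from real ones.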

Label-noise learning

Learning with noisy labels has recently become an increasingly important topic. The reason is that, in the era of big data, datasets are growing ever larger, and annotating large-scale datasets accurately is often infeasible due to the cost and time involved, which naturally leaves us with cheap datasets carrying noisy labels. However, the noisy …
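
To make the setting concrete, one standard idea in this literature is forward loss correction, which models the corruption with a label-noise transition matrix T (T[i, j] = probability that clean class i is observed as noisy class j) and trains the model through it. The sketch below is illustrative only; the symmetric 20% noise matrix and batch are assumed.

```python
# Sketch of forward loss correction for label noise (a standard idea, shown
# only to illustrate the setting; the transition matrix T is assumed).
import torch
import torch.nn.functional as F

num_classes = 3
# T[i, j] = P(noisy label j | clean label i); 20% symmetric noise, assumed.
T = torch.full((num_classes, num_classes), 0.1)
T.fill_diagonal_(0.8)

def forward_corrected_loss(logits, noisy_labels):
    # Clean-class posteriors predicted by the model.
    clean_probs = F.softmax(logits, dim=1)
    # Mix them through T to obtain probabilities over the *noisy* labels.
    noisy_probs = clean_probs @ T
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

logits = torch.randn(8, num_classes, requires_grad=True)
noisy_labels = torch.randint(0, num_classes, (8,))
loss = forward_corrected_loss(logits, noisy_labels)
loss.backward()
```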

Robust/adversarial learning

We are also interested in how to reduce the side effects of noise on instances, which may be caused by sensor failures or even malicious attacks. We humans can correctly recognise objects even when noise is present (e.g., we can easily recognise human faces under extreme illumination conditions, when …
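
One simple way to see how small, deliberately crafted instance noise can fool a model is the fast gradient sign method (FGSM). The sketch below is illustrative only; the toy classifier, the random input and the perturbation budget epsilon are all assumed.

```python
# Sketch of FGSM: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.03  # perturbation budget (assumed)

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for an input image
y = torch.tensor([7])                              # its true label

loss = loss_fn(model(x), y)
loss.backward()

# Step within an L_inf ball of radius epsilon, then clip back to valid pixels.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```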

Statistical (deep) learning theory

Deep learning algorithms have delivered exciting results, e.g., painting pictures, beating Go champions and driving cars autonomously, showing that they have very good generalisation abilities (small differences between training and test errors). These empirical achievements have astounded yet confounded their human creators. Why do deep learning algorithms generalise so well to unseen data? …
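
To make the quantities involved concrete, a classical uniform-convergence bound (a textbook result, not a result of this group) controls the generalisation gap of every hypothesis in a class via its Rademacher complexity, assuming a loss bounded in [0, 1]:

```latex
% With probability at least 1 - \delta over n training samples, for every
% hypothesis f in the class \mathcal{F},
R(f) - \hat{R}_n(f) \;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}},
% where R(f) is the expected (test) risk, \hat{R}_n(f) the empirical (training)
% risk, and \mathfrak{R}_n(\mathcal{F}) the Rademacher complexity of the class.
```

The puzzle for deep learning is that such worst-case bounds are typically far too loose to explain the small gaps observed in practice.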

Transfer learning

Just like humans, machines can also identify knowledge common to several tasks and transfer it from one task to another. In machine learning, we can exploit training examples drawn from related tasks (source domains) to improve performance on the target task (target domain). This relates two terms in machine learning, i.e., …
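
A minimal sketch of one common form of transfer learning, fine-tuning: a backbone pretrained on a source domain (here ImageNet, via torchvision) is reused for a new target task by replacing and retraining its classification head. The target task size, the choice to freeze the backbone and all hyperparameters below are assumptions for illustration only.

```python
# Sketch of transfer learning by fine-tuning an ImageNet-pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 5  # assumed size of the target task

# Source knowledge: ImageNet-pretrained weights (torchvision >= 0.13 API).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Reuse the backbone; replace only the classification head for the target task.
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Optionally freeze the backbone and train only the new head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative update on a fake target-domain batch.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, num_target_classes, (4,))
loss = criterion(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```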

Contacts

Prof Stefan B. Williams – Director, Digital Sciences Initiative
Faculty of Engineering, University of Sydney NSW 2006 Australia
+61 2 9351 8152 stefan.williams@sydney.edu.au