Developing AI-based solutions that enhance industry performance

Under the theme of AI and machine learning, we are developing theories that explain the generalisation ability of (deep) machine learning models. This work will enable research applicable to a range of technologies and applications, particularly through industry engagement. Our team of researchers will:

  • Find principled designs for AI and machine learning algorithms to enhance the trustworthiness of AI and machine learning techniques and tools
  • Develop algorithms for weakly supervised learning, causally responsible representations, heterogeneous information fusion, visually plausible data generation, energy-efficient computation, and model robustness and scalability in the wild

This research will have broader applications in eCommerce, health, cybersecurity, logistics and supply chain, and streamlined manufacturing.


Current projects

Deep neural architectures

Since AlexNet, widely regarded as the first modern deep neural network, appeared in 2012, deep learning has made great progress in computer vision and natural language processing. Many of these breakthroughs have come along with new architecture designs for deep neural networks. We are interested in pushing the boundary of deep learning performance by advancing …
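At its simplest, an architecture is a choice of how layers are stacked. The sketch below builds a deep feed-forward network in plain NumPy; the layer sizes and the ReLU activation are illustrative assumptions, not a specific published design.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Create a (weights, biases) pair for each consecutive pair of layer sizes."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Run x through every layer; ReLU on hidden layers, identity on the output."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:      # hidden layers only
            x = np.maximum(x, 0.0)   # ReLU non-linearity
    return x

params = init_mlp([8, 32, 32, 3])    # two hidden layers of width 32
out = forward(params, rng.standard_normal((5, 8)))
print(out.shape)                     # one 3-d output per input row
```

Changing the `sizes` list is all it takes to make the network deeper or wider, which is the knob architecture research turns far more systematically.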

Generative adversarial networks

Generative Adversarial Networks (GANs) were described by Turing Award recipient Yann LeCun as the most interesting idea in the last 10 years in machine learning. They have had their most significant impact on challenging problems such as plausible image generation, image-to-image translation, facial attribute manipulation and similar domains. However, there are still many research …
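As a toy illustration of the adversarial game, the sketch below pits a two-parameter generator against a logistic-regression discriminator on 1-D Gaussian data, with hand-derived gradients. The data distribution, learning rate and step count are illustrative assumptions, not one of our methods.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    x = rng.normal(3.0, 1.0, 64)   # real batch, drawn from N(3, 1)
    z = rng.standard_normal(64)
    y = a * z + b                  # fake batch from the generator

    # Discriminator step: descend -log D(x) - log(1 - D(g(z)))
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * y + c)
    gw = np.mean(-(1 - d_real) * x) + np.mean(d_fake * y)
    gc = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w, c = w - lr * gw, c - lr * gc

    # Generator step: descend the non-saturating loss -log D(g(z))
    d_fake = sigmoid(w * y + c)
    dy = -(1 - d_fake) * w         # d loss / d y
    a, b = a - lr * np.mean(dy * z), b - lr * np.mean(dy)

print(f"learned shift b = {b:.2f}")  # b drifts toward the real mean (3)
```

The same two-player structure, with deep networks in place of these scalar models, is what produces plausible images.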

Label-noise learning

Learning with noisy labels has recently become an increasingly important topic. In the era of big data, datasets are growing ever larger, and large-scale datasets are often infeasible to annotate accurately given the cost and time involved, which naturally leaves us with cheap datasets containing noisy labels. However, the noisy …
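One common remedy, sketched below, is forward loss correction: if the transition matrix T[i, j] = P(noisy label j | clean label i) is known or estimated, the model's clean-label probabilities are multiplied by T before computing cross-entropy, so minimising the corrected loss recovers a clean classifier. The 3-class setup and 20% symmetric noise here are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy against T-corrected (predicted noisy-label) probabilities."""
    p_clean = softmax(logits)      # model's posterior over clean labels
    p_noisy = p_clean @ T          # implied posterior over noisy labels
    n = len(noisy_labels)
    return -np.mean(np.log(p_noisy[np.arange(n), noisy_labels] + 1e-12))

# Symmetric 20% label noise over 3 classes: 0.8 on the diagonal, 0.1 elsewhere.
T = np.full((3, 3), 0.1) + 0.7 * np.eye(3)
logits = np.array([[4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
labels = np.array([0, 1])          # observed (possibly noisy) labels
loss = forward_corrected_loss(logits, labels, T)
print(loss)
```

In practice T itself must usually be estimated from the noisy data, which is one of the open problems this project studies.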

Robust/adversarial learning

We are also interested in reducing the side effects of noise on the instance itself, which may be caused by sensor failure or even malicious attacks. We humans can correctly recognise objects even when noise is present (e.g., we can easily recognise human faces under extreme illumination conditions, when …
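The attack side of this problem can be sketched with an FGSM-style perturbation against a fixed logistic-regression model: nudge the input by a small step in the sign of the loss gradient. The weights and the example point are made up for illustration.

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(x @ w + b)         # model's P(y = 1 | x)
    grad_x = (p - y) * w           # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, -0.5]), 1.0  # clean point, confidently classified positive
x_adv = fgsm(x, y, w, b, eps=1.0)

# The model's confidence in the true label collapses after the attack.
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))
```

Robust learning asks for models (unlike this one) whose predictions change little under such bounded perturbations.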

Statistical (deep) learning theory

Deep learning algorithms have delivered exciting results, e.g., painting pictures, beating Go champions, and autonomously driving cars, showing that they have very good generalisation abilities (small differences between training and test errors). These empirical achievements have astounded yet confounded their human creators. Why do deep learning algorithms generalise so well on unseen data? …
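The train/test gap in question can be measured directly on synthetic data. The sketch below compares a low- and a high-degree polynomial fit to noisy samples of a sine curve; the degrees, noise level and sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                    # ground-truth function
x_train = rng.uniform(-1, 1, 20)
x_test = rng.uniform(-1, 1, 200)
y_train = f(x_train) + 0.3 * rng.standard_normal(20)
y_test = f(x_test) + 0.3 * rng.standard_normal(200)

def gap(degree):
    """Fit a degree-d polynomial to the training set; return (train MSE, test MSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coefs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

for d in (3, 15):
    tr, te = gap(d)
    print(f"degree {d}: train {tr:.3f}  test {te:.3f}  gap {te - tr:.3f}")
```

Classical theory predicts the high-capacity model's larger gap; the puzzle for deep networks is that their gap often stays small despite enormous capacity.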

Transfer learning

Just like humans, machines can find common knowledge between tasks and transfer that knowledge from one task to another. In machine learning, we can exploit training examples drawn from related tasks (source domains) to improve performance on the target task (target domain). This relates two terms in machine learning, i.e., …
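One simple instance of the source-to-target idea is parameter transfer, sketched below under made-up Gaussian domains: pretrain a logistic classifier on a large labelled source sample, then fine-tune the same weights on a small, shifted target sample. The domain shift, step counts and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def train(X, y, w, steps, lr=0.5):
    """Plain gradient descent on the logistic loss, starting from w."""
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def make_domain(shift, n):
    """Two Gaussian classes; `shift` moves both, mimicking a domain change."""
    X = np.vstack([rng.normal(-1 + shift, 1, (n, 2)),
                   rng.normal(1 + shift, 1, (n, 2))])
    X = np.hstack([X, np.ones((2 * n, 1))])   # bias column
    return X, np.repeat([0.0, 1.0], n)

Xs, ys = make_domain(0.0, 500)                # large labelled source domain
Xt, yt = make_domain(0.5, 10)                 # small labelled target domain

w_src = train(Xs, ys, np.zeros(3), steps=200) # pretrain on the source
w_tgt = train(Xt, yt, w_src, steps=20)        # fine-tune on the target

acc = np.mean((sigmoid(Xt @ w_tgt) > 0.5) == yt)
print(acc)
```

Starting the target optimisation from the source solution is what lets 20 target examples suffice here; transfer learning research asks when and why this works.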

Core Research Team

Contacts

Digital Sciences Initiative
Faculty of Engineering, University of Sydney NSW 2006 Australia
+61 439 070 977 or +61 404 710 450 DSI@sydney.edu.au