Industry 4.0 aims to address a fast-changing and challenging manufacturing environment with diverse demands, short order lead-times and product life cycles, limited capacities, and highly complex process technologies. A manufacturing system integrated with Industry 4.0 technologies, such as AI, machine learning, big data analytics, digital twins, and the Internet of Things, is capable of performing real-time monitoring and optimization of manufacturing processes in various aspects, from high-level strategic resource and production planning down to real-time equipment-level smart dispatching and predictive maintenance. By fully exploiting real-time data and AI, such a system helps manufacturers shorten production and R&D processes, increase production capacity, reduce production cost, guarantee product quality, and improve product yield. It can benefit not only high-tech industries such as semiconductor wafer fabrication, but also conventional labor-intensive sectors. This talk illustrates the transformation of semiconductor manufacturing activities from automation to intelligentization by using Industry 4.0 technologies, through real-life wafer fabrication applications.
Prof. MengChu Zhou (Fellow, IEEE) received his B.S. degree in Control Engineering from Nanjing University of Science and Technology, Nanjing, China in 1983, M.S. degree in Automatic Control from Beijing Institute of Technology, Beijing, China in 1986, and Ph.D. degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute, Troy, NY in 1990. He joined New Jersey Institute of Technology (NJIT), Newark, NJ in 1990, and is now a Distinguished Professor of Electrical and Computer Engineering.
His research interests are in Petri nets, intelligent automation, Internet of Things, big data, web services, and intelligent transportation. He has over 900 publications, including 12 books, 600+ journal papers (500+ in IEEE transactions), 29 patents, and 29 book chapters. He is the founding Editor of the IEEE Press Book Series on Systems Science and Engineering, Editor-in-Chief of the IEEE/CAA Journal of Automatica Sinica, and Associate Editor of the IEEE Internet of Things Journal, IEEE Transactions on Intelligent Transportation Systems, and IEEE Transactions on Systems, Man, and Cybernetics: Systems. He is a recipient of the Humboldt Research Award for US Senior Scientists from the Alexander von Humboldt Foundation, the Franklin V. Taylor Memorial Award and the Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Society, and the Excellence in Research Prize and Medal from NJIT. He is a highly cited scholar and was ranked No. 1 in the field of engineering worldwide in 2012 by Web of Science. His Google citation count is over 43,800, with an h-index of 104.
He is a life member of the Chinese Association for Science and Technology-USA and served as its President in 1999. He is a Fellow of the International Federation of Automatic Control (IFAC), the American Association for the Advancement of Science (AAAS), the Chinese Association of Automation (CAA), and the National Academy of Inventors (NAI).
Details can be found at https://web.njit.edu/~zhou/
Many real-world optimization problems are multiobjective by nature. Multiobjective evolutionary algorithms are a widely used algorithmic framework for solving multiobjective optimization problems. In this talk, I will briefly explain the basic ideas behind the decomposition-based multiobjective evolutionary algorithm (MOEA/D). Multitask learning can be naturally modelled as a multiobjective optimization problem. I will introduce a recent application of MOEA/D to multitask learning.
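The decomposition idea behind MOEA/D can be sketched in a few lines: a multiobjective problem is turned into a family of scalar subproblems, one per weight vector, each of which targets one point on the Pareto front. The sketch below uses a hypothetical biobjective test problem and the simple weighted-sum scalarization, and solves each subproblem by enumeration; a real MOEA/D would evolve a population and share solutions between neighbouring subproblems.

```python
# Illustrative decomposition of a biobjective problem into scalar subproblems
# (weighted-sum scalarization; the test problem below is assumed for the demo).
import numpy as np

def f1(x):          # first objective of a hypothetical test problem
    return x ** 2

def f2(x):          # second objective
    return (x - 2) ** 2

def weighted_sum(x, w):
    """Scalarize the two objectives with weight vector w = (w1, w2)."""
    return w[0] * f1(x) + w[1] * f2(x)

# One weight vector per subproblem, spread uniformly over the simplex.
weights = [(w, 1.0 - w) for w in np.linspace(0.0, 1.0, 5)]
candidates = np.linspace(-1.0, 3.0, 401)

# Solving each scalar subproblem yields one (approximate) Pareto-optimal point.
front = []
for w in weights:
    best = min(candidates, key=lambda x: weighted_sum(x, w))
    front.append((f1(best), f2(best)))
```

Each weight vector selects a different trade-off: the extreme weights recover the individual minima of f1 and f2, and intermediate weights trace points between them.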
Prof. Ligang Liu is a professor at the University of Science and Technology of China. He received his PhD from Zhejiang University in 2001. He previously worked at Microsoft Research Asia and Zhejiang University, and was a visiting scholar at Harvard University. His research interests include computer graphics and geometry processing. He serves as an associate editor for many journals and has served as conference co-chair and program co-chair of a number of conferences. He is a steering committee member of GMP and the secretary of the Asia Graphics Association. He is an awardee of the National Science Fund for Outstanding Young Scholars.
Details can be found at http://staff.ustc.edu.cn/~lgliu
Digital Twin Computing involves not only creating a digital parallel of the real world but also facilitating interaction among digital twins as well as with the real world. A digital-physical hybrid society, perhaps reminiscent of the already well-known concept of "hardware in the loop", could digitally replicate and mutate copies of digital entities that are themselves copies of real entities. In doing so, it can lead to the design of mutated digital twins that do not exist in the real world. Interestingly, this can be applied to problems ranging from studying viruses in medicine to modelling electricity infrastructure in engineering. This talk covers three main developments in Deep Learning, a major direction of 21st-century AI, emphasizing their relevance to a broader set of applications and in particular to digital twin computing.
1. Automated design of neural networks: To reduce the development cost of Deep Neural Networks (DNNs) and to promote their broader use, it has been proposed to automate DNN design, which led to an emerging field called automated machine learning (AutoML). This idea was previously applied by the speaker and others to shallow neural networks using the principle of self-generation/growing [1-4]. Neural Architecture Search (NAS) methods explicitly find DNN architectures for a given supervised learning task. This is achieved by encoding each candidate architecture as a solution in some search space and treating the architecture design as an optimization problem. Growing neural network architectures instead of "searching for the best" has been our alternative strategy to this problem.
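The encode-and-optimize framing above can be sketched concretely: architectures are encoded as tuples (depth, width per layer) drawn from a small search space, and a search method samples and ranks them. Everything here is a toy assumption for illustration: the search space, the random-search strategy, and especially `surrogate_fitness`, a stand-in score; a real NAS system would train and validate each candidate network on the supervised task.

```python
# Minimal sketch of NAS framed as optimization over encoded architectures.
# The search space, encoding, and surrogate fitness are all assumptions
# for the demo; real NAS evaluates candidates by training them.
import random

SEARCH_SPACE = {
    "depth": [2, 3, 4],
    "width": [16, 32, 64],
}

def encode_random_architecture(rng):
    """Encode one candidate as a tuple of per-layer widths."""
    depth = rng.choice(SEARCH_SPACE["depth"])
    return tuple(rng.choice(SEARCH_SPACE["width"]) for _ in range(depth))

def surrogate_fitness(arch):
    # Hypothetical surrogate: reward capacity, penalize parameter count.
    capacity = sum(arch)
    cost = 0.001 * sum(a * b for a, b in zip(arch, arch[1:]))
    return capacity - cost

def random_search(n_trials=100, seed=0):
    """Baseline NAS strategy: sample n_trials encodings, keep the fittest."""
    rng = random.Random(seed)
    return max((encode_random_architecture(rng) for _ in range(n_trials)),
               key=surrogate_fitness)

best = random_search()
```

Random search is only the simplest baseline; the same encoding supports evolutionary or gradient-based search, and the growing strategy mentioned in the talk instead expands an architecture incrementally rather than sampling a fixed space.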
2. Interpretability of neural networks: Interpretation of such automatically designed DNNs is of significant benefit to many applications, including digital twin computing, as it allows integration of existing domain/scientific knowledge with the knowledge extracted from real-world data. Explainable AI (XAI) is an emerging and relevant area of research that has a strong connection to interpretation, although it may not fully exploit the different levels of interpretation possible.
3. Incremental learning, or learning from continuously incoming data: Continuously incoming data is a key characteristic of many real-world applications of AI. Experiments conducted in wet labs and vital epidemiological data collected during a pandemic are two examples of continuous data streams. Analysing such data with efficient use of computing resources is a key recent development in this direction of research.
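The core of incremental learning is that each sample from the stream is processed once and then discarded, so memory use stays constant no matter how long the stream runs. A minimal sketch, assuming a simple linear model updated by stochastic gradient descent on a hypothetical noiseless stream following y = 2x + 1:

```python
# Minimal sketch of incremental (online) learning: a linear model fit by
# single-pass SGD over a data stream. The stream and target y = 2x + 1
# are assumptions for the demo.
def sgd_stream(stream, lr=0.05):
    w, b = 0.0, 0.0
    for x, y in stream:              # one pass; no sample is stored
        err = (w * x + b) - y
        w -= lr * err * x            # gradient of squared error w.r.t. w
        b -= lr * err                # gradient w.r.t. b
    return w, b

def make_stream(n=2000):
    """Generate samples on the fly; nothing is kept in memory."""
    for i in range(n):
        x = (i % 200) / 100.0        # x cycles over [0, 2)
        yield x, 2 * x + 1

w, b = sgd_stream(make_stream())     # w, b approach 2 and 1
```

The same single-pass structure carries over to incremental training of larger models; the point of the sketch is that the learner's state (here just `w` and `b`), not the data, is what persists.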
Prof. Saman Halgamuge received the B.Sc. Engineering degree in Electronics and Telecommunication from the University of Moratuwa, Sri Lanka, and the Dipl.-Ing. and Ph.D. degrees in data engineering from the Technical University of Darmstadt, Germany. He is currently a Professor in the Department of Mechanical Engineering of the School of Electrical, Mechanical and Infrastructure Engineering, The University of Melbourne, Australia.
He is a Fellow of IEEE (2017-), a Distinguished Lecturer of the IEEE Computational Intelligence Society (2018-21), and is listed among the top 2% most cited researchers for AI and Image Processing in the Stanford database (2020-). His research interests are in AI, machine learning including deep learning, optimization, big data analytics, and their applications in energy, mechatronics, bioinformatics, and neural engineering. He has graduated 45 PhD students at the University of Melbourne and delivered about 50 keynotes at conferences (2000-).
He is currently an honorary Professor at multiple institutions, including ANU in Canberra, and a distinguished visiting professor at HEBUT in Tianjin. His previous roles include member of the Australian Research Council College of Experts, Associate Dean of the Engineering Faculty at the University of Melbourne, and Head of the Engineering School at ANU.
Details can be found at https://findanexpert.unimelb.edu.au/profile/2854-saman-halgamuge
The Internet of Things has become popular through the rapid development of internet technologies together with new artificial intelligence and machine learning approaches. However, although industries have benefited enormously and there has been massive productivity improvement, new problems and issues have arisen related to system safety and reliability. In this talk, we will share some thoughts on tools and methods to address these problems. It is important to pay attention to them from the design and development stage, as system issues might surface only after long periods of usage. It is also important to quantify risks and uncertainties, and to develop new tools and techniques for industrial IoT applications.
Prof. M. Xie has been a Chair Professor of Industrial Engineering at City University of Hong Kong since 2011. Prior to that, he was a full professor at the National University of Singapore. He received his undergraduate and postgraduate education in Sweden, with a PhD from Linköping University in 1987. Prof. Xie has supervised over 60 PhD students, who hold regular positions in finance, industry, and academia on different continents.
Prof. Xie has published over 300 journal papers and 8 books, including "Software Reliability Modelling" (World Scientific), "Statistical Models and Control Charts for High-Yield Processes" (Springer), and "Computing Systems Reliability" (Kluwer Academic). He recently co-authored "Cyber-Physical Distributed Systems: Modeling, Reliability Analysis and Applications", to be published by Wiley later this year. Prof. Xie has been an elected Fellow of the IEEE since 2006.
Details can be found at https://www.cityu.edu.hk/seem/minxie/