Forum Lectures


Lecture Notice: Prof. Kate A. Smith-Miles, The University of Melbourne, Australia

Posted: 2019/07/29 14:15:26


Title: Instance Spaces for Objective Assessment of Algorithms and Benchmark Test Suites

Speaker: Prof. Kate A. Smith-Miles, The University of Melbourne, Australia

Host: Associate Professor Yanfei Kang (康雁飞)

Time: Wednesday, July 31, 2019, 2:00-4:30 pm

Venue: Room A618, New Main Building

About the speaker:

Kate Smith-Miles holds an Australian Laureate Fellowship (2014-2019) from the Australian Research Council, and is a Professor of Applied Mathematics at The University of Melbourne. She was previously Head of the School of Mathematical Sciences at Monash University (2009-2014), and Head of the School of Engineering and IT at Deakin University (2006-2009). Having held chairs in three disciplines (mathematics, engineering and IT) has given her a broad interdisciplinary focus, and she was the inaugural Director of MAXIMA (Monash Academy for Cross and Interdisciplinary Mathematical Applications) from 2014 to 2017.

Kate has published over 250 refereed journal and international conference papers in the areas of neural networks, optimisation, machine learning, and various applied mathematics topics. She has supervised 24 PhD students to completion, and has been awarded over AUD$12 million in competitive grants. In 2010 she was awarded the Australian Mathematical Society Medal for distinguished research, and in 2017 she was awarded the E. O. Tuck Medal for outstanding research and distinguished service in applied mathematics by the Australian and New Zealand Industrial and Applied Mathematics Society (ANZIAM). Kate is a Fellow of the Institute of Engineers Australia and a Fellow of the Australian Mathematical Society (AustMS). She is a past President of the AustMS, and a member of the Australian Research Council's College of Experts from 2017 to 2019. She also regularly acts as a consultant to industry in the areas of optimisation, data mining, intelligent systems, and mathematical modelling.

Abstract:

Objective assessment of algorithm performance is notoriously difficult, with conclusions often inadvertently biased towards the chosen test instances. Rather than reporting the average performance of algorithms across a set of chosen instances, we discuss a new methodology that enables the strengths and weaknesses of different algorithms to be compared across a broader generalised instance space. Initially developed for combinatorial optimisation, the methodology has recently been extended to machine learning classification, and used to ask whether the UCI repository and OpenML are sufficient as benchmark test suites. Results will be presented to demonstrate: (i) how pockets of the instance space can be found where algorithm performance varies significantly from its average; (ii) how the properties of the instances can be used to predict algorithm performance on previously unseen instances with high accuracy; (iii) how the relative strengths and weaknesses of each algorithm can be visualised and measured objectively; and (iv) how new test instances can be generated to fill the instance space and offer greater insight into algorithmic power. An online tool supporting this instance space analysis methodology, available at matilda.unimelb.edu.au, will be demonstrated.
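To make point (ii) concrete, the core idea can be sketched in a few lines: describe each test instance by measurable features, record each algorithm's performance on it, and predict which algorithm will win on an unseen instance from its nearest neighbours in feature space. This is only an illustrative toy, not the actual MATILDA methodology; the instances, features, performance numbers, and the `best_algorithm` helper are all hypothetical.

```python
import math

# Toy data: (feature vector, {algorithm: measured performance}).
# In a real instance space analysis, features would be problem-specific
# hardness measures and performance would come from benchmarking runs.
instances = [
    ((0.1, 0.2), {"A": 0.9, "B": 0.6}),
    ((0.2, 0.1), {"A": 0.8, "B": 0.5}),
    ((0.9, 0.8), {"A": 0.4, "B": 0.9}),
    ((0.8, 0.9), {"A": 0.3, "B": 0.8}),
]

def best_algorithm(features, k=3):
    """Predict the best algorithm for an unseen instance by majority
    vote over the winners on its k nearest neighbours in feature space."""
    nearest = sorted(instances,
                     key=lambda inst: math.dist(inst[0], features))[:k]
    votes = {}
    for _, perf in nearest:
        winner = max(perf, key=perf.get)  # best-performing algorithm here
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

print(best_algorithm((0.15, 0.15)))  # near the first cluster -> "A"
print(best_algorithm((0.85, 0.85)))  # near the second cluster -> "B"
```

In the full methodology the feature space is additionally projected to two dimensions so that each algorithm's "footprint" of strong performance can be visualised, which is what points (i) and (iii) of the abstract refer to.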