The schedule is approximate. Check each tutorial's or workshop's website for further detail.

Saturday 10 08:30 - 10:00

  • T03 (2701-2702) - Referring Expressions in Knowledge Representation Systems
    https://cs.uwaterloo.ca/~david/ijcai19/
  • W31 (2506) - Language Sense on Computer
    http://ultimavi.arc.net.my/ave/IJCAI2019/
  • T07 (J) - Artificial Intelligence in Transportation
    https://outreach.didichuxing.com/IJCAI2019/tutorial/
  • W24 (2401) - Deep Learning for Human Activity Recognition
    https://sites.google.com/site/zhangleuestc/deep-learning-for-human-activity-recognition
  • W10 (2606) - Artificial Intelligence for Business Security (AIBS)
    https://security.alibaba.com/alibs2019
  • W05 (2501) - Artificial Intelligence in Affective Computing
    http://bit.ly/IJCAI-AffComp-2019
  • W18 (2505) - Evaluation of Adaptive Systems for Human-Autonomy Teaming (EASyHAT)
    https://sites.google.com/view/easyhat
  • W47 (2403) - Strategic Reasoning (SR)
    http://sr2019.irisa.fr/
  • W13 (2504) - What can FCA do for Artificial Intelligence?
    https://fca4ai.hse.ru/2019/
  • T29 (2703-2704) - Creative and Artistic Writing via Text Generation
    https://lijuntaopku.github.io/ijcai2019tutorial/

Saturday 10 10:30 - 12:30

  • T07 (J) - Artificial Intelligence in Transportation
    https://outreach.didichuxing.com/IJCAI2019/tutorial/
  • W24 (2401) - Deep Learning for Human Activity Recognition
    https://sites.google.com/site/zhangleuestc/deep-learning-for-human-activity-recognition
  • W10 (2606) - Artificial Intelligence for Business Security (AIBS)
    https://security.alibaba.com/alibs2019
  • W05 (2501) - Artificial Intelligence in Affective Computing
    http://bit.ly/IJCAI-AffComp-2019
  • W18 (2505) - Evaluation of Adaptive Systems for Human-Autonomy Teaming (EASyHAT)
    https://sites.google.com/view/easyhat
  • W47 (2403) - Strategic Reasoning (SR)
    http://sr2019.irisa.fr/
  • W13 (2504) - What can FCA do for Artificial Intelligence?
    https://fca4ai.hse.ru/2019/
  • T30 (2703-2704) - Mechanism Design Powered by Social Interactions
    http://dengji-zhao.net/ijcai19.html
  • T03 (2701-2702) - Referring Expressions in Knowledge Representation Systems
    https://cs.uwaterloo.ca/~david/ijcai19/
  • W31 (2506) - Language Sense on Computer
    http://ultimavi.arc.net.my/ave/IJCAI2019/

Saturday 10 14:00 - 15:30

  • W18 (2505) - Evaluation of Adaptive Systems for Human-Autonomy Teaming (EASyHAT)
    https://sites.google.com/view/easyhat
  • W47 (2403) - Strategic Reasoning (SR)
    http://sr2019.irisa.fr/
  • W13 (2504) - What can FCA do for Artificial Intelligence?
    https://fca4ai.hse.ru/2019/
  • T31 (J) - Dual Learning for Machine Learning
    https://duallearning-tutorial.github.io/
  • T27 (2701-2702) - Temporal Point Processes Learning for Event Sequences
    http://thinklab.sjtu.edu.cn/TPP_Tutor_IJCAI19.html
  • W31 (2506) - Language Sense on Computer
    http://ultimavi.arc.net.my/ave/IJCAI2019/
  • T02 (2703-2704) - Adaptive Influence Maximization
    https://sites.google.com/view/aim-tutorial/home
  • W34 (2402) - Knowledge Discovery in Healthcare-AI for Aging, Rehabilitation and Independent Assisted Living (KDH-ARIAL)
    https://sites.google.com/view/kdh2019
  • W24 (2401) - Deep Learning for Human Activity Recognition
    https://sites.google.com/site/zhangleuestc/deep-learning-for-human-activity-recognition
  • W10 (2606) - Artificial Intelligence for Business Security (AIBS)
    https://security.alibaba.com/alibs2019
  • W05 (2501) - Artificial Intelligence in Affective Computing
    http://bit.ly/IJCAI-AffComp-2019

Saturday 10 16:00 - 18:00

  • T28 (2701-2702) - Fair Division of Indivisible Items: Asymptotics and Graph-Theoretic Approaches
    https://cs.stanford.edu/~warut/ijcai19-tutorial.html
  • W31 (2506) - Language Sense on Computer
    http://ultimavi.arc.net.my/ave/IJCAI2019/
  • T02 (2703-2704) - Adaptive Influence Maximization
    https://sites.google.com/view/aim-tutorial/home
  • W34 (2402) - Knowledge Discovery in Healthcare-AI for Aging, Rehabilitation and Independent Assisted Living (KDH-ARIAL)
    https://sites.google.com/view/kdh2019
  • W24 (2401) - Deep Learning for Human Activity Recognition
    https://sites.google.com/site/zhangleuestc/deep-learning-for-human-activity-recognition
  • T33 (J) - Small Data Challenges in Big Data Era: Unsupervised and Semi-Supervised Methods
    http://maple-lab.net/projects/small_data.htm
  • W05 (2501) - Artificial Intelligence in Affective Computing
    http://bit.ly/IJCAI-AffComp-2019
  • W10 (2606) - Artificial Intelligence for Business Security (AIBS)
    https://security.alibaba.com/alibs2019
  • W18 (2505) - Evaluation of Adaptive Systems for Human-Autonomy Teaming (EASyHAT)
    https://sites.google.com/view/easyhat
  • W47 (2403) - Strategic Reasoning (SR)
    http://sr2019.irisa.fr/
  • W13 (2504) - What can FCA do for Artificial Intelligence?
    https://fca4ai.hse.ru/2019/
  • T32 (2705-2706) - AI for Materials Science
    https://www.cs.uwyo.edu/~larsko/aimat-tut/

Sunday 11 08:30 - 10:00

  • W17 (2503) - Bringing Semantic Knowledge into Vision and Text Understanding
    http://www.cs.uga.edu/~shengli/Tusion2019.html
  • W34 (2402) - Knowledge Discovery in Healthcare-AI for Aging, Rehabilitation and Independent Assisted Living (KDH-ARIAL)
    https://sites.google.com/view/kdh2019
  • T22 (L) - Causal Reinforcement Learning
    http://www.cisiad.uned.es/cursos/tutorial-IJCAI-2019/PGMs-medicine.php
  • W15 (2504) - AI and the Unesco SDGs
    https://www.k4all.org/event/ijcai19/
  • W25 (2506) - AI-based Multimodal Analytics for Understanding Human Learning in Real-World Educational Contexts (AIMA4Edu)
    https://www.aima4edu.com
  • W43 (2406) - Artificial Intelligence and Food
    https://sites.google.com/view/ijcai2019aixfood/home
  • T13 (2601-2602) - Concept-to-code: Aspect Sentiment Classification with Deep Learning
    http://deepthinking.ai/ijcai-2019/
  • W08 (2501) - Qualitative Reasoning (QR)
    https://qr2019.sme.uni-bamberg.de
  • W14 (2502) - Education in Artificial Intelligence K-12 (EduAI)
    http://eduai19.ist.tugraz.at/
  • T04 (J) - Deep Bayesian Sequential Learning
    http://chien.cm.nctu.edu.tw/home/ijcai-tutorial/
  • W12 (2403) - Big Social Media Data Management and Analysis (BSMDMA)
    https://www.comp.hkbu.edu.hk/~xinhuang/BSMDMA2019
  • T17 (2703-2704) - Game Description Languages and Logics
    https://www.irit.fr/~Laurent.Perrussel/ijcai19.html
  • W21 (2505) - Agent-based Complex Automated Negotiations (ACAN)
    http://www.itolab.nitech.ac.jp/ACAN2019/index.html
  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/
  • W02 (2605) - Artificial Intelligence for Knowledge Management and Innovation (AI4KM)
    http://ifipgroup.com/7th-ai4km/
  • T08 (2701-2702) - Coupling Everything: A Universal Guideline for Building State-of-The-Art Recommender Systems
    https://sites.google.com/view/lianghu/home/tutorials/ijcai2019
  • W26 (2405) - Explainable AI
    https://sites.google.com/view/xai2019/home
  • T18 (2705-2706) - What I talk about when I talk about reproducibility: A tutorial
    https://folk.idi.ntnu.no/odderik/IJCAI19-Tutorial
  • W32 (2301) - Data Science Meets Optimisation (DSO)
    https://sites.google.com/view/ijcai2019dso/

Sunday 11 10:30 - 12:30

  • W08 (2501) - Qualitative Reasoning (QR)
    https://qr2019.sme.uni-bamberg.de
  • W14 (2502) - Education in Artificial Intelligence K-12 (EduAI)
    http://eduai19.ist.tugraz.at/
  • T04 (J) - Deep Bayesian Sequential Learning
    http://chien.cm.nctu.edu.tw/home/ijcai-tutorial/
  • W12 (2403) - Big Social Media Data Management and Analysis (BSMDMA)
    https://www.comp.hkbu.edu.hk/~xinhuang/BSMDMA2019
  • T17 (2703-2704) - Game Description Languages and Logics
    https://www.irit.fr/~Laurent.Perrussel/ijcai19.html
  • W21 (2505) - Agent-based Complex Automated Negotiations (ACAN)
    http://www.itolab.nitech.ac.jp/ACAN2019/index.html
  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/
  • W02 (2605) - Artificial Intelligence for Knowledge Management and Innovation (AI4KM)
    http://ifipgroup.com/7th-ai4km/
  • T08 (2701-2702) - Coupling Everything: A Universal Guideline for Building State-of-The-Art Recommender Systems
    https://sites.google.com/view/lianghu/home/tutorials/ijcai2019
  • W26 (2405) - Explainable AI
    https://sites.google.com/view/xai2019/home
  • T18 (2705-2706) - What I talk about when I talk about reproducibility: A tutorial
    https://folk.idi.ntnu.no/odderik/IJCAI19-Tutorial
  • W32 (2301) - Data Science Meets Optimisation (DSO)
    https://sites.google.com/view/ijcai2019dso/
  • W17 (2503) - Bringing Semantic Knowledge into Vision and Text Understanding
    http://www.cs.uga.edu/~shengli/Tusion2019.html
  • W34 (2402) - Knowledge Discovery in Healthcare-AI for Aging, Rehabilitation and Independent Assisted Living (KDH-ARIAL)
    https://sites.google.com/view/kdh2019
  • T22 (L) - Causal Reinforcement Learning
    http://www.cisiad.uned.es/cursos/tutorial-IJCAI-2019/PGMs-medicine.php
  • W15 (2504) - AI and the Unesco SDGs
    https://www.k4all.org/event/ijcai19/
  • W25 (2506) - AI-based Multimodal Analytics for Understanding Human Learning in Real-World Educational Contexts (AIMA4Edu)
    https://www.aima4edu.com
  • W43 (2406) - Artificial Intelligence and Food
    https://sites.google.com/view/ijcai2019aixfood/home
  • T13 (2601-2602) - Concept-to-code: Aspect Sentiment Classification with Deep Learning
    http://deepthinking.ai/ijcai-2019/

Sunday 11 14:00 - 15:30

  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/
  • W26 (2405) - Explainable AI
    https://sites.google.com/view/xai2019/home
  • W02 (2605) - Artificial Intelligence for Knowledge Management and Innovation (AI4KM)
    http://ifipgroup.com/7th-ai4km/
  • T06 (2701-2702) - The quest for large scale adoption of intelligent systems
    http://nduta.weebly.com/uploads/4/9/9/6/49967363/tutorial_2019_.pdf
  • W06 (2606) - Biomedical infOrmatics with Optimization and Machine learning (BOOM)
    https://www.ijcai-boom.org
  • T25 (J) - AI Ethics
    http://homepages.inf.ed.ac.uk/mrovatso/index.php/Main/AIEthics
  • W34 (2402) - Knowledge Discovery in Healthcare-AI for Aging, Rehabilitation and Independent Assisted Living (KDH-ARIAL)
    https://sites.google.com/view/kdh2019
  • W17 (2503) - Bringing Semantic Knowledge into Vision and Text Understanding
    http://www.cs.uga.edu/~shengli/Tusion2019.html
  • T23 (2705-2706) - The AI Universe of "Actions": Agency, Causality, Commonsense and Deception
    https://www.cs.nmsu.edu/~tson/tutorials/rac-ijcai19.html
  • W15 (2504) - AI and the Unesco SDGs
    https://www.k4all.org/event/ijcai19/
  • W32 (2301) - Data Science Meets Optimisation (DSO)
    https://sites.google.com/view/ijcai2019dso/
  • W25 (2506) - AI-based Multimodal Analytics for Understanding Human Learning in Real-World Educational Contexts (AIMA4Edu)
    https://www.aima4edu.com
  • T14 (L) - Non-IID Learning of Complex Data and Behaviors
    https://sites.google.com/site/trongdinhthacdo/talks-and-tutorials/ijcai-2019
  • W08 (2501) - Qualitative Reasoning (QR)
    https://qr2019.sme.uni-bamberg.de
  • T21 (2601-2602) - New Frontiers of Automated Mechanism Design for Pricing and Auctions
    https://sites.google.com/view/amdtutorial-ijcai19
  • W14 (2502) - Education in Artificial Intelligence K-12 (EduAI)
    http://eduai19.ist.tugraz.at/
  • T05 (2703-2704) - An Introduction to Formal Argumentation Theory
    https://users.cs.cf.ac.uk/CaminadaM/IJCAI19_tutorial.html
  • W21 (2505) - Agent-based Complex Automated Negotiations (ACAN)
    http://www.itolab.nitech.ac.jp/ACAN2019/index.html

Sunday 11 16:00 - 18:00

  • W17 (2503) - Bringing Semantic Knowledge into Vision and Text Understanding
    http://www.cs.uga.edu/~shengli/Tusion2019.html
  • W34 (2402) - Knowledge Discovery in Healthcare-AI for Aging, Rehabilitation and Independent Assisted Living (KDH-ARIAL)
    https://sites.google.com/view/kdh2019
  • T23 (2705-2706) - The AI Universe of "Actions": Agency, Causality, Commonsense and Deception
    https://www.cs.nmsu.edu/~tson/tutorials/rac-ijcai19.html
  • W15 (2504) - AI and the Unesco SDGs
    https://www.k4all.org/event/ijcai19/
  • W32 (2301) - Data Science Meets Optimisation (DSO)
    https://sites.google.com/view/ijcai2019dso/
  • W25 (2506) - AI-based Multimodal Analytics for Understanding Human Learning in Real-World Educational Contexts (AIMA4Edu)
    https://www.aima4edu.com
  • T14 (L) - Non-IID Learning of Complex Data and Behaviors
    https://sites.google.com/site/trongdinhthacdo/talks-and-tutorials/ijcai-2019
  • W08 (2501) - Qualitative Reasoning (QR)
    https://qr2019.sme.uni-bamberg.de
  • T21 (2601-2602) - New Frontiers of Automated Mechanism Design for Pricing and Auctions
    https://sites.google.com/view/amdtutorial-ijcai19
  • W14 (2502) - Education in Artificial Intelligence K-12 (EduAI)
    http://eduai19.ist.tugraz.at/
  • T05 (2703-2704) - An Introduction to Formal Argumentation Theory
    https://users.cs.cf.ac.uk/CaminadaM/IJCAI19_tutorial.html
  • W21 (2505) - Agent-based Complex Automated Negotiations (ACAN)
    http://www.itolab.nitech.ac.jp/ACAN2019/index.html
  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/
  • W26 (2405) - Explainable AI
    https://sites.google.com/view/xai2019/home
  • W02 (2605) - Artificial Intelligence for Knowledge Management and Innovation (AI4KM)
    http://ifipgroup.com/7th-ai4km/
  • T06 (2701-2702) - The quest for large scale adoption of intelligent systems
    http://nduta.weebly.com/uploads/4/9/9/6/49967363/tutorial_2019_.pdf
  • W06 (2606) - Biomedical infOrmatics with Optimization and Machine learning (BOOM)
    https://www.ijcai-boom.org
  • T25 (J) - AI Ethics
    http://homepages.inf.ed.ac.uk/mrovatso/index.php/Main/AIEthics

Monday 12 08:30 - 10:00

  • W09 (2505) - Scaling-Up Reinforcement Learning (SURL)
    http://surl.tirl.info/
  • W04 (2504) - Financial Technology and Natural Language Processing (FinNLP)
    https://sites.google.com/nlg.csie.ntu.edu.tw/finnlp/
  • W20 (2506) - Federated machine learning for data privacy (FML)
    https://www.fedai.org/#/conferences/FML2019
  • T09 (2601-2602) - Timeline-based Planning: Theory and Practice
    https://overlay.uniud.it/ijcai2019-tutorial-timelines/
  • DC (2605) - Doctoral Consortium
    https://www.public.asu.edu/~cbaral/ijcai19-dc/
  • W45 (2303) - Natural Language Processing for Social Media (SocialNLP)
    https://sites.google.com/site/socialnlp2019/
  • T34 (2603-2604) - Solving Games with Complex Strategy Spaces
    https://sites.google.com/view/ijcai-2019tutorialcgt/home
  • T19 (2701-2702) - From Satisfiability to Optimization Modulo Theories
    http://disi.unitn.it/rseba/IJCAI2019-Tutorial-Sebastiani.html
  • W22 (2501) - Search-Oriented Conversational AI (SCAI)
    https://scai.info/ijcai2019/
  • T16 (L) - Medical decision analysis with probabilistic graphical models
    http://www.cisiad.uned.es/cursos/tutorial-IJCAI-2019/PGMs-medicine.php
  • W19 (2406) - Multi-output Learning (MoL)
    https://ijcai-mol.github.io/
  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/
  • T01 (J) - Hands-On Deep Learning with TensorFlow 2.0
    http://bit.ly/tf-ijcai
  • W28 (2304) - AI for Internet of Things (AI4IoT)
    https://www.zurich.ibm.com/AI4IoT/
  • W03 (2503) - Neural-Symbolic Learning and Reasoning
    https://sites.google.com/view/nesy19/
  • T10 (2705-2706) - Epistemic reasoning in AI
    http://people.irisa.fr/Francois.Schwarzentruber/ijcai2019_tutorial/
  • AW (2405) - Awards Ceremony
  • W44 (2302) - Declarative Learning Based Programming
    http://delbp.github.io
  • W07 (2606) - Smart Simulation and Modeling for Complex Systems (SSMCS)
    http://www.uow.edu.au/~fren/SSMCS2019/index.html
  • W38 (2301) - Multi-Agent Path Finding
    http://idm-lab.org/wiki/IJCAI19-MAPF/index.php/Main/HomePage
  • W30 (2502) - Humanizing AI
    https://www.humanizing-ai.com/hai-19.html
  • W16 (2404) - Linguistic and Cognitive Approaches to Dialogue Agents (LaCATODA)
    http://arakilab.media.eng.hokudai.ac.jp/IJCAI2019/LACATODA2019
  • W23 (2403) - Human Brain and Artificial Intelligence (HBAI)
    http://www.ijcai-hbai.org/
  • T15 (2703-2704) - Argumentation and Machine Learning: When the Whole is Greater than the Sum of its Parts
    https://scienceartificial.com/IJCAI2019Tutorial
  • W37 (2402) - AI for Social Good
    https://aiforgood2019.github.io/

Monday 12 10:30 - 12:30

  • T10 (2705-2706) - Epistemic reasoning in AI
    http://people.irisa.fr/Francois.Schwarzentruber/ijcai2019_tutorial/
  • T01 (J) - Hands-On Deep Learning with TensorFlow 2.0
    http://bit.ly/tf-ijcai
  • W28 (2304) - AI for Internet of Things (AI4IoT)
    https://www.zurich.ibm.com/AI4IoT/
  • W03 (2503) - Neural-Symbolic Learning and Reasoning
    https://sites.google.com/view/nesy19/
  • W44 (2302) - Declarative Learning Based Programming
    http://delbp.github.io
  • W07 (2606) - Smart Simulation and Modeling for Complex Systems (SSMCS)
    http://www.uow.edu.au/~fren/SSMCS2019/index.html
  • W38 (2301) - Multi-Agent Path Finding
    http://idm-lab.org/wiki/IJCAI19-MAPF/index.php/Main/HomePage
  • W30 (2502) - Humanizing AI
    https://www.humanizing-ai.com/hai-19.html
  • W16 (2404) - Linguistic and Cognitive Approaches to Dialogue Agents (LaCATODA)
    http://arakilab.media.eng.hokudai.ac.jp/IJCAI2019/LACATODA2019
  • W23 (2403) - Human Brain and Artificial Intelligence (HBAI)
    http://www.ijcai-hbai.org/
  • T15 (2703-2704) - Argumentation and Machine Learning: When the Whole is Greater than the Sum of its Parts
    https://scienceartificial.com/IJCAI2019Tutorial
  • W37 (2402) - AI for Social Good
    https://aiforgood2019.github.io/
  • W09 (2505) - Scaling-Up Reinforcement Learning (SURL)
    http://surl.tirl.info/
  • W04 (2504) - Financial Technology and Natural Language Processing (FinNLP)
    https://sites.google.com/nlg.csie.ntu.edu.tw/finnlp/
  • W20 (2506) - Federated machine learning for data privacy (FML)
    https://www.fedai.org/#/conferences/FML2019
  • ICCMA'19 (2405) - Third International Competition on Computational Models of Argumentation (ICCMA'19) Award Ceremony
  • T09 (2601-2602) - Timeline-based Planning: Theory and Practice
    https://overlay.uniud.it/ijcai2019-tutorial-timelines/
  • DC (2605) - Doctoral Consortium
    https://www.public.asu.edu/~cbaral/ijcai19-dc/
  • W45 (2303) - Natural Language Processing for Social Media (SocialNLP)
    https://sites.google.com/site/socialnlp2019/
  • T34 (2603-2604) - Solving Games with Complex Strategy Spaces
    https://sites.google.com/view/ijcai-2019tutorialcgt/home
  • T19 (2701-2702) - From Satisfiability to Optimization Modulo Theories
    http://disi.unitn.it/rseba/IJCAI2019-Tutorial-Sebastiani.html
  • W22 (2501) - Search-Oriented Conversational AI (SCAI)
    https://scai.info/ijcai2019/
  • T16 (L) - Medical decision analysis with probabilistic graphical models
    http://www.cisiad.uned.es/cursos/tutorial-IJCAI-2019/PGMs-medicine.php
  • W19 (2406) - Multi-output Learning (MoL)
    https://ijcai-mol.github.io/
  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/

Monday 12 14:00 - 15:30

  • T12 (2601-2602) - Fine-grained Opinion Mining: Current Trend and Cutting-Edge Dimensions
    https://happywwy.github.io/
  • W30 (2502) - Humanizing AI
    https://www.humanizing-ai.com/hai-19.html
  • W23 (2403) - Human Brain and Artificial Intelligence (HBAI)
    http://www.ijcai-hbai.org/
  • W37 (2402) - AI for Social Good
    https://aiforgood2019.github.io/
  • W11 (2405) - Semantic Deep Learning (SemDeep)
    http://www.dfki.de/~declerck/semdeep-5/
  • W09 (2505) - Scaling-Up Reinforcement Learning (SURL)
    http://surl.tirl.info/
  • W04 (2504) - Financial Technology and Natural Language Processing (FinNLP)
    https://sites.google.com/nlg.csie.ntu.edu.tw/finnlp/
  • W20 (2506) - Federated machine learning for data privacy (FML)
    https://www.fedai.org/#/conferences/FML2019
  • DC (2605) - Doctoral Consortium
    https://www.public.asu.edu/~cbaral/ijcai19-dc/
  • W46 (2404) - Bridging the Gap Between Human and Automated Reasoning
    http://ratiolog.uni-koblenz.de/bridging2019
  • T20 (2701-2702) - Computing with SAT Oracles
    http://reason.di.fc.ul.pt/ijcai19tut/
  • W22 (2501) - Search-Oriented Conversational AI (SCAI)
    https://scai.info/ijcai2019/
  • W36 (2401) - Artificial Intelligence Safety (AISafety)
    https://www.ai-safety.org/
  • T11 (2705-2706) - Iterated Belief Change
    https://sites.google.com/view/ijcai2019ibc/
  • W03 (2503) - Neural-Symbolic Learning and Reasoning
    https://sites.google.com/view/nesy19/
  • W44 (2302) - Declarative Learning Based Programming
    http://delbp.github.io
  • W07 (2606) - Smart Simulation and Modeling for Complex Systems (SSMCS)
    http://www.uow.edu.au/~fren/SSMCS2019/index.html
  • W38 (2301) - Multi-Agent Path Finding
    http://idm-lab.org/wiki/IJCAI19-MAPF/index.php/Main/HomePage
  • T24 (J) - Scalable Deep Learning: from theory to practice
    https://sites.google.com/view/scalable-deep-learning-ijcai19

    Monday 12 16:00 - 18:00 DC (2605)


  • Doctoral Consortium
    DC
  • The schedule is approximate. Check the workshop's website for further detail.

    DC - Doctoral Consortium

    https://www.public.asu.edu/~cbaral/ijcai19-dc/

    Open website

    Monday 12 16:00 - 18:00 W46 (2404)


  • Bridging the Gap Between Human and Automated Reasoning
    W46
  • The schedule is approximate. Check the workshop's website for further detail.

    W46 - Bridging the Gap Between Human and Automated Reasoning

    http://ratiolog.uni-koblenz.de/bridging2019

    Open website

    Monday 12 16:00 - 18:00 T20 (2701-2702)


  • Computing with SAT Oracles
    T20
  • The schedule is approximate. Check the tutorial's website for further detail.

    T20 - Computing with SAT Oracles

    -

    http://reason.di.fc.ul.pt/ijcai19tut/

    Open website

    Monday 12 16:00 - 18:00 W22 (2501)


  • Search-Oriented Conversational AI (SCAI)
    W22
  • The schedule is approximate. Check the workshop's website for further detail.

    W22 - Search-Oriented Conversational AI (SCAI)

    https://scai.info/ijcai2019/

    Open website

    Monday 12 16:00 - 18:00 W36 (2401)


  • Artificial Intelligence Safety (AISafety)
    W36
  • The schedule is approximate. Check the workshop's website for further detail.

    W36 - Artificial Intelligence Safety (AISafety)

    https://www.ai-safety.org/

    Open website

    Monday 12 16:00 - 18:00 T11 (2705-2706)


  • Iterated Belief Change
    T11
  • The schedule is approximate. Check the tutorial's website for further detail.

    T11 - Iterated Belief Change

    -

    https://sites.google.com/view/ijcai2019ibc/

    Open website

    Monday 12 16:00 - 18:00 W03 (2503)


  • Neural-Symbolic Learning and Reasoning
    W03
  • The schedule is approximate. Check the workshop's website for further detail.

    W03 - Neural-Symbolic Learning and Reasoning

    https://sites.google.com/view/nesy19/

    Open website

    Monday 12 16:00 - 18:00 W44 (2302)


  • Declarative Learning Based Programming
    W44
  • The schedule is approximate. Check the workshop's website for further detail.

    W44 - Declarative Learning Based Programming

    http://delbp.github.io

    Open website

    Monday 12 16:00 - 18:00 W07 (2606)


  • Smart Simulation and Modeling for Complex Systems (SSMCS)
    W07
  • The schedule is approximate. Check the workshop's website for further detail.

    W07 - Smart Simulation and Modeling for Complex Systems (SSMCS)

    http://www.uow.edu.au/~fren/SSMCS2019/index.html

    Open website

    Monday 12 16:00 - 18:00 W38 (2301)


  • Multi-Agent Path Finding
    W38
  • The schedule is approximate. Check the workshop's website for further detail.

    W38 - Multi-Agent Path Finding

    http://idm-lab.org/wiki/IJCAI19-MAPF/index.php/Main/HomePage

    Open website

    Monday 12 16:00 - 18:00 T24 (J)


  • Scalable Deep Learning: from theory to practice
    T24
  • The schedule is approximate. Check the tutorial's website for further detail.

    T24 - Scalable Deep Learning: from theory to practice

    -

    https://sites.google.com/view/scalable-deep-learning-ijcai19

    Open website

    Monday 12 16:00 - 18:00 T12 (2601-2602)


  • Fine-grained Opinion Mining: Current Trend and Cutting-Edge Dimensions
    T12
  • The schedule is approximate. Check the tutorial's website for further detail.

    T12 - Fine-grained Opinion Mining: Current Trend and Cutting-Edge Dimensions

    -

    https://happywwy.github.io/

    Open website

    Monday 12 16:00 - 18:00 W30 (2502)


  • Humanizing AI
    W30
  • The schedule is approximate. Check the workshop's website for further detail.

    W30 - Humanizing AI

    https://www.humanizing-ai.com/hai-19.html

    Open website

    Monday 12 16:00 - 18:00 W23 (2403)


  • Human Brain and Artificial Intelligence (HBAI)
    W23
  • The schedule is approximate. Check the workshop's website for further detail.

    W23 - Human Brain and Artificial Intelligence (HBAI)

    http://www.ijcai-hbai.org/

    Open website

    Monday 12 16:00 - 18:00 W11 (2405)


  • Semantic Deep Learning (SemDeep)
    W11
  • The schedule is approximate. Check the workshop's website for further detail.

    W11 - Semantic Deep Learning (SemDeep)

    http://www.dfki.de/~declerck/semdeep-5/

    Open website

    Monday 12 16:00 - 18:00 W37 (2402)


  • AI for Social Good
    W37
  • The schedule is approximate. Check the workshop's website for further detail.

    W37 - AI for Social Good

    https://aiforgood2019.github.io/

    Open website

    Monday 12 16:00 - 18:00 W09 (2505)


  • Scaling-Up Reinforcement Learning (SURL)
    W09
  • The schedule is approximate. Check the workshop's website for further detail.

    W09 - Scaling-Up Reinforcement Learning (SURL)

    http://surl.tirl.info/

    Open website

    Monday 12 16:00 - 18:00 W04 (2504)


  • Financial Technology and Natural Language Processing (FinNLP)
    W04
  • The schedule is approximate. Check the workshop's website for further detail.

    W04 - Financial Technology and Natural Language Processing (FinNLP)

    https://sites.google.com/nlg.csie.ntu.edu.tw/finnlp/

    Open website

    Monday 12 16:00 - 18:00 W20 (2506)


  • Federated machine learning for data privacy (FML)
    W20
  • The schedule is approximate. Check the workshop's website for further detail.

    W20 - Federated machine learning for data privacy (FML)

    https://www.fedai.org/#/conferences/FML2019

    Open website

    Monday 12 19:00 - 22:00 IJCAI 2019 Welcome Reception (Multifunction Hall, N1, University of Macau)


  • IJCAI 2019 Welcome Reception
    IJCAI 2019 Welcome Reception
  • Tuesday 13 08:30 - 09:30 Opening (D-I)


  • Opening Session
    Opening
  • Tuesday 13 09:30 - 10:20 Invited Talk (D-I)

    Chair: Sarit Kraus
  • Doing for our robots what evolution did for us
    Leslie Kaelbling
    Invited Talk
  • Tuesday 13 09:30 - 18:00 DB1 - Demo Booths 1 (Hall A)

    Chair: TBA
    • #11051
      CRSRL: Customer Routing System Using Reinforcement Learning
      Chong Long, Zining Liu, Xiaolu Lu, Zehong Hu, Yafang Wang
      Details | PDF
      Demo Booths 1

      Allocating resources to customers in customer service is a difficult problem, because designing a strategy that achieves an optimal trade-off between available resources and customer satisfaction is non-trivial. In this paper, we formalize the customer routing problem and propose a novel framework based on deep reinforcement learning (RL) to address it. To make it more practical, a demo is provided to show and compare different models; it visualizes the entire decision process and, in particular, shows how the optimal strategy is reached. In addition, our demo system ships with a variety of models that users can choose from based on their needs.

    • #11054
      ATTENet: Detecting and Explaining Suspicious Tax Evasion Groups
      Qinghua Zheng, Yating Lin, Huan He, Jianfei Ruan, Bo Dong
      Details | PDF
      Demo Booths 1

      In this demonstration, we present ATTENet, a novel visual analytic system for detecting and explaining suspicious affiliated-transaction-based tax evasion (ATTE) groups. First, the system constructs a taxpayer interest interacted network, which contains economic behaviors and social relationships between taxpayers. Then, the system combines basic features and structure features of each group in the network with the network embedding method structure2Vec, and detects suspicious ATTE groups with a random forest algorithm. Last, to explore and explain the detection results, the system provides an ATTENet visualization with three coordinated views and interactive tools. We demonstrate ATTENet on a non-confidential dataset containing two years of real tax data obtained from our cooperating tax authorities to verify the usefulness of our system.

    • #11033
      DeepRec: An Open-source Toolkit for Deep Learning based Recommendation
      Shuai Zhang, Yi Tay, Lina Yao, Bin Wu, Aixin Sun
      Details | PDF
      Demo Booths 1

      Deep learning based recommender systems have been extensively explored in recent years. However, the large number of models proposed each year poses a big challenge for both researchers and practitioners in reproducing the results for further comparisons. Although a portion of papers provides source code, they adopt different programming languages or different deep learning packages, which also raises the bar for grasping the ideas. To alleviate this problem, we released the open-source project DeepRec. In this toolkit, we have implemented a number of deep learning based recommendation algorithms using Python and the widely used deep learning package TensorFlow. Three major recommendation scenarios are considered: rating prediction, top-N recommendation (item ranking), and sequential recommendation. Meanwhile, DeepRec maintains good modularity and extensibility to easily incorporate new models into the framework. It is distributed under the terms of the GNU General Public License. The source code is available on GitHub: https://github.com/cheungdaven/DeepRec

    • #11034
      Agent-based Decision Support for Pain Management in Primary Care Settings
      Xu Guo, Han Yu, Chunyan Miao, Yiqiang Chen
      Details | PDF
      Demo Booths 1

      The lack of systematic pain management training and support among primary care physicians (PCPs) limits their ability to provide quality care for patients with pain. Here, we demonstrate an Agent-based Clinical Decision Support System to empower PCPs to leverage knowledge from pain specialists. The system learns a general-purpose representation space on patients, automatically diagnoses pain, recommends therapy and medicine, and suggests a referral program to PCPs in their decision-making tasks.

    • #11036
      A Mobile Application for Sound Event Detection
      Yingwei Fu, Kele Xu, Haibo Mi, Huaimin Wang, Dezhi Wang, Boqing Zhu
      Details | PDF
      Demo Booths 1

      Sound event detection is intended to analyze and recognize the sound events in audio streams, and it has widespread applications in real life. Recently, deep neural networks such as convolutional recurrent neural networks have shown state-of-the-art performance on this task. However, previous methods were designed and implemented on devices with rich computing resources, and there are few applications on mobile devices. This paper focuses on a solution for sound event detection on the mobile platform. The architecture of the solution includes offline training and online detection. During the offline training process, a multi-model-based distillation method is used to compress the model to enable real-time detection. The online detection process includes acquisition of sensor data, processing of audio signals, and detecting and recording of sound events. Finally, we implement an application on the mobile device that can detect sound events in near real time.

    • #11039
      Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation
      Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang
      Details | PDF
      Demo Booths 1

      We present in this paper PerformanceNet, a neural network model we recently proposed to achieve score-to-audio music generation. The model learns to convert a music piece from the symbolic domain to the audio domain, assigning performance-level attributes such as changes in velocity automatically to the music and then synthesizing the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model can be found online at https://github.com/bwang514/PerformanceNet.

    • #11045
      A Quantitative Analysis Platform for PD-L1 Immunohistochemistry based on Point-level Supervision Model
      Haibo Mi, Kele Xu, Yang Xiang, Yulin He, Dawei Feng, Huaimin Wang, Chun Wu, Yanming Song, Xiaolei Sun
      Details | PDF
      Demo Booths 1

      Recently, deep learning has witnessed dramatic progress in the medical image analysis field. In the precise treatment of cancer immunotherapy, the quantitative analysis of PD-L1 immunohistochemistry is of great importance. It is quite common for pathologists to manually quantify the cell nuclei. This process is very time-consuming and error-prone. In this paper, we describe the development of a platform for quantitative analysis of PD-L1 pathological images using deep learning approaches. As point-level annotations can provide a rough estimate of object locations and classifications, this platform adopts a point-level supervision model to classify, localize, and count PD-L1 cell nuclei. Presently, this platform has achieved accurate quantitative analysis of PD-L1 for two types of carcinoma, and it is deployed in one of the first-class hospitals in China.

    • #11052
      Explainable Deep Neural Networks for Multivariate Time Series Predictions
      Roy Assaf, Anika Schumann
      Details | PDF
      Demo Booths 1

      We demonstrate that deep convolutional neural networks (CNNs) can be used not only for making predictions based on multivariate time series data, but also for explaining these predictions. This is important for a number of applications where predictions are the basis for decisions and actions. Hence, confidence in the prediction result is crucial. We design a two-stage convolutional neural network architecture which uses particular kernel sizes. This allows us to utilise gradient-based techniques for generating saliency maps for both the time dimension and the features. These are then used for explaining which features during which time interval are responsible for a given prediction, as well as during which time intervals the joint contribution of all features was most important for that prediction. We demonstrate our approach for predicting the average energy production of photovoltaic power plants and for explaining these predictions. [An illustrative saliency-map sketch follows this session's demo list.]

    • #11025
      Neural Discourse Segmentation
      Jing Li
      Details | PDF
      Demo Booths 1

      Identifying discourse structures and coherence relations in a piece of text is a fundamental task in natural language processing. The first step of this process is segmenting sentences into clause-like units called elementary discourse units (EDUs). Traditional solutions to discourse segmentation heavily rely on carefully designed features. In this demonstration, we present SegBot, a system to split a given piece of text into a sequence of EDUs by using an end-to-end neural segmentation model. Our model does not require hand-crafted features or external knowledge except word embeddings, yet it outperforms state-of-the-art solutions to discourse segmentation.

    • #11040
      Design and Implementation of a Disambiguity Framework for Smart Voice Controlled Devices
      Kehua Lei, Tianyi Ma, Jia Jia, Cunjun Zhang, Zhihan Yang
      Details | PDF
      Demo Booths 1

      With about 100 million users in recent years, Smart Voice Controlled Devices (SVCDs) are becoming commonplace. Whether at home or in an office, multiple appliances are usually under the control of a single SVCD, and several people may operate an SVCD simultaneously. However, present SVCDs fail to handle such situations appropriately. In this paper, we propose a novel framework for SVCDs to eliminate the ambiguity of orders issued by a single user or by multiple users. We also design an algorithm combining Word2Vec and emotion detection for the device to resolve ambiguity. Finally, we apply our framework to a virtual smart home scene, and its performance indicates that our strategy resolves these problems well.

    • #11049
      AntProphet: an Intention Mining System behind Alipay's Intelligent Customer Service Bot
      Cen Chen, Xiaolu Zhang, Sheng Ju, Chilin Fu, Caizhi Tang, Jun Zhou, Xiaolong Li
      Details | PDF
      Demo Booths 1

      We create an intention mining system, named AntProphet, for Alipay's intelligent customer service bot, to alleviate the burden of customer service. Whenever users have questions, AntProphet is the first stop for answering them. Our system gathers users' profiles and historical behavioral trajectories, together with contextual information, to predict users' intentions, i.e., the potential questions that users want to resolve. AntProphet takes care of more than 90% of the customer service demands in the Alipay APP and resolves most of the users' problems on the spot, thus significantly reducing the manpower burden. With its help, the overall satisfaction rate of our customer service bot exceeds 85%.
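
    As a rough illustration of the gradient-based saliency idea behind demo #11052 above (not the authors' two-stage CNN or their implementation), the following hedged sketch estimates a saliency map for an arbitrary black-box time-series scorer by finite differences; the toy linear scorer, the array shapes, and the step size are invented for illustration.

```python
import numpy as np

def saliency_map(predict, x, eps=1e-3):
    """Approximate |d predict / d x| with central finite differences.

    predict : callable mapping an array of shape (T, F) to a scalar score
    x       : multivariate time series, shape (T, F) = (time steps, features)
    Returns an array of the same shape whose entries indicate how strongly
    each feature at each time step influences the prediction.
    """
    sal = np.zeros_like(x, dtype=float)
    for idx in np.ndindex(*x.shape):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[idx] += eps
        x_minus[idx] -= eps
        sal[idx] = abs(predict(x_plus) - predict(x_minus)) / (2 * eps)
    return sal

# Toy black-box scorer standing in for a trained CNN (hypothetical weights).
rng = np.random.default_rng(0)
W = rng.normal(size=(24, 3))                      # 24 time steps, 3 features
predict = lambda x: float(np.tanh((W * x).sum()))

x = rng.normal(size=(24, 3))                      # e.g. hourly sensor readings
sal = saliency_map(predict, x)
print("most influential time step:", sal.sum(axis=1).argmax())
print("most influential feature:  ", sal.sum(axis=0).argmax())
```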

    Tuesday 13 10:50 - 11:50 MTA|AM - Art and Music (2705-2706)

    Chair: Wenwu Wang
    • #3626
      Temporal Pyramid Pooling Convolutional Neural Network for Cover Song Identification
      Zhesong Yu, Xiaoshuo Xu, Xiaoou Chen, Deshun Yang
      Details | PDF
      Art and Music

      Cover song identification is an important problem in the field of Music Information Retrieval. Most existing methods rely on hand-crafted features and sequence alignment methods, and further breakthrough is hard to achieve. In this paper, Convolutional Neural Networks (CNNs) are used for representation learning toward this task. We show that they could be naturally adapted to deal with key transposition in cover songs. Additionally, Temporal Pyramid Pooling is utilized to extract information on different scales and transform songs with different lengths into fixed-dimensional representations. Furthermore, a training scheme is designed to enhance the robustness of our model. Extensive experiments demonstrate that combined with these techniques, our approach is robust against musical variations existing in cover songs and outperforms state-of-the-art methods on several datasets with low time complexity.

    • #4170
      Dilated Convolution with Dilated GRU for Music Source Separation
      Jen-Yu Liu, Yi-Hsuan Yang
      Details | PDF
      Art and Music

      Stacked dilated convolutions used in WaveNet have been shown effective for generating high-quality audio. By replacing pooling/striding with dilation in convolution layers, they can preserve high-resolution information and still reach distant locations. Producing high-resolution predictions is also crucial in music source separation, whose goal is to separate different sound sources while maintaining the quality of the separated sounds. Therefore, in this paper, we use stacked dilated convolutions as the backbone for music source separation. Although stacked dilated convolutions can reach a wider context than standard convolutions do, their effective receptive fields are still fixed and might not be wide enough for complex music audio signals. To reach information at even more remote locations, we propose to combine a dilated convolution with a modified GRU called Dilated GRU to form a block. A Dilated GRU receives information from k steps before, instead of from the previous step, for a fixed k. This modification allows a GRU unit to reach a location with fewer recurrent steps and to run faster because it can partially execute in parallel. We show that the proposed model with a stack of such blocks performs as well as or better than the state of the art for separating both vocals and accompaniment. [An illustrative Dilated GRU sketch follows this session's paper list.]

    • #6280
      Musical Composition Style Transfer via Disentangled Timbre Representations
      Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang
      Details | PDF
      Art and Music

      Music creation involves not only composing the different parts (e.g., melody, chords) of a musical work but also arranging/selecting the instruments to play the different parts. While the former has received increasing attention, the latter has not been much investigated. This paper presents, to the best of our knowledge, the first deep learning models for rearranging music of arbitrary genres. Specifically, we build encoders and decoders that take a piece of polyphonic musical audio as input, and predict as output its musical score. We investigate disentanglement techniques such as adversarial training to separate latent factors that are related to the musical content (pitch) of different parts of the piece, and that are related to the instrumentation (timbre) of the parts per short-time segment. By disentangling pitch and timbre, our models have an idea of how each piece was composed and arranged. Moreover, the models can realize “composition style transfer” by rearranging a musical piece without much affecting its pitch content. We validate the effectiveness of the models by experiments on instrument activity detection and composition style transfer. To facilitate follow-up research, we open source our code at https://github.com/biboamy/instrument-disentangle.

    • #1469
      SynthNet: Learning to Synthesize Music End-to-End
      Florin Schimbinschi, Christian Walder, Sarah M. Erfani, James Bailey
      Details | PDF
      Art and Music

      We consider the problem of learning a mapping directly from annotated music to waveforms, bypassing traditional single note synthesis. We propose a specific architecture based on WaveNet, a convolutional autoregressive generative model designed for text to speech. We investigate the representations learned by these models on music and conclude that mappings between musical notes and the instrument timbre can be learned directly from the raw audio coupled with the musical score, in binary piano roll format. Our model requires minimal training data (9 minutes), is substantially better in quality and converges 6 times faster in comparison to strong baselines in the form of powerful text to speech models. The quality of the generated waveforms (generation accuracy) is sufficiently high that they are almost identical to the ground truth. Our evaluations are based on both the RMSE of the Constant-Q transform, and mean opinion scores from human subjects. We validate our work using 7 distinct synthetic instrument timbres, real cello music and also provide visualizations and links to all generated audio.
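
    As a rough illustration of the Dilated GRU idea in paper #4170 above (not the authors' cell or training setup), the sketch below modifies a standard GRU so that its recurrence reads the hidden state from k steps back instead of the previous step; the dimensions, initialization, and toy input are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DilatedGRU:
    """Minimal GRU whose recurrence looks k steps back (h[t-k]) instead of h[t-1]."""

    def __init__(self, input_dim, hidden_dim, dilation, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # All gates take the concatenation [x_t, h_{t-k}].
        self.Wz = rng.normal(scale=s, size=(hidden_dim, input_dim + hidden_dim))
        self.Wr = rng.normal(scale=s, size=(hidden_dim, input_dim + hidden_dim))
        self.Wh = rng.normal(scale=s, size=(hidden_dim, input_dim + hidden_dim))
        self.k = dilation
        self.hidden_dim = hidden_dim

    def forward(self, xs):
        """xs: array of shape (T, input_dim); returns hidden states (T, hidden_dim)."""
        T = xs.shape[0]
        hs = np.zeros((T, self.hidden_dim))
        for t in range(T):
            h_prev = hs[t - self.k] if t >= self.k else np.zeros(self.hidden_dim)
            xh = np.concatenate([xs[t], h_prev])
            z = sigmoid(self.Wz @ xh)                                  # update gate
            r = sigmoid(self.Wr @ xh)                                  # reset gate
            h_tilde = np.tanh(self.Wh @ np.concatenate([xs[t], r * h_prev]))
            hs[t] = (1 - z) * h_prev + z * h_tilde
        return hs

# The dilation splits the sequence into k interleaved sub-chains, which is what
# allows partial parallel execution across those chains.
gru = DilatedGRU(input_dim=8, hidden_dim=16, dilation=4)
out = gru.forward(np.random.default_rng(1).normal(size=(100, 8)))
print(out.shape)  # (100, 16)
```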

    Tuesday 13 10:50 - 12:20 ML|AL - Active Learning 1 (J)

    Chair: Giuseppe Riccardi
    • #392
      Deeper Connections between Neural Networks and Gaussian Processes Speed-up Active Learning
      Evgenii Tsymbalov, Sergei Makarychev, Alexander Shapeev, Maxim Panov
      Details | PDF
      Active Learning 1

      Active learning methods for neural networks are usually based on greedy criteria, which ultimately give a single new design point for the evaluation. Such an approach requires either some heuristics to sample a batch of design points at one active learning iteration, or retraining the neural network after adding each data point, which is computationally inefficient. Moreover, uncertainty estimates for neural networks are sometimes overconfident for points lying far from the training sample. In this work, we propose to approximate Bayesian neural networks (BNN) by Gaussian processes (GP), which allows us to update the uncertainty estimates of predictions efficiently without retraining the neural network, while avoiding overconfident uncertainty prediction for out-of-sample points. In a series of experiments on real-world data, including large-scale problems of chemical and physical modeling, we show the superiority of the proposed approach over the state-of-the-art methods. [An illustrative GP-variance query-selection sketch follows this session's paper list.]

    • #3209
      Multi-View Active Learning for Video Recommendation
      Jia-Jia Cai, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, Sheng-Jun Huang
      Details | PDF
      Active Learning 1

      On many video websites, the recommendation is implemented as a prediction problem of video-user pairs, where the videos are represented by text features extracted from the metadata. However, the metadata is manually annotated by users and is usually missing for online videos. To train an effective recommender system with lower annotation cost, we propose an active learning approach to fully exploit the visual view of videos, while querying as few annotations as possible from the text view. On one hand, a joint model is proposed to learn the mapping from visual view to text view by simultaneously aligning the two views and minimizing the classification loss. On the other hand, a novel strategy based on prediction inconsistency and watching frequency is proposed to actively select the most important videos for metadata querying. Experiments on both classification datasets and real video recommendation tasks validate that the proposed approach can significantly reduce the annotation cost.

    • #6159
      Active Learning within Constrained Environments through Imitation of an Expert Questioner
      Kalesha Bullard, Yannick Schroecker, Sonia Chernova
      Details | PDF
      Active Learning 1

      Active learning agents typically employ a query selection algorithm which solely considers the agent's learning objectives. However, this may be insufficient in more realistic human domains.  This work uses imitation learning to enable an agent in a constrained environment to concurrently reason about both its internal learning goals and environmental constraints externally imposed, all within its objective function. Experiments are conducted on a concept learning task to test generalization of the proposed algorithm to different environmental conditions and analyze how time and resource constraints impact efficacy of solving the learning problem. Our findings show the environmentally-aware learning agent is able to statistically outperform all other active learners explored under most of the constrained conditions. A key implication is adaptation for active learning agents to more realistic human environments, where constraints are often externally imposed on the learner.

    • #1644
      Mindful Active Learning
      Zhila Esna Ashari, Hassan Ghasemzadeh
      Details | PDF
      Active Learning 1

      We propose a novel active learning framework for activity recognition using wearable sensors. Our work is unique in that it takes physical and cognitive limitations of the oracle into account when selecting sensor data to be annotated by the oracle. Our approach is inspired by human-beings' limited capacity to respond to external stimulus such as responding to a prompt on their mobile devices. This capacity constraint is manifested not only in the number of queries that a person can respond to in a given time-frame but also in the lag between the time that a query is made and when it is responded to. We introduce the notion of mindful active learning and propose a computational framework, called EMMA, to maximize the active learning performance taking informativeness of sensor data, query budget, and human memory into account. We formulate this optimization problem, propose an approach to model memory retention, discuss complexity of the problem, and propose a greedy heuristic to solve the problem. We demonstrate the effectiveness of our approach on three publicly available datasets and by simulating oracles with various memory strengths. We show that the activity recognition accuracy ranges from 21% to 97% depending on memory strength, query budget, and difficulty of the machine learning task. Our results also indicate that EMMA achieves an accuracy level that is, on average, 13.5% higher than the case when only informativeness of the sensor data is considered for active learning. Additionally, we show that the performance of our approach is at most 20% less than experimental upper-bound and up to 80% higher than experimental lower-bound. We observe that mindful active learning is most beneficial when query budget is small and/or oracle's memory is weak, thus emphasizing contributions of our work in human-centered mobile health settings and for elderly with cognitive impairments.

    • #4697
      ActiveHNE: Active Heterogeneous Network Embedding
      Xia Chen, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Zhao Li, Xiangliang Zhang
      Details | PDF
      Active Learning 1

      Heterogeneous network embedding (HNE) is a challenging task due to the diverse node types and/or diverse relationships between nodes. Existing HNE methods are typically unsupervised. To maximize the profit of utilizing the rare and valuable supervised information in HNEs, we develop a novel Active Heterogeneous Network Embedding (ActiveHNE) framework, which includes two components: Discriminative Heterogeneous Network Embedding (DHNE) and Active Query in Heterogeneous Networks (AQHN). In DHNE, we introduce a novel semi-supervised heterogeneous network embedding method based on graph convolutional neural network. In AQHN, we first introduce three active selection strategies based on uncertainty and representativeness, and then derive a batch selection method that assembles these strategies using a multi-armed bandit mechanism. ActiveHNE aims at improving the performance of HNE by feeding the most valuable supervision obtained by AQHN into DHNE. Experiments on public datasets demonstrate the effectiveness of ActiveHNE and its advantage on reducing the query cost.

    • #2628
      Deep Active Learning with Adaptive Acquisition
      Manuel Haussmann, Fred Hamprecht, Melih Kandemir
      Details | PDF
      Active Learning 1

      Model selection is treated as a standard performance boosting step in many machine learning applications. Once all other properties of a learning problem are fixed, the model is selected by grid search on a held-out validation set. This is strictly inapplicable to active learning. Within the standardized workflow, the acquisition function is chosen among available heuristics a priori, and its success is observed only after the labeling budget is already exhausted. More importantly, none of the earlier studies reports an acquisition heuristic that is consistently successful enough to stand out as the single best choice. We present a method to break this vicious circle by defining the acquisition function as a learning predictor and training it by reinforcement feedback collected from each labeling round. As active learning is a scarce data regime, we bootstrap from a well-known heuristic that filters the bulk of data points on which all heuristics would agree, and learn a policy to warp the top portion of this ranking in the most beneficial way for the character of a specific data distribution. Our system consists of a Bayesian neural net (the predictor), a bootstrap acquisition function, a probabilistic state definition, and another Bayesian policy network that can effectively incorporate this input distribution. We observe on three benchmark data sets that our method always manages either to invent a new superior acquisition function or to adapt itself to the a priori unknown best performing heuristic for each specific data set.
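
    As a rough illustration of the uncertainty-driven query selection behind paper #392 above (the paper's BNN-to-GP approximation itself is not reproduced), the sketch below ranks unlabeled pool points by the posterior variance of a plain RBF-kernel Gaussian process; all data and hyperparameters are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_variance(X_train, X_pool, noise=1e-2, lengthscale=1.0):
    """Predictive variance of a GP at pool points, given training inputs only.

    The variance does not depend on the training targets, so it can be
    recomputed cheaply after each new point is added, without retraining.
    """
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_pool, X_train, lengthscale)            # (n_pool, n_train)
    solved = np.linalg.solve(K, Ks.T)                        # K^{-1} Ks^T
    var = 1.0 - np.sum(Ks * solved.T, axis=1)                # diag(Kss - Ks K^{-1} Ks^T)
    return np.maximum(var, 0.0)

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(20, 2))    # already-labeled design points
X_pool = rng.uniform(-3, 3, size=(200, 2))    # unlabeled candidates

var = gp_posterior_variance(X_train, X_pool)
query = int(var.argmax())                     # most uncertain candidate to label next
print("query pool index:", query, "variance:", round(float(var[query]), 4))
```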

    Tuesday 13 10:50 - 12:35 Panel (K)

    Chair: Marie desJardins
  • Diversity in AI: Where Are We and Where Are We Headed
    Marie desJardins
    Panel
  • Tuesday 13 10:50 - 12:35 ML|DL - Deep Learning 1 (L)

    Chair: Tengfei Ma
    • #1601
      FakeTables: Using GANs to Generate Functional Dependency Preserving Tables with Bounded Real Data
      Haipeng Chen, Sushil Jajodia, Jing Liu, Noseong Park, Vadim Sokolov, V. S. Subrahmanian
      Details | PDF
      Deep Learning 1

      In many cases, an organization wishes to release some data, but is restricted in the amount of data to be released due to legal, privacy and other concerns. For instance, the US Census Bureau releases only 1% of its table of records every year, along with statistics about the entire table. However, the machine learning (ML) models trained on the released sub-table are usually sub-optimal. In this paper, our goal is to find a way to augment the sub-table by generating a synthetic table from the released sub-table, under the constraints that the generated synthetic table (i) has similar statistics as the entire table, and (ii) preserves the functional dependencies of the released sub-table. We propose a novel generative adversarial network framework called ITS-GAN, where both the generator and the discriminator are specifically designed to satisfy these two constraints. By evaluating the augmentation performance of ITS-GAN on two representative datasets, the US Census Bureau data and US Bureau of Transportation Statistics (BTS) data, we show that ITS-GAN yields high quality classification results, and significantly outperforms various state-of-the-art data augmentation approaches.

    • #1954
      Boundary Perception Guidance: A Scribble-Supervised Semantic Segmentation Approach
      Bin Wang, Guojun Qi, Sheng Tang, Tianzhu Zhang, Yunchao Wei, Linghui Li, Yongdong Zhang
      Details | PDF
      Deep Learning 1

      Semantic segmentation suffers from the fact that densely annotated masks are expensive to obtain. To tackle this problem, we aim at learning to segment by only leveraging scribbles that are much easier to collect for supervision. To fully explore the limited pixel-level annotations from scribbles, we present a novel Boundary Perception Guidance (BPG) approach, which consists of two basic components, i.e., prediction refinement and boundary regression. Specifically, the prediction refinement progressively makes a better segmentation by adopting an iterative upsampling and a semantic feature enhancement strategy. In the boundary regression, we employ class-agnostic edge maps for supervision to effectively guide the segmentation network in localizing the boundaries between different semantic regions, leading to finer-grained representations of the feature maps for semantic segmentation. The experimental results on PASCAL VOC 2012 demonstrate that the proposed BPG achieves an mIoU of 73.2% without a fully connected Conditional Random Field (CRF) and 76.0% with CRF, setting the new state of the art in the literature.

    • #2829
      Open-Ended Long-Form Video Question Answering via Hierarchical Convolutional Self-Attention Networks
      Zhu Zhang, Zhou Zhao, Zhijie Lin, Jingkuan Song, Xiaofei He
      Details | PDF
      Deep Learning 1

      Open-ended video question answering aims to automatically generate a natural-language answer from referenced video contents according to the given question. Currently, most existing approaches focus on short-form video question answering with multi-modal recurrent encoder-decoder networks. Although these works have achieved promising performance, they may still be ineffective when applied to long-form video question answering due to the lack of long-range dependency modeling and their heavy computational cost. To tackle these problems, we propose a fast hierarchical convolutional self-attention encoder-decoder network. Concretely, we first develop a hierarchical convolutional self-attention encoder to efficiently model long-form video contents, which builds a hierarchical structure for video sequences and captures question-aware long-range dependencies from the video context. We then devise a multi-scale attentive decoder to incorporate multi-layer video representations for answer generation, which avoids the information loss of relying only on the top encoder layer. Extensive experiments show the effectiveness and efficiency of our method.

    • #3298
      Attribute Aware Pooling for Pedestrian Attribute Recognition
      Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu
      Details | PDF
      Deep Learning 1

      This paper expands the strength of deep convolutional neural networks (CNNs) to the pedestrian attribute recognition problem by devising a novel attribute aware pooling algorithm. Existing vanilla CNNs cannot be straightforwardly applied to handle multi-attribute data because of the larger label space as well as the attribute entanglement and correlations. We tackle these challenges, which hamper the development of CNNs for multi-attribute classification, by fully exploiting the correlation between different attributes. A multi-branch architecture is adopted to focus on attributes in different regions. Besides the prediction based on each branch itself, context information from each branch is employed for the decision as well. The attribute aware pooling is developed to integrate both kinds of information. Therefore, attributes which are indistinct or tangled with others can be accurately recognized by exploiting the context information. Experiments on benchmark datasets demonstrate that the proposed pooling method appropriately explores and exploits the correlations between attributes for pedestrian attribute recognition.

    • #365
      MUSICAL: Multi-Scale Image Contextual Attention Learning for Inpainting
      Ning Wang, Jingyuan Li, Lefei Zhang, Bo Du
      Details | PDF
      Deep Learning 1

      We study the task of image inpainting, where an image with a missing region is recovered with plausible content. Recent approaches based on deep neural networks have exhibited potential for producing elegant detail and are able to take advantage of background information, which gives texture cues about the missing region in the image. These methods often perform pixel/patch level replacement on the deep feature maps of the missing region and therefore enable the generated content to have a texture similar to the background region. However, this kind of replacement is a local strategy and often performs poorly when the background information is misleading. To this end, in this study, we propose to use a multi-scale image contextual attention learning (MUSICAL) strategy that helps to flexibly handle richer background information while avoiding its misuse. However, such a strategy may not be promising for generating content of reasonable style. To address this issue, both a style loss and a perceptual loss are introduced into the proposed method to achieve style consistency of the generated image. Furthermore, we have also noticed that replacing some of the downsampling layers in the baseline network with stride-1 dilated convolution layers is beneficial for producing sharper and more fine-detailed results. Experiments on the Paris Street View, Places, and CelebA datasets indicate the superior performance of our approach compared to the state of the art. [An illustrative style/perceptual loss sketch follows this session's paper list.]

    • #1516
      Neurons Merging Layer: Towards Progressive Redundancy Reduction for Deep Supervised Hashing
      Chaoyou Fu, Liangchen Song, Xiang Wu, Guoli Wang, Ran He
      Details | PDF
      Deep Learning 1

      Deep supervised hashing has become an active topic in information retrieval. It generates hashing bits by the output neurons of a deep hashing network. During binary discretization, there often exists much redundancy between hashing bits that degrades retrieval performance in terms of both storage and accuracy. This paper proposes a simple yet effective Neurons Merging Layer (NMLayer) for deep supervised hashing. A graph is constructed to represent the redundancy relationship between hashing bits that is used to guide the learning of a hashing network. Specifically, it is dynamically learned by a novel mechanism defined in our active and frozen phases. According to the learned relationship, the NMLayer merges the redundant neurons together to balance the importance of each output neuron. Moreover, multiple NMLayers are progressively trained for a deep hashing network to learn a more compact hashing code from a long redundant code. Extensive experiments on four datasets demonstrate that our proposed method outperforms state-of-the-art hashing methods.

    • #1498
      Deeply-learned Hybrid Representations for Facial Age Estimation
      Zichang Tan, Yang Yang, Jun Wan, Guodong Guo, Stan Z. Li
      Details | PDF
      Deep Learning 1

      In this paper, we propose a novel unified network named Deep Hybrid-Aligned Architecture for facial age estimation. It contains global, local and global-local branches. They are jointly optimized and thus can capture multiple types of features with complementary information. In each branch, we employ a separate loss for each sub-network to extract the independent features and use a recurrent fusion to explore correlations among those region features. Considering that pose variations may lead to misalignment in different regions, we design an Aligned Region Pooling operation to generate aligned region features. Moreover, a new large age dataset named Web-FaceAge containing more than 120K samples is collected under diverse scenes and spanning a large age range. Experiments on five age benchmark datasets, including Web-FaceAge, Morph, FG-NET, CACD and Chalearn LAP 2015, show that the proposed method outperforms the state-of-the-art approaches significantly.
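
    As a rough illustration of the style and perceptual losses mentioned in paper #365 (MUSICAL) above (not the authors' network or attention module), the sketch below computes the two losses from feature maps in the standard way, as a mean squared feature distance and a mean squared Gram-matrix distance; the toy feature maps stand in for activations of a fixed pretrained network.

```python
import numpy as np

def gram_matrix(feat):
    """feat: feature map of shape (C, H, W) -> normalized (C, C) Gram matrix."""
    C, H, W = feat.shape
    F = feat.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def perceptual_loss(feat_gen, feat_ref):
    """Mean squared distance between feature maps of generated and reference images."""
    return float(np.mean((feat_gen - feat_ref) ** 2))

def style_loss(feat_gen, feat_ref):
    """Mean squared distance between Gram matrices (feature co-activation statistics)."""
    return float(np.mean((gram_matrix(feat_gen) - gram_matrix(feat_ref)) ** 2))

# Toy "features": in practice these would come from a fixed pretrained network
# evaluated on the inpainted image and on the ground-truth image.
rng = np.random.default_rng(0)
feat_gen = rng.normal(size=(64, 32, 32))
feat_ref = feat_gen + 0.1 * rng.normal(size=(64, 32, 32))

print("perceptual loss:", round(perceptual_loss(feat_gen, feat_ref), 4))
print("style loss:     ", round(style_loss(feat_gen, feat_ref), 6))
```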

    Tuesday 13 10:50 - 12:35 ML|RS - Recommender Systems 1 (2701-2702)

    Chair: William K. Cheung
    • #663
      Matrix Completion in the Unit Hypercube via Structured Matrix Factorization
      Emanuele Bugliarello, Swayambhoo Jain, Vineeth Rakesh
      Details | PDF
      Recommender Systems 1

      Several complex tasks that arise in organizations can be simplified by mapping them into a matrix completion problem. In this paper, we address a key challenge faced by our company: predicting the efficiency of artists in rendering visual effects (VFX) in film shots. We tackle this challenge by using a two-fold approach: first, we transform this task into a constrained matrix completion problem with entries bounded in the unit interval [0,1]; second, we propose two novel matrix factorization models that leverage our knowledge of the VFX environment. Our first approach, expertise matrix factorization (EMF), is an interpretable method that structures the latent factors as weighted user-item interplay. The second one, survival matrix factorization (SMF), is instead a probabilistic model for the underlying process defining employees' efficiencies. We show the effectiveness of our proposed models by extensive numerical tests on our VFX dataset and two additional datasets with values that are also bounded in the [0,1] interval.

    • #704
      Modeling Multi-Purpose Sessions for Next-Item Recommendations via Mixture-Channel Purpose Routing Networks
      Shoujin Wang, Liang Hu, Yan Wang, Quan Z. Sheng, Mehmet Orgun, Longbing Cao
      Details | PDF
      Recommender Systems 1

      A session-based recommender system (SBRS) suggests the next item by modeling the dependencies between items in a session. Most existing SBRSs assume that the items inside a session are associated with one (implicit) purpose. However, this may not always be true in reality, and a session may often consist of multiple subsets of items for different purposes (e.g., breakfast and decoration). Specifically, items (e.g., bread and milk) in a subset have strong purpose-specific dependencies, whereas items (e.g., bread and vase) from different subsets have much weaker or even no dependencies due to the difference of purposes. Therefore, we propose a mixture-channel model to accommodate the multi-purpose item subsets for more precisely representing a session. Filling gaps in existing SBRSs, this model recommends more diverse items to satisfy different purposes. Accordingly, we design effective mixture-channel purpose routing networks (MCPRN) with a purpose routing network to detect the purposes of each item and assign it to the corresponding channels. Moreover, a purpose-specific recurrent network is devised to model the dependencies between items within each channel for a specific purpose. The experimental results show the superiority of MCPRN over the state-of-the-art methods in terms of both recommendation accuracy and diversity.

    • #1488
      A Review-Driven Neural Model for Sequential Recommendation
      Chenliang Li, Xichuan Niu, Xiangyang Luo, Zhenzhong Chen, Cong Quan
      Details | PDF
      Recommender Systems 1

      Writing a review for a purchased item is a unique channel to express a user's opinion in E-Commerce. Recently, many deep learning based solutions have been proposed that exploit user reviews for rating prediction. In contrast, there have been few attempts to enlist the semantic signals contained in user reviews for the task of collaborative filtering. In this paper, we propose a novel review-driven neural sequential recommendation model (named RNS) by considering a user's intrinsic preference (long-term) and sequential patterns (short-term). In detail, RNS is devised to encode each user or item with aspect-aware representations extracted from the reviews. Given a sequence of historically purchased items for a user, we devise a novel hierarchical attention-over-attention mechanism to capture sequential patterns at both the union level and the individual level. Extensive experiments on three real-world datasets from different domains demonstrate that RNS obtains significant performance improvements over state-of-the-art sequential recommendation models.

    • #3150
      PD-GAN: Adversarial Learning for Personalized Diversity-Promoting Recommendation
      Qiong Wu, Yong Liu, Chunyan Miao, Binqiang Zhao, Yin Zhao, Lu Guan
      Details | PDF
      Recommender Systems 1

      This paper proposes Personalized Diversity-promoting GAN (PD-GAN), a novel recommendation model to generate diverse, yet relevant recommendations. Specifically, for each user, a generator recommends a set of diverse and relevant items by sequentially sampling from a personalized Determinantal Point Process (DPP) kernel matrix. This kernel matrix is constructed by two learnable components: the general co-occurrence of diverse items and the user's personal preference to items. To learn the first component, we propose a novel pairwise learning paradigm using training pairs, and each training pair consists of a set of diverse items and a set of similar items randomly sampled from the observed data of all users. The second component is learnt through adversarial training against a discriminator which strives to distinguish between recommended items and the ground-truth sets randomly sampled from the observed data of the target user. Experimental results show that PD-GAN is superior to generate recommendations that are both diverse and relevant.

    • #4290
      DARec: Deep Domain Adaptation for Cross-Domain Recommendation via Transferring Rating Patterns
      Feng Yuan, Lina Yao, Boualem Benatallah
      Details | PDF
      Recommender Systems 1

      Cross-domain recommendation has long been one of the major topics in recommender systems. Recently, various deep models have been proposed to transfer the learned knowledge across domains, but most of them focus on extracting abstract transferable features from auxiliary content, e.g., images and review texts, while the patterns in the rating matrix itself are rarely touched. In this work, inspired by the concept of domain adaptation, we propose a deep domain adaptation model (DARec) that is capable of extracting and transferring patterns from rating matrices alone, without relying on any auxiliary information. We empirically demonstrate on public datasets that our method achieves the best performance among several state-of-the-art cross-domain recommendation models. [An illustrative cross-domain factorization sketch follows this session's paper list.]

    • #5182
      STAR-GCN: Stacked and Reconstructed Graph Convolutional Networks for Recommender Systems
      Jiani Zhang, Xingjian Shi, Shenglin Zhao, Irwin King
      Details | PDF
      Recommender Systems 1

      We propose a new STAcked and Reconstructed Graph Convolutional Networks (STAR-GCN) architecture to learn node representations for boosting the performance in recommender systems, especially in the cold start scenario. STAR-GCN employs a stack of GCN encoder-decoders combined with intermediate supervision to improve the final prediction performance. Unlike the graph convolutional matrix completion model with one-hot encoding node inputs, our STAR-GCN learns low-dimensional user and item latent factors as the input to restrain the model space complexity. Moreover, our STAR-GCN can produce node embeddings for new nodes by reconstructing masked input node embeddings, which essentially tackles the cold start problem. Furthermore, we discover a label leakage issue when training GCN-based models for link prediction tasks and propose a training strategy to avoid the issue. Empirical results on multiple rating prediction benchmarks demonstrate our model achieves state-of-the-art performance in four out of five real-world datasets and significant improvements in predicting ratings in the cold start scenario. The code implementation is available in https://github.com/jennyzhang0215/STAR-GCN.

    • #6424
      Disparity-preserved Deep Cross-platform Association for Cross-platform Video Recommendation
      Shengze Yu, Xin Wang, Wenwu Zhu, Peng Cui, Jingdong Wang
      Details | PDF
      Recommender Systems 1

      Cross-platform recommendation aims to improve recommendation accuracy by associating information from different platforms. Existing cross-platform recommendation approaches assume all cross-platform information to be consistent with each other and alignable. However, two challenges remain unsolved: i) there exist inconsistencies in cross-platform association due to platform-specific disparity, and ii) data from distinct platforms may have different semantic granularities. In this paper, we propose a cross-platform association model for cross-platform video recommendation, i.e., Disparity-preserved Deep Cross-platform Association (DCA), taking platform-specific disparity and granularity differences into consideration. The proposed DCA model employs a partially-connected multi-modal autoencoder, which is capable of explicitly capturing platform-specific information, as well as utilizing nonlinear mapping functions to handle granularity differences. We then present a cross-platform video recommendation approach based on the proposed DCA model. Extensive experiments on a real-world dataset demonstrate that the proposed DCA model significantly outperforms existing cross-platform recommendation methods in terms of various evaluation metrics.
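
    As a rough illustration of transferring rating patterns across domains, the setting of paper #4290 (DARec) above, the sketch below is deliberately not DARec: it is a minimal baseline that jointly factorizes a dense source-domain and a sparse target-domain rating matrix with shared user factors and domain-specific item factors; all data, the rank, and the learning rate are invented.

```python
import numpy as np

def shared_user_mf(R_src, R_tgt, rank=8, lr=0.005, reg=0.05, epochs=300, seed=0):
    """Jointly factorize two rating matrices that share the same users.

    R_src, R_tgt : (n_users x n_items) arrays, with 0 marking unobserved ratings.
    The user factors U are shared across domains, so rating patterns learned in
    the denser source domain support predictions in the sparser target domain.
    """
    rng = np.random.default_rng(seed)
    n_users = R_src.shape[0]
    U = 0.1 * rng.normal(size=(n_users, rank))
    V_src = 0.1 * rng.normal(size=(R_src.shape[1], rank))
    V_tgt = 0.1 * rng.normal(size=(R_tgt.shape[1], rank))

    for _ in range(epochs):
        for R, V in ((R_src, V_src), (R_tgt, V_tgt)):
            mask = R > 0
            err = mask * (R - U @ V.T)        # errors on observed entries only
            dU = err @ V - reg * U            # gradient for the shared user factors
            dV = err.T @ U - reg * V          # gradient for domain-specific item factors
            U += lr * dU
            V += lr * dV
    return U, V_src, V_tgt

# Invented toy data: a dense source domain and a sparse target domain.
rng = np.random.default_rng(1)
R_src = (rng.random((50, 40)) < 0.5) * rng.integers(1, 6, (50, 40))
R_tgt = (rng.random((50, 30)) < 0.1) * rng.integers(1, 6, (50, 30))

U, V_src, V_tgt = shared_user_mf(R_src, R_tgt)
print("predicted target-domain ratings for user 0:", np.round(U[0] @ V_tgt.T, 2)[:5])
```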

    Tuesday 13 10:50 - 12:35 AMS|NG - Noncooperative Games 1 (2703-2704)

    Chair: Fei Fang
    • #2776
      Be a Leader or Become a Follower: The Strategy to Commit to with Multiple Leaders
      Matteo Castiglioni, Alberto Marchesi, Nicola Gatti
      Details | PDF
      Noncooperative Games 1

      We study the problem of computing correlated strategies to commit to in games with multiple leaders and followers. To the best of our knowledge, this problem is widely unexplored so far, as the majority of the works in the literature focus on games with a single leader and one or more followers. The fundamental ingredient of our model is that a leader can decide whether to participate in the commitment or to defect from it by taking on the role of follower. This introduces a preliminary stage where, before the underlying game is played, the leaders make their decisions to reach an agreement on the correlated strategy to commit to. We distinguish three solution concepts on the basis of the constraints that they enforce on the agreement reached by the leaders. Then, we provide a comprehensive study of the properties of our solution concepts, in terms of existence, relation with other solution concepts, and computational complexity.

    • #2814
      Civic Crowdfunding for Agents with Negative Valuations and Agents with Asymmetric Beliefs
      Sankarshan Damle, Moin Hussain Moti, Praphul Chandra, Sujit Gujar
      Details | PDF
      Noncooperative Games 1

      In the last decade, civic crowdfunding has proved to be effective in generating funds for the provision of public projects. However, the existing literature deals only with citizens with positive valuations and symmetric beliefs towards the project's provision. In this work, we present novel mechanisms which break these two barriers, i.e., mechanisms which incorporate negative valuations and asymmetric beliefs, independently. For negative valuations, we present a methodology for converting existing mechanisms into mechanisms that incorporate agents with negative valuations. In particular, we adapt the existing PPR and PPS mechanisms to present novel PPRN and PPSN mechanisms, which incentivize strategic agents to contribute to the project based on their true preference. With respect to asymmetric beliefs, we propose a reward scheme, Belief Based Reward (BBR), based on the Robust Bayesian Truth Serum mechanism. With BBR, we propose a general mechanism for civic crowdfunding which incorporates asymmetric agents. We leverage PPR and PPS to present PPRx and PPSx. We prove that in PPRx and PPSx, agents with greater belief towards the project's provision contribute more than agents with lesser belief. Further, we also show that contributions are such that the project is provisioned at equilibrium.

    • #4500
      Network Formation under Random Attack and Probabilistic Spread
      Yu Chen, Shahin Jabbari, Michael Kearns, Sanjeev Khanna, Jamie Morgenstern
      Details | PDF
      Noncooperative Games 1

      We study a network formation game where agents receive benefits by forming connections to other agents but also incur both direct and indirect costs from the formed connections. Specifically, once the agents have purchased their connections, an attack starts at a randomly chosen vertex in the network and spreads according to the independent cascade model with a fixed probability, destroying any infected agents. The utility or welfare of an agent in our game is defined to be the expected size of the agent's connected component post-attack minus her expenditure in forming connections. Our goal is to understand the properties of the equilibrium networks formed in this game. Our first result concerns the edge density of equilibrium networks. A network connection increases both the likelihood of remaining connected to other agents after an attack and the likelihood of getting infected by a cascading spread of infection. We show that the latter concern primarily prevails and any equilibrium network in our game contains only $O(n\log n)$ edges where $n$ denotes the number of agents. On the other hand, there are equilibrium networks that contain $\Omega(n)$ edges, showing that our edge density bound is tight up to a logarithmic factor. Our second result shows that the presence of an attack and its spread through a cascade does not significantly lower social welfare as long as the network is not too dense. We show that any non-trivial equilibrium network with $O(n)$ edges has $\Theta(n^2)$ social welfare, asymptotically similar to the social welfare guarantee in the game without any attacks. [An illustrative cascade-simulation sketch follows this session's paper list.]

    • #4515
      Equilibrium Characterization for Data Acquisition Games
      Jinshuo Dong, Hadi Elzayn, Shahin Jabbari, Michael Kearns, Zachary Schutzman
      Details | PDF
      Noncooperative Games 1

      We study a game between two firms which each provide a service based on machine learning.  The firms are presented with the opportunity to purchase a new corpus of data, which will allow them to potentially improve the quality of their products. The firms can decide whether or not they want to buy the data, as well as which learning model to build on that data. We demonstrate a reduction from this potentially complicated action space  to a one-shot, two-action game in which each firm only decides whether or not to buy the data. The game admits several regimes which depend on the relative strength of the two firms at the outset and the price at which the data is being offered. We analyze the game's Nash equilibria in all parameter regimes and demonstrate that, in expectation, the outcome of the game is that the initially stronger firm's market position weakens whereas the initially weaker firm's market position becomes stronger. Finally, we consider the perspective of the users of the service and demonstrate that the expected outcome at equilibrium is not the one which maximizes the welfare of the consumers.

    • #4907
      Compact Representation of Value Function in Partially Observable Stochastic Games
      Karel Horák, Branislav Bošanský, Christopher Kiekintveld, Charles Kamhoua
      Details | PDF
      Noncooperative Games 1

      Value methods for solving stochastic games with partial observability model the uncertainty of the players as a probability distribution over possible states, where the dimension of the belief space is the number of states. For many practical problems, there are exponentially many states, which causes scalability problems. We propose an abstraction technique that addresses this curse of dimensionality by projecting the high-dimensional beliefs onto characteristic vectors of significantly lower dimension (e.g., marginal probabilities). Our main contributions are (1) a novel compact representation of the uncertainty in partially observable stochastic games and (2) a novel algorithm using this representation that is based on existing state-of-the-art algorithms for solving stochastic games with partial observability. Experimental evaluation confirms that the new algorithm using the compact representation dramatically increases scalability compared to the state of the art.
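      The projection idea can be illustrated with a small sketch: a full belief over a factored state space is compressed to the marginal probability of each state feature, shrinking the dimension from 2^m to m. The binary feature encoding below is an assumption for illustration, not the paper's exact characteristic vectors.

```python
# Hedged sketch of belief abstraction: project a high-dimensional belief over
# joint states onto low-dimensional marginals. Names and sizes are illustrative.
import itertools
import numpy as np

n_features = 3                                  # joint state space has 2**3 = 8 states
states = list(itertools.product([0, 1], repeat=n_features))

rng = np.random.default_rng(0)
belief = rng.dirichlet(np.ones(len(states)))    # a full belief: one probability per joint state

def to_marginals(belief, states, n_features):
    """Characteristic vector: P(feature_i = 1) for each feature."""
    marginals = np.zeros(n_features)
    for prob, state in zip(belief, states):
        marginals += prob * np.array(state)
    return marginals

print(to_marginals(belief, states, n_features))  # 3 numbers instead of 8
```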

    • #6310
      Temporal Information Design in Contests
      Priel Levy, David Sarne, Yonatan Aumann
      Details | PDF
      Noncooperative Games 1

      We study temporal information design in contests, wherein the organizer may, possibly incrementally, disclose information about the participation and performance of some contestants to other (later) contestants. We show that such incremental disclosure can increase the organizer's profit. The expected profit, however, depends on the exact information disclosure structure, and the optimal structure depends on the parameters of the problem. We provide a game-theoretic analysis of such information disclosure schemes as they apply to two common models of contests: (a) simple contests, wherein contestants' decisions concern only their participation; and (b) Tullock contests, wherein contestants choose the effort levels to expend. For each of these we analyze and characterize the equilibrium strategy, and exhibit the potential benefits of information design. 
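      For reference, the symmetric Tullock benchmark such analyses typically start from: with n identical contestants, prize V, success probability x_i / sum_j x_j, and linear effort cost, the equilibrium effort is x* = (n-1)V/n^2. The snippet below checks this numerically; it is background, not the paper's information-design result.

```python
# Hedged sketch: closed-form symmetric Tullock equilibrium effort and a
# numerical best-response check. The numbers are purely illustrative.
import numpy as np

def payoff(x_i, others_sum, V):
    total = x_i + others_sum
    win_prob = x_i / total if total > 0 else 0.0
    return win_prob * V - x_i

n, V = 4, 1.0
x_star = (n - 1) * V / n**2                     # closed-form symmetric equilibrium effort

# Verify that x_star is a best response when the other n-1 players also play x_star.
grid = np.linspace(0.0, V, 10001)
best = grid[np.argmax([payoff(x, (n - 1) * x_star, V) for x in grid])]
print(x_star, best)                             # both should be 0.1875 here
```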

    • #886
      Possibilistic Games with Incomplete Information
      Nahla Ben Amor, Helene Fargier, Régis Sabbadin, Meriem Trabelsi
      Details | PDF
      Noncooperative Games 1

      Bayesian games offer a suitable framework for games where the utility degrees are additive in essence. This approach nevertheless does not apply to ordinal games, where the utility degrees capture no more than a ranking, nor to situations of decision under qualitative uncertainty. This paper proposes a representation framework for ordinal games under possibilistic incomplete information (π-games) and extends the fundamental notion of Nash equilibrium (NE) to this framework. We show that deciding whether an NE exists is a difficult problem (NP-hard) and propose a Mixed Integer Linear Programming (MILP) encoding. Experiments on variants of the GAMUT problems confirm the feasibility of this approach.

    Tuesday 13 10:50 - 12:35 HAI|PUM - Personalization and User Modeling (2601-2602)

    Chair: Li Chen
    • #254
      Deep Adversarial Social Recommendation
      Wenqi Fan, Tyler Derr, Yao Ma, Jianping Wang, Jiliang Tang, Qing Li
      Details | PDF
      Personalization and User Modeling

      Recent years have witnessed rapid developments in social recommendation techniques for improving the performance of recommender systems, due to the growing influence of social networks on our daily life. The majority of existing social recommendation methods unify user representation for the user-item interactions (item domain) and user-user connections (social domain). However, this may restrain user representation learning in each respective domain, since users behave and interact differently in the two domains, which makes their representations heterogeneous. In addition, most traditional recommender systems cannot efficiently optimize these objectives, since they utilize a negative sampling technique that is unable to provide sufficiently informative guidance during the optimization process. In this paper, to address the aforementioned challenges, we propose a novel deep adversarial social recommendation framework DASO. It adopts a bidirectional mapping method to transfer users' information between the social domain and the item domain using adversarial learning. Comprehensive experiments on two real-world datasets show the effectiveness of the proposed framework.

    • #1863
      Minimizing Time-to-Rank: A Learning and Recommendation Approach
      Haoming Li, Sujoy Sikdar, Rohit Vaish, Junming Wang, Lirong Xia, Chaonan Ye
      Details | PDF
      Personalization and User Modeling

      Consider the following problem faced by an online voting platform: A user is provided with a list of alternatives, and is asked to rank them in order of preference using only drag-and-drop operations. The platform's goal is to recommend an initial ranking that minimizes the time spent by the user in arriving at her desired ranking. We develop the first optimization framework to address this problem, and make theoretical as well as practical contributions. On the practical side, our experiments on the Amazon Mechanical Turk platform provide two interesting insights about user behavior: First, that users' ranking strategies closely resemble selection or insertion sort, and second, that the time taken for a drag-and-drop operation depends linearly on the number of positions moved. These insights directly motivate our theoretical model of the optimization problem. We show that computing an optimal recommendation is NP-hard, and provide exact and approximation algorithms for a variety of special cases of the problem. Experimental evaluation on MTurk shows that, compared to a random recommendation strategy, the proposed approach reduces the (average) time-to-rank by up to 50%.
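      A small sketch of the cost model suggested by the MTurk insights above: a selection/insertion-sort-like user fixes positions top-down, and each drag costs a + b * (positions moved). The parameters a and b and the simulation itself are illustrative assumptions, not the paper's exact optimization objective.

```python
# Hedged sketch of the time-to-rank cost model: drag-and-drop time is linear
# in the number of positions moved, and the user behaves like selection/insertion sort.
def time_to_rank(initial, target, a=1.0, b=0.2):
    """Simulated time to turn `initial` into `target`, each drag costing a + b * distance."""
    current = list(initial)
    total = 0.0
    for i, item in enumerate(target):
        j = current.index(item)                  # where the desired item currently sits
        if j != i:
            current.insert(i, current.pop(j))    # drag it up to position i
            total += a + b * (j - i)
    return total

# One drag of 2 positions: cost a + 2b = 1.4 with the assumed parameters.
print(time_to_rank(["B", "C", "A", "D"], ["A", "B", "C", "D"]))
```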

    • #2398
      DeepAPF: Deep Attentive Probabilistic Factorization for Multi-site Video Recommendation
      Huan Yan, Xiangning Chen, Chen Gao, Yong Li, Depeng Jin
      Details | PDF
      Personalization and User Modeling

      Existing web video systems recommend videos according to users' viewing history on their own websites. However, since many users watch videos on multiple websites, this approach fails to capture these users' interests across sites. In this paper, we investigate user viewing behavior on multiple sites based on a large-scale real dataset. We find that user interests comprise a cross-site consistent part and a site-specific part of differing importance. Existing linear matrix factorization recommendation models are limited in modeling such complicated interactions. Thus, we propose a Deep Attentive Probabilistic Factorization (DeepAPF) model that exploits deep learning to approximate such complex user-video interactions. DeepAPF captures both cross-site common interests and site-specific interests with non-uniform importance weights learned by an attentional network. Extensive experiments show that our proposed model outperforms three state-of-the-art baselines by 17.62%, 7.9% and 8.1%, respectively. Our study provides insight into integrating user viewing records from multiple sites via a trusted third party, which yields mutual benefits in video recommendation.

    • #3165
      Personalized Multimedia Item and Key Frame Recommendation
      Le Wu, Lei Chen, Yonghui Yang, Richang Hong, Yong Ge, Xing Xie, Meng Wang
      Details | PDF
      Personalization and User Modeling

      When recommending or advertising items to users, an emerging trend is to present each multimedia item with a key frame image (e.g., the poster of a movie). As each multimedia item can be represented as multiple fine-grained visual images (e.g., related images of the movie), personalized key frame recommendation is necessary in these applications to appeal to users' unique visual preferences. However, previous personalized key frame recommendation models relied on users' fine-grained image behavior on multimedia items (e.g., user-image interaction behavior), which is often not available in real scenarios. In this paper, we study the general problem of joint multimedia item and key frame recommendation in the absence of this fine-grained user-image behavior. We argue that the key challenge of this problem lies in discovering users' visual profiles for key frame recommendation, as most recommendation models would fail without any fine-grained user image behavior. To tackle this challenge, we leverage users' item behavior by projecting users (items) into two latent spaces: a collaborative latent space and a visual latent space. We further design a model to discern both the collaborative and visual dimensions of users, and model how users make item preference decisions from these two spaces. As a result, the learned user visual profiles can be directly applied to key frame recommendation. Finally, experimental results on a real-world dataset clearly show the effectiveness of our proposed model on the two recommendation tasks.

    • #3571
      Discrete Trust-aware Matrix Factorization for Fast Recommendation
      Guibing Guo, Enneng Yang, Li Shen, Xiaochun Yang, Xiaodong He
      Details | PDF
      Personalization and User Modeling

      Trust-aware recommender systems have received much attention recently for their ability to capture the influence among connected users. However, they suffer from an efficiency issue due to the large amount of data and time-consuming real-valued operations. Although existing discrete collaborative filtering may alleviate this issue to some extent, it is unable to accommodate social influence. In this paper we propose a discrete trust-aware matrix factorization (DTMF) model to take dual advantage of both social relations and discrete techniques for fast recommendation. Specifically, we map the latent representations of users and items into a joint Hamming space by recovering the rating and trust interactions between users and items. We adopt a sophisticated discrete coordinate descent (DCD) approach to optimize our proposed model. In addition, experiments on two real-world datasets demonstrate the superiority of our approach against other state-of-the-art approaches in terms of ranking accuracy and efficiency.

    • #3677
      An Input-aware Factorization Machine for Sparse Prediction
      Yantao Yu, Zhen Wang, Bo Yuan
      Details | PDF
      Personalization and User Modeling

      Factorization machines (FMs) are a class of general predictors that work effectively with sparse data, representing features using factorized parameters and weights. However, the accuracy of FMs can be adversely affected by the fixed representation trained for each feature, as the same feature is usually not equally predictive and useful in different instances. In fact, the inaccurate representation of features may even introduce noise and degrade the overall performance. In this work, we improve FMs by explicitly considering the impact of an individual input upon the representation of features. We propose a novel model named Input-aware Factorization Machine (IFM), which learns a unique input-aware factor for the same feature in different instances via a neural network. Comprehensive experiments on three real-world recommendation datasets are used to demonstrate the effectiveness and mechanism of IFM. Empirical results indicate that IFM is significantly better than the standard FM model and consistently outperforms four state-of-the-art deep learning based methods.
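      For context, a minimal sketch of the standard FM prediction that IFM builds on, using the usual O(kn) reformulation of the pairwise term; the input-aware factor network described in the paper is not reproduced here, and all shapes and parameter names are illustrative.

```python
# Hedged sketch of a plain factorization machine prediction (the model IFM extends).
import numpy as np

def fm_predict(x, w0, w, V):
    """FM score: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j, computed in O(k n) time."""
    linear = w0 + x @ w
    # pairwise term via 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    sum_sq = (x @ V) ** 2
    sq_sum = (x ** 2) @ (V ** 2)
    return linear + 0.5 * np.sum(sum_sq - sq_sum)

rng = np.random.default_rng(0)
n, k = 8, 4                        # number of (sparse) features and latent factor size
x = np.zeros(n); x[[1, 5]] = 1.0   # a sparse one-hot-style input
w0, w, V = 0.1, rng.normal(size=n), rng.normal(scale=0.1, size=(n, k))
print(fm_predict(x, w0, w, V))
```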

    • #4234
      Dynamic Item Block and Prediction Enhancing Block for Sequential Recommendation
      Guibing Guo, Shichang Ouyang, Xiaodong He, Fajie Yuan, Xiaohua Liu
      Details | PDF
      Personalization and User Modeling

      Sequential recommendation systems have recently become a research hotspot, aiming to suggest to users the next item of interest (to interact with). However, existing approaches suffer from two limitations: (1) The representation of an item is relatively static and fixed for all users. We argue that even the same item should be represented distinctively with respect to different users and time steps. (2) The generation of a prediction for a user over an item is computed at a single scale (e.g., by their inner product), ignoring the nature of multi-scale user preferences. To resolve these issues, in this paper we propose two enhancing building blocks for sequential recommendation. Specifically, we devise a Dynamic Item Block (DIB) to learn dynamic item representations by aggregating the embeddings of those who rated the same item before that time step. Then, we come up with a Prediction Enhancing Block (PEB) to project the user representation into multiple scales, based on which many predictions can be made and attentively aggregated for enhanced learning. Each prediction is generated by a softmax over a sampled itemset rather than the whole item space for efficiency. We conduct a series of experiments on four real datasets, and show that even a basic model can be greatly enhanced with the involvement of DIB and PEB in terms of ranking accuracy. The code and datasets can be obtained from https://github.com/ouououououou/DIB-PEB-Sequential-RS

    Tuesday 13 10:50 - 12:35 KRR|ACC - Action, Change and Causality (2603-2604)

    Chair: Ruichu Cai
    • #921
      Estimating Causal Effects of Tone in Online Debates
      Dhanya Sridhar, Lise Getoor
      Details | PDF
      Action, Change and Causality

      Statistical methods applied to social media posts shed light on the dynamics of online dialogue. For example, users' wording choices predict their persuasiveness, and users adopt the language patterns of other dialogue participants. In this paper, we estimate the causal effect of reply tones in debates on linguistic and sentiment changes in subsequent responses. The challenge for this estimation is that a reply's tone and subsequent responses are confounded by the users' ideologies on the debate topic and their emotions. To overcome this challenge, we learn representations of ideology using generative models of text. We study debates from 4Forums.com and compare annotated reply tones such as emotional versus factual, or reasonable versus attacking. We show that our latent confounder representation reduces bias in ATE estimation. Our results suggest that factual and asserting tones affect dialogue and provide a methodology for estimating causal effects from text.

    • #2116
      Automatic Verification of FSA Strategies via Counterexample-Guided Local Search for Invariants
      Kailun Luo, Yongmei Liu
      Details | PDF
      Action, Change and Causality

      Strategy representation and reasoning have received much attention over the past years. In this paper, we consider the representation of general strategies that solve a class of (possibly infinitely many) games with similar structures, and their automatic verification, which is an undecidable problem. We propose to represent a general strategy by an FSA (Finite State Automaton) with edges labelled by restricted Golog programs. We formalize the semantics of FSA strategies in the situation calculus. Then we propose an incomplete method for verifying whether an FSA strategy is a winning strategy by counterexample-guided local search for appropriate invariants. We implemented our method and ran experiments on combinatorial games as well as single-agent domains. Experimental results showed that our system can successfully verify most of them within a reasonable amount of time.

    • #2213
      Causal Discovery with Cascade Nonlinear Additive Noise Model
      Ruichu Cai, Jie Qiao, Kun Zhang, Zhenjie Zhang, Zhifeng Hao
      Details | PDF
      Action, Change and Causality

      Identification of causal direction between a causal-effect pair from observed data has recently attracted much attention. Various methods based on functional causal models have been proposed to solve this problem, by assuming the causal process satisfies some (structural) constraints and showing that the reverse direction violates such constraints. The nonlinear additive noise model has been demonstrated to be effective for this purpose, but the model class is not transitive--even if each direct causal relation follows this model, indirect causal influences, which result from omitted intermediate causal variables and are frequently encountered in practice, do not necessarily follow the model constraints; as a consequence, the nonlinear additive noise model may fail to correctly discover causal direction. In this work, we propose a cascade nonlinear additive noise model to represent such causal influences--each direct causal relation follows the nonlinear additive noise model but we observe only the initial cause and final effect. We further propose a method to estimate the model, including the unmeasured intermediate variables, from data, under the variational auto-encoder framework. Our theoretical results show that with our model, causal direction is identifiable under suitable technical conditions on the data generation process. Simulation results illustrate the power of the proposed method in identifying indirect causal relations across various settings, and experimental results on real data suggest that the proposed model and method greatly extend the applicability of causal discovery based on functional causal models in nonlinear cases.
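      A hedged sketch of the bivariate additive-noise-model principle that the cascade model generalizes: regress each variable on the other with a crude nonparametric fit and prefer the direction whose residual looks more independent of the input (here measured with a simple biased HSIC estimate). This is illustrative background only, not the paper's variational estimation procedure; the regressor, kernel bandwidth, and toy data are assumptions.

```python
# Hedged sketch of bivariate additive-noise-model direction selection.
import numpy as np

def rbf_gram(z, sigma=1.0):
    d = (z[:, None] - z[None, :]) ** 2
    return np.exp(-d / (2 * sigma ** 2))

def hsic(a, b):
    """Biased HSIC estimate between two 1-D samples (crude dependence measure)."""
    n = len(a)
    K, L = rbf_gram(a), rbf_gram(b)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def residual(x, y, degree=5):
    coeffs = np.polyfit(x, y, degree)           # crude stand-in for a nonparametric regression
    return y - np.polyval(coeffs, x)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
y = np.tanh(2 * x) + 0.2 * rng.normal(size=300) # data generated with ground truth x -> y

score_xy = hsic(x, residual(x, y))              # dependence of forward residual on the cause
score_yx = hsic(y, residual(y, x))
print("x -> y" if score_xy < score_yx else "y -> x")
```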

    • #3621
      Boosting Causal Embeddings via Potential Verb-Mediated Causal Patterns
      Zhipeng Xie, Feiteng Mu
      Details | PDF
      Action, Change and Causality

      Existing approaches to causal embeddings rely heavily on hand-crafted high-precision causal patterns, leading to limited coverage. To solve this problem, this paper proposes a method to boost causal embeddings by exploring potential verb-mediated causal patterns. It first constructs a seed set of causal word pairs, then uses them as supervision to characterize the causal strengths of extracted verb-mediated patterns, and finally exploits the extractions weighted by those verb-mediated patterns in the construction of boosted causal embeddings. Experimental results show that the boosted causal embeddings significantly outperform several state-of-the-art methods on both English and Chinese. As a by-product, the top-ranked patterns coincide with human intuition about causality.

    • #5703
      From Statistical Transportability to Estimating the Effect of Stochastic Interventions
      Juan D. Correa, Elias Bareinboim
      Details | PDF
      Action, Change and Causality

      Learning systems often face a critical challenge when applied to settings that differ from those under which they were initially trained. In particular, the assumption that both the source/training and the target/deployment domains follow the same causal mechanisms and observed distributions is commonly violated. This implies that the robustness and convergence guarantees usually expected from these methods are no longer attainable. In this paper, we study these violations through a causal lens using the formalism of statistical transportability [Pearl and Bareinboim, 2011] (PB, for short). We start by proving sufficient and necessary graphical conditions under which a probability distribution observed in the source domain can be extrapolated to the target one, where strictly less data is available. We develop the first sound and complete procedure for statistical transportability, which formally closes the problem introduced by PB. Further, we tackle the general challenge of identification of stochastic interventions from observational data [Sec. 4.4, Pearl, 2000]. This problem has been solved in the context of atomic interventions using Pearl's do-calculus, which lacks a complete treatment in the stochastic case. We prove completeness of stochastic identification by constructing a reduction of any instance of this problem to an instance of statistical transportability, closing the problem.

    • #6337
      ASP-based Discovery of Semi-Markovian Causal Models under Weaker Assumptions
      Zhalama, Jiji Zhang, Frederick Eberhardt, Wolfgang Mayer, Mark Junjie Li
      Details | PDF
      Action, Change and Causality

      In recent years the possibility of relaxing the so-called Faithfulness assumption in automated causal discovery has been investigated. The investigation showed (1) that the Faithfulness assumption can be weakened in various ways that in an important sense preserve its power, and (2) that weakening of Faithfulness may help to speed up methods based on Answer Set Programming. However, this line of work has so far only considered the discovery of causal models without latent variables. In this paper, we study weakenings of Faithfulness for constraint-based discovery of semi-Markovian causal models, which accommodate the possibility of latent variables, and show that both (1) and (2) remain the case in this more realistic setting.

    • #10961
      (Sister Conferences Best Papers Track) On Causal Identification under Markov Equivalence
      Amin Jaber, Jiji Zhang, Elias Bareinboim
      Details | PDF
      Action, Change and Causality

      In this work, we investigate the problem of computing an experimental distribution from a combination of the observational distribution and a partial qualitative description of the causal structure of the domain under investigation. This description is given by a partial ancestral graph (PAG) that represents a Markov equivalence class of causal diagrams, i.e., diagrams that entail the same conditional independence model over observed variables, and is learnable from the observational data. Accordingly, we develop a complete algorithm to compute the causal effect of an arbitrary set of intervention variables on an arbitrary outcome set.

    Tuesday 13 10:50 - 12:35 NLP|NLP - Natural Language Processing 1 (2605-2606)

    Chair: Tianyong Hao
    • #1128
      Leap-LSTM: Enhancing Long Short-Term Memory for Text Categorization
      Ting Huang, Gehui Shen, Zhi-Hong Deng
      Details | PDF
      Natural Language Processing 1

      Recurrent Neural Networks (RNNs) are widely used in the field of natural language processing (NLP), ranging from text categorization to question answering and machine translation. However, RNNs generally read the whole text from beginning to end (or sometimes in reverse), which makes processing long texts inefficient. When reading a long document for a categorization task, such as topic categorization, large quantities of words are irrelevant and can be skipped. To this end, we propose Leap-LSTM, an LSTM-enhanced model which dynamically leaps between words while reading texts. At each step, we utilize several feature encoders to extract messages from the preceding text, the following text and the current word, and then determine whether to skip the current word. We evaluate Leap-LSTM on several text categorization tasks: sentiment analysis, news categorization, ontology classification and topic classification, with five benchmark data sets. The experimental results show that our model reads faster and predicts better than standard LSTM. Compared to previous models which can also skip words, our model achieves better trade-offs between performance and efficiency.

    • #1860
      Deep Mask Memory Network with Semantic Dependency and Context Moment for Aspect Level Sentiment Classification
      Peiqin Lin, Meng Yang, Jianhuang Lai
      Details | PDF
      Natural Language Processing 1

      Aspect level sentiment classification aims at identifying the sentiment of each aspect term in a sentence. Deep memory networks often use location information between context words and the aspect to generate the memory. Although improved results are achieved, the relation information among aspects in the same sentence is ignored, and word location alone cannot provide enough accurate information for analyzing aspect sentiment. In this paper, we propose a novel framework for aspect level sentiment classification, deep mask memory network with semantic dependency and context moment (DMMN-SDCM), which integrates semantic parsing information of the aspect and the inter-aspect relation information into a deep memory network. With the designed attention mechanism based on semantic dependency information, different parts of the context memory in different computational layers are selected, and useful inter-aspect information in the same sentence is exploited for the desired aspect. To make full use of the inter-aspect relation information, we also jointly learn a context moment learning task, which aims to learn the sentiment distribution of the entire sentence to provide a background for the desired aspect. We examined the merit of our model on the SemEval 2014 datasets, and the experimental results show that our model achieves state-of-the-art performance.

    • #3512
      Robust Embedding with Multi-Level Structures for Link Prediction
      Zihan Wang, Zhaochun Ren, Chunyu He, Peng Zhang, Yue Hu
      Details | PDF
      Natural Language Processing 1

      Knowledge Graph (KG) embedding has become crucial for the task of link prediction. Recent work applies encoder-decoder models to tackle this problem, where an encoder is formulated as a graph neural network (GNN) and a decoder is represented by an embedding method. These approaches enforce embedding techniques with structure information. Unfortunately, existing GNN-based frameworks still confront 3 severe problems: low representational power, stacking in a flat way, and poor robustness to noise. In this work, we propose a novel multi-level graph neural network (M-GNN) to address the above challenges. We first identify an injective aggregate scheme and design a powerful GNN layer using multi-layer perceptrons (MLPs). Then, we define graph coarsening schemes for various kinds of relations, and stack GNN layers on a series of coarsened graphs, so as to model hierarchical structures. Furthermore, attention mechanisms are adopted so that our approach can make predictions accurately even on the noisy knowledge graph. Results on WN18 and FB15k datasets show that our approach is effective in the standard link prediction task, significantly and consistently outperforming competitive baselines. Furthermore, robustness analysis on FB15k-237 dataset demonstrates that our proposed M-GNN is highly robust to sparsity and noise. 

    • #5698
      Medical Concept Representation Learning from Multi-source Data
      Tian Bai, Brian L. Egleston, Richard Bleicher, Slobodan Vucetic
      Details | PDF
      Natural Language Processing 1

      Representing words as low dimensional vectors is very useful in many natural language processing tasks. This idea has been extended to the medical domain, where medical codes listed in medical claims are represented as vectors to facilitate exploratory analysis and predictive modeling. However, depending on the type of medical provider, medical claims can use medical codes from different ontologies or from a combination of ontologies, which complicates learning of the representations. To be able to properly utilize such multi-source medical claim data, we propose an approach that represents medical codes from different ontologies in the same vector space. We first modify the Pointwise Mutual Information (PMI) measure of similarity between the codes. We then develop a new negative sampling method for the word2vec model that implicitly factorizes the modified PMI matrix. The new approach was evaluated on the code cross-reference problem, which aims at identifying similar codes across different ontologies. In our experiments, we evaluated cross-referencing between the ICD-9 and CPT medical code ontologies. Our results indicate that vector representations of codes learned by the proposed approach provide superior cross-referencing when compared to several existing approaches.
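      For background, a toy sketch of the implicit-factorization view the paper builds on: form a (positive) PMI matrix over code co-occurrences and factorize it to obtain embeddings. The codes and counts below are invented, and the paper's modified PMI measure and negative sampler are not reproduced here.

```python
# Hedged sketch: PPMI matrix from toy code co-occurrence counts, factorized by SVD.
import numpy as np

codes = ["ICD9_250", "ICD9_401", "CPT_99213", "CPT_93000"]
cooc = np.array([[0, 4, 6, 1],
                 [4, 0, 3, 5],
                 [6, 3, 0, 2],
                 [1, 5, 2, 0]], dtype=float)

total = cooc.sum()
p_joint = cooc / total
p_marg = p_joint.sum(axis=1)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_joint / np.outer(p_marg, p_marg))
ppmi = np.maximum(pmi, 0.0)                 # positive PMI; -inf entries become 0
ppmi[~np.isfinite(ppmi)] = 0.0              # extra safety for any undefined entries

U, S, _ = np.linalg.svd(ppmi)
dim = 2
embeddings = U[:, :dim] * np.sqrt(S[:dim])  # low-dimensional code vectors
for code, vec in zip(codes, embeddings):
    print(code, np.round(vec, 3))
```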

    • #3757
      Graph-based Neural Sentence Ordering
      Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, Jiebo Luo
      Details | PDF
      Natural Language Processing 1

      Sentence ordering is to restore the original paragraph from a set of sentences. It involves capturing global dependencies among sentences regardless of their input order. In this paper, we propose a novel and flexible graph-based neural sentence ordering model, which adopts graph recurrent network \citep{Zhang:acl18} to accurately learn semantic representations of the sentences. Instead of assuming connections between all pairs of input sentences, we use entities that are shared among multiple sentences to make more expressive graph representations with less noise. Experimental results show that our proposed model outperforms the existing state-of-the-art systems on several benchmark datasets, demonstrating the effectiveness of our model. We also conduct a thorough analysis on how entities help the performance. Our code is available at https://github.com/DeepLearnXMU/NSEG.git.

    • #3096
      Incorporating Structural Information for Better Coreference Resolution
      Fang Kong, Fu Jian
      Details | PDF
      Natural Language Processing 1

      Coreference resolution plays an important role in text understanding. In the literature, various neural approaches have been proposed and achieved considerable success. However, structural information, which has been proven useful in coreference resolution, has been largely ignored in previous neural approaches. In this paper, we focus on effectively incorporating structural information to neural coreference resolution from three aspects. Firstly, nodes in the parse trees are employed as a constraint to filter out impossible text spans (i.e., mention candidates) in reducing the computational complexity. Secondly, contextual information is encoded in the traversal node sequence instead of the word sequence to better capture hierarchical information for text span representation. Lastly, additional structural features (e.g., the path, siblings, degrees, category of the current node) are encoded to enhance the mention representation. Experimentation on the data-set of the CoNLL 2012 Shared Task shows the effectiveness of our proposed approach in incorporating structural information into neural coreference resolution.

    • #3033
      Adapting BERT for Target-Oriented Multimodal Sentiment Classification
      Jianfei Yu, Jing Jiang
      Details | PDF
      Natural Language Processing 1

      As an important task in Sentiment Analysis, Target-oriented Sentiment Classification (TSC) aims to identify sentiment polarities over each opinion target in a sentence. However, existing approaches to this task primarily rely on the textual content and ignore other increasingly popular multimodal data sources (e.g., images), which can enhance the robustness of these text-based models. Motivated by this observation and inspired by the recently proposed BERT architecture, we study Target-oriented Multimodal Sentiment Classification (TMSC) and propose a multimodal BERT architecture. To model intra-modality dynamics, we first apply BERT to obtain target-sensitive textual representations. We then borrow the idea from self-attention and design a target attention mechanism to perform target-image matching to derive target-sensitive visual representations. To model inter-modality dynamics, we further propose to stack a set of self-attention layers to capture multimodal interactions. Experimental results show that our model can outperform several highly competitive approaches for TSC and TMSC.

    Tuesday 13 10:50 - 12:35 ML|KM - Kernel Methods (2501-2502)

    Chair: Junchi Yan
    • #248
      Exchangeability and Kernel Invariance in Trained MLPs
      Russell Tsuchida, Fred Roosta, Marcus Gallagher
      Details | PDF
      Kernel Methods

      In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as Stochastic Gradient Descent. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. Our results shed light on such kernel properties throughout training while limiting the use of unrealistic assumptions.

    • #1350
      Deep Spectral Kernel Learning
      Hui Xue, Zheng-Fan Wu, Wei-Xiang Sun
      Details | PDF
      Kernel Methods

      Recently, spectral kernels have attracted wide attention in complex dynamic environments. These advanced kernels mainly focus on breaking through the crucial limitation of locality, that is, stationarity and monotonicity. In practice, however, owing to the inefficiency of shallow models in computational elements, they are often unable to accurately reveal dynamic and potential variations. In this paper, we propose a novel deep spectral kernel network (DSKN) to naturally integrate non-stationary and non-monotonic spectral kernels into elegant deep architectures in an interpretable way, which can be further generalized to cover most kernels. Concretely, we first deal with the general form of spectral kernels by the inverse Fourier transform. Second, DSKN is constructed by embedding the preeminent spectral kernels into each layer to boost the efficiency of computational elements, which can effectively reveal the dynamic input-dependent characteristics and potential long-range correlations by compactly representing complex advanced concepts. Third, detailed analyses of DSKN are presented. Owing to its universality, we propose a unified spectral transform technique to flexibly extend and reasonably initialize domain-related DSKN. Furthermore, the representer theorem of DSKN is given. Systematic experiments demonstrate the superiority of DSKN compared to state-of-the-art relevant algorithms on a variety of standard real-world tasks.
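      A small background sketch of the inverse-Fourier (Bochner) view underlying spectral kernels: sampling frequencies from a spectral density yields a random-feature approximation of the corresponding stationary kernel, here the RBF kernel. This illustrates the building block only, not the DSKN architecture itself; the lengthscale and feature count are arbitrary choices.

```python
# Hedged sketch: random Fourier features for the RBF kernel via its spectral density.
import numpy as np

rng = np.random.default_rng(0)
d, n_features = 3, 2000
lengthscale = 1.5

# The RBF kernel's spectral density is Gaussian with std 1/lengthscale; sample frequencies from it.
W = rng.normal(scale=1.0 / lengthscale, size=(n_features, d))
b = rng.uniform(0, 2 * np.pi, size=n_features)

def phi(x):
    """Random Fourier feature map: k(x, y) ~= phi(x) . phi(y)."""
    return np.sqrt(2.0 / n_features) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = phi(x) @ phi(y)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * lengthscale ** 2))
print(approx, exact)   # the two values should be close
```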

    • #2312
      GCN-LASE: Towards Adequately Incorporating Link Attributes in Graph Convolutional Networks
      Ziyao Li, Liang Zhang, Guojie Song
      Details | PDF
      Kernel Methods

      Graph Convolutional Networks (GCNs) have proved to be a powerful architecture for aggregating local neighborhood information for individual graph nodes. Low-rank proximities and node features are successfully leveraged in existing GCNs; however, attributes that graph links may carry are commonly ignored, as almost all of these models simplify graph links into binary or scalar values describing node connectedness. In our paper, instead, links are restored to substantive relationships between entities with descriptive attributes. We propose GCN-LASE (GCN with Link Attributes and Sampling Estimation), a novel GCN model taking both node and link attributes as inputs. To adequately capture the interactions between link and node attributes, their tensor product is used as neighbor features, based on which we define several graph kernels and further develop corresponding architectures for LASE. Besides, to accelerate the training process, the sums of features over entire neighborhoods are estimated through a Monte Carlo method, with novel sampling strategies designed for LASE to minimize the estimation variance. Our experiments show that LASE outperforms strong baselines over various graph datasets, and further experiments corroborate the informativeness of link attributes and our model's ability to adequately leverage them.

    • #4091
      High Dimensional Bayesian Optimization via Supervised Dimension Reduction
      Miao Zhang, Huiqi Li, Steven Su
      Details | PDF
      Kernel Methods

      Bayesian optimization (BO) has been broadly applied to computationally expensive problems, but it is still challenging to extend BO to high dimensions. Existing works usually rely on the strict assumption of an additive or a linear embedding structure for the objective function. This paper directly introduces a supervised dimension reduction method, Sliced Inverse Regression (SIR), to high dimensional Bayesian optimization, which can effectively learn the intrinsic sub-structure of the objective function during the optimization. Furthermore, a kernel trick is developed to reduce computational complexity and learn a nonlinear subset of the unknown function when applying SIR to extremely high dimensional BO. We present several computational benefits and derive theoretical regret bounds for our algorithm. Extensive experiments on synthetic examples and two real applications demonstrate the superiority of our algorithms for high dimensional Bayesian optimization.

    • #4362
      Graph Space Embedding
      João Pereira, Albert K. Groen, Erik S. G. Stroes, Evgeni Levin
      Details | PDF
      Kernel Methods

      We propose the Graph Space Embedding (GSE), a technique that maps the input into a space where interactions are implicitly encoded, with little computation required. We provide theoretical results on an optimal regime for the GSE, namely a feasibility region for its parameters, and demonstrate the experimental relevance of our findings. Next, we introduce a strategy to gain insight into which interactions are responsible for certain predictions, paving the way for a far more transparent model. In an empirical evaluation on a real-world clinical cohort containing patients with suspected coronary artery disease, the GSE achieves far better performance than traditional algorithms.

    • #5315
      Entangled Kernels
      Riikka Huusari, Hachem Kadri
      Details | PDF
      Kernel Methods

      We consider the problem of operator-valued kernel learning and investigate the possibility of going beyond the well-known separable kernels. Borrowing tools and concepts from the field of quantum computing, such as partial trace and entanglement, we propose a new view on operator-valued kernels and define a general family of kernels that encompasses previously known operator-valued kernels, including separable and transformable kernels. Within this framework, we introduce another novel class of operator-valued kernels called entangled kernels that are not separable. We propose an efficient two-step algorithm for this framework, where the entangled kernel is learned based on a novel extension of kernel alignment to operator-valued kernels. The utility of the algorithm is illustrated on both artificial and real data.

    • #5808
      Multi-view Clustering via Late Fusion Alignment Maximization
      Siwei Wang, Xinwang Liu, En Zhu, Chang Tang, Jiyuan Liu, Jingtao Hu, Jingyuan Xia, Jianping Yin
      Details | PDF
      Kernel Methods

      Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance. Although demonstrating promising performance in many applications, we observe that most existing methods directly combine multiple views to learn an optimal similarity for clustering. These methods incur intensive computational complexity and over-complicated optimization. In this paper, we theoretically uncover the connection between existing k-means clustering and the alignment between base partitions and the consensus partition. Based on this observation, we propose a simple but effective multi-view algorithm termed Multi-view Clustering via Late Fusion Alignment Maximization (MVC-LFA). Specifically, MVC-LFA proposes to maximally align the consensus partition with the weighted base partitions. Such a criterion helps to significantly reduce the computational complexity and simplify the optimization procedure. Furthermore, we design a three-step iterative algorithm to solve the new resultant optimization problem with theoretically guaranteed convergence. Extensive experiments on five multi-view benchmark datasets demonstrate the effectiveness and efficiency of the proposed MVC-LFA.

    Tuesday 13 10:50 - 12:35 ML|C - Classification 1 (2503-2504)

    Chair: Min-Ling Zhang
    • #755
      Learning Topic Models by Neighborhood Aggregation
      Ryohei Hisano
      Details | PDF
      Classification 1

      Topic models are frequently used in machine learning owing to their high interpretability and modular structure. However, extending a topic model to include a supervisory signal, to incorporate pre-trained word embedding vectors and to include a nonlinear output function is not an easy task because one has to resort to a highly intricate approximate inference procedure. The present paper shows that topic modeling with pre-trained word embedding vectors can be viewed as implementing a neighborhood aggregation algorithm where messages are passed through a network defined over words. From the network view of topic models, nodes correspond to words in a document and edges correspond to either a relationship describing co-occurring words in a document or a relationship describing the same word in the corpus. The network view allows us to extend the model to include supervisory signals, incorporate pre-trained word embedding vectors and include a nonlinear output function in a simple manner. In experiments, we show that our approach outperforms the state-of-the-art supervised Latent Dirichlet Allocation implementation in terms of held-out document classification tasks.

    • #2676
      Partial Label Learning with Unlabeled Data
      Qian-Wei Wang, Yu-Feng Li, Zhi-Hua Zhou
      Details | PDF
      Classification 1

      Partial label learning deals with training examples each associated with a set of candidate labels, among which only one label is valid. Previous studies typically assume that the candidate label sets are provided for all training examples. In many real-world applications such as video character classification, however, it is generally difficult to label a large number of instances, and much data is left unlabeled. We call this kind of problem semi-supervised partial label learning. In this paper, we propose the SSPL method to address this problem. Specifically, an iterative label propagation procedure between partial label examples and unlabeled instances is employed to disambiguate the candidate label sets of partial label examples as well as assign valid labels to unlabeled instances. The importance of unlabeled instances increases adaptively as the number of iterations increases, since they carry richer labeling information. Finally, unseen instances are classified based on the minimum reconstruction error on both partial label and unlabeled instances. Experiments on real-world data sets clearly validate the effectiveness of the proposed SSPL method.

    • #3265
      Zero-shot Learning with Many Classes by High-rank Deep Embedding Networks
      Yuchen Guo, Guiguang Ding, Jungong Han, Hang Shao, Xin Lou, Qionghai Dai
      Details | PDF
      Classification 1

      Zero-shot learning (ZSL) is a recently emerging research topic which aims to build classification models for unseen classes with knowledge from auxiliary seen classes. Though many ZSL works have shown promising results on small-scale datasets by utilizing a bilinear compatibility function, the ZSL performance on large-scale datasets with many classes (say, ImageNet) is still unsatisfactory. We argue that the bilinear compatibility function is a low-rank approximation of the true compatibility function such that it is not expressive enough especially when there are a large number of classes because of the rank limitation. To address this issue, we propose a novel approach, termed as High-rank Deep Embedding Networks (GREEN), for ZSL with many classes. In particular, we propose a feature-dependent mixture of softmaxes as the image-class compatibility function, which is a simple extension of the bilinear compatibility function, but yields much better results. It utilizes a mixture of non-linear transformations with feature-dependent latent variables to approximate the true function in a high-rank way, which makes GREEN more expressive. Experiments on several datasets including ImageNet demonstrate GREEN significantly outperforms the state-of-the-art approaches.

    • #4379
      Submodular Batch Selection for Training Deep Neural Networks
      K J Joseph, Vamshi Teja R, Krishnakant Singh, Vineeth N Balasubramanian
      Details | PDF
      Classification 1

      Mini-batch gradient descent based methods are the de facto algorithms for training neural network architectures today. We introduce a mini-batch selection strategy based on submodular function maximization. Our novel submodular formulation captures the informativeness of each sample and the diversity of the whole subset. We design an efficient, greedy algorithm which can give high-quality solutions to this NP-hard combinatorial optimization problem. Our extensive experiments on standard datasets show that the deep models trained using the proposed batch selection strategy provide better generalization than Stochastic Gradient Descent as well as a popular baseline sampling strategy across different learning rates, batch sizes, and distance metrics.
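      A minimal sketch of greedy submodular batch selection under an assumed facility-location-plus-informativeness objective; the exact submodular formulation and the per-sample informativeness signal used in the paper may differ, and all names below are illustrative.

```python
# Hedged sketch: greedy maximization of a submodular batch score combining
# informativeness and a facility-location-style diversity term.
import numpy as np

def greedy_batch(similarity, informativeness, batch_size, lam=0.5):
    """Pick a batch greedily for f(S) = lam * sum_{i in S} info_i
                                      + (1 - lam) * sum_j max_{i in S} sim[j, i]."""
    n = similarity.shape[0]
    selected, coverage = [], np.zeros(n)
    for _ in range(batch_size):
        gains = np.full(n, -np.inf)
        for i in range(n):
            if i in selected:
                continue
            new_cov = np.maximum(coverage, similarity[:, i])
            gains[i] = lam * informativeness[i] + (1 - lam) * (new_cov.sum() - coverage.sum())
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, similarity[:, best])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                          # toy sample features
sim = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=-1))
info = rng.uniform(size=50)                           # e.g., per-sample loss as informativeness
print(greedy_batch(sim, info, batch_size=8))
```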

    • #6379
      Extensible Cross-Modal Hashing
      Tian-yi Chen, Lan Zhang, Shi-cong Zhang, Zi-long Li, Bai-chuan Huang
      Details | PDF
      Classification 1

      Cross-modal hashing (CMH) models are introduced to significantly reduce the cost of large-scale cross-modal data retrieval systems. In many real-world applications, however, data of new categories arrive continuously, which requires the model to have good extensibility. That is, the model should be updated to accommodate data of new categories but still retain good performance on the old categories with minimum computation cost. Unfortunately, existing CMH methods fail to satisfy the extensibility requirements. In this work, we propose a novel extensible cross-modal hashing (ECMH) method to enable highly efficient and low-cost model extension. Our proposed ECMH has several desired features: 1) it has good forward compatibility, so there is no need to update old hash codes; 2) the ECMH model is extended to support new data categories using only new data by a well-designed "weak constraint incremental learning" algorithm, which saves up to 91% of the time cost compared with retraining the model on both new and old data; 3) the extended model achieves high precision and recall on both old and new tasks. Our extensive experiments show the effectiveness of our design.

    • #4735
      Semi-supervised User Profiling with Heterogeneous Graph Attention Networks
      Weijian Chen, Yulong Gu, Zhaochun Ren, Xiangnan He, Hongtao Xie, Tong Guo, Dawei Yin, Yongdong Zhang
      Details | PDF
      Classification 1

      Aiming to represent user characteristics and personal interests, the task of user profiling is playing an increasingly important role in many real-world applications, e.g., e-commerce and social network platforms. By exploiting data such as texts and user behaviors, most existing solutions address user profiling as a classification task, where each user is formulated as an individual data instance. Nevertheless, a user's profile is not only reflected in her/his affiliated data, but can also be inferred from other users, e.g., users that have similar co-purchase behaviors in e-commerce, friends in social networks, etc. In this paper, we approach user profiling in a semi-supervised manner, developing a generic solution based on heterogeneous graph learning. On the graph, nodes represent the entities of interest (e.g., users, items, attributes of items, etc.), and edges represent the interactions between entities. Our heterogeneous graph attention networks (HGAT) method learns the representation for each entity by accounting for the graph structure, and exploits the attention mechanism to discriminate the importance of each neighbor entity. Through such a learning scheme, HGAT can leverage both unsupervised information and limited labels of users to build the predictor. Extensive experiments on a real-world e-commerce dataset verify the effectiveness and rationality of our HGAT for user profiling.

    • #307
      Deterministic Routing between Layout Abstractions for Multi-Scale Classification of Visually Rich Documents
      Ritesh Sarkhel, Arnab Nandi
      Details | PDF
      Classification 1

      Classifying heterogeneous visually rich documents is a challenging task. The difficulty of this task increases even more if the maximum allowed inference turnaround time is constrained by a threshold. The increased overhead in inference cost, compared to the limited gain in classification capabilities, makes current multi-scale approaches infeasible in such scenarios. There are two major contributions of this work. First, we propose a spatial pyramid model to extract highly discriminative multi-scale feature descriptors from a visually rich document by leveraging the inherent hierarchy of its layout. Second, we propose a deterministic routing scheme for accelerating end-to-end inference by utilizing the spatial pyramid model. A depth-wise separable multi-column convolutional network is developed to enable our method. We evaluated the proposed approach on four publicly available benchmark datasets of visually rich documents. Results suggest that our proposed approach demonstrates robust performance compared to the state-of-the-art methods in both classification accuracy and total inference turnaround.

    Tuesday 13 10:50 - 12:35 ML|DM - Data Mining 1 (2505-2506)

    Chair: Junming Shao
    • #708
      Inferring Substitutable Products with Deep Network Embedding
      Shijie Zhang, Hongzhi Yin, Qinyong Wang, Tong Chen, Hongxu Chen, Quoc Viet Hung Nguyen
      Details | PDF
      Data Mining 1

      On E-commerce platforms, understanding the relationships (e.g., substitute and complement) among products from users' explicit feedback, such as users' online transactions, is of great importance to boost extra sales. However, the significance of such relationships is usually neglected by existing recommender systems. In this paper, we propose a semi-supervised deep embedding model, namely the Substitute Products Embedding Model (SPEM), which models the substitutable relationships between products by preserving the second-order proximity, negative first-order proximity and semantic similarity in a product co-purchasing graph based on users' purchasing behaviours. With SPEM, the learned representations of two substitutable products align closely in the latent embedding space. Extensive experiments on real-world datasets are conducted, and the results verify that our model outperforms state-of-the-art baselines.

    • #1663
      Low-Bit Quantization for Attributed Network Representation Learning
      Hong Yang, Shirui Pan, Ling Chen, Chuan Zhou, Peng Zhang
      Details | PDF
      Data Mining 1

      Attributed network embedding plays an important role in transferring network data into compact vectors for effective network analysis. Existing attributed network embedding models are designed either in continuous Euclidean spaces which introduce data redundancy or in binary coding spaces which incur significant loss of representation accuracy. To this end, we present a new Low-Bit Quantization for Attributed Network Representation Learning model (LQANR for short) that can learn compact node representations with low bitwidth values while preserving high representation accuracy. Specifically, we formulate a new representation learning function based on matrix factorization that can jointly learn the low-bit node representations and the layer aggregation weights under the low-bit quantization constraint. Because the new learning function falls into the category of mixed integer optimization, we propose an efficient mixed-integer based alternating direction method of multipliers (ADMM) algorithm as the solution. Experiments on real-world node classification and link prediction tasks validate the promising results of the proposed LQANR model.

    • #2991
      iDev: Enhancing Social Coding Security by Cross-platform User Identification Between GitHub and Stack Overflow
      Yujie Fan, Yiming Zhang, Shifu Hou, Lingwei Chen, Yanfang Ye, Chuan Shi, Liang Zhao, Shouhuai Xu
      Details | PDF
      Data Mining 1

      As modern social coding platforms such as GitHub and Stack Overflow become increasingly popular, their potential security risks increase as well (e.g., risky or malicious codes could be easily embedded and distributed). To enhance the social coding security, in this paper, we propose to automate cross-platform user identification between GitHub and Stack Overflow to combat the attackers who attempt to poison the modern software programming ecosystem. To solve this problem, an important insight brought by this work is to leverage social coding properties in addition to user attributes for cross-platform user identification. To depict users in GitHub and Stack Overflow (attached with attributed information), projects, questions and answers as well as the rich semantic relations among them, we first introduce an attributed heterogeneous information network (AHIN) for modeling. Then, we propose a novel AHIN representation learning model AHIN2Vec to efficiently learn node (i.e., user) representations in AHIN for cross-platform user identification. Comprehensive experiments on the data collections from GitHub and Stack Overflow are conducted to validate the effectiveness of our developed system iDev integrating our proposed method in cross-platform user identification by comparisons with other baselines.

    • #3985
      Outlier-Robust Multi-Aspect Streaming Tensor Completion and Factorization
      Mehrnaz Najafi, Lifang He, Philip S. Yu
      Details | PDF
      Data Mining 1

      With the increasing popularity of streaming tensor data such as videos and audios, tensor factorization and completion have attracted much attention recently in this area. Existing work usually assumes that streaming tensors only grow in one mode. However, in many real-world scenarios, tensors may grow in multiple modes (or dimensions), i.e., multi-aspect streaming tensors. Standard streaming methods cannot directly handle this type of data elegantly. Moreover, due to inevitable system errors, data may be contaminated by outliers, which cause significant deviations from real data values and make such research particularly challenging. In this paper, we propose a novel method for Outlier-Robust Multi-Aspect Streaming Tensor Completion and Factorization (OR-MSTC), which is a technique capable of dealing with missing values and outliers in multi-aspect streaming tensor data. The key idea is to decompose the tensor structure into an underlying low-rank clean tensor and a structured-sparse error (outlier) tensor, along with a weighting tensor to mask missing data. We also develop an efficient algorithm to solve the non-convex and non-smooth optimization problem of OR-MSTC. Experimental results on various real-world datasets show the superiority of the proposed method over the baselines and its robustness against outliers.

    • #5069
      Convolutional Gaussian Embeddings for Personalized Recommendation with Uncertainty
      Junyang Jiang, Deqing Yang, Yanghua Xiao, Chenlu Shen
      Details | PDF
      Data Mining 1

      Most existing embedding-based recommendation models use embeddings (vectors) to represent users and items, which contain latent features of users and items. Each such embedding corresponds to a single fixed point in low-dimensional space and thus fails to precisely represent users/items with the uncertainty that is often observed in recommender systems. Addressing this problem, we propose a unified deep recommendation framework employing Gaussian embeddings, which are proven adaptive to the uncertain preferences exhibited by some users, resulting in better user representations and recommendation performance. Furthermore, our framework adopts Monte-Carlo sampling and convolutional neural networks to compute the correlation between the target user and the candidate item, based on which precise recommendations are achieved. Our extensive experiments on two benchmark datasets not only justify that our proposed Gaussian embeddings capture the uncertainty of users very well, but also demonstrate its superior performance over state-of-the-art recommendation models.
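      A toy sketch of the Gaussian-embedding idea: each user/item is represented by a mean and a diagonal variance, and the user-item score is estimated by Monte-Carlo sampling. The convolutional interaction module from the paper is replaced here by a plain inner product, and all dimensions and names are illustrative assumptions.

```python
# Hedged sketch of Monte-Carlo scoring with Gaussian (mean + diagonal variance) embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 16, 256

user_mu, user_logvar = rng.normal(size=dim), rng.normal(scale=0.1, size=dim)
item_mu, item_logvar = rng.normal(size=dim), rng.normal(scale=0.1, size=dim)

def sample(mu, logvar, n):
    """Draw n samples from a diagonal Gaussian embedding."""
    std = np.exp(0.5 * logvar)
    return mu + std * rng.normal(size=(n, mu.shape[0]))

u = sample(user_mu, user_logvar, n_samples)      # Monte-Carlo samples of the user embedding
v = sample(item_mu, item_logvar, n_samples)
score = np.mean(np.sum(u * v, axis=1))           # estimated expected interaction score
print(score)
```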

    • #609
      Fine-grained Event Categorization with Heterogeneous Graph Convolutional Networks
      Hao Peng, Jianxin Li, Qiran Gong, Yangqiu Song, Yuanxin Ning, Kunfeng Lai, Philip S. Yu
      Details | PDF
      Data Mining 1

      Events happen in the real world and in real time; they can be planned and organized occasions involving multiple people and objects. Social media platforms publish a large number of text messages containing public events on a wide range of topics. However, mining social events is challenging due to the heterogeneous event elements in texts and the explicit and implicit social network structures. In this paper, we design an event meta-schema to characterize the semantic relatedness of social events, build an event-based heterogeneous information network (HIN) integrating information from an external knowledge base, and propose a novel fine-grained social event categorization model based on a Pairwise Popularity Graph Convolutional Network (PP-GCN). We further propose a Knowledgeable meta-paths Instances based social Event Similarity (KIES) measure between events and build a weighted adjacency matrix as input to the PP-GCN model. Comprehensive experiments on real data collections are conducted to compare various social event detection and clustering tasks. Experimental results demonstrate that our proposed framework outperforms alternative social event categorization techniques.

    • #1716
      Topology Optimization based Graph Convolutional Network
      Liang Yang, Zesheng Kang, Xiaochun Cao, Di Jin, Bo Yang, Yuanfang Guo
      Details | PDF
      Data Mining 1

      In the past few years, semi-supervised node classification in attributed networks has developed rapidly. Inspired by the success of deep learning, researchers have adopted convolutional neural networks to develop Graph Convolutional Networks (GCN), which achieve surprising classification accuracy by exploiting topological information and employing a fully connected network (FCN). However, the given network topology may also induce performance degradation if it is employed directly for classification, because it may be highly sparse and noisy. Besides, the lack of learnable filters in GCN also limits performance. In this paper, we propose a novel Topology Optimization based Graph Convolutional Network (TO-GCN) that fully exploits the available information by jointly refining the network topology and learning the parameters of the FCN. According to our derivations, TO-GCN is more flexible than GCN, in which the filters are fixed and only the classifier can be updated during the learning process. Extensive experiments on real attributed networks demonstrate the superiority of the proposed TO-GCN over state-of-the-art approaches.
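
      For readers unfamiliar with the underlying propagation rule, the sketch below shows a standard GCN layer (symmetric adjacency normalization followed by a fully connected transform) in numpy. It does not reproduce TO-GCN's joint topology refinement, and all names are illustrative.

      import numpy as np

      def normalize_adjacency(A):
          """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by GCN."""
          A_hat = A + np.eye(A.shape[0])
          d = A_hat.sum(axis=1)
          D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
          return D_inv_sqrt @ A_hat @ D_inv_sqrt

      def gcn_layer(A_norm, X, W):
          """One graph-convolutional layer: propagate features, then apply the FCN weight."""
          return np.maximum(A_norm @ X @ W, 0.0)     # ReLU activation

      # Toy graph: 4 nodes, 3-dimensional attributes, 2 hidden units.
      A = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      X = np.random.default_rng(0).normal(size=(4, 3))
      W = np.random.default_rng(1).normal(size=(3, 2))
      H = gcn_layer(normalize_adjacency(A), X, W)
      print(H.shape)   # (4, 2)

      # TO-GCN's key idea is to also treat A (the topology) as a quantity to be
      # refined during training, instead of keeping it fixed as above.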

    Tuesday 13 10:50 - 12:35 AMS|MP - Multi-agent Planning (2401-2402)

    Chair: Sven Koenig
    • #344
      Multi-Agent Pathfinding with Continuous Time
      Anton Andreychuk, Konstantin Yakovlev, Dor Atzmon, Roni Stern
      Details | PDF
      Multi-agent Planning

      Multi-Agent Pathfinding (MAPF) is the problem of finding paths for multiple agents such that every agent reaches its goal and the agents do not collide. Most prior work on MAPF operates on grids, assumes that agents' actions have uniform duration, and discretizes time into timesteps. In this work, we propose a MAPF algorithm that makes none of these assumptions, is complete, and provides provably optimal solutions. The algorithm is based on a novel combination of Safe Interval Path Planning (SIPP), a continuous-time single-agent planning algorithm, and Conflict-Based Search (CBS). We analyze this algorithm, discuss its pros and cons, and evaluate it experimentally on several standard benchmarks.
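
      As a small illustration of the continuous-time machinery, the sketch below shows the safe-interval bookkeeping step used by SIPP-style planners: turning the time intervals during which a location is blocked into the complementary safe intervals. The interval representation is an assumption made for illustration, not the paper's implementation.

      def safe_intervals(blocked, horizon=float("inf")):
          """Given intervals [(start, end), ...] during which a location is occupied,
          return the complementary safe intervals up to `horizon`.
          This is the bookkeeping step of Safe Interval Path Planning (SIPP)."""
          intervals = []
          t = 0.0
          for start, end in sorted(blocked):
              if start > t:
                  intervals.append((t, start))
              t = max(t, end)
          if t < horizon:
              intervals.append((t, horizon))
          return intervals

      # A cell occupied during [2, 3] and [5, 7] yields three safe intervals.
      print(safe_intervals([(5.0, 7.0), (2.0, 3.0)], horizon=10.0))
      # [(0.0, 2.0), (3.0, 5.0), (7.0, 10.0)]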

    • #2201
      Priority Inheritance with Backtracking for Iterative Multi-agent Path Finding
      Keisuke Okumura, Manao Machida, Xavier Défago, Yasumasa Tamura
      Details | PDF
      Multi-agent Planning

      The Multi-agent Path Finding (MAPF) problem requires all agents to move to their own destinations while avoiding collisions. In practical applications of the problem, such as navigation in an automated warehouse, MAPF must be solved iteratively. We present here a novel approach to iterative MAPF that we call Priority Inheritance with Backtracking (PIBT). PIBT gives a unique priority to each agent at every timestep, so that all movements are prioritized. Priority inheritance, which aims to deal effectively with priority inversion in path adjustment within a small time window, can be applied iteratively, and a backtracking protocol prevents agents from getting stuck. We prove that, regardless of their number, all agents are guaranteed to reach their destinations within finite time when the environment is a graph in which every pair of adjacent nodes belongs to a simple cycle of length 3 or more (e.g., biconnected graphs). Our implementation of PIBT can be fully decentralized without global communication. Experimental results over various scenarios confirm that PIBT is adequate both for finding paths in large environments with many agents and for conveying packages in an automated warehouse.

    • #2971
      Improved Heuristics for Multi-Agent Path Finding with Conflict-Based Search
      Jiaoyang Li, Ariel Felner, Eli Boyarski, Hang Ma, Sven Koenig
      Details | PDF
      Multi-agent Planning

      Conflict-Based Search (CBS) and its enhancements are among the strongest algorithms for Multi-Agent Path Finding. Recent work introduced an admissible heuristic to guide the high-level search of CBS. In this work, we prove the limitation of this heuristic, as it is based on cardinal conflicts only. We then introduce two new admissible heuristics by reasoning about the pairwise dependencies between agents. Empirically, CBS with either new heuristic significantly improves the success rate over CBS with the recent heuristic and reduces the number of expanded nodes and runtime by up to a factor of 50.

    • #5401
      Multi-Robot Planning Under Uncertain Travel Times and Safety Constraints
      Masoumeh Mansouri, Bruno Lacerda, Nick Hawes, Federico Pecora
      Details | PDF
      Multi-agent Planning

      We present a novel modelling and planning approach for multi-robot systems under uncertain travel times. The approach uses generalised stochastic Petri nets (GSPNs) to model desired team behaviour and allows us to specify safety constraints and rewards. The GSPN is interpreted as a Markov decision process (MDP), for which we can generate policies that optimise the requirements. This representation is more compact than the equivalent multi-agent MDP, allowing us to scale better. Furthermore, it naturally allows for asynchronous execution of the generated policies across the robots, yielding smoother team behaviour. We also describe how the integration of the GSPN with a lower-level team controller allows for accurate expectations of team performance. We evaluate our approach on an industrial scenario, showing that it outperforms hand-crafted policies used in current practice.

    • #10985
      (Journal track) Implicitly Coordinated Multi-Agent Path Finding under Destination Uncertainty: Success Guarantees and Computational Complexity
      Bernhard Nebel, Thomas Bolander, Thorsten Engesser, Robert Mattmüller
      Details | PDF
      Multi-agent Planning

      In multi-agent path finding, it is usually assumed that planning is performed centrally and that the destinations of the agents are common knowledge. We will drop both assumptions and analyze under which conditions it can be guaranteed that the agents reach their respective destinations using implicitly coordinated plans without communication.

    • #2916
      Unifying Search-based and Compilation-based Approaches to Multi-agent Path Finding through Satisfiability Modulo Theories
      Pavel Surynek
      Details | PDF
      Multi-agent Planning

      We unify search-based and compilation-based approaches to multi-agent path finding (MAPF) through satisfiability modulo theories (SMT). The task in MAPF is to navigate agents in an undirected graph to given goal vertices so that they do not collide. We rephrase Conflict-Based Search (CBS), one of the state-of-the-art algorithms for optimal MAPF solving, in terms of SMT. This idea combines SAT-based solving, known from MDD-SAT, a SAT-based optimal MAPF solver, at the low level with the conflict elimination of CBS at the high level. Where standard CBS branches the search after a conflict, we refine the propositional model with a disjunctive constraint. Our novel algorithm, called SMT-CBS, hence does not branch at the high level but incrementally extends the propositional model. We experimentally compare SMT-CBS with CBS, ICBS, and MDD-SAT.

    • #2472
      Reachability Games in Dynamic Epistemic Logic
      Bastien Maubert, Sophie Pinchinat, François Schwarzentruber
      Details | PDF
      Multi-agent Planning

      We define reachability games based on Dynamic Epistemic Logic (DEL), where the players' actions are finely described as DEL action models. We first consider the setting where a controller with perfect information interacts with an environment and aims at reaching some desired state of knowledge regarding the observers of the system. We study the problem of existence of a strategy for the controller, which generalises the classic epistemic planning problem, and we solve it for several types of actions such as public announcements and public actions. We then consider a yet richer setting where observers themselves are players, whose strategies must be based on their observations. We establish several decidability and undecidability results for the problem of existence of a distributed strategy, depending on the type of actions the players can use, and relate them to results from the literature on multiplayer games with imperfect information.

    Tuesday 13 10:50 - 12:35 DemoT1 - Demo Talks 1 (2403-2404)

    Chair: Matjaz Gams
    • #11022
      Fair and Explainable Dynamic Engagement of Crowd Workers
      Han Yu, Yang Liu, Xiguang Wei, Chuyu Zheng, Tianjian Chen, Qiang Yang, Xiong Peng
      Details | PDF
      Demo Talks 1

      Years of rural-urban migration have resulted in a significant population in China seeking ad-hoc work in large urban centres. At the same time, many businesses face large fluctuations in demand for manpower and require more efficient ways to satisfy such demands. This paper outlines AlgoCrowd, an artificial intelligence (AI)-empowered algorithmic crowdsourcing platform. Equipped with an efficient, explainable task-worker matching optimization approach designed to treat workers fairly while maximizing collective utility, the platform delivers explainable task recommendations to workers' personal work-management mobile apps, which are becoming popular, with the aim of addressing the above societal challenge.

    • #11024
      Multi-Agent Visualization for Explaining Federated Learning
      Xiguang Wei, Quan Li, Yang Liu, Han Yu, Tianjian Chen, Qiang Yang
      Details | PDF
      Demo Talks 1

      As an alternative, decentralized training approach, Federated Learning enables distributed agents to collaboratively learn a machine learning model while keeping personal/private information on local devices. However, one significant issue of this framework is its lack of transparency, which obscures understanding of the working mechanism of Federated Learning systems. This paper proposes a multi-agent visualization system that illustrates what Federated Learning is and how it supports multi-agent coordination. Specifically, it allows users to participate in Federated Learning-empowered multi-agent coordination. The input and output of Federated Learning are visualized simultaneously, providing an intuitive explanation of Federated Learning and helping users gain a deeper understanding of the technology.

    • #11028
      AiD-EM: Adaptive Decision Support for Electricity Markets Negotiations
      Tiago Pinto, Zita Vale
      Details | PDF
      Demo Talks 1

      This paper presents the Adaptive Decision Support for Electricity Markets Negotiations (AiD-EM) system. AiD-EM is a multi-agent system that provides decision support to market players by incorporating multiple (agent-based) sub-systems, each directed at decision support for a specific problem. These sub-systems make use of different artificial intelligence methodologies, such as machine learning and evolutionary computing, to enable players to adapt in the planning phase and in actual negotiations in auction-based markets and bilateral negotiations. The AiD-EM demonstration is enabled by its connection to MASCEM (Multi-Agent Simulator of Competitive Electricity Markets).

    • #11032
      Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue
      Rahul R. Divekar, Xiangyang Mou, Lisha Chen, Maíra Gatti de Bayser, Melina Alberio Guerra, Hui Su
      Details | PDF
      Demo Talks 1

      In a setting where two AI agents, embodied as animated humanoid avatars, are engaged in a conversation with one human and each other, we see two challenges. One, determination by the AI agents of which one of them is being addressed. Two, determination by the AI agents of whether they may/could/should speak at the end of a turn. In this work we bring these two challenges together and explore the participation of AI agents in multi-party conversations. In particular, we show two embodied AI shopkeeper agents who sell similar items and aim to win the business of a user by competing with each other on price. In this scenario, we solve the first challenge by using head pose (estimated by deep learning techniques) to determine whom the user is talking to. For the second challenge we use deontic logic to model the rules of a negotiation conversation.

    • #11043
      Multi-Agent Path Finding on Ozobots
      Roman Barták, Ivan Krasičenko, Jiří Švancara
      Details | PDF
      Demo Talks 1

      Multi-agent path finding (MAPF) is the problem of finding collision-free paths for a set of agents (mobile robots) moving on a graph. There exist several abstract models describing the problem with various types of constraints. The demo presents software for evaluating these abstract models when the plans are executed on Ozobots, small mobile robots developed for teaching programming. The software allows users to design grid-like maps, to specify the initial and goal locations of robots, to generate plans using various abstract models implemented in the Picat programming language, to simulate and visualise the execution of these plans, and to translate the plans to command sequences for Ozobots.

    • #11050
      Reagent: Converting Ordinary Webpages into Interactive Software Agents
      Matthew Peveler, Jeffrey O. Kephart, Hui Su
      Details | PDF
      Demo Talks 1

      We introduce Reagent, a technology that can be used in conjunction with automated speech recognition to allow users to query and manipulate ordinary webpages via speech and pointing. Reagent can be used out-of-the-box with third-party websites, as it requires neither special instrumentation from website developers nor special domain knowledge to capture semantically-meaningful mouse interactions with structured elements such as tables and plots. When it is unable to infer mappings between domain vocabulary and visible webpage content on its own, Reagent proactively seeks help by engaging in a voice-based interaction with the user.

    • #11029
      Deep Reinforcement Learning for Ride-sharing Dispatching and Repositioning
      Zhiwei (Tony) Qin, Xiaocheng Tang, Yan Jiao, Fan Zhang, Chenxi Wang, Qun (Tracy) Li
      Details | PDF
      Demo Talks 1

      In this demo, we will present a simulation-based, interactive demonstration of deep reinforcement learning in action on order dispatching and driver repositioning for ride-sharing. Specifically, we will demonstrate, through several specially designed domains, how we use deep reinforcement learning to train agents (drivers) to plan over a longer optimization horizon and to cooperate to achieve higher objective values collectively.

    • #11041
      Contextual Typeahead Sticker Suggestions on Hike Messenger
      Mohamed Hanoosh, Abhishek Laddha, Debdoot Mukherjee
      Details | PDF
      Demo Talks 1

      In this demonstration, we present Hike's sticker recommendation system, which helps users choose the right sticker to substitute for the next message they intend to send in a chat. We describe how the system addresses the issue of numerous orthographic variations in chat messages and operates in under 20 milliseconds with a low CPU and memory footprint on device.

    • #11023
      InterSpot: Interactive Spammer Detection in Social Media
      Kaize Ding, Jundong Li, Shivam Dhar, Shreyash Devan, Huan Liu
      Details | PDF
      Demo Talks 1

      Spammer detection in social media has recently received increasing attention due to the rocketing growth of user-generated data. Despite the empirical success of existing systems, spammers may continuously evolve over time to impersonate normal users, and new types of spammers may also emerge to combat the current detection system, so that a deployed system will gradually lose its efficacy in spotting spammers. To address this issue, grounded in the contextual bandit model, we present a novel system for interactive spammer detection. We demonstrate our system by showcasing the interactive learning process, which allows the detection model to keep optimizing its detection strategy by incorporating feedback from human experts.

    Tuesday 13 10:50 - 12:35 Early Career 1 - Early Career Spotlight 1 (2405-2406)

    Chair: Louise Trave
    • #11058
      From Data to Knowledge Engineering for Cybersecurity
      Gerardo I. Simari
      Details | PDF
      Early Career Spotlight 1

      Data present in a wide array of platforms that are part of today's information systems lies at the foundation of many decision making processes, as we have now come to depend on social media, videos, news, forums, chats, ads, maps, and many other data sources for our daily lives. In this article, we first discuss how such data sources are involved in threats to systems' integrity, and then how they can be leveraged along with knowledge-based tools to tackle a set of challenges in the cybersecurity domain. Finally, we present a brief discussion of our roadmap for research and development in the near future to address the set of ever-evolving cyber threats that our systems face every day.

    • #11060
      The Quest For "Always-On" Autonomous Mobile Robots
      Joydeep Biswas
      Details | PDF
      Early Career Spotlight 1

      Building "always-on" robots to be deployed over extended periods of time in real human environments is challenging for several reasons. Some fundamental questions that arise in the process include: 1) How can the robot reconcile unexpected differences between its observations and its outdated map of the world? 2) How can we scalably test robots for long-term autonomy? 3) Can a robot learn to predict its own failures, and their corresponding causes? 4) When the robot fails and is unable to recover autonomously, can it utilize partially specified, approximate human corrections to overcome its failures? We summarize our research towards addressing all of these questions. We present 1) Episodic non-Markov Localization to maintain the belief of the robot's location while explicitly reasoning about unmapped observations; 2) a 1,000km challenge to test for long-term autonomy; 3) feature-based and learning-based approaches to predicting failures; and 4) human-in-the-loop SLAM to overcome robot mapping errors, and SMT-based robot transition repair to overcome state machine failures.

    • #11055
      Multiagent Decision Making and Learning in Urban Environments
      Akshat Kumar
      Details | PDF
      Early Career Spotlight 1

      Our increasingly interconnected urban environments provide several opportunities to deploy intelligent agents---from self-driving cars, ships to aerial drones---that promise to radically improve productivity and safety. Achieving coordination among agents in such urban settings presents several algorithmic challenges---ability to scale to thousands of agents, addressing uncertainty, and partial observability in the environment. In addition, accurate domain models need to be learned from data that is often noisy and available only at an aggregate level. In this paper, I will overview some of our recent contributions towards developing planning and reinforcement learning strategies to address several such challenges present in large-scale urban multiagent systems.

    • #11056
      What Does the Evidence Say? Models to Help Make Sense of the Biomedical Literature
      Byron C. Wallace
      Details | PDF
      Early Career Spotlight 1

      Ideally decisions regarding medical treatments would be informed by the totality of the available evidence. The best evidence we currently have is in published natural language articles describing the conduct and results of clinical trials. Because these are unstructured, it is difficult for domain experts (e.g., physicians) to sort through and appraise the evidence pertaining to a given clinical question. Natural language technologies have the potential to improve access to the evidence via semi-automated processing of the biomedical literature. In this brief paper I highlight work on developing tasks, corpora, and models to support semi-automated evidence retrieval and extraction. The aim is to design models that can consume articles describing clinical trials and automatically extract from these key clinical variables and findings, and estimate their reliability. Completely automating `machine reading' of evidence remains a distant aim given current technologies; the more immediate hope is to use such technologies to help domain experts access and make sense of unstructured biomedical evidence more efficiently, with the ultimate aim of improving patient care. Aside from their practical importance, these tasks pose core NLP challenges that directly motivate methodological innovation.

    Tuesday 13 14:00 - 14:50 Invited Talk (D-I)

    Chair: Thomas Eiter
    • Reasoning About The Behavior of AI Systems
      Adnan Darwiche
      Invited Talk

    Tuesday 13 15:00 - 16:00 IJCAI-JAIR Best Paper Prize Session (K)

    • IJCAI-JAIR
      IJCAI-JAIR Best Paper Prize Session

    Tuesday 13 15:00 - 16:00 ST: Human AI & ML 1 - Special Track on Human AI and Machine Learning 1 (J)

    Chair: Chen Gong
    • #1462
      Playgol: Learning Programs Through Play
      Andrew Cropper
      Details | PDF
      Special Track on Human AI and Machine Learning 1

      Children learn through play. We introduce the analogous idea of learning programs through play. In this approach, a program induction system (the learner) is given a set of user-supplied build tasks and initial background knowledge (BK). Before solving the build tasks, the learner enters an unsupervised playing stage where it creates its own play tasks to solve, tries to solve them, and saves any solutions (programs) to the BK. After the playing stage is finished, the learner enters the supervised building stage where it tries to solve the build tasks and can reuse solutions learnt whilst playing. The idea is that playing allows the learner to discover reusable general programs on its own, which can then help solve the build tasks. We claim that playing can improve learning performance. We show that playing can reduce the textual complexity of target concepts, which in turn reduces the sample complexity of a learner. We implement our idea in Playgol, a new inductive logic programming system. We experimentally test our claim on two domains: robot planning and real-world string transformations. Our experimental results suggest that playing can substantially improve learning performance.

    • #1545
      EL Embeddings: Geometric Construction of Models for the Description Logic EL++
      Maxat Kulmanov, Wang Liu-Wei, Yuan Yan, Robert Hoehndorf
      Details | PDF
      Special Track on Human AI and Machine Learning 1

      An embedding is a function that maps entities from one algebraic structure into another while preserving certain characteristics. Embeddings are being used successfully for mapping relational data or text into vector spaces where they can be used for machine learning, similarity search, or similar tasks. We address the problem of finding vector space embeddings for theories in the Description Logic EL⁺⁺ that are also models of the TBox. To find such embeddings, we define an optimization problem that characterizes the model-theoretic semantics of the operators in EL⁺⁺ within ℝⁿ, thereby solving the problem of finding an interpretation function for an EL⁺⁺ theory given a particular domain Δ. Our approach is mainly relevant to large EL⁺⁺ theories and knowledge bases such as the ontologies and knowledge graphs used in the life sciences. We demonstrate that our method can be used for improved prediction of protein-protein interactions when compared to semantic similarity measures or knowledge graph embeddings.

    • #5010
      A Comparative Study of Distributional and Symbolic Paradigms for Relational Learning
      Sebastijan Dumancic, Alberto Garcia-Duran, Mathias Niepert
      Details | PDF
      Special Track on Human AI and Machine Learning 1

      Many real-world domains can be expressed as graphs and, more generally, as multi-relational knowledge graphs. Though reasoning and learning with knowledge graphs has traditionally been addressed by symbolic approaches such as Statistical relational learning, recent methods in (deep) representation learning have shown promising results for specialised tasks such as knowledge base completion. These approaches, also known as distributional, abandon the traditional symbolic paradigm by replacing symbols with vectors in Euclidean space. With few exceptions, symbolic and distributional approaches are explored in different communities and little is known about their respective strengths and weaknesses. In this work, we compare distributional and symbolic relational learning approaches on various standard relational classification and knowledge base completion tasks. Furthermore, we analyse the properties of the datasets and relate them to the performance of the methods in the comparison. The results reveal possible indicators that could help in choosing one approach over the other for particular knowledge graphs.

    • #5666
      Synthesizing Datalog Programs using Numerical Relaxation
      Xujie Si, Mukund Raghothaman, Kihong Heo, Mayur Naik
      Details | PDF
      Special Track on Human AI and Machine Learning 1

      The problem of learning logical rules from examples arises in diverse fields, including program synthesis, logic programming, and machine learning. Existing approaches either involve solving computationally difficult combinatorial problems, or performing parameter estimation in complex statistical models. In this paper, we present Difflog, a technique to extend the logic programming language Datalog to the continuous setting. By attaching real-valued weights to individual rules of a Datalog program, we naturally associate numerical values with individual conclusions of the program. Analogous to the strategy of numerical relaxation in optimization problems, we can now first determine the rule weights which cause the best agreement between the training labels and the induced values of output tuples, and subsequently recover the classical discrete-valued target program from the continuous optimum. We evaluate Difflog on a suite of 34 benchmark problems from recent literature in knowledge discovery, formal verification, and database query-by-example, and demonstrate significant improvements in learning complex programs with recursive rules, invented predicates, and relations of arbitrary arity.
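
      One way to picture the numerical relaxation is the toy weighted-Datalog fixpoint below, where each derived tuple carries a value in [0, 1], a grounded rule contributes its weight times the minimum of its body values, and a tuple keeps the maximum over its derivations. This is only a plausible sketch of the idea, not Difflog's exact semantics; the program, weights, and names are illustrative.

      def weighted_transitive_closure(edge_vals, w_base=1.0, w_rec=0.9, n_iters=20):
          """A toy weighted-Datalog program for reachability:
               path(x, y) <- edge(x, y).              (weight w_base)
               path(x, z) <- path(x, y), edge(y, z).  (weight w_rec)
          Each derivation's value is weight * min(body values); each tuple keeps
          the max over its derivations."""
          path = {e: w_base * v for e, v in edge_vals.items()}
          for _ in range(n_iters):
              updated = dict(path)
              for (x, y), pv in path.items():
                  for (a, z), ev in edge_vals.items():
                      if a == y:
                          cand = w_rec * min(pv, ev)
                          if cand > updated.get((x, z), 0.0):
                              updated[(x, z)] = cand
              path = updated
          return path

      edges = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "d"): 0.5}
      print(weighted_transitive_closure(edges))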

    Tuesday 13 15:00 - 16:00 ML|DL - Deep Learning 2 (L)

    Chair: Yahong Han
    • #2033
      Position Focused Attention Network for Image-Text Matching
      Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan
      Details | PDF
      Deep Learning 2

      Image-text matching tasks have recently attracted much attention in the computer vision field. The key point of this cross-domain problem is how to accurately measure the similarity between visual and textual content, which demands a fine understanding of both modalities. In this paper, we propose a novel position focused attention network (PFAN) to investigate the relation between the visual and the textual views. Specifically, we integrate the object position clue to enhance visual-text joint-embedding learning. We first split the images into blocks, from which we infer the relative position of each region in the image. Then, an attention mechanism is proposed to model the relations between image regions and blocks and to generate a valuable position feature, which is further utilized to enhance the region representation and to model a more reliable relationship between the visual image and the textual sentence. Experiments on the popular Flickr30K and MS-COCO datasets show the effectiveness of the proposed method. Besides the public datasets, we also conduct experiments on our collected practical news dataset (Tencent-News) to validate the practical application value of the proposed method. As far as we know, this is the first attempt to test performance on such a practical application. Our method achieves state-of-the-art performance on all three of these datasets.

    • #3819
      Hierarchical Representation Learning for Bipartite Graphs
      Chong Li, Kunyang Jia, Dan Shen, C.J. Richard Shi, Hongxia Yang
      Details | PDF
      Deep Learning 2

      Recommender systems on E-Commerce platforms track users' online behaviors and recommend relevant items according to each user's interests and needs. Bipartite graphs that capture both user/item features and user-item interactions have been demonstrated to be highly effective for this purpose. Recently, graph neural networks (GNN) have been successfully applied to the representation of bipartite graphs in industrial recommender systems. Providing individualized recommendations on a dynamic platform with billions of users is extremely challenging. A key observation is that the users of an online E-Commerce platform can be naturally clustered into a set of communities. We propose to cluster the users into a set of communities and make recommendations based on the information of the users in each community collectively. More specifically, embeddings are assigned to the communities, and each user embedding is decomposed into two parts, which capture community-level generalizations and individualized preferences, respectively. The community embedding can be considered an enhancement to GNN methods, which are inherently flat and do not learn hierarchical representations of graphs. The performance of the proposed algorithm is demonstrated on a public dataset and a dataset from a world-leading E-Commerce company.

    • #4010
      COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning
      Wenxiao Wang, Cong Fu, Jishun Guo, Deng Cai, Xiaofei He
      Details | PDF
      Deep Learning 2

      Neural network compression empowers effective yet unwieldy deep convolutional neural networks (CNN) to be deployed in resource-constrained scenarios. Most state-of-the-art approaches prune the model at the filter level according to the "importance" of filters. Despite their success, we notice that they suffer from at least two of the following problems: 1) The redundancy among filters is not considered because importance is evaluated independently for each filter. 2) Cross-layer filter comparison is unachievable since importance is defined locally within each layer; consequently, layer-wise pruning ratios must be specified manually. 3) They are prone to generating sub-optimal solutions because they neglect the inequality between reducing parameters and reducing computational cost: removing the same number of parameters at different positions in the network may reduce the computational cost by different amounts. To address the above problems, we develop a novel algorithm named COP (correlation-based pruning), which can detect redundant filters efficiently. We enable cross-layer filter comparison through global normalization. We add parameter-quantity and computational-cost regularization terms to the importance, which enables users to customize the compression according to their preference (smaller or faster). Extensive experiments show that COP outperforms the other approaches significantly. The code is released at https://github.com/ZJULearning/COP.
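
      A hypothetical numpy sketch of the correlation-based redundancy test described above: flatten each filter, compute pairwise Pearson correlations, and flag filters that are highly correlated with an already-kept filter. The threshold is illustrative, and the paper's global normalization and regularization terms are not reproduced here.

      import numpy as np

      def redundant_filters(filters, threshold=0.9):
          """filters: array of shape (n_filters, in_channels, k, k).
          Returns indices of filters judged redundant because they are highly
          correlated with an earlier (kept) filter."""
          flat = filters.reshape(filters.shape[0], -1)
          corr = np.corrcoef(flat)                   # pairwise Pearson correlations
          prunable, kept = [], []
          for i in range(flat.shape[0]):
              if any(abs(corr[i, j]) >= threshold for j in kept):
                  prunable.append(i)                 # near-duplicate of a kept filter
              else:
                  kept.append(i)
          return prunable

      rng = np.random.default_rng(0)
      W = rng.normal(size=(8, 3, 3, 3))
      W[5] = 0.98 * W[2] + 0.02 * rng.normal(size=(3, 3, 3))  # plant a redundant filter
      print(redundant_filters(W))                              # likely [5]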

    Tuesday 13 15:00 - 16:00 MTA|RS - Recommender Systems 2 (2701-2702)

    Chair: Yong Li
    • #3499
      Explainable Fashion Recommendation: A Semantic Attribute Region Guided Approach
      Min Hou, Le Wu, Enhong Chen, Zhi Li, Vincent W. Zheng, Qi Liu
      Details | PDF
      Recommender Systems 2

      In fashion recommender systems, each product usually consists of multiple semantic attributes (e.g., sleeves, collar, etc.). When making clothing decisions, people usually show preferences for different semantic attributes (e.g., clothes with a v-neck collar). Nevertheless, most previous fashion recommendation models comprehend clothing images with a global content representation and lack a detailed understanding of users' semantic preferences, which usually leads to inferior recommendation performance. To bridge this gap, we propose a novel Semantic Attribute Explainable Recommender System (SAERS). Specifically, we first introduce a fine-grained interpretable semantic space. We then develop a Semantic Extraction Network (SEN) and a Fine-grained Preferences Attention (FPA) module to project users and items into this space, respectively. With SAERS, we are capable of not only providing clothing recommendations for users, but also explaining why we recommend an item through intuitive visual attribute semantic highlights in a personalized manner. Extensive experiments conducted on real-world datasets clearly demonstrate the effectiveness of our approach compared with state-of-the-art methods.

    • #10969
      (Sister Conferences Best Papers Track) Impact of Consuming Suggested Items on the Assessment of Recommendations in User Studies on Recommender Systems
      Benedikt Loepp, Tim Donkers, Timm Kleemann, Jürgen Ziegler
      Details | PDF
      Recommender Systems 2

      User studies are increasingly considered important in research on recommender systems. Although participants typically cannot consume any of the recommended items, they are often asked to assess the quality of recommendations and of other aspects related to user experience by means of questionnaires. Not being able to listen to recommended songs or to watch suggested movies, might however limit the validity of the obtained results. Consequently, we have investigated the effect of consuming suggested items. In two user studies conducted in different domains, we showed that consumption may lead to differences in the assessment of recommendations and in questionnaire answers. Apparently, adequately measuring user experience is in some cases not possible without allowing users to consume items. On the other hand, participants sometimes seem to approximate the actual value of recommendations reasonably well depending on domain and provided information.

    • #5041
      Binarized Collaborative Filtering with Distilling Graph Convolutional Network
      Haoyu Wang, Defu Lian, Yong Ge
      Details | PDF
      Recommender Systems 2

      The efficiency of top-K item recommendation based on implicit feedback is vital to recommender systems in the real world, but it is very challenging due to the lack of negative samples and the large number of candidate items. To address these challenges, we first introduce an improved Graph Convolutional Network (GCN) model that takes high-order feature interactions into account. Then we distill the ranking information derived from the GCN into binarized collaborative filtering, which makes use of binary representations to improve the efficiency of online recommendation. However, binary codes are not only hard to optimize but also likely to incur a loss of information during training. Therefore, we propose a novel framework to convert the binary constrained optimization problem into an equivalent continuous optimization problem with a stochastic penalty. The binarized collaborative filtering model can then be easily optimized by many popular solvers such as SGD and Adam. The proposed algorithm is finally evaluated on three real-world datasets and shown to be superior to the competing baselines.
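
      The conversion of the binary constraint into a continuous penalty can be illustrated, under stated assumptions, by adding a term that pushes every embedding coordinate toward ±1 during ordinary gradient descent. The quadratic penalty (b² - 1)² below is one common relaxation and is not claimed to be the paper's exact stochastic penalty; all names and settings are illustrative.

      import numpy as np

      def penalized_mf_step(U, V, R, mask, lr=0.05, lam=0.05):
          """One gradient step of matrix factorization with a binarization penalty.
          U, V: user/item embeddings; R: +/-1 feedback; mask: observed entries (0/1).
          Predictions are scaled dot products, and the penalty lam * sum((u^2 - 1)^2)
          pushes every coordinate toward {-1, +1} -- a generic continuous relaxation
          of the binary constraint, not necessarily the paper's stochastic penalty."""
          dim = U.shape[1]
          err = mask * (U @ V.T / dim - R)
          grad_U = err @ V / dim + lam * 4 * U * (U ** 2 - 1)
          grad_V = err.T @ U / dim + lam * 4 * V * (V ** 2 - 1)
          return U - lr * grad_U, V - lr * grad_V

      rng = np.random.default_rng(0)
      U, V = rng.normal(size=(50, 16)), rng.normal(size=(80, 16))
      R = np.sign(rng.normal(size=(50, 80)))
      mask = (rng.random((50, 80)) < 0.1).astype(float)
      for _ in range(300):
          U, V = penalized_mf_step(U, V, R, mask)
      print("fraction of coordinates near +/-1:",
            float(np.mean(np.abs(np.abs(U) - 1) < 0.1)))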

    • #3878
      Co-Attentive Multi-Task Learning for Explainable Recommendation
      Zhongxia Chen, Xiting Wang, Xing Xie, Tong Wu, Guoqing Bu, Yining Wang, Enhong Chen
      Details | PDF
      Recommender Systems 2

      Despite widespread adoption, recommender systems remain mostly black boxes. Recently, providing explanations of why items are recommended has attracted increasing attention due to its ability to enhance user trust and satisfaction. In this paper, we propose a co-attentive multi-task learning model for explainable recommendation. Our model improves both the prediction accuracy and the explainability of recommendation by fully exploiting the correlations between the recommendation task and the explanation task. In particular, we design an encoder-selector-decoder architecture inspired by the human information-processing model in cognitive psychology. We also propose a hierarchical co-attentive selector to effectively model the cross knowledge transferred for both tasks. Our model not only enhances the prediction accuracy of the recommendation task, but also generates linguistic explanations that are fluent, useful, and highly personalized. Experiments on three public datasets demonstrate the effectiveness of our model.

    Tuesday 13 15:00 - 16:00 HAI|HCC - Human Computation and Crowdsourcing (2703-2704)

    Chair: Chengqi Zhang
    • #630
      MiSC: Mixed Strategies Crowdsourcing
      Ching Yun Ko, Rui Lin, Shu Li, Ngai Wong
      Details | PDF
      Human Computation and Crowdsourcing

      Popular crowdsourcing techniques mostly focus on evaluating workers' labeling quality before adjusting their weights during label aggregation. Recently, another cohort of models regards crowdsourced annotations as incomplete tensors and recovers unfilled labels by tensor completion. However, mixed strategies of the two methodologies have never been comprehensively investigated, leaving them as rather independent approaches. In this work, we propose MiSC (Mixed Strategies Crowdsourcing), a versatile framework integrating arbitrary conventional crowdsourcing and tensor completion techniques. In particular, we propose a novel iterative Tucker label aggregation algorithm that outperforms state-of-the-art methods in extensive experiments.

    • #1477
      Multiple Noisy Label Distribution Propagation for Crowdsourcing
      Hao Zhang, Liangxiao Jiang, Wenqiang Xu
      Details | PDF
      Human Computation and Crowdsourcing

      Crowdsourcing services provide a fast, efficient, and cost-effective means of obtaining large amounts of labeled data for supervised learning. Ground truth inference, also called label integration, designs proper aggregation strategies to infer the unknown true label of each instance from the multiple noisy label set provided by ordinary crowd workers. However, to the best of our knowledge, nearly all existing label integration methods focus solely on an individual instance's own multiple noisy label set while totally ignoring the intercorrelation among the multiple noisy label sets of different instances. To solve this problem, a multiple noisy label distribution propagation (MNLDP) method is proposed in this study. MNLDP first transforms the multiple noisy label set of each instance into its multiple noisy label distribution and then propagates this distribution to its nearest neighbors. Consequently, each instance absorbs a fraction of the multiple noisy label distributions from its nearest neighbors while simultaneously maintaining a fraction of its own original multiple noisy label distribution. Promising experimental results on simulated and real-world datasets validate the effectiveness of our proposed method.
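
      A minimal numpy sketch of the propagation step described above: each instance mixes its own noisy-label distribution with the average distribution of its nearest neighbours. The mixing weight, neighbourhood structure, and number of rounds are illustrative assumptions, not the paper's settings.

      import numpy as np

      def propagate_label_distributions(D, neighbors, alpha=0.7, n_rounds=3):
          """D[i] is instance i's multiple-noisy-label distribution (rows sum to 1).
          neighbors[i] lists the indices of i's nearest neighbours.
          Each round keeps a fraction alpha of the instance's own distribution and
          absorbs 1 - alpha from the neighbourhood average."""
          D = D.copy()
          for _ in range(n_rounds):
              neighbor_avg = np.stack([D[idx].mean(axis=0) for idx in neighbors])
              D = alpha * D + (1 - alpha) * neighbor_avg
          return D

      # Three instances, binary labels, with a simple neighbourhood structure.
      D = np.array([[0.9, 0.1],
                    [0.2, 0.8],
                    [0.5, 0.5]])
      neighbors = [[1, 2], [0, 2], [0, 1]]
      print(propagate_label_distributions(D, neighbors))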

    • #10968
      (Sister Conferences Best Papers Track) Quality Control Attack Schemes in Crowdsourcing
      Alessandro Checco, Jo Bates, Gianluca Demartini
      Details | PDF
      Human Computation and Crowdsourcing

      An important precondition for building effective AI models is the collection of training data at scale. Crowdsourcing is a popular methodology for achieving this goal. Its adoption introduces novel challenges in data quality control, to deal with under-performing and malicious annotators. One of the most popular quality assurance mechanisms, especially in paid micro-task crowdsourcing, is the use of a small set of pre-annotated tasks as a gold standard to assess annotators' quality in real time. In this paper, we highlight a set of vulnerabilities this scheme suffers from: a group of colluding crowd workers can easily implement and deploy a decentralised machine learning inferential system to detect and signal which parts of the task are more likely to be gold questions, making them ineffective as a quality control tool. Moreover, we demonstrate how the most common countermeasures against this attack are ineffective in practical scenarios. The basic architecture of the inferential system is composed of a browser plug-in and an external server where the colluding workers can share information. We implement and validate the attack scheme by means of experiments on real-world data from a popular crowdsourcing platform.

    • #4194
      Boosting for Comparison-Based Learning
      Michael Perrot, Ulrike von Luxburg
      Details | PDF
      Human Computation and Crowdsourcing

      We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object A is closer to object B than to object C." In this paper we introduce TripletBoost, a new method that can learn a classifier just from such triplet comparisons. The main idea is to aggregate the triplet information into weak classifiers, which can subsequently be boosted to a strong classifier. Our method has two main advantages: (i) it is applicable to data from any metric space, and (ii) it can deal with large-scale problems using only passively obtained and noisy triplets. We derive theoretical generalization guarantees and a lower bound on the number of necessary triplets, and we empirically show that our method is both competitive with state-of-the-art approaches and resistant to noise.

    Tuesday 13 15:00 - 16:00 AMS|TR - Trust and Reputation (2705-2706)

    Chair: Catholijn Jonker
    • #486
      A Value-based Trust Assessment Model for Multi-agent Systems
      Kinzang Chhogyal, Abhaya Nayak, Aditya Ghose, Hoa K. Dam
      Details | PDF
      Trust and Reputation

      An agent's assessment of its trust in another agent is commonly taken to be a measure of the reliability/predictability of the latter's actions. It is based on the trustor's past observations of the behaviour of the trustee and requires no knowledge of the inner workings of the trustee. However, in situations that are new or unfamiliar, past observations are of little help in assessing trust. In such cases, knowledge about the trustee can help. A particular type of knowledge is that of values: things that are important to the trustor and the trustee. In this paper, based on the premise that the more values two agents share, the more they should trust one another, we propose a simple approach to trust assessment between agents based on values, taking into account whether agents trust cautiously or boldly, and whether they depend on others in carrying out a task.

    • #4579
      Spotting Collective Behaviour of Online Frauds in Customer Reviews
      Sarthika Dhawan, Siva Charan Reddy Gangireddy, Shiv Kumar, Tanmoy Chakraborty
      Details | PDF
      Trust and Reputation

      Online reviews play a crucial role in assessing the quality of a product before purchase. Unfortunately, spammers often take advantage of online review forums by writing fraudulent reviews to promote or demote certain products. This can be even more detrimental when such spammers collude and collectively inject spam reviews, as they can take complete control of users' sentiment due to the volume of fraudulent reviews they inject. Group spam detection is thus more challenging than individual-level fraud detection due to the unclear definition of a group, variation in inter-group dynamics, scarcity of labeled group-level spam data, etc. Here, we propose DeFrauder, an unsupervised method to detect online fraud reviewer groups. It first detects candidate fraud groups by leveraging the underlying product review graph and incorporating several behavioral signals which model multi-faceted collaboration among reviewers. It then maps reviewers into an embedding space and assigns a spam score to each group such that groups comprising spammers with highly similar behavioral traits receive high spam scores. Compared with five baselines on four real-world datasets (two of which were curated by us), DeFrauder shows superior performance, outperforming the best baseline with 17.11% higher NDCG@50 (on average) across datasets.

    • #5274
      FaRM: Fair Reward Mechanism for Information Aggregation in Spontaneous Localized Settings
      Moin Hussain Moti, Dimitris Chatzopoulos, Pan Hui, Sujit Gujar
      Details | PDF
      Trust and Reputation

      Although peer prediction markets are widely used in crowdsourcing to aggregate information from agents, they often fail to reward the participating agents equitably. Honest agents can be wrongly penalized if randomly paired with dishonest ones. In this work, we introduce selective and cumulative fairness. We characterize a mechanism as fair if it satisfies both notions and present FaRM, a representative mechanism we designed. FaRM is a Nash incentive mechanism that focuses on information aggregation for spontaneous local activities which are accessible to a limited number of agents without assuming any prior knowledge of the event. All the agents in the vicinity observe the same information. FaRM uses (i) a report strength score to remove the risk of random pairing with dishonest reporters, (ii) a consistency score to measure an agent's history of accurate reports and distinguish valuable reports, (iii) a reliability score to estimate the probability of an agent to collude with nearby agents and prevents agents from getting swayed, and (iv) a location robustness score to filter agents who try to participate without being present in the considered setting. Together, report strength, consistency, and reliability represent a fair reward given to agents based on their reports.

    • #5745
      Identifying vulnerabilities in trust and reputation systems
      Taha D. Güneş, Long Tran-Thanh, Timothy J. Norman
      Details | PDF
      Trust and Reputation

      Online communities use trust and reputation systems to assist their users in evaluating other parties. Due to the preponderance of these systems, malicious entities have a strong incentive to attempt to influence them, and strategies employed are increasingly sophisticated. Current practice is to evaluate trust and reputation systems against known attacks, and hence are heavily reliant on expert analysts. We present a novel method for automatically identifying vulnerabilities in such systems by formulating the problem as a derivative-free optimisation problem and applying efficient sampling methods. We illustrate the application of this method for attacks that involve the injection of false evidence, and identify vulnerabilities in existing trust models. In this way, we provide reliable and objective means to assess how robust trust and reputation systems are to different kinds of attacks.

    Tuesday 13 15:00 - 16:00 PS|S - Scheduling (2601-2602)

    Chair: Gerhard Friedrich
    • #253
      Faster Dynamic Controllability Checking in Temporal Networks with Integer Bounds
      Nikhil Bhargava, Brian C. Williams
      Details | PDF
      Scheduling

      Simple Temporal Networks with Uncertainty (STNUs) provide a useful formalism with which to reason about events and the temporal constraints that apply to them. STNUs are notable in particular because they facilitate reasoning over stochastic, or uncontrollable, actions and their corresponding durations. To evaluate the feasibility of a set of constraints associated with an STNU, one checks the network's dynamic controllability, which determines whether an adaptive schedule can be constructed on-the-fly. Our work improves the runtime of checking the dynamic controllability of STNUs with integer bounds to O(min(mn, m√n log N) + km + k²n + kn log n). Our approach pre-processes the STNU using an existing O(n³) dynamic controllability checking algorithm and provides tighter bounds on its runtime. This makes our work easily adaptable to other algorithms that rely on checking variants of dynamic controllability.

    • #2658
      Scheduling Jobs with Stochastic Processing Time on Parallel Identical Machines
      Richard Stec, Antonin Novak, Premysl Sucha, Zdenek Hanzalek
      Details | PDF
      Scheduling

      Many real-world scheduling problems are characterized by uncertain parameters. In this paper, we study a classical parallel machine scheduling problem where the processing times of jobs follow a normal distribution. The objective is to maximize the probability that jobs are completed before a given common due date. This study focuses on the computational aspects of this problem and proposes a Branch-and-Price approach for solving it. The advantage of our method is that it scales very well with an increasing number of machines and is easy to implement. Furthermore, we propose an efficient lower-bound heuristic. The experimental results show that our method outperforms existing approaches.

    • #4311
      Fair Online Allocation of Perishable Goods and its Application to Electric Vehicle Charging
      Enrico H. Gerding, Alvaro Perez-Diaz, Haris Aziz, Serge Gaspers, Antonia Marcu, Nicholas Mattei, Toby Walsh
      Details | PDF
      Scheduling

      We consider mechanisms for the online allocation of perishable resources such as energy or computational power. A main application is electric vehicle charging where agents arrive and leave over time. Unlike previous work, we consider mechanisms without money, and a range of objectives including fairness and efficiency. In doing so, we extend the concept of envy-freeness to online settings. Furthermore, we explore the trade-offs between different objectives and analyse their theoretical properties both in online and offline settings. We then introduce novel online scheduling algorithms and compare them in terms of both their theoretical properties and empirical performance.

    • #10979
      (Journal track) Complexity Bounds for the Controllability of Temporal Networks with Conditions, Disjunctions, and Uncertainty
      Nikhil Bhargava, Brian C. Williams
      Details | PDF
      Scheduling

      In temporal planning, many different temporal network formalisms are used to model real world situations. Each of these formalisms has different features which affect how easy it is to determine whether the underlying network of temporal constraints is consistent. While many of the simpler models have been well-studied from a computational complexity perspective, the algorithms developed for advanced models which combine features have very loose complexity bounds. In this work, we provide tight completeness bounds for strong, weak, and dynamic controllability checking of temporal networks that have conditions, disjunctions, and temporal uncertainty. Our work exposes some of the subtle differences between these different structures and, remarkably, establishes a guarantee that all of these problems are computable in PSPACE.

    Tuesday 13 15:00 - 16:00 KRR|RKB - Reasoning about Knowledge and Belief (2603-2604)

    Chair: Gerhard Lakemeyer
    • #4209
      The Complexity of Model Checking Knowledge and Time
      Laura Bozzelli, Bastien Maubert, Aniello Murano
      Details | PDF
      Reasoning about Knowledge and Belief

      We establish the precise complexity of the model checking problem for the main logics of knowledge and time. While this problem was known to be non-elementary for agents with perfect recall, with a number of exponentials that increases with the alternation of knowledge operators, the precise complexity of the problem when the maximum alternation is fixed has been an open problem for twenty years. We close it by establishing improved upper bounds for CTL* with knowledge, and providing matching lower bounds that also apply for epistemic extensions of LTL and CTL.

    • #4563
      Converging on Common Knowledge
      Dominik Klein, Rasmus Kræmmer Rendsvig
      Details | PDF
      Reasoning about Knowledge and Belief

      Common knowledge, as is well known, is not attainable in finite time by unreliable communication, thus hindering perfect coordination. Focusing on the coordinated attack problem modeled using dynamic epistemic logic, this paper discusses unreliable communication protocols from a topological perspective and asks "If the generals may communicate indefinitely, will they then converge to a state of common knowledge?" We answer by making precise and showing the following: common knowledge is attainable if, and only if, we do not care about common knowledge.

    • #2601
      A Modal Characterization Theorem for a Probabilistic Fuzzy Description Logic
      Paul Wild, Lutz Schröder, Dirk Pattinson, Barbara König
      Details | PDF
      Reasoning about Knowledge and Belief

      The fuzzy modality probably is interpreted over probabilistic type spaces by taking expected truth values. The arising probabilistic fuzzy description logic is invariant under probabilistic bisimilarity; more informatively, it is non-expansive wrt. a suitable notion of behavioural distance. In the present paper, we provide a characterization of the expressive power of this logic based on this observation: We prove a probabilistic analogue of the classical van Benthem theorem, which states that modal logic is precisely the bisimulation-invariant fragment of first-order logic. Specifically, we show that every formula in probabilistic fuzzy first-order logic that is non-expansive wrt. behavioural distance can be approximated by concepts of bounded rank in probabilistic fuzzy description logic.

    • #4270
      Accelerated Inference Framework of Sparse Neural Network Based on Nested Bitmask Structure
      Yipeng Zhang, Bo Du, Lefei Zhang, Rongchun Li, Yong Dou
      Details | PDF
      Reasoning about Knowledge and Belief

      In order to satisfy the ever-growing demand for high-performance processors for neural networks, state-of-the-art processing units tend to use application-oriented circuits to replace the Processing Engines (PE) of a GPU in circumstances where low-power solutions are required. The application-oriented PE is fully optimized in terms of its circuit architecture and eliminates incorrect data dependencies and instructional redundancy. In this paper, we propose a novel encoding approach for a sparse neural network after pruning. We partition the weight matrix into numerous blocks and use a low-rank binary map to represent the validity of these blocks. Furthermore, the elements in each nonzero block are also encoded into two submatrices: one is a binary stream discriminating the zero/nonzero positions, while the other holds the nonzero elements, stored in a FIFO. In the experimental part, we implement a well pre-trained sparse neural network on a Xilinx VC707 FPGA. Experimental results show that our algorithm outperforms the other benchmarks. Our approach successfully optimizes the throughput and energy efficiency of processing a single frame. Accordingly, we contend that the Nested Bitmask Neural Network (NBNN) is an efficient neural network structure with only minor accuracy loss on an SoC system.
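
      A hypothetical numpy sketch of the nested-bitmask layout described above: a block-level bitmap marks nonzero blocks, each nonzero block keeps an element-level bitmap, and the nonzero weights are appended to a flat FIFO-like list. The block size and data layout are illustrative assumptions, not the paper's hardware format.

      import numpy as np

      def nested_bitmask_encode(W, block=4):
          """Encode a sparse matrix W as (block_bitmap, element_bitmaps, values)."""
          rows, cols = W.shape
          assert rows % block == 0 and cols % block == 0
          block_bitmap = []
          element_bitmaps = []
          values = []                                 # nonzero weights, FIFO order
          for bi in range(0, rows, block):
              for bj in range(0, cols, block):
                  tile = W[bi:bi + block, bj:bj + block]
                  nz = tile != 0
                  if not nz.any():
                      block_bitmap.append(0)          # whole block skipped
                      continue
                  block_bitmap.append(1)
                  element_bitmaps.append(nz.astype(np.uint8))
                  values.extend(tile[nz].tolist())
          return block_bitmap, element_bitmaps, values

      rng = np.random.default_rng(0)
      W = rng.normal(size=(8, 8)) * (rng.random((8, 8)) < 0.15)   # ~85% pruned
      bits, masks, vals = nested_bitmask_encode(W)
      print(sum(bits), "nonzero blocks,", len(vals), "stored weights")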

    Tuesday 13 15:00 - 16:00 NLP|MT - Machine Translation (2605-2606)

    Chair: Lemao Liu
    • #1653
      Sharing Attention Weights for Fast Transformer
      Tong Xiao, Yinqiao Li, Jingbo Zhu, Zhengtao Yu, Tongran Liu
      Details | PDF
      Machine Translation

      Recently, the Transformer machine translation system has shown strong results by stacking attention layers on both the source and target-language sides. But the inference of this model is slow due to the heavy use of dot-product attention in auto-regressive decoding. In this paper we speed up Transformer via a fast and lightweight attention model. More specifically, we share attention weights in adjacent layers and enable the efficient re-use of hidden states in a vertical manner. Moreover, the sharing policy can be jointly learned with the MT model. We test our approach on ten WMT and NIST OpenMT tasks. Experimental results show that it yields an average of 1.3X speed-up (with almost no decrease in BLEU) on top of a state-of-the-art implementation that has already adopted a cache for fast inference. Also, our approach obtains a 1.8X speed-up when it works with the AAN model. This is even 16 times faster than the baseline with no use of the attention cache.
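      To make the sharing idea concrete, here is a minimal NumPy sketch in which the attention probabilities computed at one layer are reused verbatim by the next layer; the projection matrices are hypothetical stand-ins, and the paper learns which adjacent layers share weights rather than fixing the policy as done here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(q, k):
    # scaled dot-product attention probabilities
    return softmax(q @ k.T / np.sqrt(q.shape[-1]))

def two_layers_shared(x, w_q, w_k, w_v1, w_v2):
    """Compute attention once and reuse its weights in the next layer."""
    a = attention_weights(x @ w_q, x @ w_k)   # computed at layer 1 only
    h1 = a @ (x @ w_v1)                       # layer 1 output
    h2 = a @ (h1 @ w_v2)                      # layer 2 reuses the same `a`
    return h2

d, n = 16, 5
x = np.random.randn(n, d)
out = two_layers_shared(x, *(np.random.randn(d, d) for _ in range(4)))
print(out.shape)  # (5, 16)
```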

    • #1859
      From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots
      Shizhe Chen, Qin Jin, Jianlong Fu
      Details | PDF
      Machine Translation

      The neural machine translation model has suffered from the lack of large-scale parallel corpora. In contrast, we humans can learn multi-lingual translations even without parallel texts by relating our languages to the external world. To mimic such human learning behavior, we employ images as pivots to enable zero-resource translation learning. However, a picture tells a thousand words, which makes multi-lingual sentences pivoted by the same image noisy as mutual translations and thus hinders the learning of the translation model. In this work, we propose a progressive learning approach for image-pivoted zero-resource machine translation. Since words are less diverse when grounded in the image, we first learn word-level translation with image pivots, and then progress to learn the sentence-level translation by utilizing the learned word translation to suppress noise in image-pivoted multi-lingual sentences. Experimental results on two widely used image-pivot translation datasets, IAPR-TC12 and Multi30k, show that the proposed approach significantly outperforms other state-of-the-art methods.

    • #3742
      Polygon-Net: A General Framework for Jointly Boosting Multiple Unsupervised Neural Machine Translation Models
      Chang Xu, Tao Qin, Gang Wang, Tie-Yan Liu
      Details | PDF
      Machine Translation

      Neural machine translation (NMT) has achieved great success. However, collecting large-scale parallel data for training is costly and laborious. Recently, unsupervised neural machine translation has attracted increasing attention because it requires only monolingual corpora, which are common and easy to obtain, and because of its great potential for low-resource or even zero-resource machine translation. In this work, we propose a general framework called Polygon-Net, which leverages multiple auxiliary languages for jointly boosting unsupervised neural machine translation models. Specifically, we design a novel loss function for multi-language unsupervised neural machine translation. In addition, unlike previous work that updates only one or two models individually, Polygon-Net enables multiple unsupervised models in the framework to update in turn and enhance each other for the first time. In this way, multiple unsupervised translation models are associated with each other during training to achieve better performance. Experiments on benchmark datasets including the UN Corpus and WMT show that our approach significantly improves over two-language-based methods, and achieves better performance as more languages are introduced into the framework.

    • #4441
      Correct-and-Memorize: Learning to Translate from Interactive Revisions
      Rongxiang Weng, Hao Zhou, Shujian Huang, Lei Li, Yifan Xia, Jiajun Chen
      Details | PDF
      Machine Translation

      State-of-the-art machine translation models are still not on a par with human translators. Previous work incorporates human interactions into the neural machine translation process to obtain improved results in target languages. However, not all model translation errors are equal -- some are critical while others are minor. Meanwhile, the same translation mistakes occur repeatedly in similar contexts. To solve both issues, we propose CAMIT, a novel method for translating in an interactive environment. Our proposed method works with critical revision instructions and therefore allows humans to correct arbitrary words in model-translated sentences. In addition, CAMIT learns from and softly memorizes revision actions based on the context, alleviating the issue of repeated mistakes. Experiments in both ideal and real interactive translation settings demonstrate that the proposed CAMIT enhances machine translation results significantly while requiring fewer revision instructions from humans compared to previous methods.

    Tuesday 13 15:00 - 16:00 CV|RDCIMRSI - Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 1 (2501-2502)

    Chair: Yan Shuicheng
    • #3159
      Binarized Neural Networks for Resource-Efficient Hashing with Minimizing Quantization Loss
      Feng Zheng, Cheng Deng, Heng Huang
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 1

      In order to solve the problem of memory consumption and computational requirements, this paper proposes a novel learning binary neural network framework to achieve a resource-efficient deep hashing. In contrast to floating-point (32-bit) full-precision networks, the proposed method achieves a 32x model compression rate. At the same time, computational burden in convolution is greatly reduced due to efficient Boolean operations. To this end, in our framework, a new quantization loss defined between the binary weights and the learned real values is minimized to reduce the model distortion, while, by minimizing a binary entropy function, the discrete optimization is successfully avoided and the stochastic gradient descent method can be used smoothly. More importantly, we provide two theories to demonstrate the necessity and effectiveness of minimizing the quantization losses for both weights and activations. Numerous experiments show that the proposed method can achieve fast code generation without sacrificing accuracy.
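      The quantization-loss idea can be sketched as the squared distance between real-valued weights and their scaled binary counterpart; the closed-form scale used below is a standard choice and only an assumption about the paper's exact formulation.

```python
import numpy as np

def quantization_loss(w_real, alpha=None):
    """Squared distance between real weights and alpha * sign(w).

    For a fixed sign pattern, alpha = mean(|w|) minimises this loss
    (a standard closed form); the paper's exact loss may differ.
    """
    if alpha is None:
        alpha = np.abs(w_real).mean()
    w_bin = alpha * np.sign(w_real)
    return np.mean((w_real - w_bin) ** 2), w_bin

w = np.random.randn(4, 4)
loss, w_bin = quantization_loss(w)
print("quantization loss:", loss)
print("binary weights (scaled):", np.unique(w_bin))
```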

    • #3518
      DSRN: A Deep Scale Relationship Network for Scene Text Detection
      Yuxin Wang, Hongtao Xie, Zilong Fu, Yongdong Zhang
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 1

      Nowadays, scene text detection has become increasingly important and popular. However, the large variance of text scale remains the main challenge and limits the detection performance in most previous methods. To address this problem, we propose an end-to-end architecture called Deep Scale Relationship Network (DSRN) to map multi-scale convolution features onto a scale invariant space to obtain uniform activation of multi-size text instances. Firstly, we develop a Scale-transfer module to transfer the multi-scale feature maps to a unified dimension. Due to the heterogeneity of features, simply concatenating feature maps with multi-scale information would limit the detection performance. Thus we propose a Scale Relationship module to aggregate the multi-scale information through bi-directional convolution operations. Finally, to further reduce the miss-detected instances, a novel Recall Loss is proposed to force the network to pay more attention to miss-detected text instances by up-weighting poorly classified examples. Compared with previous approaches, DSRN efficiently handles the large-variance scale problem without complex hand-crafted hyperparameter settings (e.g. scale of default boxes) and complicated post-processing. On standard datasets including ICDAR2015 and MSRA-TD500, the proposed algorithm achieves state-of-the-art performance with impressive speed (8.8 FPS on ICDAR2015 and 13.3 FPS on MSRA-TD500).

    • #4861
      Detecting Robust Co-Saliency with Recurrent Co-Attention Neural Network
      Bo Li, Zhengxing Sun, Lv Tang, Yunhan Sun, Jinlong Shi
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 1

      Effective feature representations, which should not only express each image's individual properties but also reflect the interactions among images in a group, are crucial for robust co-saliency detection. This paper proposes a novel deep learning co-saliency detection approach which simultaneously learns single-image properties and a robust group feature in a recurrent manner. Specifically, our network first extracts the semantic features of each image. Then, a specially designed Recurrent Co-Attention Unit (RCAU) explores all images in the group recurrently to generate the final group representation using the co-attention between images, while suppressing noisy information. The group feature, which contains complementary synergetic information, is later merged with the single-image features, which express the unique properties, to infer robust co-saliency. We also propose a novel co-perceptual loss to make full use of the interactive relationships of all images in the training group as the supervision in our end-to-end training process. Extensive experimental results demonstrate the superiority of our approach in comparison with state-of-the-art methods.

    • #3859
      Deep Recurrent Quantization for Generating Sequential Binary Codes
      Jingkuan Song, Xiaosu Zhu, Lianli Gao, Xin-Shun Xu, Wu Liu, Heng Tao Shen
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 1

      Quantization has been an effective technology in ANN (approximate nearest neighbour) search due to its high accuracy and fast search speed. To meet the requirement of different applications, there is always a trade-off between retrieval accuracy and speed, reflected by variable code lengths. However, to encode the dataset into different code lengths, existing methods need to train several models, where each model can only produce a specific code length. This incurs a considerable training time cost, and largely reduces the flexibility of quantization methods to be deployed in real applications. To address this issue, we propose a Deep Recurrent Quantization (DRQ) architecture which can generate sequential binary codes. As a result, once the model is trained, a sequence of binary codes can be generated and the code length can be easily controlled by adjusting the number of recurrent iterations. A shared codebook and a scalar factor are designed to be the learnable weights in the deep recurrent quantization block, and the whole framework can be trained in an end-to-end manner. As far as we know, this is the first quantization method that can be trained once and generate sequential binary codes. Experimental results on the benchmark datasets show that our model achieves comparable or even better performance compared with the state-of-the-art for image retrieval. Moreover, it requires significantly fewer parameters and less training time. Our code is published online: https://github.com/cfm-uestc/DRQ.
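      A minimal sketch of recurrent quantization with a single shared codebook is given below: each iteration greedily picks one scaled codeword and subtracts it from the residual, so more iterations yield a longer code and a finer reconstruction. The codebook, per-step scales and greedy assignment are assumptions for illustration; DRQ learns these components end-to-end and emits binary codes.

```python
import numpy as np

def recurrent_quantize(x, codebook, scales, iters=4):
    """Greedy residual quantization with one shared codebook.

    Each recurrent step appends one code index; truncating the loop gives
    a shorter code, running it longer gives a finer reconstruction.
    """
    residual, codes, recon = x.copy(), [], np.zeros_like(x)
    for t in range(iters):
        dists = np.linalg.norm(residual[None, :] - scales[t] * codebook, axis=1)
        idx = int(np.argmin(dists))
        codes.append(idx)
        recon += scales[t] * codebook[idx]
        residual = x - recon
    return codes, recon

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 8))     # 16 shared codewords of dim 8
scales = 0.5 ** np.arange(4)                # shrinking per-step scalar factor
x = rng.standard_normal(8)
codes, recon = recurrent_quantize(x, codebook, scales)
print(codes, np.linalg.norm(x - recon))
```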

    Tuesday 13 15:00 - 16:00 ML|C - Classification 2 (2503-2504)

    Chair: Lianhua Chi
    • #222
      Learning Sound Events from Webly Labeled Data
      Anurag Kumar, Ankit Shah, Alexander Hauptmann, Bhiksha Raj
      Details | PDF
      Classification 2

      In the last couple of years, weakly labeled learning has turned out to be an exciting approach for audio event detection. In this work, we introduce webly labeled learning for sound events which aims to remove human supervision altogether from the learning process. We first develop a method of obtaining labeled audio data from the web (albeit noisy), in which no manual labeling is involved. We then describe methods to efficiently learn from these webly labeled audio recordings. In our proposed system, WeblyNet, two deep neural networks co-teach each other to robustly learn from webly labeled data, leading to around 17% relative improvement over the baseline method. The method also involves transfer learning to obtain efficient representations.

    • #2957
      Persistence Bag-of-Words for Topological Data Analysis
      Bartosz Zieliński, Michał Lipiński, Mateusz Juda, Matthias Zeppelzauer, Paweł Dłotko
      Details | PDF
      Classification 2

      Persistent homology (PH) is a rigorous mathematical theory that provides a robust descriptor of data in the form of persistence diagrams (PDs). PDs exhibit, however, complex structure and are difficult to integrate in today's machine learning workflows. This paper introduces persistence bag-of-words: a novel and stable vectorized representation of PDs that enables the seamless integration with machine learning. Comprehensive experiments show that the new representation achieves state-of-the-art performance and beyond in much less time than alternative approaches.

    • #4977
      Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
      Pengcheng Li, Jinfeng Yi, Bowen Zhou, Lijun Zhang
      Details | PDF
      Classification 2

      Recent studies have highlighted that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we improve the robustness of DNNs by utilizing techniques of Distance Metric Learning. Specifically, we incorporate Triplet Loss, one of the most popular Distance Metric Learning methods, into the framework of adversarial training. Our proposed algorithm, Adversarial Training with Triplet Loss (AT2L), substitutes the adversarial example against the current model for the anchor of triplet loss to effectively smooth the classification boundary. Furthermore, we propose an ensemble version of AT2L, which aggregates different attack methods and model structures for better defense effects. Our empirical studies verify that the proposed approach can significantly improve the robustness of DNNs without sacrificing accuracy. Finally, we demonstrate that our specially designed triplet loss can also be used as a regularization term to enhance other defense methods.
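      The core AT2L idea, using an adversarial example as the anchor of a triplet, can be sketched as follows; the embedding function, the FGSM-style perturbation and the margin are toy stand-ins rather than the paper's actual model.

```python
import numpy as np

def embed(x):
    return np.tanh(x)  # toy stand-in for an embedding network

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss with Euclidean distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def fgsm_perturb(x, grad, eps=0.1):
    """Hypothetical FGSM-style perturbation in input space."""
    return x + eps * np.sign(grad)

# The adversarial example of a clean input serves as the anchor, the clean
# same-class sample as the positive, and a sample from another class as the
# negative (the gradient here is random, standing in for a real loss gradient).
x_clean = np.random.randn(8)
grad = np.random.randn(8)
x_adv = fgsm_perturb(x_clean, grad)
x_other = np.random.randn(8) + 3.0

loss = triplet_loss(embed(x_adv), embed(x_clean), embed(x_other))
print("AT2L-style triplet term:", loss)
```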

    • #537
      Graph and Autoencoder Based Feature Extraction for Zero-shot Learning
      Yang Liu, Deyan Xie, Quanxue Gao, Jungong Han, Shujian Wang, Xinbo Gao
      Details | PDF
      Classification 2

      Zero-shot learning (ZSL) aims to build models to recognize novel visual categories that have no associated labelled training samples. The basic framework is to transfer knowledge from seen classes to unseen classes by learning the visual-semantic embedding. However, most approaches do not preserve the underlying sub-manifold of samples in the embedding space. In addition, whether the mapping can precisely reconstruct the original visual feature is not investigated in-depth. In order to solve these problems, we formulate a novel framework named Graph and Autoencoder Based Feature Extraction (GAFE) to seek a low-rank mapping to preserve the sub-manifold of samples. Taking the encoder-decoder paradigm, the encoder part learns a mapping from the visual feature to the semantic space, while the decoder part reconstructs the original features with the learned mapping. In addition, a graph is constructed to guarantee the learned mapping can preserve the local intrinsic structure of the data. To this end, an L21 norm sparsity constraint is imposed on the mapping to identify features relevant to the target domain. Extensive experiments on five attribute datasets demonstrate the effectiveness of the proposed model.

    Tuesday 13 15:00 - 16:00 ML|DM - Data Mining 2 (2505-2506)

    Chair: Ming Li
    • #653
      DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis
      Sunny Verma, Chen Wang, Liming Zhu, Wei Liu
      Details | PDF
      Data Mining 2

      Multimodal sentiment analysis combines information available from visual, textual, and acoustic representations for sentiment prediction. Recent multimodal fusion schemes combine multiple modalities as a tensor and obtain either the common information by utilizing neural networks, or the unique information by modeling a low-rank representation of the tensor. However, both types of information are essential, as they capture the inter-modal and intra-modal relationships of the data. In this research, we first propose a novel deep architecture to extract the common information from the multi-mode representations. Furthermore, we propose unique networks to obtain the modality-specific information that enhances the generalization performance of our multimodal system. Finally, we integrate these two aspects of information via a fusion layer and propose a novel multimodal data fusion architecture, which we call DeepCU (Deep network with both Common and Unique latent information). The proposed DeepCU consolidates the two networks for joint utilization and discovery of all-important latent information. Comprehensive experiments are conducted to demonstrate the effectiveness of utilizing both common and unique information discovered by DeepCU on multiple real-world datasets. The source code of the proposed DeepCU is available at https://github.com/sverma88/DeepCU-IJCAI19.

    • #3577
      Commit Message Generation for Source Code Changes
      Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, Hanghang Tong, Jian Lu
      Details | PDF
      Data Mining 2

      Commit messages, which summarize the source code changes in natural language, are essential for program comprehension and software evolution understanding. Unfortunately, due to the lack of direct motivation, commit messages are sometimes neglected by developers, making it necessary to automatically generate such messages. State-of-the-art work adopts learning-based approaches, such as neural machine translation models, for the commit message generation problem. However, they tend to ignore the code structure information and suffer from the out-of-vocabulary issue. In this paper, we propose CoDiSum to address the above two limitations. In particular, we first extract both code structure and code semantics from the source code changes, and then jointly model these two sources of information so as to better learn the representations of the code changes. Moreover, we augment the model with a copying mechanism to further mitigate the out-of-vocabulary issue. Experimental evaluations on real data demonstrate that the proposed approach significantly outperforms the state-of-the-art in terms of accurately generating the commit messages.

    • #6385
      Recommending Links to Maximize the Influence in Social Networks
      Federico Corò, Gianlorenzo D'Angelo, Yllka Velaj
      Details | PDF
      Data Mining 2

      Social link recommendation systems, like "People-you-may-know" on Facebook, "Who-to-follow" on Twitter, and "Suggested-Accounts" on Instagram, assist the users of a social network in establishing new connections with other users. While these systems are becoming more and more important in the growth of social media, they tend to increase the popularity of users that are already popular. Indeed, since link recommenders aim at predicting users' behavior, they accelerate the creation of links that are likely to be created in the future, and, as a consequence, they reinforce social biases by suggesting few (popular) users, while giving few chances to the majority of users to build new connections and increase their popularity. In this paper, we measure the popularity of a user by means of its social influence, which is its capability to influence other users' opinions, and we propose a link recommendation algorithm that evaluates the links to suggest according to their increment in social influence instead of their likelihood of being created. In detail, we give a constant factor approximation algorithm for the problem of maximizing the social influence of a given set of target users by suggesting a fixed number of new connections. We experimentally show that, with few new links and small computational time, our algorithm is able to substantially increase the social influence of the target users. We compare our algorithm with several baselines and show that it is the most effective one in terms of increased influence.
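      A generic greedy sketch of influence-driven link recommendation is shown below: candidate links are added one at a time according to their marginal gain in the simulated influence of the target users. The independent cascade model, Monte Carlo estimation and toy graph are assumptions made for illustration; the paper's algorithm and approximation guarantee are specific to its own formulation.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random):
    """One Monte Carlo run of the independent cascade model."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

def influence(graph, seeds, runs=200):
    return sum(independent_cascade(graph, seeds) for _ in range(runs)) / runs

def greedy_link_recommendation(graph, targets, candidates, k):
    """Greedily add k directed links maximising the targets' simulated influence."""
    g = {u: list(vs) for u, vs in graph.items()}
    chosen = []
    for _ in range(k):
        best = max(candidates,
                   key=lambda e: influence({**g, e[0]: g.get(e[0], []) + [e[1]]},
                                           targets))
        g[best[0]] = g.get(best[0], []) + [best[1]]
        chosen.append(best)
        candidates = [e for e in candidates if e != best]
    return chosen

graph = {0: [1], 1: [2], 2: [], 3: [4], 4: []}
print(greedy_link_recommendation(graph, targets=[0],
                                 candidates=[(0, 3), (0, 4), (2, 3)], k=2))
```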

    • #5247
      Fairwalk: Towards Fair Graph Embedding
      Tahleen Rahman, Bartlomiej Surma, Michael Backes, Yang Zhang
      Details | PDF
      Data Mining 2

      Graph embeddings have gained huge popularity in recent years as a powerful tool to analyze social networks. However, no prior work has studied potential bias issues inherent within graph embedding. In this paper, we make a first attempt in this direction. In particular, we concentrate on the fairness of node2vec, a popular graph embedding method. Our analyses on two real-world datasets demonstrate the existence of bias in node2vec when used for friendship recommendation. We, therefore, propose a fairness-aware embedding method, namely Fairwalk, which extends node2vec. Experimental results demonstrate that Fairwalk reduces bias under multiple fairness metrics while still preserving the utility.
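      The group-aware walk idea behind Fairwalk can be sketched as follows: at each step the walker first picks a sensitive-attribute group uniformly among those present in the neighbourhood, and then a neighbour uniformly within that group, so small groups are not drowned out by large ones. This is an illustrative sketch, not the authors' implementation.

```python
import random
from collections import defaultdict

def fairwalk_step(graph, attrs, node, rng=random):
    """One step of a group-aware random walk: choose a sensitive-attribute
    group uniformly among those present in the neighbourhood, then a
    neighbour uniformly inside that group."""
    groups = defaultdict(list)
    for nbr in graph[node]:
        groups[attrs[nbr]].append(nbr)
    chosen_group = rng.choice(list(groups.values()))
    return rng.choice(chosen_group)

# Tiny toy graph with a binary sensitive attribute: node 4 (group "b") gets
# the same chance as the whole of group "a" when walking away from node 0.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
attrs = {0: "a", 1: "a", 2: "a", 3: "a", 4: "b"}
walk = [0]
for _ in range(5):
    walk.append(fairwalk_step(graph, attrs, walk[-1]))
print(walk)
```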

    Tuesday 13 15:00 - 16:00 AMS|ML - Multi-agent Learning 1 (2401-2402)

    Chair: Jianye Hao
    • #1063
      Value Function Transfer for Deep Multi-Agent Reinforcement Learning Based on N-Step Returns
      Yong Liu, Yujing Hu, Yang Gao, Yingfeng Chen, Changjie Fan
      Details | PDF
      Multi-agent Learning 1

      Many real-world problems, such as robot control and soccer games, are naturally modeled as sparse-interaction multi-agent systems. Reutilizing single-agent knowledge in multi-agent systems with sparse interactions can greatly accelerate the multi-agent learning process. Previous works rely on the bisimulation metric to define Markov decision process (MDP) similarity for controlling knowledge transfer. However, the bisimulation metric is costly to compute and is not suitable for high-dimensional state space problems. In this work, we propose more scalable transfer learning methods based on a novel MDP similarity concept. We start by defining the MDP similarity based on the N-step return (NSR) values of an MDP. Then, we propose two knowledge transfer methods based on deep neural networks called direct value function transfer and NSR-based value function transfer. We conduct experiments in an image-based grid world, the multi-agent particle environment (MPE) and the Ms. Pac-Man game. The results indicate that the proposed methods can significantly accelerate multi-agent reinforcement learning while achieving better asymptotic performance.

    • #2168
      Decentralized Optimization with Edge Sampling
      Chi Zhang, Qianxiao Li, Peilin Zhao
      Details | PDF
      Multi-agent Learning 1

      In this paper, we propose a decentralized distributed algorithm with stochastic communication among nodes, building on a sampling method called "edge sampling". Such a sampling algorithm allows us to avoid the heavy peer-to-peer communication cost when combining neighboring weights on dense networks while still maintaining a comparable convergence rate. In particular, we quantitatively analyze its theoretical convergence properties, as well as the optimal sampling rate over the underlying network. When compared with previous methods, our solution is shown to be unbiased and communication-efficient, and to have lower sampling variance. These theoretical findings are validated by both numerical experiments on the mixing rates of Markov Chains and distributed machine learning problems.
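      A toy sketch of the idea is given below: in each gossip round every edge is kept independently with probability p, and the 1/p correction keeps the expected update equal to the full all-edges step. The step size, correction and ring topology are assumptions made only for the example, not the paper's exact scheme.

```python
import numpy as np

def sampled_gossip_round(x, edges, p, alpha, rng):
    """One gossip-averaging round in which each edge is kept with probability p.

    Dividing by p keeps the expected update equal to the full (all-edges)
    gossip step; step size and topology are illustrative choices.
    """
    update = np.zeros_like(x)
    for i, j in edges:
        if rng.random() < p:                     # edge is sampled this round
            diff = x[j] - x[i]
            update[i] += (alpha / p) * diff
            update[j] -= (alpha / p) * diff
    return x + update

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3))                  # 4 nodes, 3-dim local params
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]         # ring network
for _ in range(50):
    x = sampled_gossip_round(x, edges, p=0.5, alpha=0.2, rng=rng)
print(np.abs(x - x.mean(axis=0)).max())          # disagreement shrinks toward 0
```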

    • #2581
      Exploring the Task Cooperation in Multi-goal Visual Navigation
      Yuechen Wu, Zhenhuan Rao, Wei Zhang, Shijian Lu, Weizhi Lu, Zheng-Jun Zha
      Details | PDF
      Multi-agent Learning 1

      Learning to adapt to a series of different goals in visual navigation is challenging. In this work, we present a model-embedded actor-critic architecture for the multi-goal visual navigation task. To enhance the task cooperation in multi-goal learning, we introduce two new designs to the reinforcement learning scheme: inverse dynamics model (InvDM) and multi-goal co-learning (MgCl). Specifically, InvDM is proposed to capture the navigation-relevant association between state and goal, and provide additional training signals to relieve the sparse reward issue. MgCl aims at improving the sample efficiency and supports the agent to learn from unintentional positive experiences. Extensive results on the interactive platform AI2-THOR demonstrate that the proposed method converges faster than state-of-the-art methods while producing more direct routes to navigate to the goal. The video demonstration is available at: https://youtube.com/channel/UCtpTMOsctt3yPzXqe_JMD3w/videos.

    • #2679
      Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent
      Edward Lockhart, Marc Lanctot, Julien Pérolat, Jean-Baptiste Lespiau, Dustin Morrill, Finbarr Timbers, Karl Tuyls
      Details | PDF
      Multi-agent Learning 1

      In this paper, we present exploitability descent, a new algorithm to compute approximate equilibria in two-player zero-sum extensive-form games with imperfect information, by direct policy optimization against worst-case opponents. We prove that when following this optimization, the exploitability of a player's strategy converges asymptotically to zero, and hence when both players employ this optimization, the joint policies converge to a Nash equilibrium. Unlike fictitious play (XFP) and counterfactual regret minimization (CFR), our convergence result pertains to the policies being optimized rather than the average policies. Our experiments demonstrate convergence rates comparable to XFP and CFR in four benchmark games in the tabular case. Using function approximation, we find that our algorithm outperforms the tabular version in two of the games, which, to the best of our knowledge, is the first such result in imperfect information games among this class of algorithms.

    Tuesday 13 15:00 - 16:00 NLP|NLPAT - NLP Applications and Tools (2403-2404)

    Chair: Vincent Ng
    • #3015
      Learning Assistance from an Adversarial Critic for Multi-Outputs Prediction
      Yue Deng, Yilin Shen, Hongxia Jin
      Details | PDF
      NLP Applications and Tools

      We introduce an adversarial-critic-and-assistant (ACA) learning framework to improve the performance of existing supervised learning with multiple outputs. The core contribution of our ACA is the innovation of two novel modules, i.e. an `adversarial critic' and a `collaborative assistant', that are jointly designed to provide augmenting information for facilitating general learning tasks. Our approach is not intended to be regarded as an emerging competitor for tons of well-established algorithms in the field. In fact, most existing approaches, while implemented with different learning objectives, can all be adopted as building blocks seamlessly integrated in the ACA framework to accomplish various real-world tasks. We show the performance and generalization ability of ACA on diverse learning tasks including multi-label classification, attributes prediction and sequence-to-sequence generation.

    • #3373
      Answering Binary Causal Questions Through Large-Scale Text Mining: An Evaluation Using Cause-Effect Pairs from Human Experts
      Oktie Hassanzadeh, Debarun Bhattacharjya, Mark Feblowitz, Kavitha Srinivas, Michael Perrone, Shirin Sohrabi, Michael Katz
      Details | PDF
      NLP Applications and Tools

      In this paper, we study the problem of answering questions of type "Could X cause Y?" where X and Y are general phrases without any constraints. Answering such questions will assist with various decision analysis tasks such as verifying and extending presumed causal associations used for decision making. Our goal is to analyze the ability of an AI agent built using state-of-the-art unsupervised methods in answering causal questions derived from collections of cause-effect pairs from human experts. We focus only on unsupervised and weakly supervised methods due to the difficulty of creating a large enough training set with a reasonable quality and coverage. The methods we examine rely on a large corpus of text derived from news articles, and include methods ranging from large-scale application of classic NLP techniques and statistical analysis to the use of neural network based phrase embeddings and state-of-the-art neural language models.

    • #6048
      Aligning Learning Outcomes to Learning Resources: A Lexico-Semantic Spatial Approach
      Swarnadeep Saha, Malolan Chetlur, Tejas Indulal Dhamecha, W M Gayathri K Wijayarathna, Red Mendoza, Paul Gagnon, Nabil Zary, Shantanu Godbole
      Details | PDF
      NLP Applications and Tools

      Aligning Learning Outcomes (LO) to relevant portions of Learning Resources (LR) is necessary to help students quickly navigate within the recommended learning material. In general, the problem can be viewed as finding the relevant sections of a document (LR) that is pertinent to a broad question (LO). In this paper, we introduce the novel problem of aligning LOs (LO is usually a sentence long text) to relevant pages of LRs (LRs are in the form of slide decks). We observe that the set of relevant pages can be composed of multiple chunks (a chunk is a contiguous set of pages) and the same page of an LR might be relevant to multiple LOs. To this end, we develop a novel Lexico-Semantic Spatial approach that captures the lexical, semantic, and spatial aspects of the task, and also alleviates the limited availability of training data. Our approach first identifies the relevancy of a page to an LO by using lexical and semantic features from each page independently. The spatial model at a later stage exploits the dependencies between the sequence of pages in the LR to further improve the alignment task. We empirically establish the importance of the lexical, semantic, and spatial models within the proposed approach. We show that, on average, a student can navigate to a relevant page from the first predicted page by about four clicks within a 38 page slide deck, as compared to two clicks by human experts.

    • #5271
      Modeling Noisy Hierarchical Types in Fine-Grained Entity Typing: A Content-Based Weighting Approach
      Junshuang Wu, Richong Zhang, Yongyi Mao, Hongyu Guo, Jinpeng Huai
      Details | PDF
      NLP Applications and Tools

      Fine-grained entity typing (FET), which annotates the entities in a sentence with a set of finely specified type labels, often serves as the first and critical step towards many natural language processing tasks. Although great progress has been made, current FET methods have difficulty coping with the noisy labels that naturally come with the data acquisition process. Existing FET approaches either pre-process to clean the noise or simply focus on one of the noisy labels, sidestepping the fact that those noises are related and content dependent. In this paper, we directly model the structured, noisy labels with a novel content-sensitive weighting schema. Coupled with a newly devised cost function and a hierarchical type embedding strategy, our method leverages a random walk process to effectively weight out noisy labels during training. Experiments on several benchmark datasets validate the effectiveness of the proposed framework and establish it as a new state-of-the-art strategy for the noisy entity typing problem.

    Tuesday 13 15:00 - 16:00 MLA|BM - Bio;Medicine (2405-2406)

    Chair: Guoxian Yu
    • #1408
      FSM: A Fast Similarity Measurement for Gene Regulatory Networks via Genes' Influence Power
      Zhongzhou Liu, Wenbin Hu
      Details | PDF
      Bio;Medicine

      The problem of graph similarity measurement is fundamental in both complex network and bioinformatics research. Gene regulatory networks (GRNs) describe the interactions between the molecules in organisms, and are widely studied in the field of medical AI. By measuring the similarity between GRNs, significant information can be obtained to assist applications like gene function prediction, drug development and medical diagnosis. Most existing similarity measurements focus on graph isomorphism and are usually NP-hard. Thus, they are not suitable for applications in biology and clinical research due to the complexity and large scale of real-world GRNs. In this paper, a fast similarity measurement method for GRNs called FSM is proposed. Unlike conventional measurements, it pays more attention to the differences between influential genes. For convenience and reliability, a new index, influence power, is adopted to describe influential genes that occupy more important positions in a GRN. FSM was applied to nine datasets of various scales and compared with state-of-the-art methods. The results demonstrated that it ran significantly faster than other methods without sacrificing measurement performance.

    • #2106
      MLRDA: A Multi-Task Semi-Supervised Learning Framework for Drug-Drug Interaction Prediction
      Xu Chu, Yang Lin, Yasha Wang, Leye Wang, Jiangtao Wang, Jingyue Gao
      Details | PDF
      Bio;Medicine

      Drug-drug interactions (DDIs) are a major cause of preventable hospitalizations and deaths. Recently, researchers in the AI community try to improve DDI prediction in two directions, incorporating multiple drug features to better model the pharmacodynamics and adopting multi-task learning to exploit associations among DDI types. However, these two directions are challenging to reconcile due to the sparse nature of the DDI labels which inflates the risk of overfitting of multi-task learning models when incorporating multiple drug features. In this paper, we propose a multi-task semi-supervised learning framework MLRDA for DDI prediction. MLRDA effectively exploits information that is beneficial for DDI prediction in unlabeled drug data by leveraging a novel unsupervised disentangling loss CuXCov. The CuXCov loss cooperates with the classification loss to disentangle the DDI prediction relevant part from the irrelevant part in a representation learnt by an autoencoder, which helps to ease the difficulty in mining useful information for DDI prediction in both labeled and unlabeled drug data. Moreover, MLRDA adopts a multi-task learning framework to exploit associations among DDI types. Experimental results on real-world datasets demonstrate that MLRDA significantly outperforms state-of-the-art DDI prediction methods by up to 10.3% in AUPR.

    • #3567
      Medical Concept Embedding with Multiple Ontological Representations
      Lihong Song, Chin Wang Cheong, Kejing Yin, William K. Cheung, Benjamin C. M. Fung, Jonathan Poon
      Details | PDF
      Bio;Medicine

      Learning representations of medical concepts from the Electronic Health Records (EHR) has been shown effective for predictive analytics in healthcare. Incorporation of medical ontologies has also been explored to further enhance the accuracy and to ensure better alignment with the known medical knowledge. Most of the existing work assumes that medical concepts under the same ontological category should share similar representations, which however does not always hold. In particular, the categorizations in medical ontologies were established with various factors being considered. Medical concepts even under the same ontological category may not follow similar occurrence patterns in the EHR data, leading to contradicting objectives for the representation learning. In this paper, we propose a deep learning model called MMORE which alleviates this conflicting objective issue by allowing multiple representations to be inferred for each ontological category via an attention mechanism. We apply MMORE to diagnosis prediction and our experimental results show that the representations obtained by MMORE can achieve better predictive accuracy and result in clinically meaningful sub-categorization of the existing ontological categories.

    • #568
      Two-Stage Generative Models of Simulating Training Data at The Voxel Level for Large-Scale Microscopy Bioimage Segmentation
      Deli Wang, Ting Zhao, Nenggan Zheng, Zhefeng Gong
      Details | PDF
      Bio;Medicine

      Bioimage Informatics is a growing area that aims to extract biological knowledge from microscope images of biomedical samples automatically. Its mission is vastly challenging, however, due to the complexity of diverse imaging modalities and the large scale of multi-dimensional images. One major challenge is automatic image segmentation, an essential step towards high-level modeling and analysis. While progress in deep learning has brought the goal of automation much closer to reality, creating training data for producing powerful neural networks is often laborious. To provide a shortcut for this costly step, we propose a novel two-stage generative model for simulating voxel-level training data based on a specially designed objective function of preserving foreground labels. Using segmenting neurons from LM (Light Microscopy) image stacks as a testing example, we showed that segmentation networks trained by our synthetic data were able to produce satisfactory results. Unlike other simulation methods available in the field, our method can be easily extended to many other applications because it does not involve sophisticated cell models and imaging mechanisms.

    Tuesday 13 15:00 - 17:00 Competition (2305)


  • Macao AI Challenge for High School Students
    Competition
    Tuesday 13 16:30 - 17:30 ST: Human AI & ML 1 - Special Track on Human AI and Machine Learning 2 (J)

    Chair: Chao Yu
    • #4017
      How Well Do Machines Perform on IQ tests: a Comparison Study on a Large-Scale Dataset
      Yusen Liu, Fangyuan He, Haodi Zhang, Guozheng Rao, Zhiyong Feng, Yi Zhou
      Details | PDF
      Special Track on Human AI and Machine Learning 2

      AI benchmarking is becoming an increasingly important task. As suggested by many researchers, Intelligence Quotient (IQ) tests, which are widely regarded as one of the predominant benchmarks for measuring human intelligence, raise an interesting challenge for AI systems. To better solve IQ tests automatically by machines, one needs to use, combine and advance many areas of AI including knowledge representation and reasoning, machine learning, natural language processing and image understanding. Also, automated IQ tests provide an ideal testbed for integrating symbolic and sub-symbolic approaches as both are found useful here. Hence, we argue that IQ tests, although not suitable for testing machine intelligence, provide an excellent benchmark for the current development of AI research. Nevertheless, most existing IQ test datasets are not comprehensive enough for this purpose. As a result, the conclusions obtained are not representative. To address this issue, we create IQ10k, a large-scale dataset that contains more than 10,000 IQ test questions. We also conduct a comparison study on IQ10k with a number of state-of-the-art approaches.

    • #4970
      Learning Relational Representations with Auto-encoding Logic Programs
      Sebastijan Dumancic, Tias Guns, Wannes Meert, Hendrik Blockeel
      Details | PDF
      Special Track on Human AI and Machine Learning 2

      Deep learning methods capable of handling relational data have proliferated over the past years. In contrast to traditional relational learning methods that leverage first-order logic for representing such data, these methods aim at re-representing symbolic relational data in Euclidean space. They offer better scalability, but can only approximate rich relational structures and are less flexible in terms of reasoning. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. This framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and latent representations is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (as the representation is exact, rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks.

    • #5902
      Learning Hierarchical Symbolic Representations to Support Interactive Task Learning and Knowledge Transfer
      James R. Kirk, John E. Laird
      Details | PDF
      Special Track on Human AI and Machine Learning 2

      Interactive Task Learning (ITL) focuses on learning the definition of tasks through online natural language instruction in real time. Learning the correct grounded meaning of the instructions is difficult due to ambiguous words, lack of common ground, and the presence of distractors in the environment and the agent’s knowledge. We present a learning strategy embodied in an ITL agent that interactively learns in one shot the meaning of task concepts for 40 games and puzzles in ambiguous scenarios. Our approach learns hierarchical symbolic representations of task knowledge rather than learning a mapping directly from perceptual representations. These representations enable the agent to transfer and compose knowledge, analyze and debug multiple interpretations, and communicate efficiently with the teacher to resolve ambiguity. We evaluate the efficiency of the learning by examining the number of words required to teach tasks across cases of no transfer, positive transfer, and interference from prior tasks. Our results show that the agent can correctly generalize, disambiguate, and transfer concepts within variations in language descriptions and world representations of the same task, and across variations in different tasks.

    • #6050
      LTL and Beyond: Formal Languages for Reward Function Specification in Reinforcement Learning
      Alberto Camacho, Rodrigo Toro Icarte, Toryn Q. Klassen, Richard Valenzano, Sheila A. McIlraith
      Details | PDF
      Special Track on Human AI and Machine Learning 2

      In Reinforcement Learning (RL), an agent is guided by the rewards it receives from the reward function. Unfortunately, it may take many interactions with the environment to learn from sparse rewards, and it can be challenging to specify reward functions that reflect complex reward-worthy behavior. We propose using reward machines (RMs), which are automata-based representations that expose reward function structure, as a normal form representation for reward functions. We show how specifications of reward in various formal languages, including LTL and other regular languages, can be automatically translated into RMs, easing the burden of complex reward function specification. We then show how the exposed structure of the reward function can be exploited by tailored q-learning algorithms and automated reward shaping techniques in order to improve the sample efficiency of reinforcement learning methods. Experiments show that these RM-tailored techniques significantly outperform state-of-the-art (deep) RL algorithms, solving problems that otherwise cannot reasonably be solved by existing approaches.
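      A reward machine can be sketched as a tiny automaton over propositional events that emits rewards on transitions; the two-step task encoded below ("see a, then b") is a made-up example rather than one from the paper.

```python
class RewardMachine:
    """A tiny reward machine: an automaton over propositional events that
    emits a reward on each transition (example task: "see a, then b")."""

    def __init__(self):
        # transitions: (state, event) -> (next_state, reward)
        self.delta = {
            ("u0", "a"): ("u1", 0.0),   # subgoal a reached, now wait for b
            ("u1", "b"): ("u2", 1.0),   # task complete, reward 1
        }
        self.state = "u0"

    def step(self, event):
        self.state, reward = self.delta.get((self.state, event),
                                            (self.state, 0.0))
        return reward

rm = RewardMachine()
for event in ["c", "a", "c", "b"]:      # a stream of detected events
    print(event, rm.step(event), rm.state)
```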

    Tuesday 13 16:30 - 18:00 ML|DL - Deep Learning 3 (L)

    Chair: Vineeth N Balasubramanian
    • #61
      Heterogeneous Graph Matching Networks for Unknown Malware Detection
      Shen Wang, Zhengzhang Chen, Xiao Yu, Ding Li, Jingchao Ni, Lu-An Tang, Jiaping Gui, Zhichun Li, Haifeng Chen, Philip S. Yu
      Details | PDF
      Deep Learning 3

      Information systems have widely been the target of malware attacks. Traditional signature-based malicious program detection algorithms can only detect known malware and are prone to evasion techniques such as binary obfuscation, while behavior-based approaches highly rely on the malware training samples and incur prohibitively high training cost. To address the limitations of existing techniques, we propose MatchGNet, a heterogeneous Graph Matching Network model to learn the graph representation and similarity metric simultaneously based on the invariant graph modeling of the program's execution behaviors. We conduct a systematic evaluation of our model and show that it is accurate in detecting malicious program behavior and can help detect malware attacks with fewer false positives. MatchGNet outperforms the state-of-the-art algorithms in malware detection by generating 50% fewer false positives while keeping zero false negatives.

    • #3225
      On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Minimization
      Yi Xu, Zhuoning Yuan, Sen Yang, Rong Jin, Tianbao Yang
      Details | PDF
      Deep Learning 3

      Extrapolation is a well-known technique for solving convex optimization and variational inequalities, and has recently attracted attention for non-convex optimization. Several recent works have empirically shown its success in some machine learning tasks. However, it has not been analyzed for non-convex minimization, and there still remains a gap between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent with extrapolation for finding an approximate first-order stationary point in smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation converge faster than their counterparts without extrapolation.

    • #4207
      Learning Instance-wise Sparsity for Accelerating Deep Models
      Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu
      Details | PDF
      Deep Learning 3

      Exploring deep convolutional neural networks of high efficiency and low memory usage is essential for a wide variety of machine learning tasks. Most existing approaches accelerate deep models by manipulating parameters or filters without considering the data, e.g., pruning and decomposition. In contrast, we study this problem from a different perspective by respecting the differences between data instances. An instance-wise feature pruning is developed by identifying informative features for different instances. Specifically, by investigating a feature decay regularization, we expect intermediate feature maps of each instance in deep neural networks to be sparse while preserving the overall network performance. During online inference, subtle features of input images extracted by intermediate layers of a well-trained neural network can be eliminated to accelerate the subsequent calculations. We further take the coefficient of variation as a measure to select the layers that are appropriate for acceleration. Extensive experiments conducted on benchmark datasets and networks demonstrate the effectiveness of the proposed method.
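      The following sketch illustrates the two ingredients above on random data: zeroing out the weakest activations of a single instance's feature map, and using the coefficient of variation of per-instance activation magnitudes as a layer-selection signal. The thresholding rule and keep ratio are assumptions; the paper learns the sparsity through a feature decay regularizer rather than fixing it as here.

```python
import numpy as np

def prune_features(feat, keep_ratio=0.5):
    """Zero out the weakest activations of one instance's feature map."""
    thresh = np.quantile(np.abs(feat), 1.0 - keep_ratio)
    return np.where(np.abs(feat) >= thresh, feat, 0.0)

def coefficient_of_variation(acts):
    """CV (std / mean) of per-instance activation magnitudes for one layer."""
    mags = np.abs(acts).mean(axis=tuple(range(1, acts.ndim)))
    return mags.std() / mags.mean()

acts = np.random.rand(32, 16, 8, 8)              # a batch of feature maps
print("layer CV:", coefficient_of_variation(acts))
pruned = prune_features(acts[0])
print("kept activations:", np.count_nonzero(pruned), "of", acts[0].size)
```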

    • #4385
      Quaternion Collaborative Filtering for Recommendation
      Shuai Zhang, Lina Yao, Lucas Vinh Tran, Aston Zhang, Yi Tay
      Details | PDF
      Deep Learning 3

      This paper proposes Quaternion Collaborative Filtering (QCF), a novel representation learning method for recommendation. Our proposed QCF relies on and exploits computation with Quaternion algebra, benefiting from the expressiveness and rich representation learning capability of Hamilton products. Quaternion representations, based on hypercomplex numbers, enable rich inter-latent dependencies between imaginary components. This encourages intricate relations to be captured when learning user-item interactions, serving as a strong inductive bias  as compared with the real-space inner product. All in all, we conduct extensive experiments on six real-world datasets, demonstrating the effectiveness of Quaternion algebra in recommender systems. The results exhibit that QCF outperforms a wide spectrum of strong neural baselines on all datasets. Ablative experiments confirm the effectiveness of Hamilton-based composition over multi-embedding composition in real space. 
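      For reference, the Hamilton product that underlies this inter-component interaction is shown below; the scoring head that sums the product's components is an assumption made for illustration, as QCF may combine the result differently.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

# Toy user-item interaction: Hamilton product of quaternion embeddings,
# followed by summing the components as a stand-in scoring head.
user = np.random.randn(4)
item = np.random.randn(4)
print("quaternion interaction score:", hamilton_product(user, item).sum())
```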

    • #10977
      (Sister Conferences Best Papers Track) A Walkthrough for the Principle of Logit Separation
      Gil Keren, Sivan Sabato, Björn Schuller
      Details | PDF
      Deep Learning 3

      We consider neural network training, in applications in which there are many possible classes, but at test-time, the task is a binary classification task of determining whether the given example belongs to a specific class. We define the Single Logit Classification (SLC) task: training the network so that at test-time, it would be possible to accurately identify whether the example belongs to a given class in a computationally efficient manner, based only on the output logit for this class. We propose a natural principle, the Principle of Logit Separation, as a guideline for choosing and designing loss functions that are suitable for SLC. We show that the Principle of Logit Separation is a crucial ingredient for success in the SLC task, and that SLC results in considerable speedups when the number of classes is large.

    • #613
      Dense Transformer Networks for Brain Electron Microscopy Image Segmentation
      Jun Li, Yongjun Chen, Lei Cai, Ian Davidson, Shuiwang Ji
      Details | PDF
      Deep Learning 3

      The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by network architecture instead of learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules are inserted into each of the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, thus the entire network can be trained. We apply the proposed networks on biological image segmentation tasks and show superior performance is achieved in comparison to baseline methods.

    Tuesday 13 16:30 - 18:00 ML|RL - Reinforcement Learning 1 (2701-2702)

    Chair: Regis Sabbadin
    • #132
      Soft Policy Gradient Method for Maximum Entropy Deep Reinforcement Learning
      Wenjie Shi, Shiji Song, Cheng Wu
      Details | PDF
      Reinforcement Learning 1

      Maximum entropy deep reinforcement learning (RL) methods have been demonstrated on a range of challenging continuous tasks. However, existing methods either suffer from severe instability when training on large off-policy data or cannot scale to tasks with very high state and action dimensionality, such as 3D humanoid locomotion. Moreover, the optimality of the desired Boltzmann policy set for a non-optimal soft value function is not sufficiently justified. In this paper, we first derive the soft policy gradient based on an entropy-regularized expected reward objective for RL with continuous actions. Then, we present an off-policy actor-critic, model-free maximum entropy deep RL algorithm called deep soft policy gradient (DSPG) by combining the soft policy gradient with the soft Bellman equation. To ensure stable learning while eliminating the need for two separate critics for soft value functions, we leverage a double sampling approach to make the soft Bellman equation tractable. The experimental results demonstrate that our method outperforms prior off-policy methods.

    • #628
      Incremental Learning of Planning Actions in Model-Based Reinforcement Learning
      Jun Hao Alvin Ng, Ronald P. A. Petrick
      Details | PDF
      Reinforcement Learning 1

      The soundness and optimality of a plan depends on the correctness of the domain model. Specifying complete domain models can be difficult when interactions between an agent and its environment are complex. We propose a model-based reinforcement learning (MBRL) approach to solve planning problems with unknown models. The model is learned incrementally over episodes using only experiences from the current episode which suits non-stationary environments. We introduce the novel concept of reliability as an intrinsic motivation for MBRL, and a method to learn from failure to prevent repeated instances of similar failures. Our motivation is to improve the learning efficiency and goal-directedness of MBRL. We evaluate our work with experimental results for three planning domains.

    • #2825
      Autoregressive Policies for Continuous Control Deep Reinforcement Learning
      Dmytro Korenkevych, A. Rupam Mahmood, Gautham Vasan, James Bergstra
      Details | PDF
      Reinforcement Learning 1

      Reinforcement learning algorithms rely on exploration to discover new behaviors, which is typically achieved by following a stochastic policy. In continuous control tasks, policies with a Gaussian distribution have been widely adopted. Gaussian exploration, however, does not result in smooth trajectories that generally correspond to safe and rewarding behaviors in practical tasks. In addition, Gaussian policies do not result in an effective exploration of an environment and become increasingly inefficient as the action rate increases. This contributes to a low sample efficiency often observed in learning continuous control tasks. We introduce a family of stationary autoregressive (AR) stochastic processes to facilitate exploration in continuous control domains. We show that the proposed processes possess two desirable features: subsequent process observations are temporally coherent with a continuously adjustable degree of coherence, and the process stationary distribution is standard normal. We derive an autoregressive policy (ARP) that implements such processes while maintaining the standard agent-environment interface. We show how ARPs can be easily used with existing off-the-shelf learning algorithms. Empirically, we demonstrate that using ARPs results in improved exploration and sample efficiency in both simulated and real-world domains, and, furthermore, provides smooth exploration trajectories that enable safe operation of robotic hardware.
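      The construction behind such processes can be sketched with a scalar AR(1) recursion whose marginal stays standard normal for any |phi| < 1, with phi controlling temporal smoothness; extending this to full action vectors and general AR orders, as the paper does, is omitted here.

```python
import numpy as np

def ar1_noise(n_steps, phi=0.9, rng=None):
    """Stationary AR(1) exploration noise with a standard normal marginal.

    x_{t+1} = phi * x_t + sqrt(1 - phi^2) * eps_t keeps the stationary
    distribution N(0, 1) for any |phi| < 1; phi sets temporal smoothness.
    """
    rng = rng or np.random.default_rng()
    x = rng.standard_normal()                 # start from the stationary law
    samples = []
    for _ in range(n_steps):
        x = phi * x + np.sqrt(1.0 - phi ** 2) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

noise = ar1_noise(2000, phi=0.9)
print(noise.mean(), noise.std())              # close to 0 and 1, respectively
```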

    • #4369
      Sharing Experience in Multitask Reinforcement Learning
      Tung-Long Vuong, Do-Van Nguyen, Tai-Long Nguyen, Cong-Minh Bui, Hai-Dang Kieu, Viet-Cuong Ta, Quoc-Long Tran, Thanh-Ha Le
      Details | PDF
      Reinforcement Learning 1

      In multitask reinforcement learning, tasks often have sub-tasks that share the same solution, even though the overall tasks are different. If the shared portions could be effectively identified, then the learning process could be improved since all the samples between tasks in the shared space could be used. In this paper, we propose a Sharing Experience Framework (SEF) for simultaneous training of multiple tasks. In SEF, a confidence sharing agent uses task-specific rewards from the environment to identify similar parts that should be shared across tasks and defines those parts as shared-regions between tasks. The shared-regions are expected to guide task-policies in sharing their experience during the learning process. The experiments highlight that our framework improves the performance and the stability of learning task-policies, and can help task-policies avoid local optima.

    • #4757
      Adversarial Imitation Learning from Incomplete Demonstrations
      Mingfei Sun, Xiaojuan Ma
      Details | PDF
      Reinforcement Learning 1

      Imitation learning targets deriving a mapping from states to actions, a.k.a. policy, from expert demonstrations. Existing methods for imitation learning typically require any actions in the demonstrations to be fully available, which is hard to ensure in real applications. Though algorithms for learning with unobservable actions have been proposed, they focus solely on state information and overlook the fact that the action sequence could still be partially available and provide useful information for policy deriving. In this paper, we propose a novel algorithm called Action-Guided Adversarial Imitation Learning (AGAIL) that learns a policy from demonstrations with incomplete action sequences, i.e., incomplete demonstrations. The core idea of AGAIL is to separate demonstrations into state and action trajectories, and train a policy with state trajectories while using actions as auxiliary information to guide the training whenever applicable. Built upon the Generative Adversarial Imitation Learning, AGAIL has three components: a generator, a discriminator, and a guide. The generator learns a policy with rewards provided by the discriminator, which tries to distinguish state distributions between demonstrations and samples generated by the policy. The guide provides additional rewards to the generator when demonstrated actions for specific states are available. We compare AGAIL to other methods on benchmark tasks and show that AGAIL consistently delivers comparable performance to the state-of-the-art methods even when the action sequence in demonstrations is only partially available.

    • #4947
      A Restart-based Rank-1 Evolution Strategy for Reinforcement Learning
      Zefeng Chen, Yuren Zhou, Xiao-yu He, Siyu Jiang
      Details | PDF
      Reinforcement Learning 1

      Evolution strategies have demonstrated a strong ability to train deep neural networks and accomplish reinforcement learning tasks well. However, existing evolution strategies designed specifically for deep reinforcement learning only involve plain variants, which cannot adapt the mutation strength or employ other advanced techniques. Applying advanced and effective evolution strategies to reinforcement learning in an efficient way remains an open problem. To this end, this paper proposes a restart-based rank-1 evolution strategy for reinforcement learning. When training the neural network, it adapts the mutation strength and updates the principal search direction in a way similar to the momentum method, an ameliorated version of stochastic gradient ascent. Besides, two mechanisms, i.e., the adaptation of the number of elitists and the restart procedure, are integrated to deal with the issue of local optima. Experimental results on classic control problems and Atari games show that the proposed algorithm is superior to or competitive with state-of-the-art algorithms for reinforcement learning, demonstrating its effectiveness.

    Tuesday 13 16:30 - 18:00 AMS|CSC - Computational Social Choice 1 (2703-2704)

    Chair: Tamir Tassa
    • #748
      Flexible Representative Democracy: An Introduction with Binary Issues
      Ben Abramowitz, Nicholas Mattei
      Details | PDF
      Computational Social Choice 1

      We introduce Flexible Representative Democracy (FRD), a novel hybrid of Representative Democracy (RD) and Direct Democracy (DD), in which voters can alter the issue-dependent weights of a set of elected representatives. In line with the literature on Interactive Democracy, our model allows the voters to actively determine the degree to which the system is direct versus representative. However, unlike Liquid Democracy, FRD uses strictly non-transitive delegations, making delegation cycles impossible, preserving privacy and anonymity, and maintaining a fixed set of accountable elected representatives. We present FRD and analyze it using a computational approach with issues that are independent, binary, and symmetric; we compare the outcomes of various democratic systems using Direct Democracy with majority voting and full participation as an ideal baseline. We find through theoretical and empirical analysis that FRD can yield significant improvements over RD for emulating DD with full participation.
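
      A toy sketch of the comparison described above, for a single binary issue (the weighting rule used here, where each voter places unit weight on one agreeing representative, is an illustrative assumption rather than the paper's mechanism):

```python
# Toy, hedged comparison of Direct Democracy (DD), Representative Democracy (RD)
# and one assumed flexible-weighting rule on a single binary issue.
import numpy as np

rng = np.random.default_rng(1)
n_voters, n_reps = 1001, 11
voters = rng.integers(0, 2, n_voters)            # voters' preferences on the issue
reps = rng.integers(0, 2, n_reps)                # representatives' positions

dd_outcome = int(voters.sum() * 2 > n_voters)    # DD: majority of all voters
rd_outcome = int(reps.sum() * 2 > n_reps)        # RD: majority of representatives

# Assumed flexible weighting: each voter gives unit weight to an agreeing representative.
weights = np.zeros(n_reps)
for v in voters:
    agreeing = np.flatnonzero(reps == v)
    if agreeing.size:
        weights[rng.choice(agreeing)] += 1.0
frd_outcome = int((weights * reps).sum() * 2 > weights.sum())

# With both positions represented among the representatives, this FRD-style rule
# recovers the DD outcome, which RD alone may miss.
print(dd_outcome, rd_outcome, frd_outcome)
```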

    • #1527
      On Strategyproof Conference Peer Review
      Yichong Xu, Han Zhao, Xiaofei Shi, Nihar B. Shah
      Details | PDF
      Computational Social Choice 1

      We consider peer review in a conference setting where there are conflicts between the reviewers and the submissions. Under such conflicts, reviewers can manipulate their reviews in a strategic manner to influence the final rankings of their own papers. Present-day peer-review systems are not designed to guard against such strategic behavior, beyond minimal (and insufficient) checks such as not assigning a paper to a conflicted reviewer. In this work, we address this problem through the lens of social choice, and present a theoretical framework for strategyproof and efficient peer review. Given a conflict graph that satisfies a simple property, we first present and analyze a flexible framework for reviewer assignment and aggregation of the reviews that guarantees not only strategyproofness but also a natural efficiency property (unanimity). Our framework is based on the so-called partitioning method, and can be treated as a generalization of this type of method to conference peer-review settings. We then empirically show that the requisite property on the (authorship) conflict graph is indeed satisfied in the ICLR-17 submissions data, and further demonstrate a simple trick to make the partitioning method more practically appealing in conference peer-review settings. Finally, we complement our positive results with negative theoretical results, proving that under slightly stronger requirements, it is impossible for any algorithm to be both strategyproof and efficient.
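
      A hedged sketch of the basic partitioning idea (a simplification for illustration; the paper's framework handles assignment and aggregation in much more generality): reviewers are split into two groups so that every paper's authors fall within one group, and each paper is reviewed and ranked only by the other group, so a reviewer's report can never affect the ranking of their own papers.

```python
# Illustrative partition-based assignment; the grouping is assumed to be given and to
# satisfy the property that all authors of a paper lie in a single group.
def partition_assignment(papers, authors_of, group_of):
    """papers: list of paper ids; authors_of[p]: set of reviewer ids;
    group_of[r]: 0 or 1 for every reviewer r."""
    assignment = {}
    for p in papers:
        author_groups = {group_of[a] for a in authors_of[p]}
        assert len(author_groups) == 1, "conflict graph property violated"
        g = author_groups.pop()
        # the opposite group reviews (and ranks) this paper
        assignment[p] = [r for r in group_of if group_of[r] != g]
    return assignment

authors_of = {"P1": {"alice"}, "P2": {"bob"}, "P3": {"carol", "dave"}}
group_of = {"alice": 0, "bob": 1, "carol": 0, "dave": 0, "erin": 1}
print(partition_assignment(["P1", "P2", "P3"], authors_of, group_of))
```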

    • #5143
      A Contribution to the Critique of Liquid Democracy
      Ioannis Caragiannis, Evi Micha
      Details | PDF
      Computational Social Choice 1

      Liquid democracy, which combines features of direct and representative democracy, has been proposed as a modern practice for collective decision making. Its advocates argue that allowing voters to delegate their votes to more informed voters can result in better decisions. In an attempt to evaluate the validity of such claims, we study liquid democracy as a means to discover an underlying ground truth. We revisit a recent model by Kahng et al. [2018] and conclude with three negative results, criticizing an important assumption of their modeling, as well as liquid democracy more generally. In particular, we first identify cases where natural local mechanisms are much worse than either direct voting or the other extreme of full delegation to a common dictator. We then show that delegating to less informed voters may considerably increase the chance of discovering the ground truth. Finally, we show that deciding delegations that maximize the probability of finding the ground truth is a computationally hard problem.
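
      An illustrative simulation of the ground-truth setting (a simplified toy in the spirit of the model of Kahng et al. [2018], not the paper's constructions): each voter votes correctly with some probability, and direct majority voting is compared against full delegation to the single most competent voter.

```python
# Toy ground-truth simulation: direct majority voting vs. delegating everything
# to one "dictator" (the most competent voter).
import numpy as np

def p_majority_correct(p, trials=20000, rng=None):
    rng = rng or np.random.default_rng(0)
    votes = rng.random((trials, len(p))) < p     # True = voter votes for the ground truth
    return (votes.sum(axis=1) * 2 > len(p)).mean()

rng = np.random.default_rng(0)
p = rng.uniform(0.55, 0.7, size=101)             # moderately informed voters
print("direct majority:", p_majority_correct(p))           # benefits from aggregation
print("delegate all to the best voter:", p.max())          # loses the wisdom of the crowd
```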

    • #5250
      Protecting Elections by Recounting Ballots
      Edith Elkind, Jiarui Gan, Svetlana Obraztsova, Zinovi Rabinovich, Alexandros A. Voudouris
      Details | PDF
      Computational Social Choice 1

      Complexity of voting manipulation is a prominent topic in computational social choice. In this work, we consider a two-stage voting manipulation scenario. First, a malicious party (an attacker) attempts to manipulate the election outcome in favor of a preferred candidate by changing the vote counts in some of the voting districts. Afterwards, another party (a defender), which cares about the voters' wishes, demands a recount in a subset of the manipulated districts, restoring their vote counts to their original values. We investigate the resulting Stackelberg game for the case where votes are aggregated using two variants of the Plurality rule, and obtain an almost complete picture of the complexity landscape, both from the attacker's and from the defender's perspective.

    • #59
      Fair Allocation of Indivisible Goods and Chores
      Haris Aziz, Ioannis Caragiannis, Ayumi Igarashi, Toby Walsh
      Details | PDF
      Computational Social Choice 1

      We consider the problem of fairly dividing a set of items. Much of the fair division literature assumes that the items are "goods", i.e., they yield positive utility for the agents. There is also some work where the items are "chores" that yield negative utility for the agents. In this paper, we consider a more general scenario where an agent may have negative or positive utility for each item. This framework captures, e.g., fair task assignment, where agents can have both positive and negative utilities for each task. We show that whereas some of the positive axiomatic and computational results extend to this more general setting, others do not. We present several new and efficient algorithms for finding fair allocations in this general setting. We also point out several gaps in the literature regarding the existence of allocations satisfying certain fairness and efficiency properties and further study the complexity of computing such allocations.
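
      For illustration of the setting only (not the algorithms proposed in the paper): the sketch below allocates mixed goods and chores by a simple round-robin rule and checks envy-freeness up to one item (EF1), one of the standard fairness notions in this area; plain round-robin carries no fairness guarantee here.

```python
# Illustrative mixed goods/chores allocation and an EF1 check for additive utilities.
def round_robin(agents, items, util):
    """util[a][o] is agent a's (possibly negative) utility for item o."""
    bundles = {a: [] for a in agents}
    remaining = list(items)
    turn = 0
    while remaining:
        a = agents[turn % len(agents)]
        best = max(remaining, key=lambda o: util[a][o])   # a picks its favourite remaining item
        bundles[a].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

def is_ef1(agents, bundles, util):
    """i does not envy j after removing one item from i's or from j's bundle."""
    def val(a, items):
        return sum(util[a][o] for o in items)
    for i in agents:
        for j in agents:
            if i == j or val(i, bundles[i]) >= val(i, bundles[j]):
                continue
            ok = any(val(i, [o for o in bundles[i] if o != r]) >= val(i, bundles[j])
                     for r in bundles[i])
            ok = ok or any(val(i, bundles[i]) >= val(i, [o for o in bundles[j] if o != r])
                           for r in bundles[j])
            if not ok:
                return False
    return True

agents = ["a1", "a2"]
util = {"a1": {"g": 5, "c1": -2, "c2": -1}, "a2": {"g": 3, "c1": -1, "c2": -4}}
bundles = round_robin(agents, ["g", "c1", "c2"], util)
print(bundles, is_ef1(agents, bundles, util))
```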

    • #255
      Weighted Maxmin Fair Share Allocation of Indivisible Chores
      Haris Aziz, Hau Chan, Bo Li
      Details | PDF
      Computational Social Choice 1

      We initiate the study of indivisible chore allocation for agents with asymmetric shares. The fairness concepts we focus on are weighted natural generalizations of the maxmin share: WMMS fairness and OWMMS fairness. We first highlight the fact that commonly used algorithms that work well for the allocation of goods to asymmetric agents, and even for the allocation of chores to symmetric agents, do not provide good approximations for the allocation of chores to asymmetric agents under WMMS. As a consequence, we present a novel polynomial-time constant-approximation algorithm, via linear programming, for OWMMS. For two special cases, the binary valuation case and the 2-agent case, we provide exact or better constant-approximation algorithms.

    Tuesday 13 16:30 - 18:00 KRR|NR - Non-monotonic Reasoning (2705-2706)

    Chair: Eduardo Ferme
    • #2953
      Rational Inference Relations from Maximal Consistent Subsets Selection
      Sébastien Konieczny, Pierre Marquis, Srdjan Vesic
      Details | PDF
      Non-monotonic Reasoning

      When one wants to draw non-trivial inferences from an inconsistent belief base, a very natural approach is to take advantage of the maximal consistent subsets of the base. But few inference relations from maximal consistent subsets exist. In this paper we point out new such relations based on the selection of some of the maximal consistent subsets, thus leading to inference relations with a stronger inferential power. The selection process must obey some principles to ensure that it leads to an inference relation which is rational. We define a general class of monotonic selection relations for comparing maximal consistent sets, and we show that it corresponds to the class of rational inference relations.
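
      A small worked example of the underlying machinery (the classical maximal-consistent-subset construction, not the new selection-based relations proposed in the paper): enumerate the maximal consistent subsets of a tiny propositional belief base and test the skeptical inference "phi follows from every maximal consistent subset".

```python
# Brute-force maximal consistent subsets (MCS) of a tiny propositional belief base.
from itertools import combinations, product

VARS = ("p", "q")
base = {                                   # an inconsistent belief base
    "p": lambda m: m["p"],
    "not p": lambda m: not m["p"],
    "p -> q": lambda m: (not m["p"]) or m["q"],
}

def consistent(formulas):
    return any(all(f(dict(zip(VARS, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(VARS)))

def maximal_consistent_subsets(base):
    names = list(base)
    subsets = [s for r in range(len(names) + 1) for s in combinations(names, r)
               if consistent([base[n] for n in s])]
    return [set(s) for s in subsets
            if not any(set(s) < set(t) for t in subsets)]

def skeptically_entails(base, phi):
    return all(not consistent([base[n] for n in mcs] + [lambda m: not phi(m)])
               for mcs in maximal_consistent_subsets(base))

print(maximal_consistent_subsets(base))              # {p, p -> q} and {not p, p -> q}
print(skeptically_entails(base, lambda m: m["q"]))   # False: only one MCS entails q
```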

    • #4262
      On the Integration of CP-nets in ASPRIN
      Mario Alviano, Javier Romero, Torsten Schaub
      Details | PDF
      Non-monotonic Reasoning

      Conditional preference networks (CP-nets) express qualitative preferences over features of interest. A Boolean CP-net can express that a feature is preferable under some conditions, as long as all other features have the same value. This is often a convenient representation, but sometimes one would also like to express a preference for maximizing a set of features, or some other objective function on the features of interest. ASPRIN is a flexible framework for preferences in ASP, where one can mix heterogeneous preference relations, and this paper reports on the integration of Boolean CP-nets. In general, we extend ASPRIN with a preference program for CP-nets in order to compute most preferred answer sets via an iterative algorithm. For the specific case of acyclic CP-nets, we provide an approximation by partially ordered set preferences, which are in turn normalized by ASPRIN to take advantage of several highly optimized algorithms implemented by ASP solvers for computing optimal solutions. Finally, we take advantage of a linear-time computable function to address dominance testing for tree-shaped CP-nets.

    • #4681
      Simple Conditionals with Constrained Right Weakening
      Giovanni Casini, Thomas Meyer, Ivan Varzinczak
      Details | PDF
      Non-monotonic Reasoning

      In this paper we introduce and investigate a very basic semantics for conditionals that can be used to define a broad class of conditional reasoning. We show that it encompasses the most popular kinds of conditional reasoning developed in logic-based KR. It turns out that the semantics we propose is appropriate for a structural analysis of those conditionals that do not satisfy the property of Right Weakening. We show that it can be used for the further development of an analysis of the notion of relevance in conditional reasoning.

    • #5424
      Out of Sight But Not Out of Mind: An Answer Set Programming Based Online Abduction Framework for Visual Sensemaking in Autonomous Driving
      Jakob Suchan, Mehul Bhatt, Srikrishna Varadarajan
      Details | PDF
      Non-monotonic Reasoning

      We demonstrate the need and potential of systematically integrated vision and semantics solutions for visual sensemaking (in the backdrop of autonomous driving). A general method for online visual sensemaking using answer set programming is systematically formalised and fully implemented. The method integrates the state of the art in visual computing, and is developed as a modular framework usable within hybrid architectures for perception and control. We evaluate and demonstrate the method on the community-established benchmarks KITTIMOD and MOT. As a use case, we focus on the significance of human-centred visual sensemaking (e.g., semantic representation and explainability, question answering, commonsense interpolation) in safety-critical autonomous driving situations.

    • #6324
      What Has Been Said? Identifying the Change Formula in a Belief Revision Scenario
      Nicolas Schwind, Katsumi Inoue, Sébastien Konieczny, Jean-Marie Lagniez, Pierre Marquis
      Details | PDF
      Non-monotonic Reasoning

      We consider the problem of identifying the change formula in a belief revision scenario: given that an unknown announcement (a formula mu) led a set of agents to revise their beliefs and given the prior beliefs and the revised beliefs of the agents, what can be said about mu? We show that under weak conditions about the rationality of the revision operators used by the agents, the set of candidate formulae has the form of a logical interval. We explain how the bounds of this interval can be tightened when the revision operators used by the agents are known and/or when mu is known to be independent from a given set of variables. We also investigate the completeness issue, i.e., whether mu can be exactly identified. We present some sufficient conditions for it, identify its computational complexity, and report the results of some experiments about it.

    • #10975
      (Sister Conferences Best Papers Track) Meta-Interpretive Learning Using HEX-Programs
      Tobias Kaminski, Thomas Eiter, Katsumi Inoue
      Details | PDF
      Non-monotonic Reasoning

      Meta-Interpretive Learning (MIL) is a recent approach for Inductive Logic Programming (ILP) implemented in Prolog. Alternatively, MIL-problems can be solved by using Answer Set Programming (ASP), which may result in performance gains due to efficient conflict propagation. However, a straightforward MIL-encoding results in a huge size of the ground program and search space. To address these challenges, we encode MIL in the HEX-extension of ASP, which mitigates grounding issues, and we develop novel pruning techniques.

    Tuesday 13 16:30 - 18:00 CV|BFGR - Biometrics, Face and Gesture Recognition (2601-2602)

    Chair: Jingyi Yu
    • #2347
      Face Photo-Sketch Synthesis via Knowledge Transfer
      Mingrui Zhu, Nannan Wang, Xinbo Gao, Jie Li, Zhifeng Li
      Details | PDF
      Biometrics, Face and Gesture Recognition

      Although deep neural networks have demonstrated strong power in the face photo-sketch synthesis task, their performance is still limited by the lack of training data (photo-sketch pairs). Knowledge Transfer (KT), which aims at training a smaller and faster student network with the information learned from a larger and more accurate teacher network, has attracted much attention recently due to its superior performance in the acceleration and compression of deep neural networks. This inspires us to train a relatively small student network on very few training data by transferring knowledge from a larger teacher model trained on enough training data for other tasks. Therefore, we propose a novel knowledge transfer framework to synthesize face photos from face sketches or synthesize face sketches from face photos. Particularly, we utilize two teacher networks trained on large amounts of data for related tasks to learn the knowledge of face photos and face sketches separately and transfer it to two student networks simultaneously. In addition, the two student networks, one for the photo-to-sketch task and the other for the sketch-to-photo task, can transfer their knowledge mutually. With the proposed method, we can train a model with superior performance using a small set of photo-sketch pairs. We validate the effectiveness of our method across several datasets. Quantitative and qualitative evaluations illustrate that our model outperforms other state-of-the-art methods in generating face sketches (or photos) with high visual quality and recognition ability.

    • #2386
      Pose-preserving Cross Spectral Face Hallucination
      Junchi Yu, Jie Cao, Yi Li, Xiaofei Jia, Ran He
      Details | PDF
      Biometrics, Face and Gesture Recognition

      To narrow the inherent sensing gap in heterogeneous face recognition (HFR), recent methods have resorted to generative models and explored the "recognition via generation" framework. Even so, it remains a very challenging task to synthesize photo-realistic visible (VIS) faces from near-infrared (NIR) images, especially when paired training data are unavailable. We present an approach to avert the data misalignment problem and faithfully preserve pose, expression and identity information during cross-spectral face hallucination. At the pixel level, we introduce an unsupervised attention mechanism to warping that is jointly learned with the generator to derive pixel-wise correspondence from unaligned data. At the image level, an auxiliary generator is employed to facilitate the learning of the mapping from the NIR to the VIS domain. At the domain level, we first apply the mutual information constraint to explicitly measure the correlation between domains and thus benefit synthesis. Extensive experiments on three heterogeneous face datasets demonstrate that our approach not only outperforms current state-of-the-art HFR methods but also produces visually appealing results at a high resolution.

    • #2439
      Multi-Margin based Decorrelation Learning for Heterogeneous Face Recognition
      Bing Cao, Nannan Wang, Xinbo Gao, Jie Li, Zhifeng Li
      Details | PDF
      Biometrics, Face and Gesture Recognition

      Heterogeneous face recognition (HFR) refers to matching face images acquired from different domains, with wide applications in security scenarios. However, HFR is still a challenging problem due to the significant cross-domain discrepancy and the lack of sufficient training data in different domains. This paper presents a deep neural network approach, namely Multi-Margin based Decorrelation Learning (MMDL), to extract decorrelation representations in a hyperspherical space for cross-domain face images. The proposed framework can be divided into two components: a heterogeneous representation network and decorrelation representation learning. First, we employ a large set of accessible visual face images to train the heterogeneous representation network. The decorrelation layer then projects the output of the first component into a decorrelation latent subspace and obtains decorrelation representations. In addition, we design a multi-margin loss (MML), which consists of a tetrad margin loss (TML) and a heterogeneous angular margin loss (HAML), to constrain the proposed framework. Experimental results on two challenging heterogeneous face databases show that our approach achieves superior performance on both verification and recognition tasks compared with state-of-the-art methods.

    • #3952
      High Performance Gesture Recognition via Effective and Efficient Temporal Modeling
      Yang Yi, Feng Ni, Yuexin Ma, Xinge Zhu, Yuankai Qi, Riming Qiu, Shijie Zhao, Feng Li, Yongtao Wang
      Details | PDF
      Biometrics, Face and Gesture Recognition

      State-of-the-art hand gesture recognition methods have investigated spatiotemporal features based on 3D convolutional neural networks (3DCNNs) or convolutional long short-term memory (ConvLSTM). However, they often suffer from inefficiency due to the high computational complexity of their network structures. In this paper, we focus instead on 1D convolutional neural networks and propose a simple and efficient architectural unit, the Multi-Kernel Temporal Block (MKTB), that models multi-scale temporal responses by explicitly applying different temporal kernels. We then present a Global Refinement Block (GRB), an attention module for shaping the global temporal features based on cross-channel similarity. By incorporating the MKTB and GRB, our architecture can effectively explore spatiotemporal features at tolerable computational cost. Extensive experiments conducted on public datasets demonstrate that our proposed model achieves the state of the art with higher efficiency. Moreover, the proposed MKTB and GRB are plug-and-play modules, and experiments on other tasks, such as video understanding and video-based person re-identification, also demonstrate their efficiency and generalization capability.
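
      A minimal PyTorch sketch of the multi-kernel idea (our reading of the abstract, not the authors' exact MKTB/GRB design): parallel 1D temporal convolutions with different kernel sizes capture multi-scale temporal responses and are then fused by a 1x1 convolution.

```python
# Illustrative multi-kernel temporal block; layer sizes and fusion are assumptions.
import torch
import torch.nn as nn

class MultiKernelTemporalBlock(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.fuse = nn.Conv1d(channels * len(kernel_sizes), channels, kernel_size=1)

    def forward(self, x):               # x: (batch, channels, time)
        multi_scale = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.fuse(multi_scale)   # fused multi-scale temporal features

block = MultiKernelTemporalBlock(channels=64)
features = torch.randn(8, 64, 32)       # e.g. 8 clips, 64 channels, 32 time steps
print(block(features).shape)            # torch.Size([8, 64, 32])
```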

    • #5053
      Attribute-Aware Convolutional Neural Networks for Facial Beauty Prediction
      Luojun Lin, Lingyu Liang, Lianwen Jin, Weijie Chen
      Details | PDF
      Biometrics, Face and Gesture Recognition

      Facial beauty prediction (FBP) aims to develop a machine that automatically makes facial attractiveness assessments. To a large extent, human perception of facial beauty is tied to the attributes of facial appearance, which provide significant visual cues for FBP. Deep convolutional neural networks (CNNs) have shown their power for FBP, but convolution filters with fixed parameters cannot take full advantage of facial attributes. To address this problem, we propose an Attribute-aware Convolutional Neural Network (AaNet) that adaptively modulates the filters of the main network, using parameter generators that take beauty-related attributes as extra inputs. The parameter generators update the filters in the main network in two different manners: filter tuning or filter rebirth. However, AaNet takes attribute information as prior knowledge, which is ill-suited to datasets that have only task-oriented labels. Therefore, imitating the design of AaNet, we further propose a Pseudo Attribute-aware Convolutional Neural Network (P-AaNet) that modulates filters conditioned on global context embeddings (pseudo attributes) of input faces learnt by a lightweight pseudo attribute distiller. Extensive ablation studies show that AaNet and P-AaNet improve the performance of FBP compared to conventional convolution and attention schemes, which validates the effectiveness of our method.

    • #30
      Dense Temporal Convolution Network for Sign Language Translation
      Dan Guo, Shuo Wang, Qi Tian, Meng Wang
      Details | PDF
      Biometrics, Face and Gesture Recognition

      Sign language translation (SLT), which aims to translate a sign language video into natural language, is a weakly supervised task, given that there is no exact mapping relationship between visual actions and textual words in a sentence label. To align the sign language actions and translate them into the respective words automatically, this paper proposes a dense temporal convolution network, termed DenseTCN, which captures the actions in hierarchical views. Within this network, a temporal convolution (TC) is designed to learn the short-term correlation among adjacent features and is further extended to a dense hierarchical structure. In the k-th TC layer, we integrate the outputs of all preceding layers together: (1) the TC in a deeper layer essentially has larger receptive fields, which captures long-term temporal context by the hierarchical content transition; (2) the integration addresses the SLT problem from different views, including embedded short-term and extended long-term sequential learning. Finally, we adopt the CTC loss and a fusion strategy to learn the feature-wise classification and generate the translated sentence. The experimental results on two popular sign language benchmarks, i.e., PHOENIX and USTCConSents, demonstrate the effectiveness of our proposed method in terms of various measurements.

    Tuesday 13 16:30 - 18:00 KRR|DLO - Description Logics and Ontologies 1 (2603-2604)

    Chair: Gerardo I. Simari
    • #2773
      Augmenting Transfer Learning with Semantic Reasoning
      Freddy Lécué, Jiaoyan Chen, Jeff Z. Pan, Huajun Chen
      Details | PDF
      Description Logics and Ontologies 1

      Transfer learning aims at building robust prediction models by transferring knowledge gained from one problem to another. In the Semantic Web, learning tasks are enhanced with semantic representations. We exploit their semantics to augment transfer learning by dealing with when to transfer via semantic measurements and what to transfer via semantic embeddings. We further present a general framework that integrates the above measurements and embeddings with existing transfer learning algorithms for higher performance. It has been demonstrated to be robust in two real-world applications: bus delay forecasting and air quality forecasting.

    • #2902
      Oblivious and Semi-Oblivious Boundedness for Existential Rules
      Pierre Bourhis, Michel Leclère, Marie-Laure Mugnier, Sophie Tison, Federico Ulliana, Lily Gallois
      Details | PDF
      Description Logics and Ontologies 1

      We study the notion of boundedness in the context of positive existential rules, that is, whether there exists an upper bound on the depth of the chase procedure that is independent of the initial instance. Focusing our attention on the oblivious and semi-oblivious chase variants, we give a characterization of boundedness in terms of FO-rewritability and chase termination. We show that recognizing whether a set of rules is bounded is decidable for several classes of rules, and we outline the complexity of the problem.

    • #4008
      Semantic Characterization of Data Services through Ontologies
      Gianluca Cima, Maurizio Lenzerini, Antonella Poggi
      Details | PDF
      Description Logics and Ontologies 1

      We study the problem of associating formal semantic descriptions to data services. We base our proposal on the Ontology-based Data Access paradigm, where a domain ontology is used to provide a semantic layer mapped to the data sources of an organization. The basic idea is to explain the semantics of a data service in terms of a query over the ontology. We illustrate a formal framework for this problem, based on the notion of source-to-ontology (s-to-o) rewriting, which comes in three variants, called sound, complete and perfect, respectively. We present a thorough complexity analysis of two computational problems, namely verification (checking whether a query is an s-to-o rewriting of a given data service), and computation (computing an s-to-o rewriting of a data service).

    • #5437
      Revisiting Controlled Query Evaluation in Description Logics
      Domenico Lembo, Riccardo Rosati, Domenico Fabio Savo
      Details | PDF
      Description Logics and Ontologies 1

      Controlled Query Evaluation (CQE) is a confidentiality-preserving framework in which private information is protected through a policy, and an (optimal) censor guarantees that answers to queries are maximized without violating the policy. CQE has been recently studied in the context of ontologies, where the focus has been mainly on the problem of the existence of an optimal censor. In this paper we instead consider query answering over all possible optimal censors. We study the data complexity of this problem for ontologies specified in the Description Logics DL-LiteR and EL_bottom and for variants of the censor language, which is the language used by the censor to enforce the policy. In our investigation we also analyze the relationship between CQE and the problem of Consistent Query Answering (CQA). Some of the complexity results we provide are indeed obtained through mutual reduction between CQE and CQA.

    • #925
      Satisfaction and Implication of Integrity Constraints in Ontology-based Data Access
      Charalampos Nikolaou, Bernardo Cuenca Grau, Egor V. Kostylev, Mark Kaminski, Ian Horrocks
      Details | PDF
      Description Logics and Ontologies 1

      We extend ontology-based data access with integrity constraints over both the source and target schemas. The relevant reasoning problems in this setting are constraint satisfaction—to check whether a database satisfies the target constraints given the mappings and the ontology—and source-to-target (resp., target-to-source) constraint implication, which is to check whether a target constraint (resp., a source constraint) is satisfied by each database satisfying the source constraints (resp., the target constraints). We establish decidability and complexity bounds for all these problems in the case where ontologies are expressed in DL-LiteR and constraints range from functional dependencies to disjunctive tuple-generating dependencies.

    • #2486
      Learning Description Logic Concepts: When can Positive and Negative Examples be Separated?
      Maurice Funk, Jean Christoph Jung, Carsten Lutz, Hadrien Pulcini, Frank Wolter
      Details | PDF
      Description Logics and Ontologies 1

      Learning description logic (DL) concepts from positive and negative examples given in the form of labeled data items in a KB has received significant attention in the literature. We study the fundamental question of when a separating DL concept exists and provide useful model-theoretic characterizations as well as complexity results for the associated decision problem. For expressive DLs such as ALC and ALCQI, our characterizations show a surprising link to the evaluation of ontology-mediated conjunctive queries. We exploit this to determine the combined complexity (between ExpTime and NExpTime) and data complexity (second level of the polynomial hierarchy) of separability. For the Horn DL EL, separability is ExpTime-complete both in combined and in data complexity while for its modest extension ELI it is even undecidable. Separability is also undecidable when the KB is formulated in ALC and the separating concept is required to be in EL or ELI.

    Tuesday 13 16:30 - 18:00 NLP|NLS - Natural Language Semantics (2605-2606)

    Chair: Wenya Wang
    • #1326
      Aspect-Based Sentiment Classification with Attentive Neural Turing Machines
      Qianren Mao, Jianxin Li, Senzhang Wang, Yuanning Zhang, Hao Peng, Min He, Lihong Wang
      Details | PDF
      Natural Language Semantics

      Aspect-based sentiment classification aims to identify the sentiment polarity expressed towards a given opinion target in a sentence. The sentiment polarity of the target is not only highly determined by the sentiment semantic context but also correlated with the concerned opinion target. Existing works cannot effectively capture and store the inter-dependence between the opinion target and its context. To solve this issue, we propose a novel model of Attentive Neural Turing Machines (ANTM). Via interactive read-write operations between an external memory storage and a recurrent controller, ANTM can learn the dependable correlation of the opinion target to its context and concentrate on crucial sentiment information. Specifically, ANTM separates the information of storage and computation, which extends the capabilities of the controller to learn and store sequential features. The read and write operations enable ANTM to adaptively keep track of the interactive attention history between memory content and controller state. Moreover, we append target entity embeddings to both the input and output of the controller in order to augment the integration of target information. We evaluate our model on the SemEval-2014 dataset, which contains reviews from the Laptop and Restaurant domains, and on a Twitter review dataset. Experimental results verify that our model achieves state-of-the-art performance on aspect-based sentiment classification.

    • #2826
      A Latent Variable Model for Learning Distributional Relation Vectors
      Jose Camacho-Collados, Luis Espinosa-Anke, Shoaib Jameel, Steven Schockaert
      Details | PDF
      Natural Language Semantics

      Recently a number of unsupervised approaches have been proposed for learning vectors that capture the relationship between two words. Inspired by word embedding models, these approaches rely on co-occurrence statistics that are obtained from sentences in which the two target words appear. However, the number of such sentences is often quite small, and most of the words that occur in them are not relevant for characterizing the considered relationship. As a result, standard co-occurrence statistics typically lead to noisy relation vectors. To address this issue, we propose a latent variable model that aims to explicitly determine what words from the given sentences best characterize the relationship between the two target words. Relation vectors then correspond to the parameters of a simple unigram language model which is estimated from these words.

    • #4815
      Dual-View Variational Autoencoders for Semi-Supervised Text Matching
      Zhongbin Xie, Shuai Ma
      Details | PDF
      Natural Language Semantics

      Semantically matching two text sequences (usually two sentences) is a fundamental problem in NLP. Most previous methods either encode each of the two sentences into a vector representation (sentence-level embedding) or leverage word-level interaction features between the two sentences. In this study, we propose to take the sentence-level embedding features and the word-level interaction features as two distinct views of a sentence pair, and unify them with a framework of Variational Autoencoders such that the sentence pair is matched in a semi-supervised manner. The proposed model is referred to as Dual-View Variational AutoEncoder (DV-VAE), where the optimization of the variational lower bound can be interpreted as an implicit Co-Training mechanism for two matching models over distinct views. Experiments on SNLI, Quora and a Community Question Answering dataset demonstrate the superiority of our DV-VAE over several strong semi-supervised and supervised text matching models.

    • #296
      TransMS: Knowledge Graph Embedding for Complex Relations by Multidirectional Semantics
      Shihui Yang, Jidong Tian, Honglun Zhang, Junchi Yan, Hao He, Yaohui Jin
      Details | PDF
      Natural Language Semantics

      Knowledge graph embedding, which projects symbolic relations and entities onto low-dimensional continuous spaces, is essential to knowledge graph completion. Recently, translation-based embedding models (e.g., TransE) have attracted increasing attention for their simplicity and effectiveness. These models attempt to translate semantics from head entities to tail entities via the relations and to infer richer facts outside the knowledge graph. In this paper, we propose a novel knowledge graph embedding method named TransMS, which translates and transmits multidirectional semantics: i) the semantics of head/tail entities and relations to tail/head entities with nonlinear functions, and ii) the semantics from entities to relations with linear bias vectors. Our model has merely one additional parameter α per triplet compared with TransE, which results in better scalability to large-scale knowledge graphs. Experiments show that TransMS achieves substantial improvements over state-of-the-art baselines; in particular, the Hit@10 scores of head entity prediction for N-1 relations and tail entity prediction for 1-N relations improve by about 27.1% and 24.8%, respectively, on the FB15K dataset.
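
      For background, the sketch below shows the standard TransE scoring function that TransMS builds on (this is TransE, not the TransMS score, which adds nonlinear multidirectional terms and the per-triplet parameter α):

```python
# TransE-style translation score: a valid triplet (h, r, t) should satisfy h + r ≈ t.
import numpy as np

def transe_score(h, r, t):
    """Lower is better."""
    return np.linalg.norm(h + r - t, ord=1)

dim = 8
rng = np.random.default_rng(0)
h, r = rng.normal(size=dim), rng.normal(size=dim)
t_good = h + r + 0.01 * rng.normal(size=dim)     # almost exactly the translation of h by r
t_bad = rng.normal(size=dim)                     # an unrelated entity embedding
print(transe_score(h, r, t_good) < transe_score(h, r, t_bad))   # True
```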

    • #3445
      CNN-Based Chinese NER with Lexicon Rethinking
      Tao Gui, Ruotian Ma, Qi Zhang, Lujun Zhao, Yu-Gang Jiang, Xuanjing Huang
      Details | PDF
      Natural Language Semantics

      Character-level Chinese named entity recognition (NER) that applies long short-term memory (LSTM) to incorporate lexicons has achieved great success. However, this method fails to fully exploit GPU parallelism and candidate lexicons can conflict. In this work, we propose a faster alternative to Chinese NER: a convolutional neural network (CNN)-based method that incorporates lexicons using a rethinking mechanism. The proposed method can model all the characters and potential words that match the sentence in parallel. In addition, the rethinking mechanism can address the word conflict by feeding back the high-level features to refine the networks. Experimental results on four datasets show that the proposed method can achieve better performance than both word-level and character-level baseline methods. In addition, the proposed method performs up to 3.21 times faster than state-of-the-art methods, while realizing better performance.

    • #3652
      Learning Task-Specific Representation for Novel Words in Sequence Labeling
      Minlong Peng, Qi Zhang, Xiaoyu Xing, Tao Gui, Jinlan Fu, Xuanjing Huang
      Details | PDF
      Natural Language Semantics

      Word representation is a key component in neural-network-based sequence labeling systems. However, the representations of unseen or rare words trained only on the end task are usually too poor to support appreciable performance. This is commonly referred to as the out-of-vocabulary (OOV) problem. In this work, we address the OOV problem in sequence labeling using only the training data of the task. To this end, we propose a novel method to predict representations for OOV words from their surface forms (e.g., character sequences) and contexts. The method is specifically designed to avoid the error propagation problem suffered by existing approaches in the same paradigm. To evaluate its effectiveness, we performed extensive empirical studies on four part-of-speech (POS) tagging tasks and four named entity recognition (NER) tasks. Experimental results show that the proposed method achieves better or competitive performance on the OOV problem compared with existing state-of-the-art methods.

    Tuesday 13 16:30 - 18:00 CV|RDCIMRSI - Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2 (2501-2502)

    Chair: Mang Ye
    • #416
      Resolution-invariant Person Re-Identification
      Shunan Mao, Shiliang Zhang, Ming Yang
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2

      Exploiting resolution-invariant representations is critical for person re-identification (ReID) in real applications, where the resolutions of captured person images may vary dramatically. This paper learns person representations robust to resolution variance through jointly training a Foreground-Focus Super-Resolution (FFSR) module and a Resolution-Invariant Feature Extractor (RIFE) by end-to-end CNN learning. FFSR upscales the person foreground using a fully convolutional auto-encoder with skip connections, learned with a foreground-focus training loss. RIFE adopts two feature extraction streams weighted by a dual-attention block to learn features for low- and high-resolution images, respectively. These two complementary modules are jointly trained, leading to a strong resolution-invariant representation. We evaluate our methods on five datasets containing person images at a large range of resolutions, where our methods show substantial superiority over existing solutions. For instance, we achieve Rank-1 accuracy of 36.4% and 73.3% on CAVIAR and MLR-CUHK03, outperforming the state-of-the-art by 2.9% and 2.6%, respectively.

    • #690
      Deep Light-field-driven Saliency Detection from a Single View
      Yongri Piao, Zhengkun Rong, Miao Zhang, Xiao Li, Huchuan Lu
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2

      Previous 2D saliency detection methods extract salient cues from a single view and directly predict the expected results. Both traditional and deep-learning-based 2D methods do not consider geometric information of 3D scenes. Therefore the relationship between scene understanding and salient objects cannot be effectively established. This limits the performance of 2D saliency detection in challenging scenes. In this paper, we show for the first time that saliency detection problem can be reformulated as two sub-problems: light field synthesis from a single view and light-field-driven saliency detection. We propose a high-quality light field synthesis network to produce reliable 4D light field information. Then we propose a novel light-field-driven saliency detection network with two purposes, that is, i) richer saliency features can be produced for effective saliency detection; ii) geometric information can be considered for integration of multi-view saliency maps in a view-wise attention fashion. The whole pipeline can be trained in an end-to-end fashion. For training our network, we introduce the largest light field dataset for saliency detection, containing 1580 light fields that cover a wide variety of challenging scenes. With this new formulation, our method is able to achieve state-of-the-art performance.

    • #1361
      Generalized Zero-Shot Vehicle Detection in Remote Sensing Imagery via Coarse-to-Fine Framework
      Hong Chen, Yongtan Luo, Liujuan Cao, Baochang Zhang, Guodong Guo, Cheng Wang, Jonathan Li, Rongrong Ji
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2

      Vehicle detection and recognition in remote sensing images are challenging, especially when only limited training data are available to accommodate various target categories. In this paper, we introduce a novel coarse-to-fine framework, which decomposes vehicle detection into segmentation-based vehicle localization and generalized zero-shot vehicle classification. In particular, the proposed framework can handle the problem of generalized zero-shot vehicle detection, which is challenging due to the requirement of recognizing vehicles that are unseen during training. Specifically, a hierarchical DeepLab v3 model is proposed within the framework, which fully exploits fine-grained features to locate the target at a pixel-wise level and then recognizes vehicles in a coarse-grained manner. Additionally, the hierarchical DeepLab v3 model combines naturally with generalized zero-shot recognition. To the best of our knowledge, there is no publicly available dataset for testing comparative methods; we therefore construct a new dataset to fill this evaluation gap. The experimental results show that the proposed framework yields promising results on the imperative yet difficult task of zero-shot vehicle detection and recognition.

    • #6134
      MSR: Multi-Scale Shape Regression for Scene Text Detection
      Chuhui Xue, Shijian Lu, Wei Zhang
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2

      State-of-the-art scene text detection techniques predict quadrilateral boxes that are prone to localization errors while dealing with straight or curved text lines of different orientations and lengths in scenes. This paper presents a novel multi-scale shape regression network (MSR) that is capable of locating text lines of different lengths, shapes and curvatures in scenes. The proposed MSR detects scene texts by predicting dense text boundary points that inherently capture the location and shape of text lines accurately and are also more tolerant to the variation of text line length as compared with the state of the arts using proposals or segmentation. Additionally, the multi-scale network extracts and fuses features at different scales which demonstrates superb tolerance to the text scale variation. Extensive experiments over several public datasets show that the proposed MSR obtains superior detection performance for both curved and straight text lines of different lengths and orientations.

    • #3846
      Beyond Product Quantization: Deep Progressive Quantization for Image Retrieval
      Lianli Gao, Xiaosu Zhu, Jingkuan Song, Zhou Zhao, Heng Tao Shen
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2

      Product Quantization (PQ) has long been a mainstream approach for generating an exponentially large codebook at very low memory/time cost. Despite its success, PQ is still tricky for the decomposition of high-dimensional vector spaces, and retraining of the model is usually unavoidable when the code length changes. In this work, we propose a deep progressive quantization (DPQ) model, as an alternative to PQ, for large-scale image retrieval. DPQ learns the quantization codes sequentially and approximates the original feature space progressively. Therefore, we can train the quantization codes with different code lengths simultaneously. Specifically, we first utilize the label information to guide the learning of visual features, and then apply several quantization blocks to progressively approach the visual features. Each quantization block is designed as a layer of a convolutional neural network, and the whole framework can be trained in an end-to-end manner. Experimental results on the benchmark datasets show that our model significantly outperforms the state-of-the-art for image retrieval. Our model is trained once for different code lengths and therefore requires less computation time. An additional ablation study demonstrates the effect of each component of our proposed model. Our code is released at https://github.com/cfm-uestc/DPQ.
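
      An illustrative, non-deep sketch of the progressive (residual) quantization idea (plain k-means blocks standing in for DPQ's learned quantization blocks): each block quantizes the residual left by the previous blocks, so prefixes of the code yield coarser approximations and no retraining is needed when the code length shrinks.

```python
# Residual quantization with k-means as a stand-in for learned quantization blocks.
import numpy as np
from sklearn.cluster import KMeans

def fit_progressive_quantizer(x, n_blocks=4, n_codewords=16, seed=0):
    codebooks, residual = [], x.copy()
    for _ in range(n_blocks):
        km = KMeans(n_clusters=n_codewords, n_init=10, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]   # pass the residual on
    return codebooks

def encode(x, codebooks):
    codes, residual = [], x.copy()
    for cb in codebooks:
        idx = np.argmin(((residual[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        codes.append(idx)
        residual = residual - cb[idx]
    return np.stack(codes, axis=1)        # shape: (n_points, n_blocks)

def decode(codes, codebooks, n_blocks=None):
    n_blocks = n_blocks or codes.shape[1]
    return sum(codebooks[b][codes[:, b]] for b in range(n_blocks))

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 32))
codebooks = fit_progressive_quantizer(x)
codes = encode(x, codebooks)
for b in (1, 2, 4):                        # shorter codes = coarser approximations
    err = np.linalg.norm(x - decode(codes, codebooks, b)) / np.linalg.norm(x)
    print(f"{b} block(s): relative error {err:.3f}")
```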

    • #1370
      LRDNN: Local-refining based Deep Neural Network for Person Re-Identification with Attribute Discerning
      Qinqin Zhou, Bineng Zhong, Xiangyuan Lan, Gan Sun, Yulun Zhang, Mengran Gou
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 2

      Recently, pose and attribute information has been widely used to solve the person re-identification (re-ID) problem. However, inaccurate output from pose or attribute modules will impair the final person re-ID performance. Since re-ID, pose estimation and attribute recognition are all based on person appearance information, we propose a Local-refining based Deep Neural Network (LRDNN) to aggregate pose estimation and attribute recognition to improve re-ID performance. To this end, we add a pose branch to extract local spatial information and optimize the whole network on both person identity and attribute objectives. To diminish the negative effect of unstable pose estimation, a novel structure called the channel parse block (CPB) is introduced to learn weights on different feature channels in the pose branch. The two branches are then combined with compact bilinear pooling. Experimental results on the Market1501 and DukeMTMC-reID datasets illustrate the effectiveness of the proposed method.

    Tuesday 13 16:30 - 18:00 ML|C - Classification 3 (2503-2504)

    Chair: Shao-Wen Yang
    • #183
      Margin Learning Embedded Prediction for Video Anomaly Detection with A Few Anomalies
      Wen Liu, Weixin Luo, Zhengxin Li, Peilin Zhao, Shenghua Gao
      Details | PDF
      Classification 3

      Classical semi-supervised video anomaly detection assumes that only normal data are available in the training set because of the rare and unbounded nature of anomalies. It is obvious, however, that these infrequently observed abnormal events can actually help with the detection of identical or similar abnormal events, a line of thinking that motivates us to study open-set supervised anomaly detection with only a few types of observed abnormal events and many normal events available. Under the assumption that normal events can be well predicted, we propose a Margin Learning Embedded Prediction (MLEP) framework. There are three features in MLEP-based open-set supervised video anomaly detection: i) we customize a video prediction framework that favors the prediction of normal events and distorts the prediction of abnormal events; ii) the margin learning framework learns a more compact normal data distribution and enlarges the margin between normal and abnormal events. Since abnormal events are unbounded, our framework consequently helps with the detection of abnormal events, even for anomalies that have never been previously observed. Therefore, our framework is suitable for the open-set supervised anomaly detection setting; iii) our framework can readily handle both frame-level and video-level anomaly annotations. Considering that video-level anomaly annotation is easier to obtain in practice and that anomaly detection with a few anomalies is a more practical setting, our work thus pushes the application of anomaly detection towards real scenarios. Extensive experiments validate the effectiveness of our framework for anomaly detection.

    • #1214
      Comprehensive Semi-Supervised Multi-Modal Learning
      Yang Yang, Ke-Tao Wang, De-Chuan Zhan, Hui Xiong, Yuan Jiang
      Details | PDF
      Classification 3

      Multi-modal learning refers to the process of learning a precise model to represent the joint representations of different modalities. Despite its promise for multi-modal learning, the co-regularization method is based on the consistency principle under a modal sufficiency assumption, which usually does not hold for real-world multi-modal data. Indeed, due to modal insufficiency in real-world applications, there are divergences among heterogeneous modalities. This imposes a critical challenge for multi-modal learning. To this end, in this paper, we propose a novel Comprehensive Multi-Modal Learning (CMML) framework, which strikes a balance between consistency and divergence among modalities by accounting for modal insufficiency in one unified framework. Specifically, we utilize an instance-level attention mechanism to weight the sufficiency of each instance on different modalities. Moreover, novel diversity regularization and robust consistency metrics are designed for discovering insufficient modalities. Our empirical studies show the superior performance of CMML on real-world data in terms of various criteria.

    • #5482
      Exploiting Interaction Links for Node Classification with Deep Graph Neural Networks
      Hogun Park, Jennifer Neville
      Details | PDF
      Classification 3

      Node classification is an important problem in relational machine learning. However, in scenarios where graph edges represent interactions among the entities (e.g., over time), the majority of current methods either summarize the interaction information into link weights or aggregate the links to produce a static graph. In this paper, we propose a neural network architecture that jointly captures both temporal and static interaction patterns, which we call Temporal-Static-Graph-Net (TSGNet). Our key insight is that leveraging both a static neighbor encoder, which can learn aggregate neighbor patterns, and a graph neural network-based recurrent unit, which can capture complex interaction patterns, improves the performance of node classification. In our experiments on node classification tasks, TSGNet produces significant gains compared to state-of-the-art methods, reducing classification error by up to 24% and by an average of 10% compared to the best competitor on four real-world networks and one synthetic dataset.

    • #5854
      Automated Machine Learning with Monte-Carlo Tree Search
      Herilalaina Rakotoarison, Marc Schoenauer, Michèle Sebag
      Details | PDF
      Classification 3

      The AutoML approach aims to deliver peak performance from a machine learning portfolio on the dataset at hand. A Monte-Carlo Tree Search Algorithm Selection and Configuration (Mosaic) approach is presented to tackle this mixed (combinatorial and continuous) expensive optimization problem on the structured search space of ML pipelines. Extensive lesion studies are conducted to independently assess and compare: i) the optimization processes based on Bayesian Optimization or Monte Carlo Tree Search (MCTS); ii) its warm-start initialization based on meta-features or random runs; iii) the ensembling of the solutions gathered along the search. Mosaic is assessed on the OpenML 100 benchmark and the Scikit-learn portfolio, with statistically significant gains over AutoSkLearn, winner of all former AutoML challenges.

    • #2178
      Multi-Class Learning using Unlabeled Samples: Theory and Algorithm
      Jian Li, Yong Liu, Rong Yin, Weiping Wang
      Details | PDF
      Classification 3

      In this paper, we investigate the generalization performance of multi-class classification, for which we obtain a sharper error bound by using the notion of local Rademacher complexity and additional unlabeled samples, substantially improving the state-of-the-art bounds of existing multi-class learning methods. This statistical learning analysis motivates us to devise an efficient multi-class learning framework with local Rademacher complexity and Laplacian regularization. Coinciding with the theoretical analysis, experimental results demonstrate that the stated approach achieves better performance.

    • #3777
      Accelerated Incremental Gradient Descent using Momentum Acceleration with Scaling Factor
      Yuanyuan Liu, Fanhua Shang, Licheng Jiao
      Details | PDF
      Classification 3

      Recently, research on variance-reduced incremental gradient descent methods (e.g., SAGA) has made exciting progress (e.g., linear convergence for strongly convex (SC) problems). However, existing accelerated methods (e.g., point-SAGA) suffer from drawbacks such as inflexibility. In this paper, we design a novel and simple momentum term to accelerate the classical SAGA algorithm, and propose a directly accelerated incremental gradient descent algorithm. In particular, our theoretical result shows that our algorithm attains the best-known oracle complexity for strongly convex problems and an improved convergence rate for the case n ≥ L/μ. We also give experimental results justifying our theoretical results and showing the effectiveness of our algorithm.
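
      For reference, a sketch of the classical SAGA update that the paper accelerates (plain SAGA on a toy least-squares problem; the proposed momentum acceleration with a scaling factor is not reproduced here):

```python
# Plain SAGA on f(x) = (1/n) * sum_i 0.5 * (a_i.x - b_i)^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x = np.zeros(d)

def grad_i(x, i):                                  # gradient of the i-th component
    return (A[i] @ x - b[i]) * A[i]

table = np.array([grad_i(x, i) for i in range(n)])  # stored past gradients
avg = table.mean(axis=0)
step = 1.0 / (3 * (A ** 2).sum(axis=1).max())        # standard SAGA step: 1 / (3 * L_max)

for _ in range(20000):
    i = rng.integers(n)
    g = grad_i(x, i)
    x -= step * (g - table[i] + avg)    # SAGA: variance-reduced stochastic gradient step
    avg += (g - table[i]) / n           # keep the running average consistent
    table[i] = g

print(np.linalg.norm(A.T @ (A @ x - b)) / n)   # near-zero gradient of the average loss
```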

    Tuesday 13 16:30 - 18:00 ML|DM - Data Mining 3 (2505-2506)

    Chair: Decebal Constantin Mocanu
    • #3254
      Learning Network Embedding with Community Structural Information
      Yu Li, Ying Wang, Tingting Zhang, Jiawei Zhang, Yi Chang
      Details | PDF
      Data Mining 3

      Network embedding is an effective approach to learn the low-dimensional representations of vertices in networks, aiming to capture and preserve the structure and inherent properties of networks. The vast majority of existing network embedding methods exclusively focus on vertex proximity of networks, while ignoring the network internal community structure. However, the homophily principle indicates that vertices within the same community are more similar to each other than those from different communities, thus vertices within the same community should have similar vertex representations. Motivated by this, we propose a novel network embedding framework NECS to learn the Network Embedding with Community Structural information, which preserves the high-order proximity and incorporates the community structure in vertex representation learning. We formulate the problem into a principled optimization framework and provide an effective alternating algorithm to solve it. Extensive experimental results on several benchmark network datasets demonstrate the effectiveness of the proposed framework in various network analysis tasks including network reconstruction, link prediction and vertex classification.

    • #5016
      Unified Embedding Model over Heterogeneous Information Network for Personalized Recommendation
      Zekai Wang, Hongzhi Liu, Yingpeng Du, Zhonghai Wu, Xing Zhang
      Details | PDF
      Data Mining 3

      Most heterogeneous information network (HIN) based recommendation models rely on modeling users and items with meta-paths. However, they always model users and items in isolation under each meta-path, which may mislead information extraction. In addition, they only consider the structural features of HINs when modeling users and items, which may cause useful information for recommendation to be lost irreversibly. To address these problems, we propose a HIN-based unified embedding model for recommendation, called HueRec. We assume there exist some common characteristics under different meta-paths for each user or item, and use data from all meta-paths to learn unified user and item representations. The interrelations between meta-paths are thus utilized to alleviate the problems of data sparsity and noise on any single meta-path. Different from existing models, which first explore HINs and then make recommendations, we combine these two parts into an end-to-end model to avoid losing useful information in the initial phase. In addition, we embed all users, items and meta-paths into related latent spaces. Therefore, we can measure users' preferences over meta-paths to improve the performance of personalized recommendation. Extensive experiments show HueRec consistently outperforms state-of-the-art methods.

    • #5394
      Metric Learning on Healthcare Data with Incomplete Modalities
      Qiuling Suo, Weida Zhong, Fenglong Ma, Ye Yuan, Jing Gao, Aidong Zhang
      Details | PDF
      Data Mining 3

      Utilizing multiple modalities to learn a good distance metric is of vital importance for various clinical applications. However, in healthcare datasets it is common that some modalities are incomplete for some patients due to various technical and practical reasons. Existing metric learning methods cannot directly learn a distance metric on such data with missing modalities. Nevertheless, the incomplete data contain valuable information to characterize patient similarity and modality relationships, and they should not be ignored during the learning process. To tackle the aforementioned challenges, we propose a metric learning framework that performs missing modality completion and multi-modal metric learning simultaneously. Employing generative adversarial networks, we incorporate both complete and incomplete data to learn the mapping relationships between modalities. After completing the missing modalities, we use the nonlinear representations extracted by the discriminator to learn the distance metric among patients. By jointly training the adversarial generation part and the metric learning part, the similarity among patients can be learned on data with missing modalities. Experimental results show that the proposed framework learns a more accurate distance metric on real-world healthcare datasets with incomplete modalities, compared with state-of-the-art approaches. Meanwhile, the quality of the generated modalities is preserved.

    • #316
      Joint Link Prediction and Network Alignment via Cross-graph Embedding
      Xingbo Du, Junchi Yan, Hongyuan Zha
      Details | PDF
      Data Mining 3

      Link prediction and network alignment are two important problems in social network analysis and other network related applications. Considerable effort has been devoted to each of these two problems, though often independently of the other. In this paper we argue that these two tasks are related and present a joint link prediction and network alignment framework, in which a novel cross-graph node embedding technique is devised to allow for information propagation. Our approach can either work with a few initial vertex correspondences as seeds, or from scratch. Through extensive experiments on public benchmarks, we show that link prediction and network alignment can benefit each other, especially by improving the recall of both tasks.

    • #1709
      Masked Graph Convolutional Network
      Liang Yang, Fan Wu, Yingkui Wang, Junhua Gu, Yuanfang Guo
      Details | PDF
      Data Mining 3

      Semi-supervised classification is a fundamental technology for processing structured and unstructured data in the machine learning field. The traditional attribute-graph based semi-supervised classification methods propagate labels over a graph which is usually constructed from the data features, while graph convolutional neural networks smooth the node attributes, i.e., propagate the attributes, over the real graph topology. In this paper, they are interpreted from the perspective of propagation, and accordingly categorized into symmetric and asymmetric propagation based methods. From the perspective of propagation, both the traditional and network based methods propagate certain objects over the graph. However, unlike in label propagation, the intuition that "connected data samples tend to be similar in terms of their attributes", which underlies attribute propagation, is only partially valid. Therefore, a masked graph convolutional network (Masked GCN) is proposed, which only propagates a portion of the attributes to the neighbours according to a masking indicator learned for each node by jointly considering the attribute distributions in local neighbourhoods and the impact on the classification results. Extensive experiments on transductive and inductive node classification tasks have demonstrated the superiority of the proposed method.
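
      As a rough illustration only (not the paper's learned masking), a single graph-convolution layer in which each node propagates a masked portion of its attributes could be sketched in NumPy as follows; here the per-node mask is assumed to be given rather than learned:

        import numpy as np

        def masked_gcn_layer(A, X, W, mask):
            """One graph-convolution layer where each node only propagates a
            masked portion of its attributes. `mask` has the same shape as X
            with entries in [0, 1]; in the paper the mask is learned jointly."""
            A_hat = A + np.eye(A.shape[0])                      # add self-loops
            d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
            A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
            H = A_norm @ (mask * X) @ W                         # propagate masked attributes
            return np.maximum(H, 0.0)                           # ReLU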

    • #3778
      Improving Cross-lingual Entity Alignment via Optimal Transport
      Shichao Pei, Lu Yu, Xiangliang Zhang
      Details | PDF
      Data Mining 3

      Cross-lingual entity alignment identifies entity pairs that share the same meaning but are located in different-language knowledge graphs (KGs). The study in this paper addresses two limitations that widely exist in current solutions: 1) the alignment loss functions defined at the entity level serve the purpose of aligning labeled entities well but fail to match the whole picture of labeled and unlabeled entities in different KGs; 2) the translation from one domain to the other has been considered (e.g., X to Y by M1 or Y to X by M2), but the important duality of alignment between different KGs (X to Y by M1 and Y to X by M2) is ignored. We propose a novel entity alignment framework (OTEA), which dually optimizes the entity-level loss and the group-level loss via optimal transport theory. We also impose a regularizer on the dual translation matrices to mitigate the effect of noise during transformation. Extensive experimental results show that our model consistently outperforms the state of the art with significant improvements in alignment accuracy.
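
      The group-level objective rests on computing an optimal transport plan between two sets of entity embeddings. A minimal, generic Sinkhorn sketch for entropy-regularized optimal transport (not the paper's exact formulation; the cost matrix, regularization strength and iteration count are illustrative) is:

        import numpy as np

        def sinkhorn_plan(C, reg=0.1, iters=200):
            """Entropy-regularized optimal transport between two uniform
            distributions, given a cost matrix C of shape (n, m).
            Returns an approximate transport plan."""
            n, m = C.shape
            a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
            K = np.exp(-C / reg)
            u = np.ones(n)
            for _ in range(iters):
                v = b / (K.T @ u)      # match column marginals
                u = a / (K @ v)        # match row marginals
            return u[:, None] * K * v[None, :]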

    Tuesday 13 16:30 - 18:00 ML|LT - Learning Theory (2401-2402)

    Chair: Colin de la Higuera
    • #531
      Approximate Optimal Transport for Continuous Densities with Copulas
      Jinjin Chi, Jihong Ouyang, Ximing Li, Yang Wang, Meng Wang
      Details | PDF
      Learning Theory

      Optimal Transport (OT) provides a powerful framework for comparing probability distributions, and it has attracted increasing attention within the machine learning community. However, it suffers from a severe computational burden, due to the intractable objective with respect to the distributions of interest. In particular, there are still very few attempts at continuous OT, i.e., OT for comparing continuous densities. To this end, we develop a novel continuous OT method, namely Copula OT (Cop-OT). The basic idea is to transform the primal objective of continuous OT into a tractable form with respect to the copula parameter, which can be efficiently solved by stochastic optimization with lower time and memory requirements. Empirical results on real applications of image retrieval and synthetic data demonstrate that our Cop-OT can obtain more accurate approximations of continuous OT values than the state-of-the-art baselines.

    • #848
      Improved Algorithm on Online Clustering of Bandits
      Shuai Li, Wei Chen, Shuai Li, Kwong-Sak Leung
      Details | PDF
      Learning Theory

      We generalize the setting of online clustering of bandits by allowing non-uniform distribution over user frequencies. A more efficient algorithm is proposed with simple set structures to represent clusters. We prove a regret bound for the new algorithm which is free of the minimal frequency over users. The experiments on both synthetic and real datasets consistently show the advantage of the new algorithm over existing methods.

    • #1736
      Heavy-ball Algorithms Always Escape Saddle Points
      Tao Sun, Dongsheng Li, Zhe Quan, Hao Jiang, Shengguo Li, Yong Dou
      Details | PDF
      Learning Theory

      Nonconvex optimization algorithms with random initialization have attracted increasing attention recently. It has been shown that many first-order methods always avoid saddle points when started from random points. In this paper, we answer the question: can nonconvex heavy-ball algorithms with random initialization avoid saddle points? The answer is yes! Directly applying the existing proof technique to the heavy-ball algorithms is hard because each heavy-ball iteration involves both the current and the previous point, so the algorithms cannot be formulated as an iteration x_{k+1} = g(x_k) under some mapping g. To this end, we design a new mapping on a new space. With some transformations, the heavy-ball algorithm can be interpreted as iterations of this mapping. Theoretically, we prove that heavy-ball gradient descent enjoys a larger stepsize than gradient descent for escaping saddle points. The heavy-ball proximal point algorithm is also considered; we prove that it, too, always escapes saddle points.
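
      For reference, the heavy-ball (Polyak momentum) iteration discussed above can be written in a few lines; this is the standard scheme, not code from the paper, and the stepsize and momentum values are placeholders:

        import numpy as np

        def heavy_ball(grad, x0, lr=0.1, beta=0.9, iters=1000):
            """Heavy-ball iteration:
            x_{k+1} = x_k - lr * grad(x_k) + beta * (x_k - x_{k-1})."""
            x_prev = np.array(x0, dtype=float)
            x = x_prev.copy()
            for _ in range(iters):
                x_next = x - lr * grad(x) + beta * (x - x_prev)
                x_prev, x = x, x_next
            return x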

    • #1738
      BN-invariant Sharpness Regularizes the Training Model to Better Generalization
      Mingyang Yi, Huishuai Zhang, Wei Chen, Zhi-Ming Ma, Tie-Yan Liu
      Details | PDF
      Learning Theory

      It is often argued that flatter minima generalize better. However, it has been pointed out that the usual definitions of sharpness, which consider either the maximum or the integral of the loss over a delta ball of parameters around a minimum, cannot give consistent measurements for scale-invariant neural networks, e.g., networks with batch normalization layers. In this paper, we first propose a measure of sharpness, BN-Sharpness, which gives a consistent value for networks that are equivalent under BN. It achieves scale invariance by connecting the integral diameter with the scale of the parameters. Then we present a computationally efficient way to calculate BN-sharpness approximately, i.e., a one-dimensional integral along the "sharpest" direction. Furthermore, we use BN-sharpness to regularize training and design an algorithm to minimize the new regularized objective. Our algorithm achieves considerably better performance than vanilla SGD over various experimental settings.

    • #2451
      Conditions on Features for Temporal Difference-Like Methods to Converge
      Marcus Hutter, Samuel Yang-Zhao, Sultan Javed Majeed
      Details | PDF
      Learning Theory

      The convergence of many reinforcement learning (RL) algorithms with linear function approximation has been investigated extensively but most proofs assume that these methods converge to a unique solution. In this paper, we provide a complete characterization of non-uniqueness issues for a large class of reinforcement learning algorithms, simultaneously unifying many counter-examples to convergence in a theoretical framework. We achieve this by proving a new condition on features that can determine whether the convergence assumptions are valid or non-uniqueness holds. We consider a general class of RL methods, which we call natural algorithms, whose solutions are characterized as the fixed point of a projected Bellman equation. Our main result proves that natural algorithms converge to the correct solution if and only if all the value functions in the approximation space satisfy a certain shape. This implies that natural algorithms are, in general, inherently prone to converge to the wrong solution for most feature choices even if the value function can be represented exactly. Given our results, we show that state aggregation-based features are a safe choice for natural algorithms and also provide a condition for finding convergent algorithms under other feature constructions.
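
      The "natural algorithms" considered here include classical temporal-difference methods with linear function approximation. As a generic textbook sketch of such a method (not the paper's framework; `phi`, the feature dimension and the transition source are assumptions), TD(0) with linear value approximation looks like:

        import numpy as np

        def td0_linear(transitions, phi, d, alpha=0.05, gamma=0.99):
            """TD(0) with linear value approximation V(s) = phi(s) . w.
            `transitions` yields (s, r, s_next, done) tuples and `phi`
            maps a state to a length-d feature vector."""
            w = np.zeros(d)
            for s, r, s_next, done in transitions:
                target = r + (0.0 if done else gamma * phi(s_next) @ w)
                w += alpha * (target - phi(s) @ w) * phi(s)
            return w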

    • #5172
      Motion Invariance in Visual Environments
      Alessandro Betti, Marco Gori, Stefano Melacci
      Details | PDF
      Learning Theory

      The puzzle of computer vision might find new challenging solutions when we realize that most successful methods work at the image level, which is remarkably more difficult than directly processing visual streams, just as happens in nature. In this paper, we claim that the processing of a stream of frames naturally leads to the formulation of the motion invariance principle, which enables the construction of a new theory of visual learning based on convolutional features. The theory addresses a number of intriguing questions that arise in natural vision, and offers a well-posed computational scheme for the discovery of convolutional filters over the retina. They are driven by the Euler-Lagrange differential equations derived from the principle of least cognitive action, which parallels the laws of mechanics. Unlike traditional convolutional networks, which need massive supervision, the proposed theory offers a truly new scenario in which feature learning takes place by unsupervised processing of video signals. An experimental report of the theory is presented, where we show that features extracted under motion invariance yield an improvement that can be assessed by measuring information-based indexes.

    Tuesday 13 16:30 - 18:00 CV|LV - Language and Vision 1 (2403-2404)

    Chair: Sheng Tang
    • #801
      Talking Face Generation by Conditional Recurrent Adversarial Network
      Yang Song, Jingwen Zhu, Dawei Li, Andy Wang, Hairong Qi
      Details | PDF
      Language and Vision 1

      Given an arbitrary face image and an arbitrary speech clip, the proposed work attempts to generate the talking face video with accurate lip synchronization. Existing works either do not consider temporal dependency across video frames thus yielding abrupt facial and lip movement or are limited to the generation of talking face video for a specific person thus lacking generalization capacity. We propose a novel conditional recurrent generation network that incorporates both image and audio features in the recurrent unit for  temporal dependency. To achieve both image- and video-realism, a pair of spatial-temporal discriminators are included in the network for better image/video quality. Since accurate lip synchronization is essential to the success of talking face video generation, we also construct a lip-reading discriminator to boost the accuracy of lip synchronization. We also extend the network to model the natural pose and expression of talking face on the Obama Dataset. Extensive experimental results demonstrate the superiority of our framework over the state-of-the-arts in terms of visual quality, lip sync accuracy, and smooth transition pertaining to both lip and facial movement.

    • #2791
      Dynamically Visual Disambiguation of Keyword-based Image Search
      Yazhou Yao, Zeren Sun, Fumin Shen, Li Liu, Limin Wang, Fan Zhu, Lizhong Ding, Gangshan Wu, Ling Shao
      Details | PDF
      Language and Vision 1

      Due to the high cost of manual annotation, learning directly from the web has attracted broad attention. One issue that limits the performance of such methods is the problem of visual polysemy. To address this issue, we present an adaptive multi-model framework that resolves polysemy by visual disambiguation. Compared to existing methods, the primary advantage of our approach is that it can adapt to dynamic changes in the search results. Our proposed framework consists of two major steps: we first discover and dynamically select text queries according to the image search results, then we employ the proposed saliency-guided deep multi-instance learning network to remove outliers and learn classification models for visual disambiguation. Extensive experiments demonstrate the superiority of our proposed approach.

    • #3549
      Convolutional Auto-encoding of Sentence Topics for Image Paragraph Generation
      Jing Wang, Yingwei Pan, Ting Yao, Jinhui Tang, Tao Mei
      Details | PDF
      Language and Vision 1

      Image paragraph generation is the task of producing a coherent story (usually a paragraph) that describes the visual content of an image. The problem nevertheless is not trivial, especially when there are multiple descriptive and diverse gists to be considered for paragraph generation, which often happens in real images. A valid question is how to encapsulate such gists/topics that are worthy of mention from an image, and then describe the image from one topic to another but holistically with a coherent structure. In this paper, we present a new design, Convolutional Auto-Encoding (CAE), that purely employs a convolutional and deconvolutional auto-encoding framework for topic modeling on the region-level features of an image. Furthermore, we propose an architecture, namely CAE plus Long Short-Term Memory (dubbed CAE-LSTM), that novelly integrates the learnt topics in support of paragraph generation. Technically, CAE-LSTM capitalizes on a two-level LSTM-based paragraph generation framework with an attention mechanism. The paragraph-level LSTM captures the inter-sentence dependency in a paragraph, while the sentence-level LSTM generates one sentence conditioned on each learnt topic. Extensive experiments are conducted on the Stanford image paragraph dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, CAE-LSTM increases CIDEr performance from 20.93% to 25.15%.

    • #3674
      Multi-Level Visual-Semantic Alignments with Relation-Wise Dual Attention Network for Image and Text Matching
      Zhibin Hu, Yongsheng Luo, Jiong Lin, Yan Yan, Jian Chen
      Details | PDF
      Language and Vision 1

      Image-text matching is central to visual-semantic cross-modal retrieval and has been attracting extensive attention recently. Previous studies have been devoted to finding the latent correspondence between image regions and words, e.g., connecting key words to specific regions of salient objects. However, existing methods are usually committed to handling concrete objects rather than abstract ones, e.g., a description of some action, which in fact are also ubiquitous in real-world description texts. The main challenge in dealing with abstract objects is that there are no explicit connections between them, unlike their concrete counterparts. One therefore has to find the implicit and intrinsic connections between them instead. In this paper, we propose a relation-wise dual attention network (RDAN) for image-text matching. Specifically, we maintain an over-complete set that contains pairs of regions and words. Built upon this set, we encode the local correlations and the global dependencies between regions and words by training a visual-semantic network. Then a dual pathway attention network is presented to infer the visual-semantic alignments and image-text similarity. Extensive experiments validate the efficacy of our method, achieving state-of-the-art performance on several public benchmark datasets.

    • #4765
      Densely Connected Attention Flow for Visual Question Answering
      Fei Liu, Jing Liu, Zhiwei Fang, Richang Hong, Hanqing Lu
      Details | PDF
      Language and Vision 1

      Learning effective interactions between multi-modal features is at the heart of visual question answering (VQA). A common defect of existing VQA approaches is that they only consider a very limited amount of interactions, which may not be enough to model the latent complex image-question relations that are necessary for accurately answering questions. Therefore, in this paper, we propose a novel DCAF (Densely Connected Attention Flow) framework for modeling dense interactions. It densely connects all pairwise layers of the network via Attention Connectors, capturing fine-grained interplay between image and question across all hierarchical levels. The proposed Attention Connector efficiently connects the multi-modal features at any two layers with symmetric co-attention, and produces interaction-aware attention features. Experimental results on three publicly available datasets show that the proposed method achieves state-of-the-art performance.

    • #2695
      Video Interactive Captioning with Human Prompts
      Aming Wu, Yahong Han, Yi Yang
      Details | PDF
      Language and Vision 1

      Video captioning aims at generating a proper sentence to describe the video content. As a video often includes rich visual content and semantic details, different people may be interested in different views, and the generated sentence always fails to meet these ad hoc expectations. In this paper, we make a new attempt: we launch a round of interaction between a human and a captioning agent. After generating an initial caption, the agent asks for a short prompt from the human as a clue to his expectation. Then, based on the prompt, the agent can generate a more accurate caption. We name this process a new task of video interactive captioning (ViCap). Taking a video and an initial caption as input, we devise the ViCap agent, which consists of a video encoder, an initial caption encoder, and a refined caption generator. We show that ViCap can be trained in a fully supervised (with ground truth) or weakly supervised (with only prompts) manner. For the evaluation of ViCap, we first extend MSRVTT with interaction ground truth. Experimental results not only show that the prompts can help generate more accurate captions, but also demonstrate the good performance of the proposed method.

    Tuesday 13 16:30 - 18:00 ML|LGM - Learning Graphical Models (2405-2406)

    Chair: Satoshi Oyama
    • #1256
      Dynamic Hypergraph Neural Networks
      Jianwen Jiang, Yuxuan Wei, Yifan Feng, Jingxuan Cao, Yue Gao
      Details | PDF
      Learning Graphical Models

      In recent years, graph/hypergraph-based deep learning methods have attracted much attention from researchers. These deep learning methods take the graph/hypergraph structure as prior knowledge in the model. However, hidden and important relations are not directly represented in the inherent structure. To tackle this issue, we propose a dynamic hypergraph neural networks framework (DHGNN), which is composed of stacked layers of two modules: dynamic hypergraph construction (DHG) and hypergraph convolution (HGC). Considering that the initially constructed hypergraph is probably not a suitable representation of the data, the DHG module dynamically updates the hypergraph structure at each layer. Then hypergraph convolution is introduced to encode high-order data relations in the hypergraph structure. The HGC module includes two phases: vertex convolution and hyperedge convolution, which are designed to aggregate features among vertices and hyperedges, respectively. We have evaluated our method on standard datasets, the Cora citation network and a Microblog dataset. Our method outperforms state-of-the-art methods. More experiments are conducted to demonstrate the effectiveness and robustness of our method under diverse data distributions.
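
      For orientation, the commonly used single-step hypergraph convolution, which DHGNN refines into separate vertex and hyperedge phases, can be sketched as below; the incidence matrix H, the edge weights and the nonlinearity are illustrative assumptions rather than the paper's exact design:

        import numpy as np

        def hypergraph_conv(H, X, Theta, edge_w=None):
            """Baseline hypergraph convolution:
            X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta,
            where H is the |V| x |E| incidence matrix."""
            n_v, n_e = H.shape
            W = np.ones(n_e) if edge_w is None else edge_w
            Dv = (H * W[None, :]).sum(axis=1)                  # vertex degrees
            De = H.sum(axis=0)                                 # hyperedge degrees
            Dv_inv_sqrt = 1.0 / np.sqrt(np.maximum(Dv, 1e-12))
            msg = (H * W[None, :] / np.maximum(De, 1e-12)[None, :]) @ (H.T @ (Dv_inv_sqrt[:, None] * X))
            return np.maximum(Dv_inv_sqrt[:, None] * msg @ Theta, 0.0)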

    • #3145
      Neural Network based Continuous Conditional Random Field for Fine-grained Crime Prediction
      Fei Yi, Zhiwen Yu, Fuzhen Zhuang, Bin Guo
      Details | PDF
      Learning Graphical Models

      Crime prediction has always been a crucial issue for public safety, and recent works have shown the effectiveness of taking spatial correlation, such as region similarity or interaction, into account for fine-grained crime modeling. In our work, we seek to reveal the relationships across regions for crime prediction using a Continuous Conditional Random Field (CCRF). However, a conventional CCRF becomes impractical when facing a dense graph that considers all relationships between regions. To deal with this, we propose a Neural Network based CCRF (NN-CCRF) model that formulates the CCRF as an end-to-end neural network framework, which reduces the complexity of model training and improves the overall performance. We integrate the CCRF with NN by introducing a Long Short-Term Memory (LSTM) component to learn the non-linear mapping from inputs to outputs of each region, and a modified Stacked Denoising AutoEncoder (SDAE) component for modeling pairwise interactions between regions. Experiments conducted on two different real-world datasets demonstrate the superiority of our proposed model over state-of-the-art methods.

    • #5018
      Efficient Regularization Parameter Selection for Latent Variable Graphical Models via Bi-Level Optimization
      Joachim Giesen, Frank Nussbaum, Christopher Schneider
      Details | PDF
      Learning Graphical Models

      Latent variable graphical models are an extension of Gaussian graphical models that decompose the precision matrix into a sparse and a low-rank component. These models can be learned with theoretical guarantees from data via a semidefinite program. This program features two regularization terms, one for promoting sparsity and one for promoting a low rank. In practice, however, it is not straightforward to learn a good model since the model highly depends on the regularization parameters that control the relative weight of the loss function and the two regularization terms. Selecting good regularization parameters can be modeled as a bi-level optimization problem, where the upper level optimizes some form of generalization error and the lower level provides a description of the solution gamut. The solution gamut is the set of feasible solutions for all possible values of the regularization parameters. In practice, it is often not feasible to describe the solution gamut efficiently. Hence, algorithmic schemes for approximating solution gamuts have been devised. One such scheme is Benson's generic vector optimization algorithm that comes with approximation guarantees. So far Benson's algorithm has not been used in conjunction with semidefinite programs like the latent variable graphical Lasso. Here, we develop an adaptive variant of Benson's algorithm for the semidefinite case and show that it keeps the known approximation and run time guarantees. Furthermore, Benson's algorithm turns out to be practically more efficient for the latent variable graphical model than the existing solution gamut approximation scheme on a wide range of data sets.

    • #1125
      Amalgamating Filtered Knowledge: Learning Task-customized Student from Multi-task Teachers
      Jingwen Ye, Xinchao Wang, Yixin Ji, Kairi Ou, Mingli Song
      Details | PDF
      Learning Graphical Models

      Many well-trained Convolutional Neural Network (CNN) models have now been released online by developers for the sake of effortless reproduction. In this paper, we treat such pre-trained networks as teachers and explore how to learn a target student network for customized tasks, using multiple teachers that handle different tasks. We assume no human-labelled annotations are available, and each teacher model can be either a single- or multi-task network, where the former is a degenerate case of the latter. The student model, depending on the customized tasks, learns the related knowledge filtered from the multiple teachers, and eventually masters the complete set or a subset of expertise from all teachers. To this end, we adopt a layer-wise training strategy, which entangles the student's network block to be learned with the corresponding teachers. As demonstrated on several benchmarks, the learned student network achieves very promising results, even outperforming the teachers on the customized tasks.

    • #3279
      Large Scale Evolving Graphs with Burst Detection
      Yifeng Zhao, Xiangwei Wang, Hongxia Yang, Le Song, Jie Tang
      Details | PDF
      Learning Graphical Models

      Analyzing large-scale evolving graphs is crucial for understanding the dynamic and evolutionary nature of social networks. Most existing works focus on discovering repeated and consistent temporal patterns; however, such patterns cannot fully explain the complexity observed in dynamic networks. For example, in recommendation scenarios, users sometimes purchase products on a whim while window shopping. Thus, in this paper, we design and implement a novel framework called BurstGraph which can capture both recurrent and consistent patterns, and especially unexpected bursty network changes. The performance of the proposed algorithm is demonstrated on both a simulated dataset and a world-leading E-Commerce company dataset, showing that it is able to discriminate recurrent events from extremely bursty events in terms of action propensity.

    • #3194
      Parametric Manifold Learning of Gaussian Mixture Models
      Ziquan Liu, Lei Yu, Janet H. Hsiao, Antoni B. Chan
      Details | PDF
      Learning Graphical Models

      The Gaussian Mixture Model (GMM) is among the most widely used parametric probability distributions for representing data. However, it is complicated to analyze the relationships among GMMs since they lie on a high-dimensional manifold. Previous works either perform clustering of GMMs, which learns a limited discrete latent representation, or kernel-based embedding of GMMs, which is not interpretable due to the difficulty of computing the inverse mapping. In this paper, we propose Parametric Manifold Learning of GMMs (PML-GMM), which learns a parametric mapping from a low-dimensional latent space to a high-dimensional GMM manifold. Similar to PCA, the proposed mapping is parameterized by the principal axes for the component weights, means, and covariances, which are optimized to minimize the reconstruction loss measured using the Kullback-Leibler divergence (KLD). As the KLD between two GMMs is intractable, we approximate the objective function by a variational upper bound, which is optimized by an EM-style algorithm. Moreover, we derive an efficient solver by alternating optimization of subproblems and exploit Monte Carlo sampling to escape from local minima. We demonstrate the effectiveness of PML-GMM through experiments on synthetic, eye-fixation, flow cytometry, and social check-in data.

    Tuesday 13 17:30 - 18:00 Early Career 2 - Early Career Spotlight 2 (J)

    Chair: Dengji Zhao
    • #11057
      Integrating Learning with Game Theory for Societal Challenges
      Fei Fang
      Details | PDF
      Early Career Spotlight 2

      Real-world problems often involve more than one decision maker, each with their own goals or preferences. While game theory is an established paradigm for reasoning about strategic interactions between multiple decision-makers, its applicability in practice is often limited by the intractability of computing equilibria in large games, and by the fact that the game parameters are sometimes unknown and the players are often not perfectly rational. On the other hand, machine learning and reinforcement learning have led to huge successes in various domains and can be leveraged to overcome the limitations of game-theoretic analysis. In this paper, we introduce our work on integrating learning with computational game theory for addressing societal challenges such as security and sustainability.

    Wednesday 14 08:30 - 09:20 Invited Talk (D-I)

    Chair: Christian Bessiere
  • Empirical Model Learning: merging knowledge-based and data-driven decision models through machine learning
    Michela Milano
    Invited Talk

    Wednesday 14 09:30 - 09:35 Industry days (D-I)

    Chair: Yu Zheng
  • Opening Remarks
    Industry days

    Wednesday 14 09:30 - 10:30 AI-HWB - ST: AI for Improving Human Well-Being 1 (J)

    Chair: Maria Gini
    • #1213
      Truly Batch Apprenticeship Learning with Deep Successor Features
      Donghun Lee, Srivatsan Srinivasan, Finale Doshi-Velez
      Details | PDF
      ST: AI for Improving Human Well-Being 1

      We introduce a novel apprenticeship learning algorithm to learn an expert's underlying reward structure in off-policy model-free batch settings. Unlike existing methods that require hand-crafted features, on-policy evaluation, further data acquisition for evaluation policies or the knowledge of model dynamics, our algorithm requires only batch data (demonstrations) of the observed expert behavior.  Such settings are common in many real-world tasks---health care, finance, or industrial process control---where accurate simulators do not exist and additional data acquisition is costly.  We develop a transition-regularized imitation learning model to learn a rich feature representation and a near-expert initial policy that makes the subsequent batch inverse reinforcement learning process viable. We also introduce deep successor feature networks that perform off-policy evaluation to estimate feature expectations of candidate policies. Under the batch setting, our method achieves superior results on control benchmarks as well as a real clinical task of sepsis management in the Intensive Care Unit.

    • #1239
      Automatic Grassland Degradation Estimation Using Deep Learning
      Xiyu Yan, Yong Jiang, Shuai Chen, Zihao He, Chunmei Li, Shu-Tao Xia, Tao Dai, Shuo Dong, Feng Zheng
      Details | PDF
      ST: AI for Improving Human Well-Being 1

      Grassland degradation estimation is essential to prevent global land desertification and sandstorms. Typically, the key to such estimation is to measure the coverage of indicator plants. However, traditional methods of estimation rely heavily on human eyes and manual labor, thus inevitably leading to subjective results and high labor costs. In contrast, deep learning-based image segmentation algorithms are potentially capable of automatic assessment of the coverage of indicator plants. Nevertheless, a suitable image dataset comprising grassland images is not publicly available. To this end, we build an original Automatic Grassland Degradation Estimation Dataset (AGDE-Dataset), with a large number of grassland images captured from the wild. Based on AGDE-Dataset, we are able to propose a brand new scheme to automatically estimate grassland degradation, which mainly consists of two components. 1) Semantic segmentation: we design a deep neural network with an improved encoder-decoder structure to implement semantic segmentation of grassland images. In addition, we propose a novel Focal-Hinge Loss to alleviate the class imbalance of semantics in the training stage.  2) Degradation estimation: we provide the estimation of grassland degradation based on the results of semantic segmentation. Experimental results show that the proposed method achieves satisfactory accuracy in grassland degradation estimation.

    • #5979
      DDL: Deep Dictionary Learning for Predictive Phenotyping
      Tianfan Fu, Trong Nghia Hoang, Cao Xiao, Jimeng Sun
      Details | PDF
      ST: AI for Improving Human Well-Being 1

      Predictive phenotyping is about accurately predicting what phenotypes will occur in the next clinical visit based on longitudinal Electronic Health Record (EHR) data. Several deep learning (DL) models have demonstrated great performance in predictive phenotyping. However, these DL-based phenotyping models require access to a large amount of labeled data, which is often expensive to acquire. To address this label-insufficiency challenge, we propose a deep dictionary learning framework (DDL) for phenotyping, which utilizes unlabeled data as a complementary source of information to generate a better, more succinct data representation. With extensive experiments on multiple real-world EHR datasets, we demonstrate that DDL can outperform the state-of-the-art predictive phenotyping methods on a wide variety of clinical tasks that require patient phenotyping, such as heart failure classification, mortality prediction, and sequential prediction. All empirical results consistently show that unlabeled data can indeed be used to generate better data representations, which helps improve DDL's phenotyping performance over existing baseline methods that only use labeled data.
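
      The deep dictionary learning framework itself is not reproduced here; as a point of reference only, classical (shallow) sparse dictionary learning on a feature matrix can be sketched with scikit-learn as follows, with random toy data standing in for EHR features:

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        # Toy stand-in for visit-level EHR feature vectors (rows = visits).
        X = np.random.default_rng(0).normal(size=(200, 50))

        # Classical sparse dictionary learning: X is approximated by codes @ dictionary.
        dl = DictionaryLearning(n_components=16, alpha=1.0, max_iter=200, random_state=0)
        codes = dl.fit_transform(X)      # sparse codes usable as patient representations
        dictionary = dl.components_      # learned atoms, shape (16, 50)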

    • #203
      Bidirectional Active Learning with Gold-Instance-Based Human Training
      Feilong Tang
      Details | PDF
      ST: AI for Improving Human Well-Being 1

      Active learning was proposed to improve learning performance and reduce labeling cost. However, traditional relabeling-based schemes seriously limit the ability of active learning because humans may repeatedly make similar mistakes without improving their expertise. In this paper, we propose a Bidirectional Active Learning with human Training (BALT) model that can enhance humans' related expertise during labeling and improve relabeling quality accordingly. We quantitatively capture how gold instances can be used both to estimate labelers' previous performance and to improve their future correctness ratio. Then, we propose a backward relabeling scheme that actively selects the most likely incorrectly labeled instances for relabeling. Experimental results on three real datasets demonstrate that our BALT algorithm significantly outperforms representative related proposals.

    Wednesday 14 09:30 - 10:30 ML|AML - Adversarial Machine Learning 1 (L)

    Chair: Jen-Tzung Chien
    • #2984
      Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization
      Feihu Huang, Shangqian Gao, Songcan Chen, Heng Huang
      Details | PDF
      Adversarial Machine Learning 1

      Alternating direction method of multipliers (ADMM) is a popular optimization tool for composite and constrained problems in machine learning. However, in many machine learning problems such as black-box learning and bandit feedback, ADMM can fail because the explicit gradients of these problems are difficult or even infeasible to obtain. Zeroth-order (gradient-free) methods can effectively solve these problems because only objective function values are required in the optimization. Although a few zeroth-order ADMM methods have recently been proposed, they rely on the convexity of the objective function, so these existing zeroth-order methods are limited in many applications. Thus, in this paper, we propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) for solving nonconvex problems with multiple nonsmooth penalties, based on the coordinate smoothing gradient estimator. Moreover, we prove that both ZO-SVRG-ADMM and ZO-SAGA-ADMM have a convergence rate of O(1/T), where T denotes the number of iterations. In particular, our methods not only reach the best convergence rate of O(1/T) for nonconvex optimization, but are also able to effectively solve many complex machine learning problems with multiple regularized penalties and constraints. Finally, we conduct experiments on black-box binary classification and structured adversarial attacks on black-box deep neural networks to validate the efficiency of our algorithms.
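
      At the core of such zeroth-order methods is a gradient estimate built purely from function evaluations. A generic coordinate-wise (central-difference) estimator of this kind, shown only to illustrate the idea (the paper's coordinate smoothing estimator may differ in details), is:

        import numpy as np

        def coordinate_zo_gradient(f, x, mu=1e-4):
            """Estimate the gradient of a black-box function f at x using
            only function values: g_i ~ (f(x + mu*e_i) - f(x - mu*e_i)) / (2*mu)."""
            d = x.size
            g = np.zeros(d)
            for i in range(d):
                e = np.zeros(d)
                e[i] = mu
                g[i] = (f(x + e) - f(x - e)) / (2.0 * mu)
            return g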

    • #5611
      On the Effectiveness of Low Frequency Perturbations
      Yash Sharma, Gavin Weiguang Ding, Marcus A. Brubaker
      Details | PDF
      Adversarial Machine Learning 1

      Carefully crafted, often imperceptible, adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs, rendering them unsuitable for safety-critical application domains. In addition, recent work has shown that constraining the attack space to a low frequency regime is particularly effective. Yet, it remains unclear whether this is due to generally constraining the attack search space or specifically removing high frequency components from consideration. By systematically controlling the frequency components of the perturbation, evaluating against the top-placing defense submissions in the NeurIPS 2017 competition, we empirically show that performance improvements in both the white-box and black-box transfer settings are yielded only when low frequency components are preserved. In fact, the defended models based on adversarial training are roughly as vulnerable to low frequency perturbations as undefended models, suggesting that the purported robustness of state-of-the-art ImageNet defenses is reliant upon adversarial perturbations being high frequency in nature. We do find that under L-inf-norm constraint 16/255, the competition distortion bound, low frequency perturbations are indeed perceptible. This questions the use of the L-inf-norm, in particular, as a distortion metric, and, in turn, suggests that explicitly considering the frequency space is promising for learning robust models which better align with human perception.
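
      As a concrete, hedged illustration of what "constraining a perturbation to a low frequency regime" can mean in practice (a simple DCT mask, not necessarily the exact procedure used in the paper):

        import numpy as np
        from scipy.fft import dctn, idctn

        def low_frequency_project(delta, keep=8):
            """Project a perturbation (H x W array) onto its low-frequency
            components by zeroing all but the top-left keep x keep DCT
            coefficients."""
            coeffs = dctn(delta, norm='ortho')
            mask = np.zeros_like(coeffs)
            mask[:keep, :keep] = 1.0
            return idctn(coeffs * mask, norm='ortho')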

    • #5955
      Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
      Nupur Kumari, Mayank Singh, Abhishek Sinha, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N Balasubramanian
      Details | PDF
      Adversarial Machine Learning 1

      Neural networks are vulnerable to adversarial attacks: small, visually imperceptible crafted noise which, when added to the input, drastically changes the output. The most effective method of defending against adversarial attacks is adversarial training. We analyze adversarially trained robust models to study their vulnerability to adversarial attacks at the level of the latent layers. Our analysis reveals that, contrary to the input layer, which is robust to adversarial attack, the latent layers of these robust models are highly susceptible to adversarial perturbations of small magnitude. Leveraging this information, we introduce a new technique, Latent Adversarial Training (LAT), which consists of fine-tuning the adversarially trained models to ensure robustness at the feature layers. We also propose Latent Attack (LA), a novel algorithm for constructing adversarial examples. LAT results in a minor improvement in test accuracy and leads to state-of-the-art adversarial accuracy against the universal first-order adversarial PGD attack, which is shown for the MNIST, CIFAR-10, CIFAR-100, SVHN and Restricted ImageNet datasets.

    • #10963
      (Sister Conferences Best Papers Track) Adversarial Attacks on Neural Networks for Graph Data
      Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
      Details | PDF
      Adversarial Machine Learning 1

      Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this extended abstract we summarize the key findings and contributions of our work, in which we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which focus on the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain we propose an efficient algorithm, Nettack, exploiting incremental computations. Our experimental study shows that the accuracy of node classification significantly drops even when performing only a few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful given only limited knowledge about the graph.

    Wednesday 14 09:30 - 10:30 ML|RL - Reinforcement Learning 2 (2701-2702)

    Chair: I-Chen Wu
    • #977
      Metatrace Actor-Critic: Online Step-Size Tuning by Meta-gradient Descent for Reinforcement Learning Control
      Kenny Young, Baoxiang Wang, Matthew E. Taylor
      Details | PDF
      Reinforcement Learning 2

      Reinforcement learning (RL) has had many successes, but significant hyperparameter tuning is commonly required to achieve good performance. Furthermore, when nonlinear function approximation is used, non-stationarity in the state representation can lead to learning instability. A variety of techniques exist to combat this, most notably experience replay or the use of parallel actors. These techniques stabilize learning by making the RL problem more similar to the supervised setting. However, they come at the cost of moving away from the RL problem as it is typically formulated, that is, a single agent learning online without maintaining a large database of training examples. To address these issues, we propose Metatrace, a meta-gradient descent based algorithm to tune the step-size online. Metatrace leverages the structure of eligibility traces, and works both for tuning a single scalar step-size and for tuning a separate step-size for each parameter. We empirically evaluate Metatrace for actor-critic on the Arcade Learning Environment. Results show Metatrace can speed up learning and improve performance in non-stationary settings.

    • #5320
      Successor Options: An Option Discovery Framework for Reinforcement Learning
      Rahul Ramesh, Manan Tomar, Balaraman Ravindran
      Details | PDF
      Reinforcement Learning 2

      The options framework in reinforcement learning models the notion of a skill or a temporally extended sequence of actions. The discovery of a reusable set of skills has typically entailed building options that navigate to bottleneck states. In this work, we instead adopt a complementary approach, where we attempt to discover options that navigate to landmark states. These states are prototypical representatives of well-connected regions and can hence access the associated region with relative ease. We propose Successor Options, which leverages successor representations to build a model of the state space. The intra-option policies are learnt using a novel pseudo-reward, and the model scales to high-dimensional spaces since it does not construct an explicit graph of the entire state space. Additionally, we also propose an Incremental Successor Options model that iterates between constructing successor representations and building options, which is useful when robust successor representations cannot be built solely from primitive actions. We demonstrate the efficacy of our approach on a collection of grid-worlds, and on the high-dimensional robotic control environment of Fetch.
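
      Since the method builds on successor representations, here is a generic tabular sketch of how such a representation is learned from observed transitions (standard textbook update, not the paper's deep variant; the transition source and hyperparameters are assumptions):

        import numpy as np

        def successor_representation(transitions, n_states, alpha=0.1, gamma=0.95):
            """Tabular successor representation: psi[s] accumulates expected
            discounted future state occupancies via the TD-like update
            psi[s] <- psi[s] + alpha * (one_hot(s) + gamma * psi[s'] - psi[s])."""
            psi = np.zeros((n_states, n_states))
            for s, s_next in transitions:
                target = np.eye(n_states)[s] + gamma * psi[s_next]
                psi[s] += alpha * (target - psi[s])
            return psi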

    • #5368
      An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents
      Felipe Petroski Such, Vashisht Madhavan, Rosanne Liu, Rui Wang, Pablo Samuel Castro, Yulun Li, Jiale Zhi, Ludwig Schubert, Marc G. Bellemare, Jeff Clune, Joel Lehman
      Details | PDF
      Reinforcement Learning 2

      Much human and computational effort has aimed to improve how deep reinforcement learning (DRL) algorithms perform on benchmarks such as the Atari Learning Environment. Comparatively less effort has focused on understanding what has been learned by such methods, and investigating and comparing the representations learned by different families of DRL algorithms. Sources of friction include the onerous computational requirements, and general logistical and architectural complications for running DRL algorithms at scale. We lessen this friction, by (1) training several algorithms at scale and releasing trained models, (2) integrating with a previous DRL model release, and (3) releasing code that makes it easy for anyone to load, visualize, and analyze such models. This paper introduces the Atari Zoo framework, which contains models trained across benchmark Atari games, in an easy-to-use format, as well as code that implements common modes of analysis and connects such models to a popular neural network visualization library. Further, to demonstrate the potential of this dataset and software package, we show initial quantitative and qualitative comparisons between the performance and representations of several DRL algorithms, highlighting interesting and previously unknown distinctions between them.

    • #5528
      Unobserved Is Not Equal to Non-existent: Using Gaussian Processes to Infer Immediate Rewards Across Contexts
      Hamoon Azizsoltani, Yeo Jin Kim, Markel Sanz Ausin, Tiffany Barnes, Min Chi
      Details | PDF
      Reinforcement Learning 2

      Learning optimal policies in real-world domains with delayed rewards is a major challenge in Reinforcement Learning. We address the credit assignment problem by proposing a Gaussian Process (GP)-based immediate reward approximation algorithm and evaluate its effectiveness in 4 contexts where rewards can be delayed for long trajectories. In one GridWorld game and 8 Atari games, where immediate rewards are available, our results showed that on 7 out of 9 games, the proposed GP-inferred reward policy performed at least as well as the immediate reward policy and significantly outperformed the corresponding delayed reward policy. In e-learning and healthcare applications, we combined GP-inferred immediate rewards with offline Deep Q-Network (DQN) policy induction and showed that the GP-inferred reward policies outperformed the policies induced using delayed rewards in both real-world contexts.
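
      The GP machinery referred to above is standard GP regression; a minimal sketch of the posterior mean with an RBF kernel (kernel choice, length scale and noise level are illustrative assumptions, not the paper's configuration) is:

        import numpy as np

        def gp_posterior_mean(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
            """Gaussian-process regression posterior mean with an RBF kernel;
            the generic building block for predicting rewards at unobserved inputs."""
            def rbf(A, B):
                sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-0.5 * sq / length_scale ** 2)
            K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
            K_star = rbf(X_test, X_train)
            return K_star @ np.linalg.solve(K, y_train)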

    Wednesday 14 09:30 - 10:30 AMS|EPAMBS - Economic Paradigms, Auctions and Market-Based Systems (2703-2704)

    Chair: Reyhan Aydogan
    • #1844
      Dispatching Through Pricing: Modeling Ride-Sharing and Designing Dynamic Prices
      Mengjing Chen, Weiran Shen, Pingzhong Tang, Song Zuo
      Details | PDF
      Economic Paradigms, Auctions and Market-Based Systems

      Over the past few years, ride-sharing has emerged as an effective way to relieve traffic congestion. A key problem for ride-sharing platforms is to come up with a revenue-optimal (or GMV-optimal) pricing scheme and a vehicle dispatching policy that incorporate geographic and temporal information. In this paper, we aim to tackle this problem via an economic approach. Modeled naively, the underlying optimization problem may be non-convex and thus hard to solve. To this end, we use a so-called "ironing" technique to convert the problem into an equivalent convex optimization problem via a clean Markov decision process (MDP) formulation, where the states are the driver distributions and the decision variables are the prices for each pair of locations. Our main finding is an efficient algorithm that computes the exact revenue-optimal (or GMV-optimal) randomized pricing scheme, which naturally induces the accompanying vehicle dispatching policy. We also conduct empirical evaluations of our solution on real data from a major ride-sharing platform and show its advantages over fixed pricing schemes as well as several prevalent surge-based pricing schemes.

    • #5005
      Strategic Signaling for Selling Information Goods
      Shani Alkoby, David Sarne, Igal Milchtaich
      Details | PDF
      Economic Paradigms, Auctions and Market-Based Systems

      This paper studies the benefit of using signaling by an information seller holding information that can completely disambiguate some uncertainty concerning the state of the world for the information buyer. We show that a necessary condition for the information seller to benefit from signaling in this model is having some "seed of truth" in the signaling scheme used. We then introduce two natural signaling mechanisms that adhere to this condition, one where the seller pre-commits to the signaling scheme to be used and the other where she commits to use a signaling scheme that contains a "seed of truth". Finally, we analyze the equilibrium resulting from each and show that, somewhat counter-intuitively, despite the inherent differences between the two mechanisms, they are equivalent in the sense that for any equilibrium associated with the maximum revenue in one, there is an equilibrium offering the seller the same revenue in the other.

    • #5490
      Explore Truthful Incentives for Tasks with Heterogenous Levels of Difficulty in the Sharing Economy
      Pengzhan Zhou, Xin Wei, Cong Wang, Yuanyuan Yang
      Details | PDF
      Economic Paradigms, Auctions and Market-Based Systems

      Incentives are explored in the sharing economy to inspire users toward better resource allocation. Previous works build a budget-feasible incentive mechanism to learn users' cost distribution. However, they only consider the special case in which all tasks are treated as the same. The general problem asks for a solution when the cost of different tasks varies. In this paper, we investigate this general problem by considering a system with k levels of difficulty. We present two incentivizing strategies for offline and online implementation, and formally derive the ratio of utility between them in different scenarios. We propose a regret-minimizing mechanism that decides incentives by dynamically adjusting the budget assignment and learning from users' cost distributions. Our experiments demonstrate a utility improvement of about 7 times and a time saving of 54% in meeting a utility objective, compared to previous works.

    • #6397
      On the Problem of Assigning PhD Grants
      Katarína Cechlárová, Laurent Gourvès, Julien Lesca
      Details | PDF
      Economic Paradigms, Auctions and Market-Based Systems

      In this paper, we study the problem of assigning PhD grants. Master students apply for PhD grants on different topics, and the number of available grants is limited. In this problem, students have preferences over the topics they applied to, and the university has preferences over possible student/topic matchings that satisfy the limited number of grants. The particularity of this framework is the uncertainty about a student's decision to accept or reject a topic offered to him. Without using probability to model uncertainty, we study the possibility of designing protocols of exchange between the students and the university in order to construct a matching which is as close as possible to the optimal one, i.e., the best achievable matching without uncertainty.

    Wednesday 14 09:30 - 10:30 AMS|ATM - Agent Theories and Models (2705-2706)

    Chair: Martin Caminada
    • #63
      The Interplay of Emotions and Norms in Multiagent Systems
      Anup K. Kalia, Nirav Ajmeri, Kevin S. Chan, Jin-Hee Cho, Sibel Adalı, Munindar P. Singh
      Details | PDF
      Agent Theories and Models

      We study how emotions influence norm outcomes in decision-making contexts. Following the literature, we provide baseline Dynamic Bayesian models to capture an agent's two perspectives on a directed norm. Unlike the literature, these models are holistic in that they incorporate not only norm outcomes and emotions but also trust and goals. We obtain data from an empirical study involving game play with respect to the above variables. We provide a step-wise process to discover two new Dynamic Bayesian models based on maximizing log-likelihood scores with respect to the data. We compare the new models with the baseline models to discover new insights into the relevant relationships. Our empirically supported models are thus holistic and characterize how emotions influence norm outcomes better than previous approaches.

    • #4198
      Strategy Logic with Simple Goals: Tractable Reasoning about Strategies
      Francesco Belardinelli, Wojciech Jamroga, Damian Kurpiewski, Vadim Malvone, Aniello Murano
      Details | PDF
      Agent Theories and Models

      In this paper we introduce Strategy Logic with simple goals (SL[SG]), a fragment of Strategy Logic that strictly extends the well-known Alternating-time Temporal Logic ATL by introducing arbitrary quantification over the agents' strategies.  Our motivation comes from game-theoretic applications, such as expressing Stackelberg equilibria in games, coercion in voting protocols, as well as module checking for simple goals. Most importantly, we prove that the model checking problem for SL[SG] is PTIME-complete, the same as ATL. Thus, the extra expressive power comes at no computational cost as far as verification is concerned.

    • #4774
      Average-case Analysis of the Assignment Problem with Independent Preferences
      Yansong Gao, Jie Zhang
      Details | PDF
      Agent Theories and Models

      The fundamental assignment problem is in search of welfare maximization mechanisms to allocate items to agents when the private preferences over indivisible items are provided by self-interested agents. The mainstream mechanism Random Priority is asymptotically the best mechanism for this purpose, when comparing its welfare to the optimal social welfare using the canonical worst-case approximation ratio. Surprisingly, the efficiency loss indicated by the worst-case ratio does not have a constant bound [FFZ:14]. Recently, [DBLP:conf/mfcs/DengG017] showed that when the agents' preferences are drawn from a uniform distribution, its average-case approximation ratio is upper bounded by 3.718. They left it as an open question whether a constant ratio holds for general scenarios. In this paper, we offer an affirmative answer to this question by showing that the ratio is bounded by 1/μ when the preference values are independent and identically distributed random variables, where μ is the expectation of the value distribution. This upper bound also improves the result of [DBLP:conf/mfcs/DengG017] for the uniform distribution. Moreover, under mild conditions, the ratio has a constant bound for any independent random values. En route to these results, we develop powerful tools to show that for most valuation inputs, the efficiency loss is small.
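
      For readers unfamiliar with the mechanism being analyzed, Random Priority (random serial dictatorship) itself is easy to state; a small sketch (assuming at least as many items as agents; all names are illustrative) is:

        import numpy as np

        def random_priority(valuations, seed=0):
            """Random Priority: agents take turns in a uniformly random order,
            each picking their most-valued item still available.
            valuations[i][j] is agent i's value for item j."""
            rng = np.random.default_rng(seed)
            n_agents = len(valuations)
            order = rng.permutation(n_agents)
            available = set(range(len(valuations[0])))
            assignment = {}
            for agent in order:
                best = max(available, key=lambda j: valuations[agent][j])
                assignment[int(agent)] = best
                available.remove(best)
            return assignment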

    • #4962
      On Computational Tractability for Rational Verification
      Julian Gutierrez, Muhammad Najib, Giuseppe Perelli, Michael Wooldridge
      Details | PDF
      Agent Theories and Models

      Rational verification involves checking which temporal logic properties hold of a concurrent and multiagent system, under the assumption that agents in the system choose strategies in game theoretic equilibrium. Rational verification can be understood as a counterpart of model checking for multiagent systems, but while model checking can be done in polynomial time for some temporal logic specification languages such as CTL, and polynomial space with LTL specifications, rational verification is much more intractable: it is 2EXPTIME-complete with LTL specifications, even when using explicit-state system representations.  In this paper we show that the complexity of rational verification can be greatly reduced by restricting specifications to GR(1), a fragment of LTL that can represent most response properties of reactive systems. We also provide improved complexity results for rational verification when considering players' goals given by mean-payoff utility functions -- arguably the most widely used quantitative objective for agents in concurrent and multiagent systems. In particular, we show that for a number of relevant settings, rational verification can be done in polynomial space or even in polynomial time.

    Wednesday 14 09:30 - 10:30 ML|TDS - Time-series;Data Streams 1 (2601-2602)

    Chair: Qianli Ma
    • #279
      E²GAN: End-to-End Generative Adversarial Network for Multivariate Time Series Imputation
      Yonghong Luo, Ying Zhang, Xiangrui Cai, Xiaojie Yuan
      Details | PDF
      Time-series;Data Streams 1

      Missing values, which appear in most multivariate time series, prevent advanced analysis of multivariate time series data. Existing imputation approaches deal with missing values by deletion, statistical imputation, machine learning based imputation, or generative imputation. However, these methods either cannot handle temporal information or require multiple stages. This paper proposes an end-to-end generative model, E²GAN, to impute missing values in multivariate time series. With the help of the discriminative loss and the squared error loss, E²GAN can impute an incomplete time series with the nearest generated complete time series in a single stage. Experiments on multiple real-world datasets show that our model outperforms the baselines on imputation accuracy and achieves state-of-the-art classification/regression results on downstream applications. Additionally, our method also trains its neural networks more efficiently than multi-stage methods.

    • #607
      CLVSA: A Convolutional LSTM Based Variational Sequence-to-Sequence Model with Attention for Predicting Trends of Financial Markets
      Jia Wang, Tong Sun, Benyuan Liu, Yu Cao, Hongwei Zhu
      Details | PDF
      Time-series;Data Streams 1

      Financial markets are complex dynamical systems. The complexity comes from the interaction between a market and its participants: the aggregate activity of all participants determines the market's trend, while the market's trend in turn affects participants' activities. These interwoven interactions keep financial markets evolving. Inspired by stochastic recurrent models that successfully capture the variability observed in natural sequential data such as speech and video, we propose CLVSA, a hybrid model that consists of stochastic recurrent networks, the sequence-to-sequence architecture, self- and inter-attention mechanisms, and convolutional LSTM units, to capture the underlying variational features in raw financial trading data. Our model outperforms basic models, such as convolutional neural networks, vanilla LSTM networks, and sequence-to-sequence models with attention, based on backtesting results of six futures from January 2010 to December 2017. Our experimental results show that, by introducing an approximate posterior, CLVSA takes advantage of an extra regularizer based on the Kullback-Leibler divergence to keep itself from overfitting.

    • #4502
      Confirmatory Bayesian Online Change Point Detection in the Covariance Structure of Gaussian Processes
      Jiyeon Han, Kyowoon Lee, Anh Tong, Jaesik Choi
      Details | PDF
      Time-series;Data Streams 1

      In the analysis of sequential data, the detection of abrupt changes is important for predicting future events. In this paper, we propose statistical hypothesis tests for detecting covariance structure changes in locally smooth time series modeled by Gaussian Processes (GPs). We provide theoretically justified thresholds for the tests, and use them to improve Bayesian Online Change Point Detection (BOCPD) by confirming statistically significant changes and non-changes. Our Confirmatory BOCPD (CBOCPD) algorithm finds multiple structural breaks in GPs even when hyperparameters are not tuned precisely. We also provide conditions under which CBOCPD achieves lower prediction error than BOCPD. Experimental results on synthetic and real-world datasets show that our proposed algorithm outperforms existing methods for the prediction of nonstationarity in terms of both regression error and log-likelihood.

    • #5504
      Linear Time Complexity Time Series Clustering with Symbolic Pattern Forest
      Xiaosheng Li, Jessica Lin, Liang Zhao
      Details | PDF
      Time-series;Data Streams 1

      With the increasing power of data storage and advances in data generation and collection technologies, large volumes of time series data have become available, and their content changes rapidly. This requires data mining methods with low time complexity to handle the huge and fast-changing data. This paper presents a novel time series clustering algorithm with linear time complexity. The proposed algorithm partitions the data by checking randomly selected symbolic patterns in the time series. Theoretical analysis is provided to show that group structures in the data can be revealed from this process. We evaluate the proposed algorithm extensively on all 85 datasets from the well-known UCR time series archive, and compare it with state-of-the-art approaches using statistical analysis. The results show that the proposed method is faster and achieves better accuracy than rival methods.

    Wednesday 14 09:30 - 10:30 KRR|ARTP - Automated Reasoning and Theorem Proving (2603-2604)

    Chair: Roni Stern
    • #2513
      An ASP Approach to Generate Minimal Countermodels in Intuitionistic Propositional Logic
      Camillo Fiorentini
      Details | PDF
      Automated Reasoning and Theorem Proving

      Intuitionistic Propositional Logic is complete w.r.t. Kripke semantics: if a formula is not intuitionistically valid, then there exists a finite Kripke model falsifying it. The problem of obtaining concise models has been scarcely investigated in the literature. We present a procedure, relying on Answer Set Programming (ASP), to generate models that are minimal in the number of worlds.

    • #3164
      Approximating Integer Solution Counting via Space Quantification for Linear Constraints
      Cunjing Ge, Feifei Ma, Xutong Ma, Fan Zhang, Pei Huang, Jian Zhang
      Details | PDF
      Automated Reasoning and Theorem Proving

      Solution counting or solution space quantification (i.e., volume computation and volume estimation) for linear constraints (LCs) has found interesting applications in various fields. Experimental data show that integer solution counting is usually more expensive than quantifying the volume of the solution space, while their output values are close. It is therefore helpful to approximate the number of integer solutions by the volume when the error is acceptable. In this paper, we present and prove a bound on such error for LCs. It is the first bound that can be used to approximate integer solution counts. Based on this result, an approximate integer solution counting method for LCs is proposed. Experiments show that our approach is over 20x faster than state-of-the-art integer solution counters, and this advantage increases with the problem scale.

    • #2076
      Solving the Satisfiability Problem of Modal Logic S5 Guided by Graph Coloring
      Pei Huang, Minghao Liu, Ping Wang, Wenhui Zhang, Feifei Ma, Jian Zhang
      Details | PDF
      Automated Reasoning and Theorem Proving

      Modal logic S5 has found various applications in artificial intelligence. With the advances in modern SAT solvers, SAT-based approaches have shown great potential in solving the satisfiability problem of S5. The scale of the SAT encoding for S5 is strongly influenced by the upper bound on the number of possible worlds. In this paper, we present a novel SAT-based approach for the S5 satisfiability problem. We show a normal form for S5 formulas. Based on this normal form, a conflict graph can be derived whose chromatic number provides an upper bound on the number of possible worlds, and much unnecessary search space can be eliminated in this process. A heuristic graph coloring algorithm is adopted to balance efficiency and optimality. The number of possible worlds can be significantly reduced for many practical instances. Extensive experiments demonstrate that our approach outperforms state-of-the-art S5-SAT solvers.
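      To make the colouring step concrete (an illustrative sketch only; deriving the conflict graph from an S5 formula is the paper's contribution and is not reproduced here), the number of colours used by a heuristic colouring of a conflict graph gives an upper bound on its chromatic number and hence, per the abstract, on the number of possible worlds needed in the SAT encoding. The tiny conflict graph below is made up.

      import networkx as nx

      conflict_graph = nx.Graph()
      # hypothetical conflicts between modal subformulas d1..d5
      conflict_graph.add_edges_from([("d1", "d2"), ("d1", "d3"),
                                     ("d2", "d3"), ("d4", "d5")])

      colouring = nx.coloring.greedy_color(conflict_graph, strategy="largest_first")
      upper_bound_on_worlds = len(set(colouring.values()))
      print("upper bound on the number of possible worlds:", upper_bound_on_worlds)  # 3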

    • #888
      Guarantees for Sound Abstractions for Generalized Planning
      Blai Bonet, Raquel Fuentetaja, Yolanda E-Martín, Daniel Borrajo
      Details | PDF
      Automated Reasoning and Theorem Proving

      Generalized planning is about finding plans that solve collections of planning instances, often infinite collections, rather than single instances. Recently it has been shown how to reduce the planning problem for generalized planning to the planning problem for a qualitative numerical problem; the latter is a reformulation that simultaneously captures all the instances in the collection. An important thread of research thus consists in finding such reformulations, or abstractions, automatically. A recent proposal learns the abstractions inductively from a finite and small sample of transitions from instances in the collection. However, as in all inductive processes, the learned abstraction is not guaranteed to be correct for the whole collection. In this work we address this limitation by performing an analysis of the abstraction with respect to the collection, and show how to obtain formal guarantees for generalization. These guarantees, in the form of first-order formulas, may be used to 1) define subcollections of instances on which the abstraction is guaranteed to be sound, 2) obtain necessary conditions for generalization under certain assumptions, and 3) perform automated synthesis of complex invariants for planning problems. Our framework is general, can be extended or combined with other approaches, and has applications that go beyond generalized planning.

    Wednesday 14 09:30 - 10:30 NLP|IE - Information Extraction 1 (2605-2606)

    Chair: Shourya Roy
    • #1684
      End-to-End Multi-Perspective Matching for Entity Resolution
      Cheng Fu, Xianpei Han, Le Sun, Bo Chen, Wei Zhang, Suhui Wu, Hao Kong
      Details | PDF
      Information Extraction 1

      Entity resolution (ER) aims to identify data records referring to the same real-world entity. Due to the heterogeneity of entity attributes and the diversity of similarity measures, one main challenge of ER is how to select appropriate similarity measures for different attributes. Previous ER methods usually employ heuristic similarity selection algorithms, which are highly specialized to specific ER problems and are hard to generalize to other situations. Furthermore, previous studies usually perform similarity learning and similarity selection independently, which often results in error propagation and is hard to optimize globally. To resolve these problems, this paper proposes an end-to-end multi-perspective entity matching model, which can adaptively select optimal similarity measures for heterogeneous attributes by jointly learning and selecting similarity measures in an end-to-end way. Experiments on two real-world datasets show that our method significantly outperforms previous ER methods.

    • #3048
      Improving Cross-Domain Performance for Relation Extraction via Dependency Prediction and Information Flow Control
      Amir Pouran Ben Veyseh, Thien Nguyen, Dejing Dou
      Details | PDF
      Information Extraction 1

      Relation Extraction (RE) is one of the fundamental tasks in Information Extraction and Natural Language Processing. Dependency trees have been shown to be a very useful source of information for this task. Current deep learning models for relation extraction have mainly exploited this dependency information by guiding their computation along the structures of the dependency trees. One potential problem with this approach is that it might prevent the models from capturing important context information beyond syntactic structures and cause poor cross-domain generalization. This paper introduces a novel method to use dependency trees in RE for deep learning models that jointly predicts dependency and semantic relations. We also propose a new mechanism to control the information flow in the model based on the input entity mentions. Our extensive experiments on benchmark datasets show that the proposed model significantly outperforms existing methods for RE.

    • #3319
      Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning
      Mengge Xue, Weiming Cai, Jinsong Su, Linfeng Song, Yubin Ge, Yubao Liu, Bin Wang
      Details | PDF
      Information Extraction 1

      Benefiting from the excellent ability of neural networks to learn semantic representations, existing studies for entity linking (EL) have resorted to neural networks to exploit both the local mention-to-entity compatibility and the global interdependence between different EL decisions for target entity disambiguation. However, most neural collective EL methods depend entirely upon neural networks to automatically model the semantic dependencies between different EL decisions, and lack guidance from external knowledge. In this paper, we propose a novel end-to-end neural network with recurrent random-walk layers for collective EL, which introduces external knowledge to model the semantic interdependence between different EL decisions. Specifically, we first establish a model based on local context features, and then stack random-walk layers to reinforce the evidence for related EL decisions into high-probability decisions, where the semantic interdependence between candidate entities is mainly induced from an external knowledge base. Finally, a semantic regularizer that preserves the consistency of collective EL decisions is incorporated into the conventional objective function, so that the external knowledge base can be fully exploited in collective EL decisions. Experimental results and in-depth analysis on various datasets show that our model achieves better performance than other state-of-the-art models. Our code and data are released at https://github.com/DeepLearnXMU/RRWEL.

    • #5335
      Coreference Aware Representation Learning for Neural Named Entity Recognition
      Zeyu Dai, Hongliang Fei, Ping Li
      Details | PDF
      Information Extraction 1

      Recent neural network models have achieved state-of-the-art performance on the task of named entity recognition (NER). However, previous neural network models typically treat the input sentences as a linear sequence of words and ignore rich structural information, such as the coreference relations among non-adjacent words, phrases, or entities. In this paper, we propose a novel approach to learn coreference-aware word representations for the NER task at the document level. In particular, we enrich the well-known neural architecture "CNN-BiLSTM-CRF" with a coreference layer on top of the BiLSTM layer to incorporate coreferential relations. Furthermore, we introduce a coreference regularization to ensure that coreferential entities share similar representations and consistent predictions within the same coreference cluster. Our proposed model achieves new state-of-the-art performance on two NER benchmarks: CoNLL-2003 and OntoNotes v5.0. More importantly, we demonstrate that our framework does not rely on gold coreference knowledge, and can still work well even when the coreferential relations are generated by a third-party toolkit.

    Wednesday 14 09:30 - 10:30 CV|VEAS - Video: Events, Activities and Surveillance (2501-2502)

    Chair: Wenbing Huang
    • #1593
      Supervised Set-to-Set Hashing in Visual Recognition
      I-Hong Jhuo
      Details | PDF
      Video: Events, Activities and Surveillance

      Visual data, such as an image or a sequence of video frames, is often naturally represented as a point set. In this paper, we consider the fundamental problem of finding the nearest set, from a collection of sets, to a query set. This problem has obvious applications in large-scale visual retrieval and recognition, and also in applied fields beyond computer vision. One challenge stands out in solving the problem: set representation and the measurement of similarity. In particular, the query set and the sets in the collection can have varying cardinalities, and the training collection is so large that a linear scan is impractical. We propose a simple representation scheme that encodes both statistical and structural information of the sets. The derived representations are integrated in a kernel framework for flexible similarity measurement. To process a query set, we adopt a learning-to-hash pipeline that turns the kernel representations into hash bits based on simple learners, using multiple kernel learning. Experiments on two visual retrieval datasets show unambiguously that our set-to-set hashing framework outperforms prior methods that do not account for the set-to-set search setting.

    • #4203
      Variation Generalized Feature Learning via Intra-view Variation Adaptation
      Jiawei Li, Mang Ye, Andy Jinhua Ma, Pong C Yuen
      Details | PDF
      Video: Events, Activities and Surveillance

      This paper addresses the variation generalized feature learning problem in unsupervised video-based person re-identification (re-ID). With advanced tracking and detection algorithms, large-scale intra-view positive samples can be easily collected by assuming that the image frames within a tracking sequence belong to the same person. Existing methods either directly use the intra-view positives to model cross-view variations or simply minimize the intra-view variations to capture the invariant component, at the cost of some discriminative information. In this paper, we propose a Variation Generalized Feature Learning (VGFL) method to learn adaptable feature representations with intra-view positives. The proposed method can learn a discriminative re-ID model without any manually annotated cross-view positive sample pairs, and addresses unseen testing variations with a novel variation generalized feature learning algorithm. In addition, an Adaptability-Discriminability (AD) fusion method is introduced to learn adaptable video-level features. Extensive experiments on different datasets demonstrate the effectiveness of the proposed method.

    • #4243
      DBDNet: Learning Bi-directional Dynamics for Early Action Prediction
      Guoliang Pang, Xionghui Wang, Jian-Fang Hu, Qing Zhang, Wei-Shi Zheng
      Details | PDF
      Video: Events, Activities and Surveillance

      Predicting future actions from observed partial videos is very challenging, as the missing future is uncertain and sometimes has multiple possibilities. To obtain a reliable future estimation, a novel encoder-decoder architecture is proposed that integrates, in a unified framework, the tasks of synthesizing future motions from observed videos and reconstructing observed motions from synthesized future motions. The architecture captures the bi-directional dynamics depicted in partial videos along both the temporal (past-to-future) direction and the reverse chronological (future-back-to-past) direction. We then employ a bi-directional long short-term memory (Bi-LSTM) architecture to exploit the learned bi-directional dynamics for predicting early actions. Our experiments on two benchmark action datasets show that learning bi-directional dynamics benefits early action prediction and that our system clearly outperforms state-of-the-art methods.

    • #3056
      Predicting dominance in multi-person videos
      Chongyang Bai, Maksim Bolonkin, Srijan Kumar, Jure Leskovec, Judee Burgoon, Norah Dunbar, V. S. Subrahmanian
      Details | PDF
      Video: Events, Activities and Surveillance

      We consider the problems of predicting (i) the most dominant person in a group of people, and (ii) the more dominant of a pair of people, from videos depicting group interactions. We introduce a novel family of variables called Dominance Rank. We combine features not previously used for dominance prediction (e.g., facial action units, emotions), with a novel ensemble-based approach to solve these two problems. We test our models against four competing algorithms in the literature on two datasets and show that our results improve past performance. We show 2.4% to 16.7% improvement in AUC compared to baselines on one dataset, and a gain of 0.6% to 8.8% in accuracy on the other. Ablation testing shows that Dominance Rank features play a key role.

    Wednesday 14 09:30 - 10:30 R|MPP - Motion and Path Planning (2503-2504)

    Chair: Masoumeh Mansouri
    • #1466
      The Parameterized Complexity of Motion Planning for Snake-Like Robots
      Siddharth Gupta, Guy Sa'ar, Meirav Zehavi
      Details | PDF
      Motion and Path Planning

      We study a motion-planning problem inspired by the game Snake that models scenarios ranging from the transportation of linked wagons towed by a locomotive to the movement of a group of agents that travel in an "ant-like" fashion. Given a "snake-like" robot with initial and final positions in an environment modeled by a graph, our goal is to decide whether the robot can reach the final position from the initial position without intersecting itself. Already on grid graphs, this problem is PSPACE-complete [Biasi and Ophelders, 2018]. Nevertheless, we prove that even on general graphs it is solvable in time k^{O(k)}|I|^{O(1)}, where k is the size of the robot and |I| is the input size. Towards this, we give a novel application of color-coding to sparsify the configuration graph of the problem. We also show that the problem is unlikely to have a polynomial kernel even on grid graphs, but that it admits a treewidth-reduction procedure. To the best of our knowledge, the study of the parameterized complexity of motion problems has been largely neglected, so our work is pioneering in this regard.

    • #10955
      (Sister Conferences Best Papers Track) The Provable Virtue of Laziness in Motion Planning
      Nika Haghtalab, Simon Mackenzie, Ariel D. Procaccia, Oren Salzman, Siddhartha Srinivasa
      Details | PDF
      Motion and Path Planning

      The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that only evaluate edges along candidate shortest paths between the source and target. These algorithms were designed to minimize the number of edge evaluations in settings where edge evaluation dominates the running time of the algorithm such as manipulation in cluttered environments and planning for robots in surgical settings; but how close to optimal are LazySP algorithms in terms of this objective? Our main result is an analytical upper bound, in a probabilistic model, on the number of edge evaluations required by LazySP algorithms; a matching lower bound shows that these algorithms are asymptotically optimal in the worst case.

    • #656
      Energy-Efficient Slithering Gait Exploration for a Snake-Like Robot Based on Reinforcement Learning
      Zhenshan Bing, Christian Lemke, Zhuangyi Jiang, Kai Huang, Alois Knoll
      Details | PDF
      Motion and Path Planning

      Similar to their counterparts in nature, the flexible bodies of snake-like robots enhance their movement capability and adaptability in diverse environments. However, this flexibility corresponds to a complex control task involving highly redundant degrees of freedom, where traditional model-based methods usually fail to propel the robots energy-efficiently. In this work, we present a novel approach for designing an energy-efficient slithering gait for a snake-like robot using a model-free reinforcement learning (RL) algorithm. Specifically, we present an RL-based controller for generating locomotion gaits at a wide range of velocities, which is trained using the proximal policy optimization (PPO) algorithm. Meanwhile, a traditional parameterized gait controller is presented and the parameter sets are optimized using the grid search and Bayesian optimization algorithms for the purposes of reasonable comparisons. Based on the analysis of the simulation results, we demonstrate that this RL-based controller exhibits very natural and adaptive movements, which are also substantially more energy-efficient than the gaits generated by the parameterized controller. Videos are shown at https://videoviewsite.wixsite.com/rlsnake .

    • #10966
      (Sister Conferences Best Papers Track) Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning - Extended Abstract
      Marc Toussaint, Kelsey R. Allen, Kevin A. Smith, Joshua B. Tenenbaum
      Details | PDF
      Motion and Path Planning

      We propose to formulate physical reasoning and manipulation planning as an optimization problem that integrates first order logic, which we call Logic-Geometric Programming.

    Wednesday 14 09:30 - 10:30 ML|DM - Data Mining 4 (2505-2506)

    Chair: Decebal Constantin Mocanu
    • #938
      A Degeneracy Framework for Scalable Graph Autoencoders
      Guillaume Salha, Romain Hennequin, Viet Anh Tran, Michalis Vazirgiannis
      Details | PDF
      Data Mining 4

      In this paper, we present a general framework to scale graph autoencoders (AE) and graph variational autoencoders (VAE). This framework leverages graph degeneracy concepts to train models only from a dense subset of nodes instead of using the entire graph. Together with a simple yet effective propagation mechanism, our approach significantly improves scalability and training speed while preserving performance. We evaluate and discuss our method on several variants of existing graph AE and VAE, providing the first application of these models to large graphs with up to millions of nodes and edges. We achieve empirically competitive results w.r.t. several popular scalable node embedding methods, which emphasizes the relevance of pursuing further research towards more scalable graph AE and VAE.
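      As a minimal sketch of the "train on a dense subset, then propagate" idea (not the authors' implementation; the choice of core, encoder, and propagation rule below are placeholders), the k-core is one standard graph-degeneracy notion of a dense subgraph and can be extracted with networkx:

      import networkx as nx

      full_graph = nx.karate_club_graph()      # stand-in for a large input graph
      core = nx.k_core(full_graph, k=2)        # dense subgraph used for training

      print(f"training subgraph: {core.number_of_nodes()} of "
            f"{full_graph.number_of_nodes()} nodes")

      # 1) train the graph AE/VAE encoder only on `core` (omitted here);
      # 2) propagate embeddings from core nodes to the remaining nodes, e.g. by
      #    averaging the embeddings of already-embedded neighbours.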

    • #2479
      Community Detection and Link Prediction via Cluster-driven Low-rank Matrix Completion
      Junming Shao, Zhong Zhang, Zhongjing Yu, Jun Wang, Yi Zhao, Qinli Yang
      Details | PDF
      Data Mining 4

      Community detection and link prediction are highly interdependent: knowing the cluster structure a priori helps identify missing links, and in return, clustering on networks with the missing links supplemented improves community detection performance. In this paper, we propose Cluster-driven Low-rank Matrix Completion (CLMC) for performing community detection and link prediction simultaneously in a unified framework. To this end, CLMC decomposes the adjacency matrix of a target network into three additive matrices: a clustering matrix, a noise matrix, and a supplement matrix. Community-structure and low-rank constraints are imposed on the clustering matrix, such that the noisy edges between communities are removed and the resulting matrix is an ideal block-diagonal matrix. Missing edges are further learned via low-rank matrix completion. Extensive experiments show that CLMC achieves state-of-the-art performance.

    • #4127
      Graph Convolutional Networks on User Mobility Heterogeneous Graphs for Social Relationship Inference
      Yongji Wu, Defu Lian, Shuowei Jin, Enhong Chen
      Details | PDF
      Data Mining 4

      Inferring social relations from user trajectory data is of great value in real-world applications such as friend recommendation and ride-sharing. Most existing methods predict relationships with a pairwise approach using hand-crafted features, or rely on a simple skip-gram based model to learn embeddings on graphs. Hand-crafted features often fail to capture the complex dynamics of human social relations, while graph-embedding-based methods only use random walks to propagate information and cannot incorporate external semantic data. We propose a novel model that utilizes Graph Convolutional Networks (GCNs) to learn user embeddings on a User Mobility Heterogeneous Graph in an unsupervised manner. This model is capable of propagating relations layer-wise and of combining both the rich structural information in the heterogeneous graph and the predictive node features provided. Our method can also be extended to a semi-supervised setting if part of the social network is available. Evaluation on three real-world datasets demonstrates that our method outperforms state-of-the-art approaches.

    • #10973
      (Sister Conferences Best Papers Track) Discovering Reliable Dependencies from Data: Hardness and Improved Algorithms
      Panagiotis Mandros, Mario Boley, Jilles Vreeken
      Details | PDF
      Data Mining 4

      The reliable fraction of information is an attractive score for quantifying (functional) dependencies in high-dimensional data. In this paper, we systematically explore the algorithmic implications of using this measure for optimization. We show that the problem is NP-hard, justifying worst-case exponential-time as well as heuristic search methods. We then substantially improve the practical performance for both optimization styles by deriving a novel admissible bounding function that has an unbounded potential for additional pruning over the previously proposed one. Finally, we empirically investigate the approximation ratio of the greedy algorithm and show that it produces highly competitive results in a fraction of time needed for complete branch-and-bound style search.

    Wednesday 14 09:30 - 10:30 ML|TAML - Transfer, Adaptation, Multi-task Learning 1 (2401-2402)

    Chair: Michael Perrot
    • #2102
      Weak Supervision Enhanced Generative Network for Question Generation
      Yutong Wang, Jiyuan Zheng, Qijiong Liu, Zhou Zhao, Jun Xiao, Yueting Zhuang
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 1

      Automatic question generation for a given answer within a passage is useful for many applications, such as question answering and dialogue systems. Current neural methods mostly take two steps: they extract several important sentences based on the candidate answer, through manual rules or supervised neural networks, and then use an encoder-decoder framework to generate questions about these sentences. These approaches still require two steps and neglect the semantic relations between the answer and the context of the whole passage, which are sometimes necessary for answering the question. To address this problem, we propose the Weak Supervision Enhanced Generative Network (WeGen), which automatically discovers relevant features of the passage given the answer span, in a weakly supervised manner, to improve the quality of generated questions. More specifically, we devise a discriminator, the Relation Guider, to capture the relations between the passage and the associated answer, and then deploy a Multi-Interaction mechanism to transfer the knowledge dynamically for our question generation system. Experiments show the effectiveness of our method in both automatic and human evaluations.

    • #2519
      Fast and Robust Multi-View Multi-Task Learning via Group Sparsity
      Lu Sun, Canh Hao Nguyen, Hiroshi Mamitsuka
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 1

      Multi-view multi-task learning has recently attracted more and more attention due to its dual heterogeneity: each task has heterogeneous features from multiple views, and probably correlates with other tasks via common views. Existing methods usually suffer from three problems: 1) they lack the ability to eliminate noisy features, 2) they hold a strict assumption on view consistency, and 3) they ignore the possible existence of task-view outliers. To overcome these limitations, we propose a robust method with joint group sparsity that decomposes the feature parameters into a sum of two components: one preserves relevant features (for Problem 1) and flexible view consistency (for Problem 2), while the other detects task-view outliers (for Problem 3). With a global convergence property, we develop a fast algorithm to solve the optimization problem in linear time w.r.t. the number of features and labeled samples. Extensive experiments on various synthetic and real-world datasets demonstrate its effectiveness.

    • #5280
      A Principled Approach for Learning Task Similarity in Multitask Learning
      Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, Christian Gagné
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 1

      Multitask learning aims at solving a set of related tasks simultaneously, by exploiting shared knowledge to improve performance on individual tasks. Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks. Previous works have incorporated this similarity information explicitly (e.g., weighted loss for each task) or implicitly (e.g., adversarial loss for feature adaptation) to achieve good empirical performance. However, the theoretical motivation for adding task similarity knowledge is often missing or incomplete. In this paper, we provide a theoretical perspective to understand this practice. We first provide an upper bound on the generalization error of multitask learning, showing the benefit of explicit and implicit task similarity knowledge. We systematically derive the bounds based on two distinct task similarity metrics: H-divergence and Wasserstein distance. From these theoretical results, we revisit the Adversarial Multi-task Neural Network, proposing a new training algorithm to learn the task relation coefficients and neural network parameters iteratively. We assess our new algorithm empirically on several benchmarks, showing not only that we find interesting and robust task relations, but also that the proposed approach outperforms the baselines, reaffirming the benefits of theoretical insight in algorithm design.

    • #843
      Node Embedding over Temporal Graphs
      Uriel Singer, Ido Guy, Kira Radinsky
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 1

      In this work, we present a method for node embedding in temporal graphs. We propose an algorithm that learns the evolution of a temporal graph's nodes and edges over time and incorporates these dynamics in a temporal node embedding framework for different graph prediction tasks. We present a joint loss function that creates a temporal embedding of a node by learning to combine its historical temporal embeddings such that it is optimized for a given task (e.g., link prediction). The algorithm is initialized using static node embeddings, which are then aligned over the representations of a node at different time points, and eventually adapted for the given task in a joint optimization. We evaluate the effectiveness of our approach over a variety of temporal graphs for the two fundamental tasks of temporal link prediction and multi-label node classification, comparing to competitive baselines and algorithmic alternatives. Our algorithm shows performance improvements across many of the datasets and baselines, and is found particularly effective for graphs that are less cohesive, with a lower clustering coefficient.

    Wednesday 14 09:30 - 10:30 AMS|RA - Resource Allocation (2403-2404)

    Chair: Iannis Caragiannis
    • #119
      Almost Envy-Freeness in Group Resource Allocation
      Maria Kyropoulou, Warut Suksompong, Alexandros A. Voudouris
      Details | PDF
      Resource Allocation

      We study the problem of fairly allocating indivisible goods between groups of agents using the recently introduced relaxations of envy-freeness. We consider the existence of fair allocations under different assumptions on the valuations of the agents. In particular, our results cover cases of arbitrary monotonic, responsive, and additive valuations, while for the case of binary valuations we fully characterize the cardinalities of two groups of agents for which a fair allocation can be guaranteed with respect to both envy-freeness up to one good (EF1) and envy-freeness up to any good (EFX). Moreover, we introduce a new model where the agents are not partitioned into groups in advance, but instead the partition can be chosen in conjunction with the allocation of the goods. In this model, we show that for agents with arbitrary monotonic valuations, there is always a partition of the agents into two groups of any given sizes along with an EF1 allocation of the goods. We also provide an extension of this result to any number of groups.

    • #438
      The Price of Fairness for Indivisible Goods
      Xiaohui Bei, Xinhang Lu, Pasin Manurangsi, Warut Suksompong
      Details | PDF
      Resource Allocation

      We investigate the efficiency of fair allocations of indivisible goods using the well-studied price of fairness concept. Previous work has focused on classical fairness notions such as envy-freeness, proportionality, and equitability. However, these notions cannot always be satisfied for indivisible goods, leading to certain instances being ignored in the analysis. In this paper, we focus instead on notions with guaranteed existence, including envy-freeness up to one good (EF1), balancedness, maximum Nash welfare (MNW), and leximin. We mostly provide tight or asymptotically tight bounds on the worst-case efficiency loss for allocations satisfying these notions.

    • #5104
      Reallocating Multiple Facilities on the Line
      Dimitris Fotakis, Loukas Kavouras, Panagiotis Kostopanagiotis, Philip Lazos, Stratis Skoulakis, Nikos Zarifis
      Details | PDF
      Resource Allocation

      We study the multistage K-facility reallocation problem on the real line, where we maintain K facility locations over T stages, based on the stage-dependent locations of n agents. Each agent is connected to the nearest facility at each stage, and the facilities may move from one stage to another, to accommodate different agent locations. The objective is to minimize the connection cost of the agents plus the total moving cost of the facilities, over all stages. K-facility reallocation problem was introduced by (B.D. Kaijzer and D. Wojtczak, IJCAI 2018), where they mostly focused on the special case of a single facility. Using an LP-based approach, we present a polynomial time algorithm that computes the optimal solution for any number of facilities. We also consider online K-facility reallocation, where the algorithm becomes aware of agent locations in a stage-by stage fashion. By exploiting an interesting connection to the classical K-server problem, we present a constant-competitive algorithm for K = 2 facilities.

    • #1823
      Equitable Allocations of Indivisible Goods
      Rupert Freeman, Sujoy Sikdar, Rohit Vaish, Lirong Xia
      Details | PDF
      Resource Allocation

      In fair division, equitability dictates that each participant receives the same level of utility. In this work, we study equitable allocations of indivisible goods among agents with additive valuations. While prior work has studied (approximate) equitability in isolation, we consider equitability in conjunction with other well-studied notions of fairness and economic efficiency. We show that the Leximin algorithm produces an allocation that satisfies equitability up to any good and Pareto optimality. We also give a novel algorithm that guarantees Pareto optimality and equitability up to one good in pseudopolynomial time.  Our experiments on real-world preference data reveal that approximate envy-freeness, approximate equitability, and Pareto optimality can often be achieved simultaneously.
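      To make the notion concrete (a toy sketch under the usual reading of the definition, not the paper's algorithms): an allocation with additive valuations satisfies equitability up to one good (EQ1) if, for every pair of agents i and j with j's bundle non-empty, removing some single good from j's bundle brings j's utility down to at most i's utility. A minimal checker:

      from typing import List

      def is_eq1(valuations: List[List[float]], bundles: List[List[int]]) -> bool:
          # valuations[i][g]: agent i's additive value for good g
          # bundles[i]: list of goods allocated to agent i
          utility = [sum(valuations[i][g] for g in bundles[i]) for i in range(len(bundles))]
          for i in range(len(bundles)):
              for j in range(len(bundles)):
                  if i == j or not bundles[j]:
                      continue
                  best = max(valuations[j][g] for g in bundles[j])
                  if utility[i] < utility[j] - best:   # no single removal closes the gap
                      return False
          return True

      # Two agents, three goods, made-up additive values.
      valuations = [[5.0, 1.0, 3.0],
                    [2.0, 4.0, 4.0]]
      bundles = [[0], [1, 2]]     # agent 0 gets good 0; agent 1 gets goods 1 and 2
      print(is_eq1(valuations, bundles))   # True: 5 >= 8 - 4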

    Wednesday 14 09:30 - 10:30 HAI|CM - Cognitive Modeling (2405-2406)

    Chair: Guibing Guo
    • #3037
      A Semantics-based Model for Predicting Children's Vocabulary
      Ishaan Grover, Hae Won Park, Cynthia Breazeal
      Details | PDF
      Cognitive Modeling

      Intelligent tutoring systems (ITSs) provide educational benefits through one-on-one tutoring by assessing children's existing knowledge and providing tailored educational content. In the domain of language acquisition, several studies have shown that children often learn new words by forming semantic relationships with words they already know. In this paper, we present a model that uses word semantics (a semantics-based model) to make inferences about a child's vocabulary from partial information about their existing vocabulary knowledge. We show that the proposed semantics-based model outperforms models that do not use word semantics (semantics-free models) on average. A subject-level analysis of the results reveals that different models perform well for different children, motivating the need to combine predictions. To this end, we use two methods to combine predictions from semantics-based and semantics-free models and show that these methods yield better predictions of a child's vocabulary knowledge. Our results motivate the use of semantics-based models to assess children's vocabulary knowledge and to build ITSs that maximize children's semantic understanding of words.

    • #3647
      Fast and Accurate Classification with a Multi-Spike Learning Algorithm for Spiking Neurons
      Rong Xiao, Qiang Yu, Rui Yan, Huajin Tang
      Details | PDF
      Cognitive Modeling

      The formulation of efficient supervised learning algorithms for spiking neurons is complicated and remains challenging. Most existing learning methods that rely on the precise firing times of spikes suffer from relatively low efficiency and poor robustness to noise. To address these limitations, we propose a simple and effective multi-spike learning rule to train neurons to match their output spike number with a desired one. The proposed method quickly finds a local maximum value (directly related to the embedded feature) as the relevant signal for synaptic updates, based on the membrane potential trace of a neuron, and constructs an error function defined as the difference between the local maximum membrane potential and the firing threshold. With the presented rule, a single neuron can be trained to learn multi-category tasks, and can successfully mitigate the impact of input noise and discover embedded features. Experimental results show that the proposed algorithm has higher precision, lower computation cost, and better noise robustness than current state-of-the-art learning methods under a wide range of learning tasks.

    • #6329
      STCA: Spatio-Temporal Credit Assignment with Delayed Feedback in Deep Spiking Neural Networks
      Pengjie Gu, Rong Xiao, Gang Pan, Huajin Tang
      Details | PDF
      Cognitive Modeling

      The temporal credit assignment problem, which aims to discover the predictive features hidden in distracting background streams with delayed feedback, remains a core challenge in biological and machine learning. To address this issue, we propose a novel spatio-temporal credit assignment algorithm called STCA for training deep spiking neural networks (DSNNs). We present a new spatio-temporal error backpropagation policy by defining a temporally based loss function, which is able to credit the network losses to the spatial and temporal domains simultaneously. Experimental results on the MNIST dataset and a music dataset (MedleyDB) demonstrate that STCA can achieve performance comparable to other state-of-the-art algorithms with simpler architectures. Furthermore, STCA successfully discovers predictive sensory features and shows the highest performance in unsegmented sensory event detection tasks.

    • #10962
      (Sister Conferences Best Papers Track) Trust Dynamics and Transfer across Human-Robot Interaction Tasks: Bayesian and Neural Computational Models
      Harold Soh, Shu Pan, Min Chen, David Hsu
      Details | PDF
      Cognitive Modeling

      This work contributes both experimental findings and novel computational human-robot trust models for multi-task settings. We describe Bayesian non-parametric and neural models, and compare their performance on data collected from a real-world human-subjects study. Our study spans two distinct task domains: household tasks performed by a Fetch robot, and a virtual reality driving simulation of an autonomous vehicle performing a variety of maneuvers. We find that human trust changes and transfers across tasks in a structured manner based on perceived task characteristics. Our results suggest that task-dependent functional trust models capture human trust in robot capabilities more accurately, and that trust transfer across tasks can be inferred to a good degree. We believe these models are key for enabling trust-based robot decision-making for natural human-robot interaction.

    Wednesday 14 09:30 - 10:30 DemoT2 - Demo Talks 2 (2306)

    Chair: Andrew Perrault
    • #11026
      An Online Intelligent Visual Interaction System
      Anxiang Zeng, Han Yu, Xin Gao, Kairi Ou, Zhenchuan Huang, Peng Hou, Mingli Song, Jingshu Zhang, Chunyan Miao
      Details | PDF
      Demo Talks 2

      This paper proposes an Online Intelligent Visual Interactive System (OIVIS), which can be applied to various live video broadcast and short video scenes to provide an interactive user experience. In a live video broadcast, the anchor can issue various commands using pre-defined gestures and can trigger real-time background replacement to create an immersive atmosphere. To support such dynamic interactivity, we implemented algorithms including real-time gesture recognition and real-time video portrait segmentation, and developed a deep network inference framework as well as a real-time rendering framework, AI Gender, at the front end, creating a complete set of visual interaction solutions for resource-constrained mobile devices.

    • #11030
      ERICA and WikiTalk
      Divesh Lala, Graham Wilcock, Kristiina Jokinen, Tatsuya Kawahara
      Details | PDF
      Demo Talks 2

      The demo shows ERICA, a highly realistic female android robot, and WikiTalk, an application that helps robots to talk about thousands of topics using information from Wikipedia. The combination of ERICA and WikiTalk results in more natural and engaging human-robot conversations.

    • #11038
      Hintikka's World: Scalable Higher-order Knowledge
      Tristan Charrier, Sébastien Gamblin, Alexandre Niveau, François Schwarzentruber
      Details | PDF
      Demo Talks 2

      Hintikka's World is a graphical and pedagogical tool that shows how artificial agents can reason about higher-order knowledge. In this demonstration paper, we present the implementation of symbolic models in Hintikka's World. They enable the tool to scale, by helping it to face the state explosion, which makes it possible to provide examples featuring real card games, such as Hanabi.

    • #11044
      DISPUTool -- A tool for the Argumentative Analysis of Political Debates
      Shohreh Haddadan, Elena Cabrio, Serena Villata
      Details | PDF
      Demo Talks 2

      Political debates are the means used by political candidates to put forward and justify their positions in front of the electorate with respect to the issues at stake. Argument mining is a novel research area in Artificial Intelligence that aims at analyzing discourse at the pragmatics level and applying a certain argumentation theory to model and automatically analyze textual data. In this paper, we present DISPUTool, a tool designed to ease the work of historians and social science scholars in analyzing the argumentative content of political speeches. More precisely, DISPUTool allows users to explore and automatically identify argumentative components over the 39 political debates from the last 50 years of US presidential campaigns (1960-2016).

    • #11048
      Mappa Mundi: An Interactive Artistic Mind Map Generator with Artificial Imagination
      Ruixue Liu, Baoyang Chen, Meng Chen, Youzheng Wu, Zhijie Qiu, Xiaodong He
      Details | PDF
      Demo Talks 2

      We present a novel real-time, collaborative, and interactive AI painting system, Mappa Mundi, for artistic Mind Map creation. The system consists of a voice-based input interface, an automatic topic expansion module, and an image projection module. The key innovation is to inject Artificial Imagination into painting creation by considering lexical and phonological similarities of language, learning and inheriting the artist's original painting style, and applying the principles of Dadaism and the impossibility of improvisation. Our system shows that AI and artists can collaborate seamlessly to create imaginative artistic paintings, and Mappa Mundi has been shown in an art exhibition at UCCA, Beijing.

    • #11027
      The Open Vault Challenge - Learning How to Build Calibration-Free Interactive Systems by Cracking the Code of a Vault
      Jonathan Grizou
      Details | PDF
      Demo Talks 2

      This demo takes the form of a challenge to the IJCAI community. A physical vault, secured by a 4-digit code, will be placed in the demo area. The author will publicly open the vault by entering the code on a touch-based interface, and as many times as requested. The challenge to the IJCAI participants will be to crack the code, open the vault, and collect its content. The interface is based on previous work on calibration-free interactive systems that enables a user to start instructing a machine without the machine knowing how to interpret the user’s actions beforehand. The intent and the behavior of the human are simultaneously learned by the machine. An online demo and videos are available for readers to participate in the challenge. An additional interface using vocal commands will be revealed on the demo day, demonstrating the scalability of our approach to continuous input signals.

    Wednesday 14 09:30 - 18:00 Competition (2305)


  • AIBIRDS 2019: The 8th Angry Birds AI Competition
    Competition
    Wednesday 14 09:30 - 18:00 DB2 - Demo Booths 2 (Hall A)

    Chair: TBA
    • #11022
      Fair and Explainable Dynamic Engagement of Crowd Workers
      Han Yu, Yang Liu, Xiguang Wei, Chuyu Zheng, Tianjian Chen, Qiang Yang, Xiong Peng
      Details | PDF
      Demo Booths 2

      Years of rural-urban migration have resulted in a significant population in China seeking ad-hoc work in large urban centres. At the same time, many businesses face large fluctuations in demand for manpower and require more efficient ways to satisfy such demands. This paper outlines AlgoCrowd, an artificial intelligence (AI)-empowered algorithmic crowdsourcing platform. Equipped with an efficient, explainable task-worker matching optimization approach designed to focus on fair treatment of workers while maximizing collective utility, the platform delivers explainable task recommendations to workers' personal work-management mobile apps, which are becoming popular, with the aim of addressing the above societal challenge.

    • #11024
      Multi-Agent Visualization for Explaining Federated Learning
      Xiguang Wei, Quan Li, Yang Liu, Han Yu, Tianjian Chen, Qiang Yang
      Details | PDF
      Demo Booths 2

      As an alternative, decentralized training approach, Federated Learning enables distributed agents to collaboratively learn a machine learning model while keeping personal/private information on local devices. However, one significant issue of this framework is its lack of transparency, which obscures understanding of the working mechanism of Federated Learning systems. This paper proposes a multi-agent visualization system that illustrates what Federated Learning is and how it supports multi-agent coordination. Specifically, it allows users to participate in Federated Learning-empowered multi-agent coordination. The input and output of Federated Learning are visualized simultaneously, which provides an intuitive explanation of Federated Learning for users and helps them gain a deeper understanding of the technology.

    • #11028
      AiD-EM: Adaptive Decision Support for Electricity Markets Negotiations
      Tiago Pinto, Zita Vale
      Details | PDF
      Demo Booths 2

      This paper presents the Adaptive Decision Support for Electricity Markets Negotiations (AiD-EM) system. AiD-EM is a multi-agent system that provides decision support to market players by incorporating multiple (agent-based) sub-systems, each directed at the decision support of a specific problem. These sub-systems make use of different artificial intelligence methodologies, such as machine learning and evolutionary computing, to enable players' adaptation in the planning phase and in actual negotiations in auction-based markets and bilateral negotiations. The AiD-EM demonstration is enabled by its connection to MASCEM (Multi-Agent Simulator of Competitive Electricity Markets).

    • #11038
      Hintikka's World: Scalable Higher-order Knowledge
      Tristan Charrier, Sébastien Gamblin, Alexandre Niveau, François Schwarzentruber
      Details | PDF
      Demo Booths 2

      Hintikka's World is a graphical and pedagogical tool that shows how artificial agents can reason about higher-order knowledge. In this demonstration paper, we present the implementation of symbolic models in Hintikka's World. They enable the tool to scale, by helping it to face the state explosion, which makes it possible to provide examples featuring real card games, such as Hanabi.

    • #11032
      Embodied Conversational AI Agents in a Multi-modal Multi-agent Competitive Dialogue
      Rahul R. Divekar, Xiangyang Mou, Lisha Chen, Maíra Gatti de Bayser, Melina Alberio Guerra, Hui Su
      Details | PDF
      Demo Booths 2

      In a setting where two AI agents, embodied as animated humanoid avatars, are engaged in a conversation with one human and each other, we see two challenges. One, determination by the AI agents of which of them is being addressed. Two, determination by the AI agents of whether they may/could/should speak at the end of a turn. In this work we bring these two challenges together and explore the participation of AI agents in multi-party conversations. In particular, we show two embodied AI shopkeeper agents who sell similar items and aim to win a user's business by competing with each other on price. In this scenario, we solve the first challenge by using head pose (estimated by deep learning techniques) to determine whom the user is talking to. For the second challenge we use deontic logic to model the rules of a negotiation conversation.

    • #11043
      Multi-Agent Path Finding on Ozobots
      Roman Barták, Ivan Krasičenko, Jiří Švancara
      Details | PDF
      Demo Booths 2

      Multi-agent path finding (MAPF) is the problem of finding collision-free paths for a set of agents (mobile robots) moving on a graph. There exist several abstract models describing the problem with various types of constraints. The demo presents software to evaluate these abstract models when the plans are executed on Ozobots, small mobile robots developed for teaching programming. The software allows users to design grid-like maps, to specify initial and goal locations of robots, to generate plans using various abstract models implemented in the Picat programming language, to simulate and visualise the execution of these plans, and to translate the plans into command sequences for Ozobots.

    • #11050
      Reagent: Converting Ordinary Webpages into Interactive Software Agents
      Matthew Peveler, Jeffrey O. Kephart, Hui Su
      Details | PDF
      Demo Booths 2

      We introduce Reagent, a technology that can be used in conjunction with automated speech recognition to allow users to query and manipulate ordinary webpages via speech and pointing. Reagent can be used out-of-the-box with third-party websites, as it requires neither special instrumentation from website developers nor special domain knowledge to capture semantically-meaningful mouse interactions with structured elements such as tables and plots. When it is unable to infer mappings between domain vocabulary and visible webpage content on its own, Reagent proactively seeks help by engaging in a voice-based interaction with the user.

    • #11029
      Deep Reinforcement Learning for Ride-sharing Dispatching and Repositioning
      Zhiwei (Tony) Qin, Xiaocheng Tang, Yan Jiao, Fan Zhang, Chenxi Wang, Qun (Tracy) Li
      Details | PDF
      Demo Booths 2

      In this demo, we will present a simulation-based, human-computer interactive demonstration of deep reinforcement learning in action on order dispatching and driver repositioning for ride-sharing. Specifically, we will demonstrate through several specially designed domains how we use deep reinforcement learning to train agents (drivers) to optimize over a longer horizon and to cooperate to achieve higher objective values collectively.

    • #11035
      Intelligent Decision Support for Improving Power Management
      Yongqing Zheng, Han Yu, Kun Zhang, Yuliang Shi, Cyril Leung, Chunyan Miao
      Details | PDF
      Demo Booths 2

      With the development and adoption of the electricity information tracking system in China, real-time electricity consumption big data have become available to enable artificial intelligence (AI) to help power companies and urban management departments make demand-side management decisions. We demonstrate the Power Intelligent Decision Support (PIDS) platform, which can generate Orderly Power Utilization (OPU) decision recommendations and perform Demand Response (DR) implementation management based on a short-term load forecasting model. It can also provide different users with query and application functions to facilitate explainable decision support.

    • #11041
      Contextual Typeahead Sticker Suggestions on Hike Messenger
      Mohamed Hanoosh, Abhishek Laddha, Debdoot Mukherjee
      Details | PDF
      Demo Booths 2

      In this demonstration, we present Hike's sticker recommendation system, which helps users choose the right sticker to substitute for the next message that they intend to send in a chat. We describe how the system addresses the issue of numerous orthographic variations in chat messages and operates in under 20 milliseconds with a low CPU and memory footprint on device.

    Wednesday 14 09:35 - 09:40 2019 ACM SIGAI Industry Award (D-I)


  • Real World Reinforcement Learning Team (Microsoft)
    2019 ACM SIGAI Industry Award
  • Wednesday 14 09:40 - 10:30 Industry Days (D-I)

    Chair: Yu Zheng
  • A Real World Reinforcement Learning Service
    John Langford and Tyler Clintworth, Principal Research Scientist and Lead Developer, Microsoft Research
    Industry Days
  • Wednesday 14 11:00 - 12:00 MTA|SP - Security and Privacy 1 (2705-2706)

    Chair: Wang Pinghui
    • #1230
      DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
      Huili Chen, Cheng Fu, Jishen Zhao, Farinaz Koushanfar
      Details | PDF
      Security and Privacy 1

      Deep Neural Networks (DNNs) are vulnerable to Neural Trojan (NT) attacks where the adversary injects malicious behaviors during DNN training. This type of ‘backdoor’ attack is activated when the input is stamped with the trigger pattern specified by the attacker, resulting in an incorrect prediction by the model. Due to the wide application of DNNs in various critical fields, it is indispensable to inspect whether a pre-trained DNN has been trojaned before employing the model. Our goal in this paper is to address the security concern that a DNN of unknown provenance may be subject to NT attacks and to ensure safe model deployment. We propose DeepInspect, the first black-box Trojan detection solution with minimal prior knowledge of the model. DeepInspect learns the probability distribution of potential triggers from the queried model using a conditional generative model, thus retrieving the footprint of backdoor insertion. In addition to NT detection, we show that DeepInspect’s trigger generator enables effective Trojan mitigation by model patching. We corroborate the effectiveness, efficiency, and scalability of DeepInspect against state-of-the-art NT attacks across various benchmarks. Extensive experiments show that DeepInspect offers superior detection performance and lower runtime overhead than prior work.

    • #3546
      VulSniper: Focus Your Attention to Shoot Fine-Grained Vulnerabilities
      Xu Duan, Jingzheng Wu, Shouling Ji, Zhiqing Rui, Tianyue Luo, Mutian Yang, Yanjun Wu
      Details | PDF
      Security and Privacy 1

      With the explosive development of information technology, vulnerabilities have become one of the major threats to computer security. Most vulnerabilities with similar patterns can be detected effectively by static analysis methods. However, some vulnerable and non-vulnerable code is hardly distinguishable, resulting in low detection accuracy. In this paper, we define the accurate identification of vulnerabilities in similar code as a fine-grained vulnerability detection problem. We propose VulSniper, which is designed to detect fine-grained vulnerabilities more effectively. In VulSniper, an attention mechanism is used to capture the critical features of the vulnerabilities. In particular, we use bottom-up and top-down structures to learn the attention weights of different areas of the program. Moreover, in order to fully extract the semantic features of the program, we generate the code property graph, design a 144-dimensional vector to describe the relation between the nodes, and finally encode the program as a feature tensor. VulSniper achieves F1-scores of 80.6% and 73.3% on two benchmark datasets, the SARD Buffer Error dataset and the SARD Resource Management Error dataset respectively, which are significantly higher than those of the state-of-the-art methods.

    • #5689
      Data Poisoning against Differentially-Private Learners: Attacks and Defenses
      Yuzhe Ma, Xiaojin Zhu, Justin Hsu
      Details | PDF
      Security and Privacy 1

      Data poisoning attacks aim to manipulate the model produced by a learning algorithm by adversarially modifying the training set. We consider differential privacy as a defensive measure against this type of attack. We show that private learners are resistant to data poisoning attacks when the adversary is only able to poison a small number of items. However, this protection degrades as the adversary is allowed to poison more data. We empirically evaluate this protection by designing attack algorithms targeting objective and output perturbation learners, two standard approaches to differentially-private machine learning. Experiments show that our methods are effective when the attacker is allowed to poison sufficiently many training items.

    • #5758
      Robustra: Training Provable Robust Neural Networks over Reference Adversarial Space
      Linyi Li, Zexuan Zhong, Bo Li, Tao Xie
      Details | PDF
      Security and Privacy 1

      Machine learning techniques, especially deep neural networks (DNNs), have been widely adopted in various applications. However, DNNs have recently been found to be vulnerable to adversarial examples, i.e., maliciously perturbed inputs that can mislead the models into making arbitrary prediction errors. Empirical defenses have been studied, but many of them can be adaptively attacked again. Provable defenses provide a provable error bound for DNNs, but such bounds are so far far from satisfactory. To address this issue, in this paper we present our approach, named Robustra, for effectively improving the provable error bound of DNNs. We leverage the adversarial space of a reference model as the feasible region to solve the min-max game between the attackers and defenders. We solve its dual problem by linearly approximating the attackers' best strategy and utilizing the monotonicity of the slack variables introduced by the reference model. The evaluation results show that our approach can provide significantly better provable adversarial error bounds on the MNIST and CIFAR10 datasets, compared to the state-of-the-art results. In particular, bounded by L^infty, with epsilon = 0.1, on MNIST we reduce the error bound from 2.74% to 2.09%; with epsilon = 0.3, we reduce the error bound from 24.19% to 16.91%.

    Wednesday 14 11:00 - 12:15 ML|RL - Reinforcement Learning 3 (2701-2702)

    Chair: Marc Toussaint
    • #149
      Experience Replay Optimization
      Daochen Zha, Kwei-Herng Lai, Kaixiong Zhou, Xia Hu
      Details | PDF
      Reinforcement Learning 3

      Experience replay enables reinforcement learning agents to memorize and reuse past experiences, just as humans replay memories for the situation at hand. Contemporary off-policy algorithms either replay past experiences uniformly or utilize a rule-based replay strategy, which may be sub-optimal. In this work, we consider learning a replay policy to optimize the cumulative reward. Replay learning is challenging because the replay memory is noisy and large, and the cumulative reward is unstable. To address these issues, we propose a novel experience replay optimization (ERO) framework which alternately updates two policies: the agent policy and the replay policy. The agent is updated to maximize the cumulative reward based on the replayed data, while the replay policy is updated to provide the agent with the most useful experiences. Experiments on various continuous control tasks demonstrate the effectiveness of ERO, empirically showing the promise of experience replay learning for improving the performance of off-policy reinforcement learning algorithms.
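
      A minimal sketch of the replay step in such an alternating scheme is given below: a replay policy assigns a score to each stored transition and the agent's mini-batch is drawn with probability proportional to those scores. The buffer contents and scores are invented for illustration; this is not the authors' implementation.

```python
import random

# Score-based replay sampling: the (learned) replay policy scores each stored
# transition, and the agent is trained on a batch drawn proportionally to the scores.
buffer = [
    {"state": 0, "action": 1, "reward": 0.0},
    {"state": 1, "action": 0, "reward": 1.0},
    {"state": 2, "action": 1, "reward": 0.5},
]
replay_scores = [0.2, 0.7, 0.1]   # illustrative outputs of the replay policy

def sample_replay_batch(buffer, scores, batch_size):
    """Sample transitions with probability proportional to their replay scores."""
    total = sum(scores)
    weights = [s / total for s in scores]
    return random.choices(buffer, weights=weights, k=batch_size)

batch = sample_replay_batch(buffer, replay_scores, batch_size=2)
print(batch)  # the agent policy would be updated on this batch; the replay policy
              # would in turn be updated from the resulting change in return
```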

    • #354
      Interactive Teaching Algorithms for Inverse Reinforcement Learning
      Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, Adish Singla
      Details | PDF
      Reinforcement Learning 3

      We study the problem of inverse reinforcement learning (IRL) with the added twist that the learner is assisted by a helpful teacher. More formally, we tackle the following algorithmic question: How could a teacher provide an informative sequence of demonstrations to an IRL learner to speed up the learning process? We present an interactive teaching framework where a teacher adaptively chooses the next demonstration based on the learner's current policy. In particular, we design teaching algorithms for two concrete settings: an omniscient setting where the teacher has full knowledge about the learner's dynamics and a blackbox setting where the teacher has minimal knowledge. Then, we study a sequential variant of the popular MCE-IRL learner and prove convergence guarantees of our teaching algorithm in the omniscient setting. Extensive experiments with a car driving simulator environment show that the learning progress can be sped up drastically as compared to an uninformative teacher.

    • #619
      Interactive Reinforcement Learning with Dynamic Reuse of Prior Knowledge from Human and Agent Demonstrations
      Zhaodong Wang, Matthew E. Taylor
      Details | PDF
      Reinforcement Learning 3

      Reinforcement learning has enjoyed multiple impressive successes in recent years. However, these successes typically require very large amounts of data before an agent achieves acceptable performance. This paper focuses on a novel way of combating such requirements by leveraging existing (human or agent) knowledge. In particular, this paper leverages demonstrations, allowing an agent to quickly achieve high performance. This paper introduces the Dynamic Reuse of Prior (DRoP) algorithm, which combines the offline knowledge (demonstrations recorded before learning) with online confidence-based performance analysis. DRoP leverages the demonstrator's knowledge by automatically balancing between reusing the prior knowledge and the current learned policy, allowing the agent to outperform the original demonstrations. We compare with multiple state-of-the-art learning algorithms and empirically show that DRoP can achieve superior performance in two domains. Additionally, we show that this confidence measure can be used to selectively request additional demonstrations, significantly improving the learning performance of the agent.

    • #1363
      Meta Reinforcement Learning with Task Embedding and Shared Policy
      Lin Lan, Zhenguo Li, Xiaohong Guan, Pinghui Wang
      Details | PDF
      Reinforcement Learning 3

      Despite significant progress, deep reinforcement learning (RL) suffers from data-inefficiency and limited generalization. Recent efforts apply meta-learning to learn a meta-learner from a set of RL tasks such that a novel but related task could be solved quickly. Though specific in some ways, different tasks in meta-RL are generally similar at a high level. However, most meta-RL methods do not explicitly and adequately model the specific and shared information among different tasks, which limits their ability to learn training tasks and to generalize to novel tasks. In this paper, we propose to capture the shared information on the one hand and meta-learn how to quickly abstract the specific information about a task on the other hand. Methodologically, we train an SGD meta-learner to quickly optimize a task encoder for each task, which generates a task embedding based on past experience. Meanwhile, we learn a policy which is shared across all tasks and conditioned on task embeddings. Empirical results on four simulated tasks demonstrate that our method has better learning capacity on both training and novel tasks and attains up to 3 to 4 times higher returns compared to baselines.

    • #5384
      Planning with Expectation Models
      Yi Wan, Muhammad Zaheer, Adam White, Martha White, Richard S. Sutton
      Details | PDF
      Reinforcement Learning 3

      Distribution and sample models are two popular model choices in model-based reinforcement learning (MBRL). However, learning these models can be intractable, particularly when the state and action spaces are large. Expectation models, on the other hand, are relatively easier to learn due to their compactness and have also been widely used for deterministic environments. For stochastic environments, it is not obvious how expectation models can be used for planning as they only partially characterize a distribution. In this paper, we propose a sound way of using approximate expectation models for MBRL. In particular, we 1) show that planning with an expectation model is equivalent to planning with a distribution model if the state value function is linear in state features, 2) analyze two common parametrization choices for approximating the expectation: linear and non-linear expectation models, 3) propose a sound model-based policy evaluation algorithm and present its convergence results, and 4) empirically demonstrate the effectiveness of the proposed planning algorithm. 
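
      The first claim above, that planning with an expectation model matches planning with a distribution model when the value function is linear in state features, follows from linearity of expectation: E[v(S')] = w·E[φ(S')]. The short numeric check below, with made-up numbers, illustrates the equality for a linear value function and shows that it generally fails for a non-linear one.

```python
import numpy as np

# Illustrative check (made-up numbers): for a linear value function v(s) = w . phi(s),
# E[v(S')] equals v applied to the expected feature vector, so an expectation model suffices.
rng = np.random.default_rng(0)
next_features = rng.normal(size=(1000, 3))        # sampled next-state feature vectors
w = np.array([0.5, -1.0, 2.0])                    # linear value-function weights

lhs = np.mean(next_features @ w)                  # E[v(S')] under the distribution model
rhs = np.mean(next_features, axis=0) @ w          # v(E[phi(S')]) under the expectation model
print(np.isclose(lhs, rhs))                       # True: linear case

v_nonlinear = lambda x: np.sum(x ** 2, axis=-1)   # a non-linear value function
print(np.isclose(np.mean(v_nonlinear(next_features)),
                 v_nonlinear(np.mean(next_features, axis=0))))  # generally False
```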

    Wednesday 14 11:00 - 12:15 Survey 1 - Survey Session 1 (2405-2406)

    Chair: Virginia Dignum
    • #10893
      A Survey on Hierarchical Planning – One Abstract Idea, Many Concrete Realizations
      Pascal Bercher, Ron Alford, Daniel Höller
      Details | PDF
      Survey Session 1

      Hierarchical planning has attracted renewed interest in the last couple of years, which led to numerous novel formalisms, problem classes, and theoretical investigations. Yet it is important to differentiate between the various formalisms and problem classes, since they show -- sometimes fundamental -- differences with regard to their expressivity and computational complexity: Some of them can be regarded equivalent to non-hierarchical formalisms while others are clearly more expressive. We survey the most important hierarchical problem classes and explain their differences and similarities. We furthermore give pointers to some of the best-known planning systems capable of solving the respective problem classes.

    • #10895
      Integrating Knowledge and Reasoning in Image Understanding
      Somak Aditya, Yezhou Yang, Chitta Baral
      Details | PDF
      Survey Session 1

      Deep learning based data-driven approaches have been successfully applied in various image understanding applications ranging from object recognition and semantic segmentation to visual question answering. However, the lack of knowledge integration, as well as of higher-level reasoning capabilities, in these methods still poses a hindrance. In this work, we present a brief survey of a few representative reasoning mechanisms, knowledge integration methods and their corresponding image understanding applications developed by various groups of researchers, approaching the problem from a variety of angles. Furthermore, we discuss key efforts on integrating external knowledge with neural networks. Taking cues from these efforts, we conclude by discussing potential pathways to improve reasoning capabilities.

    • #10909
      Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning
      Ruth M. J. Byrne
      Details | PDF
      Survey Session 1

      Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.

    • #10953
      A Replication Study of Semantics in Argumentation
      Leila Amgoud
      Details | PDF
      Survey Session 1

      Argumentation aims at increasing the acceptability of claims by supporting them with arguments. Roughly speaking, an argument is a set of premises intended to establish a definite claim. Its strength depends on the plausibility of the premises, the nature of the link between the premises and the claim, and the prior acceptability of the claim. It may generally be weakened by other arguments that undermine one or more of its three components. Evaluation of arguments is a crucial task, and a sizable number of methods, called semantics, have been proposed in the literature. This paper discusses two classifications of the existing semantics: the first is based on the type of the semantics' outcomes (sets of arguments, weightings, and preorders), the second on the goals pursued by the semantics (acceptability, strength, coalitions).

    • #10954
      Automated Essay Scoring: A Survey of the State of the Art
      Zixuan Ke, Vincent Ng
      Details | PDF
      Survey Session 1

      Despite being investigated for over 50 years, the task of automated essay scoring is far from being solved. Nevertheless, it continues to draw a lot of attention in the natural language processing community in part because of its commercial and educational values as well as the associated research challenges. This paper presents an overview of the major milestones made in automated essay scoring research since its inception.

    Wednesday 14 11:00 - 12:30 Industry Days (D-I)

    Chair: Anand Rao (PwC)
  • AI x Robotics in Sony as Creative Entertainment Company
    Masahiro Fujita, Senior Chief Researcher, AI Collaboration Office, Sony Corporation; Michael Spranger, Senior Research Scientist, Sony Corporation AND Researcher, Sony Computer Science Laboratories Inc.
    Industry Days
  • Wednesday 14 11:00 - 12:30 Panel (K)

    Chair: Ray Perrault
  • 50 years of IJCAI
    Panel
  • Wednesday 14 11:00 - 12:30 AI-HWB - ST: AI for Improving Human Well-Being 2 (J)

    Chair: Christophe Marsala
    • #457
      Safe Contextual Bayesian Optimization for Sustainable Room Temperature PID Control Tuning
      Marcello Fiducioso, Sebastian Curi, Benedikt Schumacher, Markus Gwerder, Andreas Krause
      Details | PDF
      ST: AI for Improving Human Well-Being 2

      We tune one of the most common heating, ventilation, and air conditioning (HVAC) control loops, namely the temperature control of a room. For economical and environmental reasons, it is of prime importance to optimize the performance of this system. Buildings account for 20 to 40% of a country's energy consumption, and almost 50% of that comes from HVAC systems. Scenario projections predict a 30% decrease in heating consumption by 2050 due to efficiency increases. Advanced control techniques can improve performance; however, the proportional-integral-derivative (PID) controller is typically used due to its simplicity and overall performance. We use Safe Contextual Bayesian Optimization to optimize the PID parameters without human intervention. We reduce costs by 32% compared to the current PID controller setting while assuring safety and comfort to people in the room. The results of this work have an immediate impact on the room control loop performance and its related commissioning costs. Furthermore, this successful attempt paves the way for further use at different levels of HVAC systems, with promising energy, operational, and commissioning cost savings, and it is a practical demonstration of the positive effects that Artificial Intelligence can have on environmental sustainability.
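
      For readers unfamiliar with the controller being tuned, a minimal discrete-time PID loop is sketched below; the gains Kp, Ki and Kd are exactly the kind of parameters such a Bayesian optimization procedure would search over. The toy room dynamics, constants and cost are invented for illustration and are not the paper's setup.

```python
# Minimal discrete-time PID temperature controller; gains and plant are illustrative only.
def run_pid(kp, ki, kd, setpoint=21.0, steps=200, dt=1.0):
    temp, integral, prev_error = 15.0, 0.0, setpoint - 15.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        derivative = (error - prev_error) / dt
        heat = kp * error + ki * integral + kd * derivative   # control signal
        temp += 0.05 * heat - 0.01 * (temp - 10.0)            # toy room/heat-loss dynamics
        prev_error = error
    return abs(setpoint - temp)   # final tracking error, one possible term of the tuning cost

print(run_pid(kp=2.0, ki=0.1, kd=0.5))
```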

    • #1168
      Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
      Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Sang Chin
      Details | PDF
      ST: AI for Improving Human Well-Being 2

      Despite achieving remarkable success in various domains, recent studies have uncovered the vulnerability of deep neural networks to adversarial perturbations, creating concerns about model generalizability and new threats such as prediction-evasive misclassification or stealthy reprogramming. Among different defense proposals, stochastic network defenses such as random neuron activation pruning or random perturbation of layer inputs are shown to be promising for attack mitigation. However, one critical drawback of current defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy. This paper is motivated by the pursuit of a better trade-off between adversarial robustness and test accuracy for stochastic network defenses. We propose the Defense Efficiency Score (DES), a comprehensive metric that measures the gain in unsuccessful attack attempts at the cost of a drop in test accuracy for any defense. To achieve a better DES, we propose hierarchical random switching (HRS), which protects neural networks through a novel randomization scheme. An HRS-protected model contains several blocks of randomly switching channels to prevent adversaries from exploiting fixed model structures and parameters for their malicious purposes. Extensive experiments show that HRS is superior in defending against state-of-the-art white-box and adaptive adversarial misclassification attacks. We also demonstrate the effectiveness of HRS in defending against adversarial reprogramming, which is the first defense against adversarial programs. Moreover, in most settings the average DES of HRS is at least 5X higher than that of current stochastic network defenses, validating its significantly improved robustness-accuracy trade-off.

    • #707
      KitcheNette: Predicting and Ranking Food Ingredient Pairings using Siamese Neural Network
      Donghyeon Park, Keonwoo Kim, Yonggyu Park, Jungwoon Shin, Jaewoo Kang
      Details | PDF
      ST: AI for Improving Human Well-Being 2

      As a vast number of ingredients exist in the culinary world, there are countless food ingredient pairings, but only a small number of pairings have been adopted by chefs and studied by food researchers. In this work, we propose KitcheNette which is a model that predicts food ingredient pairing scores and recommends optimal ingredient pairings. KitcheNette employs Siamese neural networks and is trained on our annotated dataset containing 300K scores of pairings generated from numerous ingredients in food recipes. As the results demonstrate, our model not only outperforms other baseline models, but also can recommend complementary food pairings and discover novel ingredient pairings.

    • #1303
      SparseSense: Human Activity Recognition from Highly Sparse Sensor Data-streams Using Set-based Neural Networks
      Alireza Abedin, S. Hamid Rezatofighi, Qinfeng Shi, Damith C. Ranasinghe
      Details | PDF
      ST: AI for Improving Human Well-Being 2

      Batteryless, or so-called passive, wearables are providing new and innovative methods for human activity recognition (HAR), especially in healthcare applications for older people. Passive sensors are low cost, lightweight, unobtrusive and desirably disposable; attractive attributes for healthcare applications in hospitals and nursing homes. Despite the compelling propositions for sensing applications, the data streams from these sensors are characterised by high sparsity---the time intervals between sensor readings are irregular while the number of readings per unit time is often limited. In this paper, we rigorously explore the problem of learning activity recognition models from temporally sparse data. We describe how to learn directly from sparse data using a deep learning paradigm in an end-to-end manner. We demonstrate significant classification performance improvements on real-world passive sensor datasets from older people over the state-of-the-art deep learning human activity recognition models. Further, we provide insights into the model's behaviour through complementary experiments on a benchmark dataset and visualisation of the learned activity feature spaces.

    • #3424
      MNN: Multimodal Attentional Neural Networks for Diagnosis Prediction
      Zhi Qiao, Xian Wu, Shen Ge, Wei Fan
      Details | PDF
      ST: AI for Improving Human Well-Being 2

      Diagnosis prediction plays a key role in the clinical decision support process, and it has attracted extensive research attention recently. Existing studies mainly utilize discrete medical codes (e.g., ICD codes and procedure codes) as the primary features in prediction. However, in real clinical settings, such medical codes can be either incomplete or erroneous. For example, a missed diagnosis will omit codes that should be included, while a misdiagnosis will generate incorrect medical codes. To increase robustness towards noisy data, we introduce textual clinical notes in addition to medical codes. Combining information from both sides leads to an improved understanding of clinical health conditions. To accommodate both the textual notes and discrete medical codes in the same framework, we propose Multimodal Attentional Neural Networks (MNN), which integrates multi-modal data in a collaborative manner. Experimental results on real world EHR datasets demonstrate the advantages of MNN in terms of both robustness and accuracy.

    • #4465
      Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance
      Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
      Details | PDF
      ST: AI for Improving Human Well-Being 2

      Deployment of deep neural networks (DNNs) in safety-critical systems requires provable guarantees for their correct behaviours. We compute the maximal radius of a safe norm ball around a given input, within which there are no adversarial examples for a trained DNN. We define global robustness as an expectation of the maximal safe radius over a test dataset, and develop an algorithm to approximate the global robustness measure by iteratively computing its lower and upper bounds. Our algorithm is the first efficient method for the Hamming (L0) distance, and we hypothesise that this norm is a good proxy for a certain class of physical attacks. The algorithm is anytime, i.e., it returns intermediate bounds and robustness estimates that are gradually, but strictly, improved as the computation proceeds; tensor-based, i.e., the computation is conducted over a set of inputs simultaneously to enable efficient GPU computation; and has provable guarantees, i.e., both the bounds and the robustness estimates can converge to their optimal values. Finally, we demonstrate the utility of our approach by applying the algorithm to a set of challenging problems.

    Wednesday 14 11:00 - 12:30 ML|DL - Deep Learning 4 (L)

    Chair: Longbing Cao
    • #2022
      Towards Robust ResNet: A Small Step but a Giant Leap
      Jingfeng Zhang, Bo Han, Laura Wynter, Bryan Kian Hsiang Low, Mohan Kankanhalli
      Details | PDF
      Deep Learning 4

      This paper presents a simple yet principled approach to boosting the robustness of the residual network (ResNet) that is motivated by a dynamical systems perspective. Namely, a deep neural network can be interpreted using a partial differential equation, which naturally inspires us to characterize ResNet based on an explicit Euler method. This consequently allows us to exploit the step factor h in the Euler method to control the robustness of ResNet in both its training and generalization. In particular, we prove that a small step factor h can benefit its training and generalization robustness during backpropagation and forward propagation, respectively. Empirical evaluation on real-world datasets corroborates our analytical findings that a small h can indeed improve both its training and generalization robustness.
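
      A minimal sketch of the step-factor idea is shown below: each residual block is viewed as one explicit Euler step, x_{t+1} = x_t + h·f(x_t), so h directly scales how much each block perturbs its input. The tiny numpy residual function is invented for illustration and is not the paper's architecture.

```python
import numpy as np

# Residual block as an explicit Euler step: x_{t+1} = x_t + h * f(x_t).
# f is a toy residual function here; in a ResNet it would be a small conv/MLP branch.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))

def residual_branch(x):
    return np.tanh(x @ W)

def resnet_forward(x, h=0.1, depth=20):
    """Stack of residual blocks with step factor h controlling the size of each update."""
    for _ in range(depth):
        x = x + h * residual_branch(x)
    return x

x0 = rng.normal(size=(1, 4))
print(np.linalg.norm(resnet_forward(x0, h=0.1)),   # small h: modest change per block
      np.linalg.norm(resnet_forward(x0, h=1.0)))   # h = 1 recovers the standard ResNet update
```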

    • #2847
      Reparameterizable Subset Sampling via Continuous Relaxations
      Sang Michael Xie, Stefano Ermon
      Details | PDF
      Deep Learning 4

      Many machine learning tasks require sampling a subset of items from a collection based on a parameterized distribution. The Gumbel-softmax trick can be used to sample a single item, and allows for low-variance reparameterized gradients with respect to the parameters of the underlying distribution. However, stochastic optimization involving subset sampling is typically not reparameterizable. To overcome this limitation, we define a continuous relaxation of subset sampling that provides reparameterization gradients by generalizing the Gumbel-max trick. We use this approach to sample subsets of features in an instance-wise feature selection task for model interpretability, subsets of neighbors to implement a deep stochastic k-nearest neighbors model, and sub-sequences of neighbors to implement parametric t-SNE by directly comparing the identities of local neighbors. We improve performance in all these tasks by incorporating subset sampling in end-to-end training.
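
      As a concrete reference point for the relaxation described above, the hard (non-relaxed) version of subset sampling via the Gumbel-max generalization simply perturbs the log-weights with Gumbel noise and keeps the top-k entries; the sketch below shows only that discrete step, while the paper's contribution is a differentiable relaxation of it. Weights and names are illustrative.

```python
import numpy as np

# Hard Gumbel-top-k: perturb log-weights with Gumbel noise and keep the k largest.
# This draws a size-k subset without replacement, with inclusion driven by the weights;
# the continuous relaxation replaces the hard top-k with a differentiable surrogate.
rng = np.random.default_rng(0)
log_weights = np.log(np.array([0.1, 0.4, 0.2, 0.25, 0.05]))  # unnormalized item weights

def gumbel_top_k(log_w, k):
    gumbel_noise = -np.log(-np.log(rng.uniform(size=log_w.shape)))
    return np.argsort(log_w + gumbel_noise)[-k:]

print(gumbel_top_k(log_weights, k=2))  # indices of the sampled subset
```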

    • #2949
      Image Captioning with Compositional Neural Module Networks
      Junjiao Tian, Jean Oh
      Details | PDF
      Deep Learning 4

      In image captioning, where fluency is an important factor in evaluation (e.g., via n-gram metrics), sequential models are commonly used; however, sequential models generally result in overgeneralized expressions that lack the details that may be present in an input image. Inspired by the idea of compositional neural module networks in the visual question answering task, we introduce a hierarchical framework for image captioning that explores both the compositionality and sequentiality of natural language. Our algorithm learns to compose a detail-rich sentence by selectively attending to different modules corresponding to unique aspects of each object detected in an input image to include specific descriptions such as counts and color. In a set of experiments on the MSCOCO dataset, the proposed model outperforms a state-of-the-art model across multiple evaluation metrics and, more importantly, presents visually interpretable results. Furthermore, the breakdown of subcategory f-scores of the SPICE metric and human evaluation on Amazon Mechanical Turk show that our compositional module networks effectively generate accurate and detailed captions.

    • #5573
      Extrapolating Paths with Graph Neural Networks
      Jean-Baptiste Cordonnier, Andreas Loukas
      Details | PDF
      Deep Learning 4

      We consider the problem of path inference: given a path prefix, i.e., a partially observed sequence of nodes in a graph, we want to predict which nodes are in the missing suffix. In particular, we focus on natural paths occurring as a by-product of the interaction of an agent with a network---a driver on the transportation network, an information seeker in Wikipedia, or a client in an online shop. Our interest is sparked by the realization that, in contrast to shortest-path problems, natural paths are usually not optimal in any graph-theoretic sense, but might still follow predictable patterns. Our main contribution is a graph neural network called Gretel. Conditioned on a path prefix, this network can efficiently extrapolate path suffixes, evaluate path likelihood, and sample from the future path distribution. Our experiments with GPS traces on a road network and user-navigation paths in Wikipedia confirm that Gretel is able to adapt to graphs with very different properties, while also comparing favorably to previous solutions.

    • #3760
      Ornstein Auto-Encoders
      Youngwon Choi, Joong-Ho Won
      Details | PDF
      Deep Learning 4

      We propose the Ornstein auto-encoder (OAE), a representation learning model for correlated data. In many interesting applications, data have nested structures. Examples include the VGGFace and MNIST datasets. We view such data as consisting of i.i.d. copies of a stationary random process, and seek a latent space representation of the observed sequences. This viewpoint necessitates a distance measure between two random processes. We propose to use Ornstein's d-bar distance, a process extension of Wasserstein's distance. We first show that the theorem by Bousquet et al. (2017) for Wasserstein auto-encoders extends to stationary random processes. This result, however, requires both the encoder and the decoder to map an entire sequence to another. We then show that, when exchangeability within a process, valid for VGGFace and MNIST, is assumed, these maps reduce to univariate ones, resulting in a much simpler, tractable optimization problem. Our experiments show that OAEs successfully separate individual sequences in the latent space, and can generate new variations of unknown, as well as known, identities. The latter has not been possible with other existing methods.

    • #3138
      Variational Graph Embedding and Clustering with Laplacian Eigenmaps
      Zitai Chen, Chuan Chen, Zong Zhang, Zibin Zheng, Qingsong Zou
      Details | PDF
      Deep Learning 4

      As a fundamental machine learning problem, graph clustering has facilitated various real-world applications, and tremendous efforts have been devoted to it in the past few decades. However, most existing methods, like spectral clustering, struggle with sparsity, scalability, robustness and the handling of high-dimensional raw information in clustering. To address these issues, we propose a deep probabilistic model, called Variational Graph Embedding and Clustering with Laplacian Eigenmaps (VGECLE), which learns node embeddings and assigns node clusters simultaneously. It represents each node as a Gaussian distribution to disentangle the true embedding position and the uncertainty from the graph. With a Mixture of Gaussians (MoG) prior, VGECLE is capable of learning an interpretable clustering through variational inference and a generative process. In order to learn the pairwise relationships better, we propose a Teacher-Student mechanism encouraging each node to learn a better Gaussian from its immediate neighbors in a stochastic gradient descent (SGD) training fashion. By optimizing the graph embedding and the graph clustering problem as a whole, our model can fully exploit their correlation. To the best of our knowledge, we are the first to tackle graph clustering from a deep probabilistic viewpoint. We perform extensive experiments on both synthetic and real-world networks to corroborate the effectiveness and efficiency of the proposed framework.

    Wednesday 14 11:00 - 12:30 AMS|AGT - Algorithmic Game Theory 1 (2703-2704)

    Chair: Bei Xiaohui
    • #166
      Achieving a Fairer Future by Changing the Past
      Jiafan He, Ariel D. Procaccia, Alexandros Psomas, David Zeng
      Details | PDF
      Algorithmic Game Theory 1

      We study the problem of allocating T indivisible items that arrive online to agents with additive valuations. The allocation must satisfy a prominent fairness notion, envy-freeness up to one item (EF1), at each round. To make this possible, we allow the reallocation of previously allocated items, but aim to minimize these so-called adjustments. For the case of two agents, we show that algorithms that are informed about the values of future items can get by without any adjustments, whereas uninformed algorithms require Theta(T) adjustments. For the general case of three or more agents, we prove that even informed algorithms must use Omega(T) adjustments, and design an uninformed algorithm that requires only O(T^(3/2)) adjustments.
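
      For reference, EF1 means that any envy an agent feels towards another agent's bundle disappears after removing some single item from that bundle. A small check of this condition under additive valuations is sketched below, with invented values; it is only an illustration of the fairness notion, not of the paper's algorithms.

```python
# EF1 check under additive valuations: agent i does not envy agent j after
# removing i's most-valued item from j's bundle. Values below are invented.
def is_ef1(bundles, values):
    """bundles[j] is a list of item indices; values[i][g] is agent i's value for item g."""
    n = len(bundles)
    for i in range(n):
        own = sum(values[i][g] for g in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            other = sum(values[i][g] for g in bundles[j])
            best_removable = max(values[i][g] for g in bundles[j])
            if own < other - best_removable:   # envy persists even after dropping one item
                return False
    return True

values = [[5, 1, 3, 2], [2, 4, 1, 6]]          # 2 agents, 4 items
print(is_ef1([[0, 2], [1, 3]], values))        # True for this allocation
```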

    • #3032
      Preferred Deals in General Environments
      Yuan Deng, Sébastien Lahaie, Vahab Mirrokni
      Details | PDF
      Algorithmic Game Theory 1

      A preferred deal is a special contract for selling impressions of display ad inventory. By accepting a deal, a buyer agrees to buy a minimum amount of impressions at a fixed price per impression, and is granted priority access to the impressions before they are sent to an open auction on an ad exchange. We consider the problem of designing preferred deals (inventory, price, quantity) in the presence of general convex constraints, including budget constraints, and propose an approximation algorithm to maximize the revenue obtained from the deals. We then evaluate our algorithm using auction data from a major advertising exchange and our empirical results show that the algorithm achieves around 95% of the optimal revenue.

    • #3679
      On the Efficiency and Equilibria of Rich Ads
      MohammadAmin Ghiasi, MohammadTaghi Hajiaghayi, Sébastien Lahaie, Hadi Yami
      Details | PDF
      Algorithmic Game Theory 1

      Search ads have evolved in recent years from simple text formats to rich ads that allow deep site links, rating, images and videos. In this paper, we consider a model where several slots are available on the search results page, as in the classic generalized second-price auction (GSP), but now a bidder can be allocated several consecutive slots, which are interpreted as a rich ad. As in the GSP, each bidder submits a bid-per-click, but the click-through rate (CTR) function is generalized from a simple CTR for each slot to a general CTR function over sets of consecutive slots. We study allocation and pricing in this model under subadditive and fractionally subadditive CTRs. We design and analyze a constant-factor approximation algorithm for the efficient allocation problem under fractionally subadditive CTRs, and a log-approximation algorithm for the subadditive case. Building on these results, we show that approximate competitive equilibrium prices exist and can be computed for subadditive and fractionally subadditive CTRs, with the same guarantees as for allocation.

    • #4855
      Neural Networks for Predicting Human Interactions in Repeated Games
      Yoav Kolumbus, Gali Noti
      Details | PDF
      Algorithmic Game Theory 1

      We consider the problem of predicting human players' actions in repeated strategic interactions. Our goal is to predict the dynamic step-by-step behavior of individual players in previously unseen games. We study the ability of neural networks to perform such predictions and the information that they require. We show on a dataset of normal-form games from experiments with human participants that standard neural networks are able to learn functions that provide more accurate predictions of the players' actions than established models from behavioral economics. The networks outperform the other models in terms of prediction accuracy and cross-entropy, and yield higher economic value. We show that if the available input is only a short sequence of play, economic information about the game is important for predicting the behavior of human agents. However, interestingly, we find that when the networks are trained with long enough sequences of play history, action-based networks do well and additional economic details about the game do not improve their performance, indicating that the sequence of actions encodes sufficient information for success in the prediction task.

    • #6034
      Ridesharing with Driver Location Preferences
      Duncan Rheingans-Yoo, Scott Duke Kominers, Hongyao Ma, David C. Parkes
      Details | PDF
      Algorithmic Game Theory 1

      We study revenue-optimal pricing and driver compensation in ridesharing platforms when drivers have heterogeneous preferences over locations. If a platform ignores drivers' location preferences, it may make inefficient trip dispatches; moreover, drivers may strategize so as to route towards their preferred locations. In a model with stationary and continuous demand and supply, we present a mechanism that incentivizes drivers to both (i) report their location preferences truthfully and (ii) always provide service. In settings with unconstrained driver supply or symmetric demand patterns, our mechanism achieves (full-information) first-best revenue. Under supply constraints and unbalanced demand, we show via simulation that our mechanism improves over existing mechanisms and has performance close to the first-best.

    • #10967
      (Sister Conferences Best Papers Track) The Power of Context in Networks: Ideal Point Models with Social Interactions
      Mohammad T. Irfan, Tucker Gordon
      Details | PDF
      Algorithmic Game Theory 1

      Game theory has been widely used for modeling strategic behaviors in networked multiagent systems. However, the context within which these strategic behaviors take place has received limited attention. We present a model of strategic behavior in networks that incorporates the behavioral context, focusing on the contextual aspects of congressional voting. One salient predictive model in political science is the ideal point model, which assigns each senator and each bill a number on the real line of political spectrum. We extend the classical ideal point model with network-structured interactions among senators. In contrast to the ideal point model's prediction of individual voting behavior, we predict joint voting behaviors in a game-theoretic fashion. The consideration of context allows our model to outperform previous models that solely focus on the networked interactions with no contextual parameters. We focus on two fundamental questions: learning the model using real-world data and computing stable outcomes of the model with a view to predicting joint voting behaviors and identifying most influential senators. We demonstrate the effectiveness of our model through experiments using data from the 114th U.S. Congress.

    Wednesday 14 11:00 - 12:30 ML|OL - Online Learning 1 (2601-2602)

    Chair: Asim Munawar
    • #2197
      A Practical Semi-Parametric Contextual Bandit
      Yi Peng, Miao Xie, Jiahao Liu, Xuying Meng, Nan Li, Cheng Yang, Tao Yao, Rong Jin
      Details | PDF
      Online Learning 1

      Classic multi-armed bandit algorithms are inefficient for a large number of arms. On the other hand, contextual bandit algorithms are more efficient, but they suffer from a large regret due to the bias of reward estimation with finite-dimensional features. Although recent studies proposed semi-parametric bandits to overcome these defects, they assume that arms' features are constant over time. However, this assumption rarely holds in practice, since real-world problems often involve underlying processes that are dynamically evolving over time, especially for special promotions like Singles' Day sales. In this paper, we formulate a novel Semi-Parametric Contextual Bandit Problem to relax this assumption. For this problem, a novel two-step Upper-Confidence-Bound framework, called Semi-Parametric UCB (SPUCB), is presented. It can be flexibly applied to the linear parametric function problem and comes with a gap-free bound on the n-step regret. Moreover, to make our method more practical in online systems, an optimization is proposed for dealing with high-dimensional features of a linear function. Extensive experiments on synthetic data as well as a real dataset from one of the largest e-commerce platforms demonstrate the superior performance of our algorithm.

    • #3401
      Marginal Posterior Sampling for Slate Bandits
      Maria Dimakopoulou, Nikos Vlassis, Tony Jebara
      Details | PDF
      Online Learning 1

      We introduce a new Thompson sampling-based algorithm, called marginal posterior sampling, for online slate bandits, that is characterized by three key ideas. First, it postulates that the slate-level reward is a monotone function of the marginal unobserved rewards of the base actions selected in the slate's slots, but it does not attempt to estimate this function. Second, instead of maintaining a slate-level reward posterior, the algorithm maintains posterior distributions for the marginal reward of each slot's base actions and uses the samples from these marginal posteriors to select the next slate. Third, marginal posterior sampling optimizes at the slot-level rather than the slate-level, which makes the approach computationally efficient. Simulation results establish substantial advantages of marginal posterior sampling over alternative Thompson sampling-based approaches that are widely used in the domain of web services.
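
      A minimal sketch of the slot-level posterior-sampling idea is given below: one posterior per (slot, base action) pair, one independent draw per slot, and the arg-max action per slot forms the next slate. The Bernoulli/Beta choice and the assumption that per-slot binary feedback is available for the update are simplifications for illustration only (the paper treats the marginal rewards as unobserved behind a monotone slate-level reward); all names are hypothetical.

```python
import numpy as np

# Marginal posterior sampling sketch: one Beta posterior per (slot, base action),
# sample every posterior and pick the arg-max action per slot to build the slate.
rng = np.random.default_rng(0)
n_slots, n_actions = 3, 5
alpha = np.ones((n_slots, n_actions))   # Beta parameters (pseudo-counts of successes)
beta = np.ones((n_slots, n_actions))    # Beta parameters (pseudo-counts of failures)

def select_slate():
    samples = rng.beta(alpha, beta)     # one draw per slot-action marginal posterior
    return samples.argmax(axis=1)       # arg-max per slot -> next slate

def update(slate, slot_feedback):
    """slot_feedback[k] in {0, 1}: simplified per-slot feedback for the shown action."""
    for k, a in enumerate(slate):
        alpha[k, a] += slot_feedback[k]
        beta[k, a] += 1 - slot_feedback[k]

slate = select_slate()
update(slate, slot_feedback=[1, 0, 1])  # invented feedback
print(slate)
```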

    • #3855
      Learning Multi-Objective Rewards and User Utility Function in Contextual Bandits for Personalized Ranking
      Nirandika Wanigasekara, Yuxuan Liang, Siong Thye Goh, Ye Liu, Joseph Jay Williams, David S. Rosenblum
      Details | PDF
      Online Learning 1

      This paper tackles the problem of providing users with ranked lists of relevant search results, by incorporating contextual features of the users and search results, and learning how a user values multiple objectives. For example, to recommend a ranked list of hotels, an algorithm must learn which hotels are the right price for users, as well as how users vary in their weighting of price against the location. In our paper, we formulate the context-aware, multi-objective, ranking problem as a Multi-Objective Contextual Ranked Bandit (MOCR-B). To solve the MOCR-B problem, we present a novel algorithm, named Multi-Objective Utility-Upper Confidence Bound (MOU-UCB). The goal of MOU-UCB is to learn how to generate a ranked list of resources that maximizes the rewards in multiple objectives to give relevant search results. Our algorithm learns to predict rewards in multiple objectives based on contextual information (combining the Upper Confidence Bound algorithm for multi-armed contextual bandits with neural network embeddings), as well as learns how a user weights the multiple objectives. Our empirical results reveal that the ranked lists generated by MOU-UCB lead to better click-through rates, compared to approaches that do not learn the utility function over multiple reward objectives.

    • #5278
      Perturbed-History Exploration in Stochastic Multi-Armed Bandits
      Branislav Kveton, Csaba Szepesvári, Mohammad Ghavamzadeh, Craig Boutilier
      Details | PDF
      Online Learning 1

      We propose an online algorithm for cumulative regret minimization in a stochastic multi-armed bandit. The algorithm adds O(t) i.i.d. pseudo-rewards to its history in round t and then pulls the arm with the highest average reward in its perturbed history. Therefore, we call it perturbed-history exploration (PHE). The pseudo-rewards are carefully designed to offset potentially underestimated mean rewards of arms with a high probability. We derive near-optimal gap-dependent and gap-free bounds on the n-round regret of PHE. The key step in our analysis is a novel argument that shows that randomized Bernoulli rewards lead to optimism. Finally, we empirically evaluate PHE and show that it is competitive with state-of-the-art baselines.
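
      A minimal sketch of the perturbed-history idea for Bernoulli arms follows: before each choice, a number of random pseudo-rewards proportional to an arm's history length is mixed into that history, and the arm with the highest perturbed average is pulled. The Bernoulli(0.5) pseudo-reward distribution, the per-arm perturbation amount and all constants here are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

# Perturbed-history exploration sketch for Bernoulli arms.
rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7]
pulls = np.ones(len(true_means))          # pull each arm once to initialize
rewards = np.array([rng.binomial(1, m) for m in true_means], dtype=float)
a = 1.0                                   # perturbation scale (assumed)

for t in range(1000):
    perturbed_means = np.empty(len(true_means))
    for i in range(len(true_means)):
        n_pseudo = int(np.ceil(a * pulls[i]))
        pseudo = rng.binomial(n_pseudo, 0.5)                 # pseudo-rewards mixed into the history
        perturbed_means[i] = (rewards[i] + pseudo) / (pulls[i] + n_pseudo)
    arm = int(np.argmax(perturbed_means))
    rewards[arm] += rng.binomial(1, true_means[arm])
    pulls[arm] += 1

print(pulls)   # most pulls should concentrate on the best arm (mean 0.7)
```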

    • #5548
      Unifying the Stochastic and the Adversarial Bandits with Knapsack
      Anshuka Rangi, Massimo Franceschetti, Long Tran-Thanh
      Details | PDF
      Online Learning 1

      This work investigates the adversarial Bandits with Knapsack (BwK) learning problem, where a player repeatedly chooses to perform an action, pays the corresponding cost of the action, and receives a reward associated with the action. The player is constrained by the maximum budget that can be spent to perform the actions, and the rewards and the costs of these actions are assigned by an adversary. This setting is studied in terms of expected regret, defined as the difference between the total expected reward per unit cost corresponding to the best fixed action and the total expected reward per unit cost of the learning algorithm. We propose a novel algorithm, EXP3.BwK, and show that the expected regret of the algorithm is order optimal in the budget. We then propose another algorithm, EXP3++.BwK, which is order optimal in the adversarial BwK setting and incurs an almost optimal expected regret in the stochastic BwK setting where the rewards and the costs are drawn from unknown underlying distributions. These results are then extended to a more general online learning setting, by designing another algorithm, EXP3++.LwK, and providing its performance guarantees. Finally, we investigate the scenario where the costs of the actions are large and comparable to the budget. We show that for the adversarial setting, the achievable regret bounds scale at least linearly with the maximum cost for any learning algorithm, and are significantly worse in comparison to the case of having costs bounded by a constant, which is a common assumption in the BwK literature.

    • #2097
      Multi-Objective Generalized Linear Bandits
      Shiyin Lu, Guanghui Wang, Yao Hu, Lijun Zhang
      Details | PDF
      Online Learning 1

      In this paper, we study the multi-objective bandits (MOB) problem, where a learner repeatedly selects one arm to play and then receives a reward vector consisting of multiple objectives. MOB has found many real-world applications as varied as online recommendation and network routing. On the other hand, these applications typically contain contextual information that can guide the learning process, which, however, is ignored by most existing work. To utilize this information, we associate each arm with a context vector and assume the reward follows the generalized linear model (GLM). We adopt the notion of Pareto regret to evaluate the learner's performance and develop a novel algorithm for minimizing it. The essential idea is to apply a variant of the online Newton step to estimate model parameters, based on which we utilize the upper confidence bound (UCB) policy to construct an approximation of the Pareto front, and then uniformly at random choose one arm from the approximate Pareto front. Theoretical analysis shows that the proposed algorithm achieves an \tilde O(d\sqrt{T}) Pareto regret, where T is the time horizon and d is the dimension of contexts, which matches the optimal result for the single-objective contextual bandit problem. Numerical experiments demonstrate the effectiveness of our method.
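
      The arm-selection rule above relies on an (approximate) Pareto front over vector-valued reward estimates. The sketch below shows only the plain Pareto-front computation and the uniform choice from it, with invented per-arm estimates; the UCB construction and online Newton step are omitted, and all names are illustrative.

```python
import numpy as np

# Pareto front over estimated reward vectors: arm i is dominated if some other arm
# is at least as good in every objective and strictly better in at least one.
rng = np.random.default_rng(0)
estimates = np.array([[0.8, 0.2],    # invented per-arm estimates for two objectives
                      [0.5, 0.6],
                      [0.4, 0.5],    # dominated by the arm above
                      [0.1, 0.9]])

def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

front = pareto_front(estimates)
chosen_arm = rng.choice(front)        # choose uniformly at random from the approximate front
print(front, chosen_arm)
```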

    Wednesday 14 11:00 - 12:30 ML|C - Classification 4 (2603-2604)

    Chair: Miao Xu
    • #2994
      SPAGAN: Shortest Path Graph Attention Network
      Yiding Yang, Xinchao Wang, Mingli Song, Junsong Yuan, Dacheng Tao
      Details | PDF
      Classification 4

      Graph convolutional networks (GCN) have recently demonstrated their potential in analyzing non-grid structure data that can be represented as graphs. The core idea is to encode the local topology of a graph, via convolutions, into the feature of a center node. In this paper, we propose a novel GCN model, which we term as Shortest Path Graph Attention Network (SPAGAN). Unlike conventional GCN models that carry out node-based attentions, on either first-order neighbors or random higher-order ones, the proposed SPAGAN conducts path-based attention that explicitly accounts for the influence of a sequence of nodes yielding the minimum cost, or shortest path, between the center node and its higher-order neighbors. SPAGAN therefore allows for a more informative and intact exploration of the graph structure and further the more effective aggregation of information from distant neighbors, as compared to node-based GCN methods. We test SPAGAN for the downstream classification task on several standard datasets, and achieve performances superior to the state of the art.

    • #3128
      Learn Smart with Less: Building Better Online Decision Trees with Fewer Training Examples
      Ariyam Das, Jin Wang, Sahil M. Gandhi, Jae Lee, Wei Wang, Carlo Zaniolo
      Details | PDF
      Classification 4

      Online decision tree models are extensively used in many industrial machine learning applications for real-time classification tasks. These models are highly accurate, scalable and easy to use in practice. The Very Fast Decision Tree (VFDT) is the classic online decision tree induction model that has been widely adopted due to its theoretical guarantees as well as competitive performance. However, VFDT and its variants solely rely on conservative statistical measures like Hoeffding bound to incrementally grow the tree. This makes these models extremely circumspect and limits their ability to learn fast. In this paper, we efficiently employ statistical resampling techniques to build an online tree faster using fewer examples. We first theoretically show that a naive implementation of resampling techniques like non-parametric bootstrap does not scale due to large memory and computational overheads. We mitigate this by proposing a robust memory-efficient bootstrap simulation heuristic (Mem-ES) that successfully expedites the learning process. Experimental results on both synthetic data and large-scale real world datasets demonstrate the efficiency and effectiveness of our proposed technique.

    • #3915
      Discrete Binary Coding based Label Distribution Learning
      Ke Wang, Xin Geng
      Details | PDF
      Classification 4

      Label Distribution Learning (LDL) is a general learning paradigm in machine learning, which includes both single-label learning (SLL) and multi-label learning (MLL) as its special cases. Recently, many LDL algorithms have been proposed to handle different application tasks such as facial age estimation, head pose estimation and visual sentiment distribution prediction. However, the training time complexity of most existing LDL algorithms is too high, which makes them inapplicable to large-scale LDL. In this paper, we propose a novel LDL method to address this issue, termed Discrete Binary Coding based Label Distribution Learning (DBC-LDL). Specifically, we design an efficient discrete coding framework to learn binary codes for instances. Furthermore, both the pair-wise semantic similarities and the original label distributions are integrated into this framework to learn highly discriminative binary codes. In addition, a fast approximate nearest neighbor (ANN) search strategy is utilized to predict label distributions for testing instances. Experimental results on five real-world datasets demonstrate its superior performance over several state-of-the-art LDL methods with lower time cost.
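
      A minimal sketch of the prediction step suggested above: encode instances as binary codes and predict a test instance's label distribution by averaging the distributions of its Hamming-nearest training codes. The codes and distributions below are randomly invented placeholders; learning the codes, which is the core of DBC-LDL, is not shown.

```python
import numpy as np

# Predict a label distribution by averaging the distributions of the k training
# instances whose binary codes are closest in Hamming distance. Data are invented.
rng = np.random.default_rng(0)
train_codes = rng.integers(0, 2, size=(100, 16))      # learned binary codes (placeholder)
train_dists = rng.dirichlet(np.ones(4), size=100)     # label distributions over 4 labels

def predict_distribution(query_code, k=5):
    hamming = np.count_nonzero(train_codes != query_code, axis=1)
    nearest = np.argsort(hamming)[:k]
    return train_dists[nearest].mean(axis=0)           # averaged label distribution

query = rng.integers(0, 2, size=16)
print(predict_distribution(query))                     # sums to 1 (a valid distribution)
```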

    • #3982
      Learning for Tail Label Data: A Label-Specific Feature Approach
      Tong Wei, Wei-Wei Tu, Yu-Feng Li
      Details | PDF
      Classification 4

      Tail label data (TLD) is prevalent in real-world tasks, and large-scale multi-label learning (LMLL) is its major learning scheme. Previous LMLL studies typically need to additionally take into account extensive head label data (HLD), and thus fail to guide the learning behavior of TLD. In many applications such as recommender systems, however, the prediction of tail labels is very necessary, since it provides very important supplementary information. We call this kind of problem tail label learning. In this paper, we propose a novel method for the tail label learning problem. Based on the observation that the raw feature representation in LMLL data usually benefits HLD, which may not be suitable for TLD, we construct effective and rich label-specific features by exploring the labeled data distribution and leveraging label correlations. Specifically, we employ clustering analysis to explore discriminative features for each tail label, replacing the original high-dimensional and sparse features. In addition, due to the scarcity of positive examples of TLD, we encode knowledge from HLD by exploiting label correlations to enhance the label-specific features. Experimental results verify the superiority of the proposed method in terms of performance on TLD.

    • #4952
      Spatio-Temporal Attentive RNN for Node Classification in Temporal Attributed Graphs
      Dongkuan Xu, Wei Cheng, Dongsheng Luo, Xiao Liu, Xiang Zhang
      Details | PDF
      Classification 4

      Node classification in graph-structured data aims to classify the nodes where labels are only available for a subset of nodes. This problem has attracted considerable research efforts in recent years. In real-world applications, both graph topology and node attributes evolve over time. Existing techniques, however, mainly focus on static graphs and lack the capability to simultaneously learn both temporal and spatial/structural features. Node classification in temporal attributed graphs is challenging for two major aspects. First, effectively modeling the spatio-temporal contextual information is hard. Second, as temporal and spatial dimensions are entangled, to learn the feature representation of one target node, it’s desirable and challenging to differentiate the relative importance of different factors, such as different neighbors and time periods. In this paper, we propose STAR, a spatio-temporal attentive recurrent network model, to deal with the above challenges. STAR extracts the vector representation of neighborhood by sampling and aggregating local neighbor nodes. It further feeds both the neighborhood representation and node attributes into a gated recurrent unit network to jointly learn the spatio-temporal contextual information. On top of that, we take advantage of the dual attention mechanism to perform a thorough analysis on the model interpretability. Extensive experiments on real datasets demonstrate the effectiveness of the STAR model.

    • #2572
      Worst-Case Discriminative Feature Selection
      Shuangli Liao, Quanxue Gao, Feiping Nie, Yang Liu, Xiangdong Zhang
      Details | PDF
      Classification 4

      Feature selection plays a critical role in data mining, driven by increasing feature dimensionality in target problems. In this paper, we propose a new criterion for discriminative feature selection, worst-case discriminative feature selection (WDFS). Unlike Fisher Score and other methods based on discriminative criteria that consider the overall (or average) separation of the data, WDFS adopts a new perspective, the worst-case view, which is arguably more suitable for classification applications. Specifically, WDFS directly maximizes the ratio of the minimum between-class variance over all class pairs to the maximum within-class variance, and thus it duly considers the separation of all classes. Although we take a greedy strategy, selecting one feature at a time, the method is very easy to implement. Moreover, we also utilize the correlation between features to help reduce redundancy and extend WDFS to uncorrelated WDFS (UWDFS). To evaluate the effectiveness of the proposed algorithm, we conduct classification experiments on many real data sets. In the experiments, we calculate the correlation coefficients using either the original features or the score vectors of features over all class pairs, and analyze the results in both settings. Experimental results demonstrate the effectiveness of WDFS and UWDFS.
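      A direct reading of the stated criterion can be written down in a few lines: score each feature by the minimum between-class separation over all class pairs divided by the maximum within-class variance, then keep the top-scoring features. This is only an illustrative per-feature version of the criterion, not the authors' exact implementation (and it omits the UWDFS decorrelation step).

```python
import numpy as np
from itertools import combinations

def worst_case_score(x, y):
    """Worst-case discriminative score of a single feature x (1-D array):
    min over class pairs of the squared mean gap, divided by the largest
    within-class variance."""
    classes = np.unique(y)
    means = {c: x[y == c].mean() for c in classes}
    wc = max(x[y == c].var() for c in classes)                 # worst within-class
    bc = min((means[a] - means[b]) ** 2 for a, b in combinations(classes, 2))
    return bc / (wc + 1e-12)

def rank_features(X, y, k):
    """Rank features by their individual worst-case scores and keep the k
    best, mirroring the one-feature-at-a-time strategy in the abstract."""
    scores = np.array([worst_case_score(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```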

    Wednesday 14 11:00 - 12:30 NLP|D - Dialogue (2605-2606)

    Chair: Magnini Bernardo
    • #28
      Exploiting Persona Information for Diverse Generation of Conversational Responses
      Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, Ting Liu
      Details | PDF
      Dialogue

      In human conversations, people can easily carry on and sustain a conversation because they keep their personalities in mind. Given conversational context together with persona information, how a chatbot should exploit this information to generate diverse and sustainable conversations is still a non-trivial task. Previous work on persona-based conversational models successfully makes use of predefined persona information and has shown great promise in delivering more realistic responses. However, such models all learn under the assumption that, given a source input, there is only one target response, whereas in human conversations there are many appropriate responses to a given input message. In this paper, we propose a memory-augmented architecture to exploit persona information from context, combined with a conditional variational autoencoder model, to generate diverse and sustainable conversations. We evaluate the proposed model on a benchmark persona-chat dataset. Both automatic and human evaluations show that our model can deliver more diverse and more engaging persona-based responses than baseline approaches.

    • #1946
      Generating Multiple Diverse Responses with Multi-Mapping and Posterior Mapping Selection
      Chaotao Chen, Jinhua Peng, Fan Wang, Jun Xu, Hua Wu
      Details | PDF
      Dialogue

      In human conversation, an input post is open to multiple potential responses, which is typically regarded as a one-to-many problem. Promising approaches mainly incorporate multiple latent mechanisms to build the one-to-many relationship. However, without accurate selection of the latent mechanism corresponding to the target response during training, these methods suffer from a rough optimization of latent mechanisms. In this paper, we propose a multi-mapping mechanism to better capture the one-to-many relationship, where multiple mapping modules are employed as latent mechanisms to model the semantic mappings from an input post to its diverse responses. For accurate optimization of latent mechanisms, a posterior mapping selection module is designed to select the corresponding mapping module according to the target response for further optimization. We also introduce an auxiliary matching loss to facilitate the optimization of posterior mapping selection. Empirical results demonstrate the superiority of our model in generating multiple diverse and informative responses over the state-of-the-art methods.

    • #1987
      Learning to Select Knowledge for Response Generation in Dialog Systems
      Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, Hua Wu
      Details | PDF
      Dialogue

      End-to-end neural models for intelligent dialogue systems suffer from the problem of generating uninformative responses. Various methods have been proposed to generate more informative responses by leveraging external knowledge. However, little previous work has focused on selecting appropriate knowledge during the learning process. Inappropriate selection of knowledge can prevent the model from learning to make full use of the knowledge. Motivated by this, we propose an end-to-end neural model which employs a novel knowledge selection mechanism where both prior and posterior distributions over knowledge are used to facilitate knowledge selection. Specifically, a posterior distribution over knowledge is inferred from both utterances and responses, which ensures the appropriate selection of knowledge during the training process. Meanwhile, a prior distribution, which is inferred from utterances only, is used to approximate the posterior distribution so that appropriate knowledge can be selected even without responses during the inference process. Compared with previous work, our model can better incorporate appropriate knowledge in response generation. Both automatic and human evaluations verify the superiority of our model over previous baselines.
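      The core of this kind of prior/posterior knowledge selection can be sketched as two attention distributions over knowledge candidates plus a KL term that pulls the prior (utterance-only) toward the posterior (utterance plus response). The PyTorch fragment below is a hedged illustration of that idea; the tensor names, shapes and simple dot-product scoring are assumptions, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def knowledge_selection_losses(h_utt, h_resp, k_emb):
    """h_utt, h_resp: encoded utterance/response vectors (batch x d);
    k_emb: candidate knowledge encodings (batch x n_knowledge x d).
    Returns the KL(posterior || prior) regularizer and the soft-selected
    knowledge vector used downstream by the decoder."""
    post_logits = torch.einsum('bd,bkd->bk', h_utt + h_resp, k_emb)
    prior_logits = torch.einsum('bd,bkd->bk', h_utt, k_emb)
    post = F.softmax(post_logits, dim=-1)
    log_post = F.log_softmax(post_logits, dim=-1)
    log_prior = F.log_softmax(prior_logits, dim=-1)
    kl = (post * (log_post - log_prior)).sum(-1).mean()   # KL(posterior || prior)
    selected = torch.einsum('bk,bkd->bd', post, k_emb)    # knowledge for decoding
    return kl, selected
```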

    • #2334
      GSN: A Graph-Structured Network for Multi-Party Dialogues
      Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, Rui Yan
      Details | PDF
      Dialogue

      Existing neural models for dialogue response generation assume that utterances are sequentially organized. However, many real-world dialogues involve multiple interlocutors (i.e., multi-party dialogues), where the assumption does not hold as utterances from different interlocutors can occur ``in parallel.'' This paper generalizes existing sequence-based models to a Graph-Structured neural Network (GSN) for dialogue modeling. The core of GSN is a graph-based encoder that can model the information flow along the graph-structured dialogues (two-party sequential dialogues are a special case). Experimental results show that GSN significantly outperforms existing sequence-based models.

    • #2504
      Dual Visual Attention Network for Visual Dialog
      Dan Guo, Hui Wang, Meng Wang
      Details | PDF
      Dialogue

      Visual dialog is a challenging task that involves multi-round semantic transformations between vision and language. This paper aims to address cross-modal semantic correlation for visual dialog. Motivated by the observation that Vg (global vision), Vl (local vision), Q (question) and H (history) are inseparably related, the paper proposes a novel Dual Visual Attention Network (DVAN) to realize the mapping (Vg, Vl, Q, H) --> A. DVAN is a three-stage query-adaptive attention model. To acquire an accurate A (answer), it first applies textual attention, which imposes the question on the history to pick out the related context H'. Then, based on Q and H', it applies separate visual attentions to discover related global visual hints Vg' and local object-based visual hints Vl'. Next, a dual crossing visual attention is proposed: Vg' and Vl' are mutually embedded to learn the complementarity of the visual semantics. Finally, the attended textual and visual features are combined to infer the answer. Experimental results on the VisDial v0.9 and v1.0 datasets validate the effectiveness of the proposed approach.

    • #2665
      A Document-grounded Matching Network for Response Selection in Retrieval-based Chatbots
      Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, Rui Yan
      Details | PDF
      Dialogue

      We present a document-grounded matching network (DGMN) for response selection that can power a knowledge-aware retrieval-based chatbot system. The challenges of building such a model lie in how to ground conversation contexts with background documents and how to recognize important information in the documents for matching. To overcome these challenges, DGMN fuses information in a document and a context into representations of each other, and dynamically determines whether grounding is necessary and how important different parts of the document and the context are, through hierarchical interaction with a response at the matching step. Empirical studies on two public data sets indicate that DGMN can significantly improve upon state-of-the-art methods and at the same time enjoys good interpretability.

    Wednesday 14 11:00 - 12:30 CV|RDCIMRSI - Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3 (2501-2502)

    Chair: Shiliang Zhang
    • #2079
      Pedestrian Attribute Recognition by Joint Visual-semantic Reasoning and Knowledge Distillation
      Qiaozhe Li, Xin Zhao, Ran He, Kaiqi Huang
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3

      Pedestrian attribute recognition in surveillance is a challenging task in computer vision due to significant pose variation, viewpoint change and poor image quality. To achieve effective recognition, this paper presents a graph-based global reasoning framework to jointly model potential visual-semantic relations of attributes and distill auxiliary human parsing knowledge to guide the relational learning. The reasoning framework models attribute groups on a graph and learns a projection function to adaptively assign local visual features to the nodes of the graph. After feature projection, graph convolution is utilized to perform global reasoning between the attribute groups to model their mutual dependencies. Then, the learned node features are projected back to visual space to facilitate knowledge transfer. An additional regularization term is proposed by distilling human parsing knowledge from a pre-trained teacher model to enhance feature representations. The proposed framework is verified on three large scale pedestrian attribute datasets including PETA, RAP, and PA-100k. Experiments show that our method achieves state-of-the-art results.

    • #3776
      Low Shot Box Correction for Weakly Supervised Object Detection
      Tianxiang Pan, Bin Wang, Guiguang Ding, Jungong Han, Junhai Yong
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3

      Weakly supervised object detection (WSOD) has been widely studied, but the accuracy of state-of-the-art methods remains far lower than that of strongly supervised methods. One major reason for this huge gap is the incomplete box detection problem, which arises because most previous WSOD models are structured on classification networks and therefore tend to recognize the most discriminative parts instead of complete bounding boxes. To solve this problem, we define a low-shot weakly supervised object detection task and propose a novel low-shot box correction network to address it. The proposed task enables training object detectors on a large dataset in which all images have image-level annotations but only a small portion, or a few shots, have box annotations. Given the low-shot box annotations, we use a novel box correction network to transform the incomplete boxes into complete ones. Extensive empirical evidence shows that our proposed method yields state-of-the-art detection accuracy under various settings on the PASCAL VOC benchmark.

    • #3835
      Transferable Adversarial Attacks for Image and Video Object Detection
      Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3

      Identifying adversarial examples is beneficial for understanding deep networks and developing robust models. However, existing attacking methods for image object detection have two limitations: weak transferability---the generated adversarial examples often have a low success rate to attack other kinds of detection methods, and high computation cost---they need much time to deal with video data, where many frames need to be perturbed. To address these issues, we present a generative method to obtain adversarial images and videos, thereby significantly reducing the processing time. To enhance transferability, we manipulate the feature maps extracted by a feature network, which usually constitutes the basis of object detectors. Our method is based on the Generative Adversarial Network (GAN) framework, where we combine a high-level class loss and a low-level feature loss to jointly train the adversarial example generator. Experimental results on PASCAL VOC and ImageNet VID datasets show that our method efficiently generates image and video adversarial examples, and more importantly, these adversarial examples have better transferability, therefore being able to simultaneously attack two kinds of representative object detection models: proposal-based models like Faster-RCNN and regression-based models like SSD.

    • #4376
      Equally-Guided Discriminative Hashing for Cross-modal Retrieval
      Yufeng Shi, Xinge You, Feng Zheng, Shuo Wang, Qinmu Peng
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3

      Cross-modal hashing aims to project data from two modalities into a common Hamming space to perform cross-modal retrieval efficiently. Despite satisfactory performance on real applications, existing methods cannot simultaneously preserve semantic structure, so as to maintain inter-class relationships, and improve discriminability, so as to aggregate intra-class samples, which limits retrieval performance. To handle this problem, we propose Equally-Guided Discriminative Hashing (EGDH), which jointly takes semantic structure and discriminability into consideration. Specifically, we discover the connection between semantic-structure-preserving and discriminative methods. Based on this connection, we directly encode multi-label annotations, which act as high-level semantic features, to build a common semantic-structure-preserving classifier. With the common classifier guiding the learning of the hash functions of the different modalities equally, the hash codes of samples are intra-class aggregated and inter-class relationship preserving. Experimental results on two benchmark datasets demonstrate the superiority of EGDH compared with the state of the art.
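      The EGDH training objective is not reproduced here; the sketch below only shows the generic query-time machinery that cross-modal hashing methods, including this one, rely on: binarize the network outputs and rank the gallery of the other modality by Hamming distance.

```python
import numpy as np

def binarize(real_codes):
    """Standard relaxation-to-binary step: threshold the real-valued
    network outputs at zero (codes stored as 0/1)."""
    return (real_codes > 0).astype(np.uint8)

def cross_modal_retrieve(query_code, gallery_codes, top_k=100):
    """Rank gallery items (e.g., texts) for a query from the other modality
    (e.g., an image) by Hamming distance between binary codes."""
    dists = np.count_nonzero(gallery_codes != query_code, axis=1)
    return np.argsort(dists, kind='stable')[:top_k]
```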

    • #151
      Color-Sensitive Person Re-Identification
      Guan'an Wang, Yang Yang, Jian Cheng, Jinqiao Wang, Zengguang Hou
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3

      Recent deep Re-ID models mainly focus on learning high-level semantic features, while failing to explicitly explore color information which is one of the most important cues for person Re-ID. In this paper, we propose a novel Color-Sensitive Re-ID to take full advantage of color information. On one hand, we train our model with real and fake images. By using the extra fake images, more color information can be exploited and it can avoid overfitting during training. On the other hand, we also train our model with images of the same person with different colors. By doing so, features can be forced to focus on the color difference in regions. To generate fake images with specified colors, we propose a novel Color Translation GAN (CTGAN) to learn mappings between different clothing colors and preserve identity consistency among the same clothing color. Extensive evaluations on two benchmark datasets show that our approach significantly outperforms state-of-the-art  Re-ID models.

    • #168
      Graph Convolutional Network Hashing for Cross-Modal Retrieval
      Ruiqing Xu, Chao Li, Junchi Yan, Cheng Deng, Xianglong Liu
      Details | PDF
      Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation 3

      Deep network based cross-modal retrieval has recently made significant progress. However, bridging the modality gap to further enhance retrieval accuracy still remains a crucial bottleneck. In this paper, we propose a Graph Convolutional Hashing (GCH) approach, which learns modality-unified binary codes via an affinity graph. An end-to-end deep architecture is constructed with three main components: a semantic encoder module, two feature encoding networks, and a graph convolutional network (GCN). We design the semantic encoder as a teacher module that guides the feature encoding process (the student module) in exploiting semantic information. Furthermore, the GCN is utilized to explore the inherent similarity structure among data points, which helps to generate discriminative hash codes. Extensive experiments on three benchmark datasets demonstrate that the proposed GCH outperforms the state-of-the-art methods.

    Wednesday 14 11:00 - 12:30 PS|TFP - Theoretical Foundations of Planning (2503-2504)

    Chair: Sylvie Thiebaux
    • #2816
      Partitioning Techniques in LTLf Synthesis
      Lucas Martinelli Tabajara, Moshe Y. Vardi
      Details | PDF
      Theoretical Foundations of Planning

      Decomposition is a general principle in computational thinking, aiming at decomposing a problem instance into easier subproblems. Indeed, decomposing a transition system into a partitioned transition relation was critical to scaling BDD-based model checking to large state spaces. Since then, it has become a standard technique for dealing with related problems, such as Boolean synthesis. More recently, partitioning has begun to be explored in the synthesis of reactive systems. LTLf synthesis, a finite-horizon version of reactive synthesis with applications in areas such as robotics, seems like a promising candidate for partitioning techniques. After all, the state of the art is based on a BDD-based symbolic algorithm similar to those from model checking, and partitioning could be a potential solution to the current bottleneck of this approach, which is the construction of the state space. In this work, however, we expose fundamental limitations of partitioning that hinder its effective application to symbolic LTLf synthesis. We not only provide evidence for this fact through an extensive experimental evaluation, but also perform an in-depth analysis to identify the reason for these results. We trace the issue to an overall increase in the size of the explored state space, caused by an inability of partitioning to fully exploit state-space minimization, which has a crucial effect on performance. We conclude that more specialized decomposition techniques are needed for LTLf synthesis which take into account the effects of minimization.

    • #6582
      Dynamic logic of parallel propositional assignments and its applications to planning
      Andreas Herzig, Frédéric Maris, Julien Vianey
      Details | PDF
      Theoretical Foundations of Planning

      We introduce a dynamic logic with parallel composition and two kinds of nondeterministic composition, exclusive and inclusive. We show PSPACE completeness of both the model checking and the satisfiability problem and apply our logic to sequential and parallel classical planning where actions have conditional effects.

    • #733
      Planning for LTLf /LDLf Goals in Non-Markovian Fully Observable Nondeterministic Domains
      Ronen I. Brafman, Giuseppe De Giacomo
      Details | PDF
      Theoretical Foundations of Planning

      In this paper, we investigate non-Markovian Nondeterministic Fully Observable Planning Domains (NMFONDs), variants of Nondeterministic Fully Observable Planning Domains (FONDs) where the next state is determined by the full history leading to the current state. In particular, we introduce TFONDs which are NMFONDs where conditions on the history are succinctly and declaratively specified using the linear-time temporal logic on finite traces LTLf and its extension LDLf. We provide algorithms for planning in TFONDs for general LTLf/LDLf goals, and establish tight complexity bounds w.r.t. the domain representation and the goal, separately. We also show that TFONDs are able to capture all NMFONDs in which the dependency on the history is "finite state". Finally, we show that TFONDs also capture Partially Observable Nondeterministic Planning Domains (PONDs), but without referring to unobservable variables.

    • #1561
      Steady-State Policy Synthesis for Verifiable Control
      Alvaro Velasquez
      Details | PDF
      Theoretical Foundations of Planning

      In this paper, we introduce the Steady-State Policy Synthesis (SSPS) problem which consists of finding a stochastic decision-making policy that maximizes expected rewards while satisfying a set of asymptotic behavioral specifications. These specifications are determined by the steady-state probability distribution resulting from the Markov chain induced by a given policy. Since such distributions necessitate recurrence, we propose a solution which finds policies that induce recurrent Markov chains within possibly non-recurrent Markov Decision Processes (MDPs). The SSPS problem functions as a generalization of steady-state control, which has been shown to be in PSPACE. We improve upon this result by showing that SSPS is in P via linear programming. Our results are validated using CPLEX simulations on MDPs with over 10000 states. We also prove that the deterministic variant of SSPS is NP-hard.
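      The "in P via linear programming" claim can be made concrete with the standard occupation-measure LP for long-run average reward, in which the steady-state probability of a state is the sum of its state-action variables and the behavioral specifications become linear bounds on those sums. The sketch below is that textbook formulation under a recurrence assumption, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.optimize import linprog

def steady_state_policy(P, R, lower, upper):
    """LP over the state-action occupation measure x[s, a].
    P: transitions, shape (S, A, S); R: rewards, shape (S, A);
    lower/upper: per-state bounds on the steady-state distribution, shape (S,)."""
    S, A, _ = P.shape
    n = S * A
    idx = lambda s, a: s * A + a

    # Flow balance: sum_a x[s',a] - sum_{s,a} P[s,a,s'] x[s,a] = 0, plus sum x = 1.
    A_eq = np.zeros((S + 1, n))
    for sp in range(S):
        for a in range(A):
            A_eq[sp, idx(sp, a)] += 1.0
        for s in range(S):
            for a in range(A):
                A_eq[sp, idx(s, a)] -= P[s, a, sp]
    A_eq[S, :] = 1.0
    b_eq = np.zeros(S + 1); b_eq[S] = 1.0

    # Behavioral specs: lower[s] <= sum_a x[s,a] <= upper[s].
    M = np.zeros((S, n))
    for s in range(S):
        for a in range(A):
            M[s, idx(s, a)] = 1.0
    A_ub = np.vstack([M, -M])
    b_ub = np.concatenate([upper, -lower])

    res = linprog(-R.reshape(-1), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method='highs')
    if not res.success:
        raise ValueError("specifications are infeasible for this MDP")
    x = res.x.reshape(S, A)
    return x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)  # pi(a | s)
```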

    • #10960
      (Sister Conferences Best Papers Track) A Refined Understanding of Cost-optimal Planning with Polytree Causal Graphs
      Christer Bäckström, Peter Jonsson, Sebastian Ordyniak
      Details | PDF
      Theoretical Foundations of Planning

      Complexity analysis based on the causal graphs of planning instances is a highly important research area. In particular, tractability results have led to new methods for constructing domain-independent heuristics. Important early examples of such results were presented by, for instance, Brafman & Domshlak and Katz & Keyder. More general results based on polytrees and bounding certain parameters were subsequently derived by Aghighi et al. and Ståhlberg. We continue this line of research by analyzing cost-optimal planning for instances with a polytree causal graph, bounded domain size and bounded depth. We show that no further restrictions are necessary for tractability, thus generalizing the previous results. Our approach is based on a novel method of closely analysing optimal plans: we recursively decompose the causal graph in a way that allows for bounding the number of variable changes as a function of the depth, using a reordering argument and a comparison with prefix trees of known size. We then transform the planning instances into tree-structured constraint satisfaction instances.

    • #4601
      Reachability and Coverage Planning for Connected Agents
      Tristan Charrier, Arthur Queffelec, Ocan Sankur, François Schwarzentruber
      Details | PDF
      Theoretical Foundations of Planning

      Motivated by the increasing appeal of robots in information-gathering missions, we study multi-agent path planning problems in which the agents must remain interconnected. We model an area by a topological graph specifying the movement and the connectivity constraints of the agents. We study the theoretical complexity of the reachability and the coverage problems of a fleet of connected agents on various classes of topological graphs. We establish the complexity of these problems on known classes, and introduce a new class called sight-moveable graphs which admit efficient algorithms.

    Wednesday 14 11:00 - 12:30 ML|DM - Data Mining 5 (2505-2506)

    Chair: Hau Chan
    • #1007
      RecoNet: An Interpretable Neural Architecture for Recommender Systems
      Francesco Fusco, Michalis Vlachos, Vasileios Vasileiadis, Kathrin Wardatzky, Johannes Schneider
      Details | PDF
      Data Mining 5

      Neural systems offer high predictive accuracy but are plagued by long training times and low interpretability. We present a simple neural architecture for recommender systems that lifts several of these shortcomings. Firstly, the approach has a high predictive power that is comparable to state-of-the-art recommender approaches. Secondly, owing to its simplicity, the trained model can be interpreted easily because it provides the individual contribution of each input feature to the decision. Our method is three orders of magnitude faster than general-purpose explanatory approaches, such as LIME. Finally, thanks to its design, our architecture addresses cold-start issues, and therefore the model does not require retraining in the presence of new users.
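      The interpretability claim is easiest to see on a model whose score is an explicit sum over input features, so each feature's contribution is just its term in that sum. The linear sketch below illustrates only that property under assumed names; it is not RecoNet's actual architecture.

```python
import numpy as np

def score_with_contributions(x, item_weights, bias=0.0):
    """Score one item for a user feature vector x and report per-feature
    contributions directly, with no post-hoc explainer such as LIME."""
    contributions = item_weights * x          # one additive term per feature
    score = contributions.sum() + bias
    ranked = np.argsort(contributions)[::-1]  # most influential features first
    return score, contributions, ranked
```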

    • #1044
      GSTNet: Global Spatial-Temporal Network for Traffic Flow Prediction
      Shen Fang, Qi Zhang, Gaofeng Meng, Shiming Xiang, Chunhong Pan
      Details | PDF
      Data Mining 5

      Predicting traffic flow on traffic networks is a very challenging task, due to the complicated and dynamic spatial-temporal dependencies between different nodes on the network. The traffic flow renders two types of temporal dependencies, including short-term neighboring and long-term periodic dependencies. What's more, the spatial correlations over different nodes are both local and non-local. To capture the global dynamic spatial-temporal correlations, we propose a Global Spatial-Temporal Network (GSTNet), which consists of several layers of spatial-temporal blocks. Each block contains a multi-resolution temporal module and a global correlated spatial module in sequence, which can simultaneously extract the dynamic temporal dependencies and the global spatial correlations. Extensive experiments on the real world datasets verify the effectiveness and superiority of the proposed method on both the public transportation network and the road network.

    • #1050
      Graph Contextualized Self-Attention Network for Session-based Recommendation
      Chengfeng Xu, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Fuzhen Zhuang, Junhua Fang, Xiaofang Zhou
      Details | PDF
      Data Mining 5

      Session-based recommendation, which aims to predict the user's immediate next action based on anonymous sessions, is a key task in many online services (e.g., e-commerce, media streaming).  Recently, Self-Attention Network (SAN) has achieved significant success in various sequence modeling tasks without using either recurrent or convolutional network. However, SAN lacks local dependencies that exist over adjacent items and limits its capacity for learning contextualized representations of items in sequences.  In this paper, we propose a graph contextualized self-attention model (GC-SAN), which utilizes both graph neural network and self-attention mechanism, for session-based recommendation. In GC-SAN, we dynamically construct a graph structure for session sequences and capture rich local dependencies via graph neural network (GNN).  Then each session learns long-range dependencies by applying the self-attention mechanism. Finally, each session is represented as a linear combination of the global preference and the current interest of that session. Extensive experiments on two real-world datasets show that GC-SAN outperforms state-of-the-art methods consistently.
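      The self-attention half of GC-SAN is the standard scaled dot-product mechanism applied to the item representations of a session. The sketch below shows only that building block; the GNN, the learned projections, multi-head splitting and the final combination of global preference with current interest are omitted.

```python
import numpy as np

def self_attention(H):
    """Single-head scaled dot-product self-attention over one session's item
    representations H (session_length x d)."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)                      # pairwise relevance
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ H                                 # contextualized items
```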

    • #2999
      Outlier Detection for Time Series with Recurrent Autoencoder Ensembles
      Tung Kieu, Bin Yang, Chenjuan Guo, Christian S. Jensen
      Details | PDF
      Data Mining 5

      We propose two solutions to outlier detection in time series based on recurrent autoencoder ensembles. The solutions exploit autoencoders built using sparsely-connected recurrent neural networks (S-RNNs). Such networks make it possible to generate multiple autoencoders with different neural network connection structures. The two solutions are ensemble frameworks, specifically an independent framework and a shared framework, both of which combine multiple S-RNN based autoencoders to enable outlier detection.  This ensemble-based approach aims to reduce the effects of some autoencoders being overfitted to outliers, this way improving overall detection quality. Experiments with two large real-world time series data sets, including univariate and multivariate time series, offer insight into the design properties of the proposed frameworks and demonstrate that the resulting solutions are capable of outperforming both baselines and the state-of-the-art methods.

    • #5978
      Similarity Preserving Representation Learning for Time Series Clustering
      Qi Lei, Jinfeng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon
      Details | PDF
      Data Mining 5

      A considerable number of clustering algorithms take instance-feature matrices as their inputs. As such, they cannot directly analyze time series data due to its temporal nature, usually unequal lengths, and complex properties. This is a great pity since many of these algorithms are effective, robust, efficient, and easy to use. In this paper, we bridge this gap by proposing an efficient representation learning framework that is able to convert a set of time series with various lengths to an instance-feature matrix. In particular, we guarantee that the pairwise similarities between time series are well preserved after the transformation, thus the learned feature representation is particularly suitable for the time series clustering task. Given a set of n time series, we first construct an n x n partially-observed similarity matrix by randomly sampling O(n log n) pairs of time series and computing their pairwise similarities. We then propose an efficient algorithm that solves a non-convex and NP-hard problem to learn new features based on the partially-observed similarity matrix. By conducting extensive empirical studies, we demonstrate that the proposed framework is much more effective, efficient, and flexible compared to other state-of-the-art clustering methods.
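      The first step described in the abstract is simple enough to sketch: sample on the order of n log n random pairs and evaluate a user-supplied similarity (e.g., a DTW-based score) only on those pairs, leaving the rest unobserved. The subsequent non-convex feature-learning step of the paper is not shown.

```python
import numpy as np

def sample_similarity_matrix(series, sim, n_pairs=None, seed=0):
    """Build a partially-observed n x n similarity matrix: observed entries
    come from `sim(a, b)` on randomly sampled pairs, unobserved entries are NaN."""
    rng = np.random.default_rng(seed)
    n = len(series)
    if n_pairs is None:
        n_pairs = int(np.ceil(n * np.log(n)))
    S = np.full((n, n), np.nan)
    np.fill_diagonal(S, 1.0)
    for _ in range(n_pairs):
        i, j = rng.integers(0, n, size=2)
        S[i, j] = S[j, i] = sim(series[i], series[j])
    return S
```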

    • #6128
      DyAt Nets: Dynamic Attention Networks for State Forecasting in Cyber-Physical Systems
      Nikhil Muralidhar, Sathappan Muthiah, Naren Ramakrishnan
      Details | PDF
      Data Mining 5

      Multivariate time series forecasting is an important task in state forecasting for cyber-physical systems (CPS). State forecasting in CPS is imperative for optimal planning of system energy utility and understanding normal operational characteristics of the system, thus enabling anomaly detection. Forecasting models can also be used to identify sub-optimal or worn out components and are thereby useful for overall system monitoring. Most existing work only performs single-step forecasting, but in CPS it is imperative to forecast the next sequence of system states (i.e., curve forecasting). In this paper, we propose DyAt (Dynamic Attention) networks, a novel deep learning sequence to sequence (Seq2Seq) model with a novel hierarchical attention mechanism for long-term time series state forecasting. We evaluate our method on several CPS state forecasting and electric load forecasting tasks and find that our proposed DyAt models yield a performance improvement of at least 13.69% for the CPS state forecasting task and a performance improvement of at least 18.83% for the electric load forecasting task over other state-of-the-art forecasting baselines. We perform rigorous experimentation with several variants of the DyAt model and demonstrate that the DyAt models indeed learn better representations over the entire course of the long term forecast as compared to their counterparts with or without traditional attention mechanisms. All data and source code have been made available online.

    Wednesday 14 11:00 - 12:30 ML|TAML - Transfer, Adaptation, Multi-task Learning 2 (2401-2402)

    Chair: Boyu Wang
    • #2810
      Metadata-driven Task Relation Discovery for Multi-task Learning
      Zimu Zheng, Yuqi Wang, Quanyu Dai, Huadi Zheng, Dan Wang
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 2

      Task Relation Discovery (TRD), i.e., revealing the relations among tasks, has notable value: it is the key concept underlying Multi-task Learning (MTL) and provides a principled way of identifying redundancies across tasks. However, task relations are usually determined manually by a data scientist, resulting in additional human effort for TRD, while transfer based on brute-force methods or mere training samples may cause negative effects that degrade the learning performance. To avoid negative transfer automatically, our idea is to leverage context attributes that are commonly available in today's systems, i.e., the metadata. In this paper, we, for the first time, introduce metadata into TRD for MTL and propose a novel Metadata Clustering method, which jointly uses historical samples and additional metadata to automatically exploit the true relatedness. It also avoids negative transfer by identifying reusable samples between related tasks. Experimental results on five real-world datasets demonstrate that the proposed method is effective for MTL with TRD, and particularly useful in complicated systems with diverse metadata but insufficient data samples. In general, this study helps automatic relation discovery among partially related tasks and sheds new light on the development of TRD in MTL through the use of metadata as a priori information.

    • #4660
      Group LASSO with Asymmetric Structure Estimation for Multi-Task Learning
      Saullo H. G. Oliveira, André R. Gonçalves, Fernando J. Von Zuben
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 2

      Group LASSO is a widely used regularization that imposes sparsity considering groups of covariates. When used in Multi-Task Learning (MTL) formulations, it makes an underlying assumption that if one group of covariates is not relevant for one or a few tasks, it is also not relevant for all tasks, thus implicitly assuming that all tasks are related. This implication can easily lead to negative transfer if this assumption does not hold for all tasks. Since for most practical applications we hardly know a priori how the tasks are related, several approaches have been conceived in the literature to (i) properly capture the transference structure, (ii) improve interpretability of the tasks interplay, and (iii) penalize potential negative transfer. Recently, the automatic estimation of asymmetric structures inside the learning process was capable of effectively avoiding negative transfer. Our proposal is the first attempt in the literature to conceive a Group LASSO with asymmetric transference formulation, looking for the best of both worlds in a framework that admits the overlap of groups. The resulting optimization problem is solved by an alternating procedure with fast methods. We performed experiments using synthetic and real datasets to compare our proposal with state-of-the-art approaches, evidencing the promising predictive performance and distinguished interpretability of our proposal. The real case study involves the prediction of cognitive scores for Alzheimer's disease progression assessment. The source codes are available at GitHub.
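      For readers unfamiliar with the regularizer being extended here, the (possibly overlapping) group LASSO penalty is simply a sum over groups of the l2 norm of the coefficients in each group; in the multi-task setting the group spans all tasks, which is what creates the all-tasks-related assumption the paper relaxes. The snippet below computes just that penalty; the asymmetric transference structure the paper estimates is not modelled.

```python
import numpy as np

def group_lasso_penalty(W, groups, group_weights=None):
    """W: coefficient matrix (features x tasks); groups: list of feature-index
    arrays, which may overlap. Returns sum_g w_g * ||W[group_g, :]||_2."""
    if group_weights is None:
        group_weights = [np.sqrt(len(g)) for g in groups]   # common default
    return sum(w * np.linalg.norm(W[g, :]) for w, g in zip(group_weights, groups))
```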

    • #5047
      Meta-Learning for Low-resource Natural Language Generation in Task-oriented Dialogue Systems
      Fei Mi, Minlie Huang, Jiyong Zhang, Boi Faltings
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 2

      Natural language generation (NLG) is an essential component of task-oriented dialogue systems. Despite the recent success of neural approaches for NLG, they are typically developed for particular domains with rich annotated training examples. In this paper, we study NLG in a low-resource setting, generating sentences for new scenarios with only a handful of training examples. We formulate the problem from a meta-learning perspective, and propose a generalized optimization-based approach (Meta-NLG) based on the well-recognized model-agnostic meta-learning (MAML) algorithm. Meta-NLG defines a set of meta tasks, and directly incorporates the objective of adapting to new low-resource NLG tasks into the meta-learning optimization process. Extensive experiments are conducted on a large multi-domain dataset (MultiWoz) with diverse linguistic variations. We show that Meta-NLG significantly outperforms other training procedures in various low-resource configurations. We analyze the results, and demonstrate that Meta-NLG adapts extremely fast and well to low-resource situations.
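      The optimization-based meta-learning loop behind MAML-style methods can be sketched with a first-order (Reptile-style) approximation on a toy linear-regression task so it stays self-contained: adapt a copy of the shared parameters on one sampled task, then move the shared parameters toward the adapted ones. Meta-NLG itself applies the full MAML objective to an NLG model; the code below is only the general shape of the loop.

```python
import numpy as np

def meta_train_first_order(tasks, dim, inner_steps=5, inner_lr=0.05,
                           meta_lr=0.1, meta_iters=200, seed=0):
    """First-order meta-learning sketch. Each task is an (X, y) pair for a
    toy linear-regression model with squared loss."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)                              # shared initialization
    for _ in range(meta_iters):
        X, y = tasks[rng.integers(len(tasks))]         # sample a meta task
        w = theta.copy()
        for _ in range(inner_steps):                   # task-specific adaptation
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= inner_lr * grad
        theta += meta_lr * (w - theta)                 # outer (meta) update
    return theta
```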

    • #5499
      One Network for Multi-Domains: Domain Adaptive Hashing with Intersectant Generative Adversarial Networks
      Tao He, Yuan-Fang Li, Lianli Gao, Dongxiang Zhang, Jingkuan Song
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 2

      With the recent explosive increase of digital data, image recognition and retrieval have become critical practical applications. Hashing is an effective solution to this problem, due to its low storage requirement and high query speed. However, most past work focuses on hashing in a single (source) domain. Thus, the learned hash function may not adapt well to a new (target) domain that has a large distributional difference from the source domain. In this paper, we explore an end-to-end domain adaptive learning framework that simultaneously and precisely generates discriminative hash codes and classifies target-domain images. Our method encodes images from the two domains into a common semantic space, followed by two independent generative adversarial networks aiming at crosswise reconstruction of the two domains' images, reducing domain disparity and improving alignment in the shared space. We evaluate our framework on four public benchmark datasets, all of which show that our method is superior to other state-of-the-art methods on the tasks of object recognition and image retrieval.

    • #1358
      Progressive Transfer Learning for Person Re-identification
      Zhengxu Yu, Zhongming Jin, Long Wei, Jishun Guo, Jianqiang Huang, Deng Cai, Xiaofei He, Xian-Sheng Hua
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 2

      Model fine-tuning is a widely used transfer learning approach in person Re-identification (ReID) applications, which fine-tunes a pre-trained feature extraction model on the target scenario instead of training a model from scratch. It is challenging due to the significant variations inside the target scenario, e.g., different camera viewpoints, illumination changes, and occlusion. These variations result in a gap between the distribution of each mini-batch and the distribution of the whole dataset when using mini-batch training. In this paper, we study model fine-tuning from the perspective of aggregating and utilizing the global information of the dataset when using mini-batch training. Specifically, we introduce a novel network structure called the Batch-related Convolutional Cell (BConv-Cell), which progressively collects the global information of the dataset into a latent state and uses this latent state to rectify the extracted features. Based on BConv-Cells, we further propose the Progressive Transfer Learning (PTL) method to facilitate the model fine-tuning process by jointly training the BConv-Cells and the pre-trained ReID model. Empirical experiments show that our proposal can greatly improve the performance of the ReID model on the MSMT17, Market-1501, CUHK03 and DukeMTMC-reID datasets. The code will be released at https://github.com/ZJULearning/PTL.

    • #2913
      Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay
      Mohammad Rostami, Soheil Kolouri, Praveen K. Pilly
      Details | PDF
      Transfer, Adaptation, Multi-task Learning 2

      Despite huge success, deep networks are unable to learn effectively in sequential multitask learning settings, as they forget previously learned tasks after learning new ones. Inspired by complementary learning systems theory, we address this challenge by learning a generative model that couples the current task to the past learned tasks through a discriminative embedding space. We learn an abstract generative distribution in the embedding that allows generation of data points to represent past experience. We sample from this distribution and utilize experience replay to avoid forgetting, and simultaneously accumulate new knowledge into the abstract distribution in order to couple the current task with past experience. We demonstrate theoretically and empirically that our framework learns a distribution in the embedding that is shared across all tasks, and as a result tackles catastrophic forgetting.

    Wednesday 14 11:00 - 12:30 HSGP|HS - Heuristic Search 1 (2403-2404)

    Chair: Ariel Felner
    • #762
      Depth-First Memory-Limited AND/OR Search and Unsolvability in Cyclic Search Spaces
      Akihiro Kishimoto, Adi Botea, Radu Marinescu
      Details | PDF
      Heuristic Search 1

      Computing cycle-free solutions in cyclic AND/OR search spaces is an important AI problem. Previous work on optimal depth-first search strongly assumes the use of consistent heuristics, the need to keep all examined states in a transposition table, and the existence of solutions. We give a new theoretical analysis under relaxed assumptions where previous results no longer hold. We then present a generic approach to proving unsolvability, and apply it to RBFAOO and BLDFS, two state-of-the-art algorithms. We demonstrate the performance of our approach in domain-independent nondeterministic planning.

    • #1195
      Conditions for Avoiding Node Re-expansions in Bounded Suboptimal Search
      Jingwei Chen, Nathan R. Sturtevant
      Details | PDF
      Heuristic Search 1

      Many practical problems are too difficult to solve optimally, motivating the need to find suboptimal solutions, particularly those with bounds on the final solution quality. Algorithms like Weighted A*, A*-epsilon, Optimistic Search, EES, and DPS have been developed to find suboptimal solutions whose quality is within a constant bound of the optimal solution. However, with the exception of Weighted A*, all of these algorithms require performing node re-expansions during search. This paper explores the properties of priority functions that can find bounded suboptimal solutions without requiring node re-expansions. After general bounds are developed, two new convex priority functions are developed that can outperform Weighted A*.
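      The reference point of the abstract is Weighted A*, which orders expansions by f(n) = g(n) + w * h(n) and, even when closed nodes are never reopened, returns a solution within a factor w of optimal for a consistent heuristic. The sketch below is that baseline with a pluggable weight; the paper's new convex priority functions are not reproduced and would replace the line that computes f.

```python
import heapq
from itertools import count

def weighted_a_star(start, goal, neighbors, h, w=1.5):
    """Weighted A* without re-expansions. `neighbors(n)` yields
    (successor, edge_cost) pairs and `h` is the heuristic estimate."""
    tie = count()                                     # tie-breaker for the heap
    open_list = [(w * h(start), next(tie), 0.0, start)]
    g = {start: 0.0}
    closed = set()
    while open_list:
        _, _, g_n, n = heapq.heappop(open_list)
        if n == goal:
            return g_n                                # cost within factor w of optimal
        if n in closed:                               # stale entry; never reopened,
            continue                                  # i.e., no node re-expansions
        closed.add(n)
        for m, cost in neighbors(n):
            g_m = g_n + cost
            if g_m < g.get(m, float('inf')):
                g[m] = g_m
                heapq.heappush(open_list, (g_m + w * h(m), next(tie), g_m, m))
    return None                                       # no solution found
```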

    • #5894
      A*+IDA*: A Simple Hybrid Search Algorithm
      Zhaoxing Bu, Richard E. Korf
      Details | PDF
      Heuristic Search 1