Cutting Edge AI Research



This track will showcase cutting-edge research and algorithms from both commercial and academic labs that are expected to be available for deployment in the next one to three years. The audience will learn what is currently being worked on, and the sessions will address several relevant issues, including:

  • How much data is needed to train a model?
  • Model robustness: building trustworthy models
  • Ensuring data privacy

Friday, October 25

Cityview 1

7:45 am
Registration Opens

8:00 Continental Breakfast (Harborview Foyer)

8:15 am – 12:30 pm Keynote Session (Harborview)

12:30 Networking, Coffee & Dessert in the Expo – Last Chance for Viewing (Commonwealth Hall)

1:45 Opening Remarks

Ritu Jyoti, Program Vice President, Artificial Intelligence Strategies, IDC


1:50 Synthetic Data and Text-mining: Using Simulations and Natural Language Processing to Build Datasets

Hoo Chang Shin, PhD, Senior Research Scientist and Solutions Architect, NVIDIA

As new applications of big data become feasible, a dearth of data remains. Simulation offers a way to generate training data that does not exist in any real-world survey or experiment, making it possible to build datasets for models that could not otherwise be trained.
In addition, biomedical text-mining can be applied not only to build datasets from a sea of unmined databases, but also to construct more comprehensive knowledge and relations than, say, a single image label.
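
To make the simulation idea concrete, here is a minimal, hypothetical sketch of simulation-based data generation: a toy physics model labels inputs that were never measured in the real world. The model, ranges, and noise level are invented for illustration and are not the pipelines described in the talk.

```python
# A minimal sketch of simulation-based synthetic data generation, assuming a
# toy physics model (projectile range); purely illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_range(angle_deg, velocity, noise_std=0.5):
    """Projectile range from a simple physics model, with sensor-like noise."""
    g = 9.81
    theta = np.deg2rad(angle_deg)
    true_range = (velocity ** 2) * np.sin(2 * theta) / g
    return true_range + rng.normal(0.0, noise_std, size=np.shape(true_range))

# Sample inputs we could never exhaustively measure in a real experiment,
# then label them with the simulator to build a training set.
angles = rng.uniform(10, 80, size=10_000)
velocities = rng.uniform(5, 50, size=10_000)
X = np.column_stack([angles, velocities])
y = simulate_range(angles, velocities)

print(X.shape, y.shape)  # (10000, 2) (10000,) -- ready to train any regressor
```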

2:20 PANEL: Data for Training Models – How Much Data Do You Need?

Deep learning networks have revolutionized the field of artificial intelligence in recent years. They have enabled rapid advances in diverse areas such as medical diagnosis, autonomous driving, financial forecasting, material design, and drug discovery. But these deep networks require very large quantities of training data, often millions of labeled data points, to achieve their high levels of prediction accuracy. In this panel, we will discuss work being done to reduce this data requirement by a few orders of magnitude. This includes building systems that emulate human cognitive processes (e.g., using visual cues to hasten language acquisition), networks that incorporate knowledge about physical laws (e.g., using symmetries and first principles to constrain the parameter space), and paradigms that combine symbolic and data-driven approaches (e.g., incorporating knowledge representation into network design). One of these ideas, exploiting known symmetries, is illustrated in the sketch after the list below.

  • What amount of data is really required?
  • The emergence of advanced analytics that require less data
  • Ways to reduce the data requirement by orders of magnitude
  • Small data
  • Synthetic data
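
As a concrete and deliberately simple illustration of the symmetry idea mentioned above, the sketch below doubles a small labeled set by exploiting a known invariance. It is an assumption-laden toy, not any panelist's method; the data and the flip-invariance assumption are invented for illustration.

```python
# A minimal sketch of exploiting a known symmetry (here, horizontal-flip
# invariance) to multiply a small labeled set -- one simple way the
# "orders of magnitude less data" theme shows up in practice.
import numpy as np

def augment_with_flips(images, labels):
    """Return the original images plus their horizontal mirrors.

    Assumes the label is invariant under flipping (true for, say,
    'cat vs. dog', false for reading handwritten digits like 2 vs. 5).
    """
    flipped = images[:, :, ::-1]            # mirror along the width axis
    return (np.concatenate([images, flipped], axis=0),
            np.concatenate([labels, labels], axis=0))

# 100 labeled 32x32 grayscale images become 200 with no extra labeling cost.
images = np.random.rand(100, 32, 32)
labels = np.random.randint(0, 2, size=100)
big_images, big_labels = augment_with_flips(images, labels)
print(big_images.shape)  # (200, 32, 32)
```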

Moderator: Ritu Jyoti, Program Vice President, Artificial Intelligence Strategies, IDC


Panelists:

Raj Minhas, PhD, Vice President, Director of Interaction and Analytics Laboratory, PARC


Karen Myers, PhD, Lab Director, SRI International's Artificial Intelligence Center


Lucas Siow, Co-Founder, ProteinQure

2:50 Performance Breakthroughs through Machine Learning and Deep Networks

Mark Stefik, PhD, Research Fellow, Lead, Explainable AI, PARC

News and excitement have been building as machine learning and, specifically, deep networks have led to performance breakthroughs in several areas. Tempering this enthusiasm are the concerns that arise from observed issues with the technology and its application. Depending on the application, questions arise about unintended bias (e.g., loan processing), unexpected failures (self-driving cars), strange competency failures (image classification), and others. It is easy to forget that AI/ML is at an early stage of technical maturity. My interest is in understanding root issues and vulnerabilities, and then opportunities for advancing the art, as we find better ways to evaluate and improve systems whose knowledge is learned.

3:10 Networking Break (Plaza & Harbor Level Atriums)

3:25 PANEL: Fairness, Trustworthiness, and Transparency for AI Systems

Rapid technical advances have led to AI capabilities that were unimaginable only ten years ago. With these successes in hand, attention is shifting to how AI technologies can and should be deployed in real-world settings: while good performance is essential, qualities such as fairness, trustworthiness, and transparency are becoming increasingly critical for technology acceptance. This panel will explore the factors motivating these usability requirements and discuss current research aimed at satisfying them. One of the topics below, adversarial perturbations, is illustrated in a short sketch after the list.

  • Making AI trustworthy: challenges and technology readiness
  • Making AI trustworthy: why robustness and fairness matter
  • Preventing breakdown and brittleness of models
  • Safeguards that prevent abuse and malicious behavior of AI models
  • Addressing adversarial perturbations, examples, and attacks
  • Explainability, causality, and social good
  • Deep learning models
  • Understanding how the changing cognitive capabilities of AIs will lead to new forms of human-machine collaboration
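
As a minimal illustration of the adversarial-perturbation bullet above, the sketch below applies an FGSM-style attack to a plain logistic-regression scorer. The weights and inputs are made up, and the panelists' work concerns far richer models; only the core idea, nudging the input along the sign of the loss gradient, carries over.

```python
# A minimal, illustrative FGSM-style adversarial perturbation against a
# hand-picked logistic-regression model (weights invented for the example).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method for logistic regression.

    d(loss)/dx = (sigmoid(w.x + b) - y) * w, so the attack steps each
    feature by eps in the direction that increases the loss.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

w = np.array([1.5, -2.0, 0.5])   # hypothetical model weights
b = 0.1
x = np.array([0.2, 0.4, -0.3])   # a benign input
y = 1.0                          # its true label

x_adv = fgsm(x, y, w, b)
# The model's confidence in the true class drops after the tiny perturbation.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```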

Moderator: Karen Myers, PhD, Lab Director, SRI International's Artificial Intelligence Center


Panelists:

Roberta Stempfley, Director, CERT Division, Software Engineering Institute, Carnegie Mellon University


Mark Stefik, PhD, Research Fellow, Lead, Explainable AI, PARC


Victor S.Y. Lo, PhD, Head of Data Science and Artificial Intelligence, Workplace Solutions, Fidelity Investments


Pin-Yu Chen, PhD, Chief Scientist, RPI-IBM AI Research Center; Research Staff Member, Trusted AI Group, IBM Thomas J. Watson Research Center


4:15 SPOTLIGHT: Treatment Optimization and Personalization through Integration of Causal Inference and Uplift Modeling

Victor S.Y. Lo, PhD, Head of Data Science and Artificial Intelligence, Workplace Solutions, Fidelity Investments

Scientific estimates of treatment effect can be drawn from a randomized experiment. When a randomized experiment is infeasible, evidence can instead be extracted from observational data using causal inference. This talk introduces causal inference for assessing the overall treatment effect of a program (e.g., in marketing), and extends it to uplift model development using observational data, aiming to optimize impact at the individual level. A practical example will be given for illustration.
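
For readers unfamiliar with uplift modeling, here is a minimal sketch of the common "two-model" (T-learner) approach on synthetic data. The talk's own methodology may differ, and every variable below is invented for illustration.

```python
# A minimal sketch of two-model (T-learner) uplift modeling on synthetic
# data: fit one response model per treatment arm, score uplift as the
# difference in predicted response probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)

n = 5_000
X = rng.normal(size=(n, 3))                    # customer features
treated = rng.integers(0, 2, size=n)           # 1 = received the campaign
# Outcome: baseline response plus an effect only some customers receive.
uplift_true = (X[:, 0] > 0).astype(float) * 0.3
p_response = 0.2 + uplift_true * treated
y = rng.random(n) < p_response                 # 1 = responded

# Fit one response model per arm, then score uplift as the difference.
model_t = LogisticRegression().fit(X[treated == 1], y[treated == 1])
model_c = LogisticRegression().fit(X[treated == 0], y[treated == 0])
uplift_hat = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]

# Target the campaign at customers with the highest predicted uplift;
# the responsive segment (first feature > 0) should score higher.
print(uplift_hat[X[:, 0] > 0].mean(), uplift_hat[X[:, 0] <= 0].mean())
```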

4:45 Close of AI World 2019