[theory-seminar] "Information Lattice Learning" - Haizi Yu (Friday, March 4th, 2pm PT)
tpulkit at stanford.edu
Wed Mar 2 21:55:21 PST 2022
We continue the Information Theory Forum (IT Forum<https://web.stanford.edu/group/it-forum/talks/>) talks this week on Friday, March 4th, at 2pm PT with Dr. Haizi Yu. The talks are hosted on, and accessible via, Zoom.
If you want to receive reminder emails, please join the IT Forum mailing list<https://mailman.stanford.edu/mailman/listinfo/information_theory_forum>.
Details for this week’s talk are below:
Information Lattice Learning
Haizi Yu, University of Chicago
Friday, March 4th, 2pm PT
Can AI learn rules and abstractions from raw data? How few priors and how little data are needed to do so? How interpretable can the learned rules, and the rule-learning process itself, be? Whereas humans are extremely flexible in abstracting raw patterns and distilling rules, current AI systems are mostly good at either applying human-distilled rules (rule-based AI) or capturing patterns in a task-driven fashion (pattern recognition), but not at learning patterns in a human-interpretable way akin to human-induced theory and knowledge. We develop a generic white-box paradigm called Information Lattice Learning (ILL) to distill human-interpretable understanding of data in a human-like manner. We build the ILL framework on the core idea of computational abstraction and project statistical learning and inference onto lattices of abstractions. The resulting framework generalizes Shannon's information lattice and further brings learning into the picture.

ILL targets applications where it can augment human intelligence and human creativity. We first deploy ILL in knowledge discovery, where we implement automatic theorists to learn music theory from scores, chemical laws from molecules, and rules on neurogenesis and team formation from single-cell RNA-seq and social behavioral datasets, respectively. For instance, ILL is capable of reconstructing, in explicit form, about 70% of a standard music theory curriculum from only 370 of Bach's chorales, while also discovering new rules that interest music researchers. We also demonstrate ILL's near-human performance on truly-few-shot learning, where we achieve state-of-the-art results in character classification from only a handful of training examples---with no extra pretraining or validation data. For instance, we can reach 80%/90% MNIST test accuracy using only the first/first four training images per class. We further demonstrate ILL's use cases in human-AI co-creation systems for crowd artistic creations.
Haizi Yu is a postdoctoral scholar in the Knowledge Lab at the University of Chicago. He completed his Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign, his M.S. in Computer Science at Stanford University, and his B.S. in Automation at Tsinghua University. His research interests include general and explainable artificial intelligence, interpretable machine learning, automatic concept learning and knowledge discovery, and music intelligence.