**Seminar, April 2023:**

**Talk 10: Multiscale Random Models of Deep Neural Networks**

**Invited speaker:** **Prof. Stéphane Mallat, Collège de France**

**Bio:** Stéphane Mallat is an applied mathematician and Professor at the Collège de France, holding the chair of Data Sciences. He is a member of the French Academy of Sciences and of the Academy of Technologies, and a foreign member of the US National Academy of Engineering. He was a Professor at the Courant Institute of NYU in New York for 10 years, then at École Polytechnique and École Normale Supérieure in Paris. He was also the co-founder and CEO of a semiconductor start-up company. Stéphane Mallat has received many prizes for his research in machine learning, signal processing and harmonic analysis. He developed the multiresolution wavelet theory and algorithms at the origin of the JPEG-2000 compression standard, and sparse signal representations in dictionaries through matching pursuits. He currently works on mathematical models of deep neural networks, for data analysis and physics.

**Time:** 15:00 to 16:30, Monday, 24/04/2023.

**Venue:** Vietnam Institute for Advanced Study in Mathematics (VIASM), 157 Chùa Láng, Đống Đa, Hanoi.

**Format:** In person at VIASM and online.

**Register to attend here.**

**Abstract:** Deep neural networks have spectacular applications but remain mostly a mathematical mystery. An outstanding issue is to understand how they circumvent the curse of dimensionality to generate or classify data. Inspired by the renormalization group in physics, we explain how deep networks can separate phenomena which appear at different scales, and capture scale interactions. This provides high-dimensional models, which approximate the probability distributions of complex physical fields such as turbulence. Learning becomes similar to a compressed sensing problem, where low-dimensional discriminative structures are identified with random projections. Applications to image classification are shown.
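The random-projection idea in the abstract's last sentences can be illustrated with a small sketch (not Mallat's actual construction): a Gaussian random projection compresses very high-dimensional signals while approximately preserving their pairwise geometry, which is what lets low-dimensional discriminative structure survive the projection. All dimensions and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 10_000, 200, 50          # ambient dim, projected dim, number of points
X = rng.standard_normal((n, d))    # hypothetical high-dimensional signals

# Gaussian random projection, scaled so squared norms are preserved in expectation
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

# Pairwise distances are approximately preserved (Johnson-Lindenstrauss lemma),
# so structure that separates classes in the ambient space survives in k dims.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
print(f"distance ratio after projection: {ratio:.3f}")  # close to 1
```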

**Seminar, October 2022:**

**Talk 5: Reinforcement Learning Game Tree**

**Invited speaker:** **Prof. Jeff Edmonds, York University, Canada**

**Bio**: Professor Jeff Edmonds received his PhD in 1992 from the University of Toronto. His thesis proved lower bounds on time-space tradeoffs. He did his postdoctoral work at the ICSI in Berkeley on secure data transmission over networks for multimedia applications. He joined York University in 1995. More information about Prof. Jeff Edmonds is available at https://lassonde.yorku.ca/users/jeff.

**Time:** 14:00 to 15:30, 27/10/2022.

**Venue:** Vietnam Institute for Advanced Study in Mathematics (VIASM).

**Format:** In person at VIASM and online.

**Register to attend here.**

**Online meeting link:**

- Join Zoom Meeting: https://zoom.us/j/8948173518?pwd=RXJ3bXU0dkdSWmp1UXFadVlEOGhEdz09
- Meeting ID: 894 817 3518
- Passcode: 888888

**Abstract**: The goal of Reinforcement Learning is to get an agent to learn how to solve some complex multi-step task, e.g. make a piña colada or win at Go. At the risk of being non-standard, Jeff will tell you the way he thinks about this topic. Both "game trees" and "Markov chains" represent the graph of states through which your agent traverses a path while completing the task. Suppose we could learn, for each such state, a value measuring "how good" this state is for the agent. Then completing the task in an optimal way would be easy. If our current state is one in which our agent gets to choose the next action, then she will choose the action that maximizes the value of the next state. On the other hand, if our adversary gets to choose, he will choose the action that minimizes this value. Finally, if our current state is one in which the universe flips a coin, then each edge leaving this state is labelled with the probability of taking it. Knowing that this is how the game is played, we can compute how good each state is. A state in which the task is complete is worth whatever reward the agent receives in that state. These values somehow trickle backwards until we learn the value of the start state. The computational challenge is that there are far more states than we can ever look at.
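The value computation the abstract describes can be sketched as a small recursion over a game tree, where agent (max), adversary (min) and chance nodes each back up the values of their children. This is the standard expectiminimax-style backup, not code from the talk; the tree, rewards and probabilities below are invented for illustration.

```python
def value(node):
    """Back up values: the agent maximizes, the adversary minimizes,
    and chance nodes average over outcome probabilities."""
    kind, payload = node
    if kind == "terminal":          # task complete: worth its reward
        return payload
    if kind == "max":               # agent picks the best action
        return max(value(child) for child in payload)
    if kind == "min":               # adversary picks the worst (for the agent)
        return min(value(child) for child in payload)
    if kind == "chance":            # the universe flips a weighted coin
        return sum(p * value(child) for p, child in payload)

# Hypothetical game: the agent chooses between a sure reward of 3 and a fair
# coin flip; on one outcome the adversary then picks the agent's reward.
tree = ("max", [
    ("terminal", 3),
    ("chance", [
        (0.5, ("min", [("terminal", 10), ("terminal", 0)])),
        (0.5, ("terminal", 8)),
    ]),
])

print(value(tree))  # 4.0: the chance branch is worth 0.5*0 + 0.5*8
```

The recursion makes the "trickle backwards" concrete: terminal rewards propagate up through max, min and expectation operators until the start state's value is known.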

**Seminar, July 2022:**

**Talk 2: The Long March of Theoretical Exploration of Boosting**

**Invited speaker: Prof. Zhi-Hua ZHOU, Nanjing University.**

**Time:** 15:00 to 17:00, 08/07/2022.

**Venue:** Vietnam Institute for Advanced Study in Mathematics (VIASM).

**Format:** In person at VIASM, and online via:

- Join Zoom Meeting: https://zoom.us/j/8948173518?pwd=RXJ3bXU0dkdSWmp1UXFadVlEOGhEdz09
- Meeting ID: 894 817 3518
- Passcode: 888888

and **Livestream** at: https://

**Abstract**: AdaBoost is a famous mainstream ensemble learning approach that has greatly influenced machine learning and related areas. A fundamentally fascinating mystery of AdaBoost lies in the phenomenon that it seems resistant to overfitting, which has inspired a lot of theoretical investigation. In this talk, we will briefly introduce the long history of learning-theory studies and debates about Boosting, where the recent concluding result discloses the importance of minimizing the margin variance while maximizing the margin mean during the learning process. This provides new inspiration for the design of powerful learning algorithms such as ODMs (Optimal margin Distribution Machines).
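As a companion to the abstract, here is a minimal AdaBoost sketch (not Prof. Zhou's ODM algorithm) using 1-D threshold stumps as weak learners; it shows the sample reweighting and the normalized margins that the margin-distribution theory studies. The data set is invented for illustration.

```python
import numpy as np

# Toy 1-D data, separable at x = 0.5; labels in {-1, +1}.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])

def best_stump(x, y, w):
    """Weak learner: threshold/sign pair with lowest weighted error."""
    best = None
    for t in np.concatenate(([x.min() - 1], (x[:-1] + x[1:]) / 2)):
        for s in (1, -1):
            pred = s * np.sign(x - t)
            pred[pred == 0] = s
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

w = np.full(len(x), 1 / len(x))     # uniform sample weights
F = np.zeros(len(x))                # additive ensemble score
for _ in range(5):
    err, t, s = best_stump(x, y, w)
    err = max(err, 1e-10)                   # avoid division by zero
    alpha = 0.5 * np.log((1 - err) / err)   # weak learner weight
    pred = s * np.sign(x - t)
    pred[pred == 0] = s
    F += alpha * pred
    w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
    w /= w.sum()

margins = y * F / np.abs(F).max()   # normalized margins, in [-1, 1]
print("train accuracy:", np.mean(np.sign(F) == y))
print("min margin:", margins.min())
```

The margin distribution (its mean and variance over the training set, summarized here by `margins`) is precisely the quantity the talk's theoretical results argue should be optimized.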

*The seminar has two parts: Part 1, from 15:00-16:00, features a recorded lecture by Prof. Zhi-Hua Zhou (43 minutes long); Part 2, from 16:00, is a Q&A and discussion on machine learning. The organizers suggest that participants obtain the video and study the content of "The Long March of Theoretical Exploration of Boosting" in advance, and prepare questions and topics to discuss in Part 2.*

Lecture video: Here

**Register to attend:** Here