
2025 ICONIP Tutorials

We are pleased to announce that the following tutorials are confirmed for the 2025 program. We thank the organisers for the time they took to propose and manage these sessions.
Some tutorial sessions will be recorded, and the recording will be made available to registered participants for one month following the event.

1. Mathematical Theories of Deep Foundation Models

Lecturer: Taiji Suzuki (The University of Tokyo)

2. The Free-energy Principle for AI Researchers and Neuroscientists

Lecturer: Takuya Isomura (RIKEN Center for Brain Science)

3. AI for Human Neuroscience Research: Generative AI Modeling and Large-Scale Analysis

Lecturers: Saori Tanaka (ATR, NAIST), Okito Yamashita (ATR, RIKEN API), Yu Takagi (Nagoya Institute of Technology)

4. Improved Explainability with Spiking Neural Networks for Spatiotemporal Brain Data Modelling: Hands-on NeuCube-based-SNN Tutorial

Organizers: Maryam Doborjeh (Auckland University of Technology), Nikola Kasabov (Auckland University of Technology)

5. A Methodology for Designing Brain-Like AI Software

Organizers: Yoshimasa Tawatsuji (The University of Tokyo), Hiroshi Yamakawa (The University of Tokyo), Yudai Suzuki (The University of Tokyo / Whole Brain Architecture Initiative)

Abstracts

1. Mathematical Theories of Deep Foundation Models

Lecturer: Taiji Suzuki (The University of Tokyo)

Description & Outline: This lecture explains the mathematical theory for understanding the learning capabilities of deep foundation models. While the development of deep foundation models has been driven by scaling laws, a theoretical understanding of the learning principles behind those laws is increasingly important.
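
As background for readers unfamiliar with the term, the "scaling law" refers to the empirically observed power-law relationship between loss and scale. A commonly cited form for the dependence on model size N is shown below; the constants N_c and the exponent are fitted per setting, and this notation is illustrative rather than the lecture's own:

```latex
% Empirical neural scaling law (illustrative form):
% test loss falls as a power law in model size N.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N > 0
```

Analogous power laws are observed for dataset size and compute, which is what makes a theoretical account of their origin a central open question.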

Generalization is essential for biological intelligence, as organisms must adapt to changing environments and select appropriate actions. To achieve superior generalization, it is necessary to acquire compressed representations that avoid rote memorization, making representation learning and feature learning fundamental. It has been theoretically shown that deep learning naturally achieves feature learning through its deep structure, thereby gaining various advantages in generalization. This is particularly crucial for diffusion models and Transformers.

However, due to the non-convexity of the loss function, it is not obvious that appropriate features can be acquired by stochastic gradient descent. This lecture will also cover theoretical guarantees for this process. Furthermore, feature learning is significant not only during pre-training but also during test-time inference, which will be demonstrated concisely using in-context learning as an example. Finally, as part of the theory of test-time inference, the lecture introduces the principles by which chain-of-thought reasoning and reinforcement learning improve learning efficiency.

More detail is available at https://url.au.m.mimecastprotect.com/s/tB9tCGv0oyCA4jR5qSKfMsBa1AI?domain=ibis.t.u-tokyo.ac.jp

2. The Free-energy Principle for AI Researchers and Neuroscientists

Lecturer: Takuya Isomura (RIKEN Center for Brain Science)
Sponsored by Unified Theory Project

Description & Outline: The free-energy principle is a brain theory that has received considerable attention in neuroscience, artificial intelligence, and robotics. This tutorial introduces the foundations of the free-energy principle and demonstrates its empirical applications to account for various brain functions in terms of variational Bayesian inference. In particular, its application to motor control and planning is known as active inference and is regarded as a new, biologically grounded control theory.

Mathematically, the dynamics of neural activity and synaptic plasticity that minimise an energy function can be cast as performing Bayesian inference that minimises variational free energy. This equivalence licenses the adoption of the free-energy principle as a unified characterisation of artificial and biological neural networks. The virtue of this perspective is that it enables the formal association of neural network properties with the prior and posterior beliefs that characterise inference and learning.

The tutorial will introduce the fundamental mathematics of the free-energy principle, explore its real-world applications, including its use in brain data analysis, and include a hands-on session using simple simulation code implemented in a Jupyter Notebook. Link to Python code: https://github.com/takuyaisomura/reverse-engineering-py
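
For orientation, the variational free energy referred to here is the standard quantity from variational Bayesian inference. For observations o, latent states s, a generative model p(o, s), and an approximate posterior q(s), it admits the usual decomposition:

```latex
% Variational free energy and its standard decomposition:
F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
```

Because the KL divergence is non-negative, minimising F drives q(s) toward the true posterior p(s | o) while bounding the negative log evidence, which is the sense in which energy-minimising neural dynamics can be read as Bayesian inference.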

3. AI for Human Neuroscience Research: Generative AI Modeling and Large-Scale Analysis

3-1. Population Analysis of Large-Scale Human MRI Datasets
Lecturer: Saori Tanaka (ATR, NAIST)

3-2. Human Brain Dynamics Study via Multi-modal Integration and Machine Learning
Lecturer: Okito Yamashita (ATR, RIKEN API)

3-3. Modeling Human Brain Activity with Generative AI
Lecturer: Yu Takagi (Nagoya Institute of Technology)

The abstracts and contents of this tutorial are available at the following link:
https://url.au.m.mimecastprotect.com/s/rkyvCvl1rKiAxppORhQfZsQA-MG?domain=xsaori.github.io

4. Improved Explainability with Spiking Neural Networks for Spatiotemporal Brain Data Modelling: Hands-on NeuCube-based-SNN Tutorial

Organizers: Maryam Doborjeh (Auckland University of Technology), Nikola Kasabov (Auckland University of Technology)

Description & Outline: This tutorial introduces participants to the principles and applications of Spiking Neural Networks (SNNs) for modelling complex, dynamic brain data. It begins with an overview of biologically inspired SNN models, their learning algorithms (including STDP and supervised methods), and their suitability for temporal brain data. A comparative look at key platforms and techniques for SNN-based modelling will be presented, highlighting opportunities and challenges in working with large, noisy, spatiotemporal neural signals such as EEG and fMRI.

The second part of the tutorial will provide a practical demonstration of NeuCube, a software environment developed specifically for modelling spatiotemporal brain data using SNNs. We will walk through how to encode EEG and fMRI data into spike trains, train SNNs using biologically plausible learning mechanisms, and run classification or prediction models on conditions such as tinnitus, stroke, and dementia. Participants will observe how NeuCube supports data visualization, network construction, and dynamic analysis, enabling researchers, students, and professionals to model and explore brain activity in a biologically meaningful way.

As interest grows in brain-inspired AI and interpretable neural modelling, SNNs offer a powerful approach to capturing the dynamic and sparse nature of brain signals. This tutorial addresses the gap between SNN theory and practical application, helping the community bridge neuroscience, data science, and AI through spatiotemporal modelling of brain data.
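
NeuCube itself is a dedicated environment, but the spike-encoding step mentioned above can be illustrated in a few lines. The sketch below shows one common scheme, threshold-based (delta) encoding, which emits a positive or negative spike whenever the signal drifts by more than a threshold since the last spike. The function name and threshold value are our own illustrative choices, not NeuCube's API:

```python
import numpy as np

def delta_encode(signal, threshold=0.5):
    """Threshold-based (delta) spike encoding of a 1-D signal.

    Emits +1 when the signal rises by more than `threshold` since the
    last spike, -1 when it falls by more than `threshold`, else 0.
    """
    spikes = np.zeros(len(signal), dtype=int)
    baseline = signal[0]  # reference value updated at each spike
    for i in range(1, len(signal)):
        diff = signal[i] - baseline
        if diff > threshold:
            spikes[i] = 1
            baseline = signal[i]
        elif diff < -threshold:
            spikes[i] = -1
            baseline = signal[i]
    return spikes

# Toy example standing in for one EEG channel:
sig = np.array([0.0, 0.2, 0.9, 1.0, 0.3, 0.1])
print(delta_encode(sig, threshold=0.5))  # → [ 0  0  1  0 -1  0]
```

Real pipelines apply such an encoder per channel and feed the resulting spike trains into the SNN reservoir for STDP-based training.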

5. A Methodology for Designing Brain-Like AI Software

Organizers: Yoshimasa Tawatsuji (The University of Tokyo), Hiroshi Yamakawa (The University of Tokyo), Yudai Suzuki (The University of Tokyo / Whole Brain Architecture Initiative)

Description & Outline: This tutorial presents a structured methodology for designing and evaluating brain-inspired AI systems using the Brain Reference Architecture (BRA). Participants will learn to construct Brain Information Flow (BIF) diagrams, Hypothetical Component Diagrams (HCDs), and Function Realization Graphs (FRGs). Through hands-on review exercises, they will evaluate BRA datasets based on anatomical plausibility and computational coherence. The session also introduces probabilistic generative modeling for capturing functional properties in brain-morphic architectures.

More detail is available at https://wba-initiative.org/26684/

[ Important Dates ]

Early registration deadline: August 15th