
Summaries

Burcu Ayşen Ürgen (PhD), Bilkent University

Predictive Processing from Brains to Agents in the Wild

Over the past century, neuroscience has made major empirical progress but has struggled to converge on unifying theories of brain function. The rise of human neuroimaging and later machine learning and network approaches shifted the field from asking where activity happens to what neural populations encode, and reframed the brain as a distributed interacting system. Predictive processing has emerged as a leading candidate for integration, proposing that the brain works as a hierarchical generative model that constantly predicts and updates sensory input, with recent AI making these ideas more explicit and testable. Yet both neuroscience and AI still rely heavily on domain-specific models, renewing interest in naturalistic and embodied paradigms that study cognition in ecologically valid settings. In this talk, I trace this trajectory, introduce predictive processing, and use examples from action perception to argue that more naturalistic, multimodal approaches will reshape how we explain brain function. 
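
For readers unfamiliar with predictive processing, the toy sketch below shows its core loop in the simplest possible form: an internal estimate generates a prediction of the sensory input and is then nudged to reduce the prediction error. It is a one-level, linear illustration in Python with synthetic data chosen for the example, not a model from the talk.

import numpy as np

rng = np.random.default_rng(2)

# One-level predictive coding: the "brain" holds an estimate mu of a hidden
# cause, predicts the sensory input as W @ mu, and repeatedly adjusts mu to
# reduce the prediction error (observation minus prediction).
n_sensory, n_latent = 10, 3
W = rng.standard_normal((n_sensory, n_latent))   # generative (top-down) weights
true_cause = rng.standard_normal(n_latent)
observation = W @ true_cause + 0.05 * rng.standard_normal(n_sensory)

mu = np.zeros(n_latent)   # initial belief about the hidden cause
lr = 0.05
for _ in range(200):
    prediction = W @ mu
    error = observation - prediction   # prediction error signal
    mu += lr * W.T @ error             # update the belief to explain the error

print("remaining prediction error:", round(float(np.linalg.norm(observation - W @ mu)), 4))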

Bahar Güntekin (PhD), Istanbul Medipol University

Modulation of Brain Oscillations to Increase Cognitive Functions

In her talk titled “Modulation of Brain Oscillations to Increase Cognitive Functions,” Bahar Güntekin presents an overview of how oscillatory activity in the delta, theta, alpha, beta, and gamma bands supports core cognitive processes such as memory, attention, and information integration. Drawing on her extensive research on EEG and event-related oscillations, she highlights how these rhythms change across the lifespan, how they are disrupted in neurodegenerative conditions, and how they can be strengthened through rhythmic sensory stimulation and other neuromodulatory approaches. Her work illustrates the potential of oscillation-based interventions to enhance cognitive performance and promote healthier brain function.
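
As a rough illustration of the kind of quantity this line of work measures, the sketch below estimates power in the canonical delta through gamma bands from a single EEG channel using Welch's method in SciPy. The band boundaries are conventional approximations and the data are synthetic; this is not Güntekin's analysis pipeline.

import numpy as np
from scipy.signal import welch

# Conventional (approximate) EEG frequency bands in Hz.
BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 45),
}

def band_powers(signal, fs):
    """Estimate power in each canonical band for one EEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)   # power spectral density
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
    return powers

# Synthetic one-minute "EEG" trace: a 10 Hz (alpha) rhythm plus noise.
fs = 250   # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(band_powers(eeg, fs))   # alpha power should dominate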

Furkan Özçelik (PhD), Yale University

Deciphering Brain's Visual Language: Neural Decoding with Deep Learning

Can what we see, or what goes through our minds, be understood by looking only at brain signals? In this talk, we first examine how the brain processes the visual world and the methods used to obtain neural signals. We then summarize early attempts at neural decoding in the literature and present the historical development of the field. In the final part, we discuss how Deep Learning models, which have revolutionized artificial intelligence, have transformed the ability to analyze neuroscientific data and what new possibilities this technology has opened up for the field.
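
To make the idea of neural decoding concrete, here is a minimal sketch in the spirit of early linear decoding studies: it fits a ridge regression that maps simulated voxel responses back to the stimulus features that generated them. The dimensions, the synthetic data, and the use of scikit-learn are assumptions for illustration, not the deep learning models discussed in the talk.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic experiment: each stimulus is described by a feature vector, and
# "voxel" responses are a noisy linear mixture of those features.
n_stimuli, n_features, n_voxels = 500, 20, 100
features = rng.standard_normal((n_stimuli, n_features))   # stimulus features
encoding = rng.standard_normal((n_features, n_voxels))    # feature-to-voxel weights
voxels = features @ encoding + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

# Decoding inverts the direction: predict stimulus features from brain responses.
X_train, X_test, y_train, y_test = train_test_split(voxels, features, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out decoding R^2:", round(decoder.score(X_test, y_test), 3))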

Alper T. Erdoğan (PhD), Koç University

How Do Brain Networks Learn and Compute? A Historical Path and an Unsolved Mystery From Representation and Credit Assignment

Backpropagation powers modern deep learning, but it seems unlikely to be the literal learning mechanism used by brains, given its reliance on precise error transport and tightly coordinated updates across layers. What the brain does instead remains an active, unsolved research problem. This talk does not claim a final answer; rather, it traces a historical path of ideas, constraints, and candidate mechanisms that illuminate the space of possibilities. The talk moves from early neuron models and Hebbian plasticity to representation learning via PCA-style objectives and sparse coding, with links to classic ideas about sensory coding. Building on these foundations, I will present our work on similarity matching and correlative information maximization as principled routes to component analysis with local learning rules. I will then turn to supervised credit assignment, survey leading alternatives to backprop (predictive coding, equilibrium propagation, target propagation), and conclude with our approach—Error-Broadcast and Decorrelation (EBD)—as a flexible route to training networks with more biologically realistic learning signals.
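
As an illustration of what broadcasting an error signal can mean, the toy sketch below trains a two-layer network on a regression task while sending the output error to the hidden layer through a fixed random matrix, in the spirit of feedback-alignment-style schemes. It is a schematic stand-in with synthetic data and parameters chosen for the example, not the EBD algorithm presented in the talk.

import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: learn y = sin(x) from scalar inputs.
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

# Two-layer network; B is a fixed random matrix that broadcasts the output
# error to the hidden layer instead of transporting W2's transpose back.
n_hidden = 64
W1 = 0.5 * rng.standard_normal((1, n_hidden))
W2 = 0.5 * rng.standard_normal((n_hidden, 1))
B = 0.5 * rng.standard_normal((1, n_hidden))
lr = 0.01

for _ in range(2000):
    H = np.tanh(X @ W1)    # hidden activity
    Y_hat = H @ W2         # network output
    err = Y_hat - Y        # global error signal
    # Each layer combines its own pre/post activity with the broadcast error,
    # so no layer-by-layer error transport is required.
    W2 -= lr * H.T @ err / len(X)
    delta_h = (err @ B) * (1 - H**2)   # broadcast error modulates hidden units
    W1 -= lr * X.T @ delta_h / len(X)

print("final mean squared error:", round(float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)), 4))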
