Publications

AI & Learning Dynamics & Statistical Learning

X.-Y. Zhang and C. Tang. PNAS, 2025.
Artificial neural networks can adapt to tasks while freely exploring possible solutions, similar to how humans balance curiosity with goal-driven behavior. We show that during training, such networks naturally operate near a critical state. This state emerges from a balance between randomness and task relevance and leaves measurable signatures, including stable power-law statistics in parameter updates and multiscale patterns in the geometry of the loss landscape. Our findings reveal that neural network learning is a nonequilibrium process shaped by fundamental statistical principles, offering a general explanation for scaling laws in parameter updates and guiding the design of more interpretable and efficient intelligent systems.
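The power-law statistics mentioned above can be probed with standard heavy-tail tools. A minimal sketch, not the paper's actual analysis: it trains a toy linear model with single-sample SGD (the model, learning rate, and data generator are illustrative assumptions), logs the magnitude of every parameter update, and applies a Hill estimator to the largest updates to get a tail exponent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: linear regression fit by
# single-sample SGD. We record |delta_theta| at every step.
X = rng.normal(size=(5000, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=5000)

w = np.zeros(20)
lr = 0.01
update_sizes = []
for xi, yi in zip(X, y):
    grad = (w @ xi - yi) * xi      # single-sample squared-error gradient
    w -= lr * grad
    update_sizes.append(lr * np.linalg.norm(grad))

# Hill estimator of the tail exponent over the k largest updates:
# alpha_hat = k / sum_i log(u_i / u_threshold).
u = np.sort(update_sizes)
k = 200
thresh = u[-k - 1]
alpha_hat = k / np.sum(np.log(u[-k:] / thresh))
print(f"estimated tail exponent: {alpha_hat:.2f}")
```

A genuinely power-law-distributed sample would give a stable alpha_hat as k varies; sweeping k is the usual sanity check before claiming scale-free behavior.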

X.-Y. Zhang, X. Ru, Z. Liu, J. M. Moore and G. Yan. Journal of Physics: Complexity, 2025.

X. Ru, X.-Y. Zhang, Z. Liu, J. M. Moore and G. Yan. NeurIPS, 2023. Spotlight

X. Ru, J. M. Moore, X.-Y. Zhang, Y. Zeng and G. Yan. AAAI, 2023. Oral

Z. Liu, X. Ru, J. M. Moore, X.-Y. Zhang and G. Yan. IEEE Transactions on Network Science and Engineering, 2024.

AI & Cognitive Systems & Neuroscience

X.-Y. Zhang*, S. Bobadilla-Suarez, X. Luo, M. Lemonari, S. L. Brincat, M. Siegel, E. K. Miller and B. C. Love*. Nature Communications, 2025. (* corresponding author)
Although effects were most prominent in frontal areas, representations stretched along task-relevant dimensions in all sites considered: V4, MT, lateral PFC, frontal eye fields (FEF), lateral intraparietal cortex (LIP), and inferotemporal cortex (IT). Spike timing was crucial to this code. A deep learning model was trained on the same visual input and rewards as the monkeys. Although the model lacked an explicit selective attention or other control mechanism, minimizing error during learning alone caused its representations to stretch along task-relevant dimensions, indicating that stretching is an adaptive strategy.

X.-Y. Zhang*,#, H. Lin#, Z. Deng, M. Siegel, E. K. Miller and G. Yan. Communications Biology, 2026. (* corresponding author)
While neural trajectories often appear as low-dimensional manifolds at a global scale, they are locally highly tangled. Much like the fibers of a rope, what appears as a smooth curve from afar reveals intricate interweaving upon closer inspection. We show that capturing these fine-grained features requires high-dimensional embeddings and non-linear transformations, which enable our ANN model to spontaneously align with the functional specialization of brain regions without any prior functional knowledge.

Complex Systems & Statistical Physics

X.-Y. Zhang, J. M. Moore, X. Ru and G. Yan. Physical Review Letters, 2024. Editors' Suggestion & Featured in Physics & Physical Review Letters collection of the year 2024
We uncovered a fundamental and elegant scaling law in the synapse-resolution Drosophila connectomes. This discovery challenges the well-known exponential distance rule previously established in inter-areal brain networks and carries functional significance, aligning with the maximum entropy of information communication and the functional criticality balancing integration and segregation. Our findings establish a direct link between brain geometry and topology, hinting at new opportunities for developing brain geometry-inspired artificial intelligence.
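The contrast between a power-law scaling law and the exponential distance rule comes down to which model better fits how connection probability decays with distance. A minimal sketch on synthetic data (not the fly connectome; the decay exponent and noise model are assumptions): an exponential rule is linear in semilog coordinates, a power law is linear in log-log coordinates, so comparing linear-fit quality in each representation discriminates between them.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic connection probabilities decaying as a power law,
# p(d) ~ d^-gamma, with multiplicative noise.
gamma = 2.0
d = np.linspace(1.0, 50.0, 40)
p = d ** -gamma * np.exp(0.05 * rng.normal(size=d.size))

def r2(x, y):
    """R^2 of an ordinary least-squares line through (x, y)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# Exponential rule p ~ exp(-lambda d): log p linear in d (semilog).
# Power law p ~ d^-gamma: log p linear in log d (log-log).
r2_exp = r2(d, np.log(p))
r2_pow = r2(np.log(d), np.log(p))
print(f"R^2 exponential: {r2_exp:.3f}, power law: {r2_pow:.3f}")
```

On these power-law-generated samples the log-log fit wins; real analyses typically go further with likelihood-ratio or information-criterion comparisons rather than R^2 alone.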

X.-Y. Zhang*,#, Y. Yao#, Z. Han and G. Yan. Communications Physics, 2025. (* corresponding author)
Most real-world complex networks are spatially embedded, with nodes positioned in physical space. In such systems, distance-based connectivity shapes not only the efficiency of information transmission but also the network's robustness and vulnerability. Here, we systematically examined how spatial distance influences network robustness using k-core percolation across two models of long-range connectivity. For structurally connected components, we found that long-range connectivity can trigger explosive phase transitions, albeit with a delayed threshold. For spatial neighborhoods, two core phenomena emerge: spatial diffusion and clustering. Regardless of the connectivity model, spatial neighborhoods consistently exhibit self-organized criticality throughout the percolation process.
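The k-core percolation process referenced above is simple to state: repeatedly delete nodes with fewer than k surviving neighbors until none remain below threshold. A minimal sketch on a random geometric graph as a stand-in for a spatially embedded network (the node count, connection radius, and k are illustrative choices, not the paper's models):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Spatially embedded network: n nodes in the unit square, linked when
# their Euclidean distance falls below radius r.
n, r, k = 300, 0.12, 3
pos = rng.random((n, 2))
adj = {i: set() for i in range(n)}
for i, j in combinations(range(n), 2):
    if np.linalg.norm(pos[i] - pos[j]) < r:
        adj[i].add(j)
        adj[j].add(i)

# k-core pruning: iteratively remove nodes whose surviving degree
# drops below k; the fixed point is the k-core.
alive = set(adj)
changed = True
while changed:
    changed = False
    for v in list(alive):
        if len(adj[v] & alive) < k:
            alive.discard(v)
            changed = True

print(f"{len(alive)} of {n} nodes remain in the {k}-core")
```

Sweeping k (or diluting links, as in percolation studies) and tracking the surviving fraction is how one would then look for the continuous versus explosive transitions discussed above.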

Why Temporal Networks Are More Controllable: Link Weight Variation Offers Superiority