Publications

Journal papers

1. X.-Y. Zhang#,*, H. Lin#, Z. Deng, M. Siegel, E. K. Miller and G. Yan. Communications Biology, 2026. (* corresponding author)
While neural trajectories often appear as low-dimensional manifolds at a global scale, they are locally highly tangled. Much like the fibers of a rope, what appears as a smooth curve from afar reveals intricate interweaving upon closer inspection. We show that capturing these fine-grained features requires high-dimensional embeddings and nonlinear transformations, which enable our ANN model to spontaneously align with the functional specialization of brain regions without any prior functional knowledge.
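
To make the local-versus-global contrast concrete, here is a minimal sketch (not the paper's analysis pipeline; the trajectory and all parameters are invented for illustration) comparing the PCA dimensionality of a toy trajectory measured globally and within a small neighborhood:

```python
import numpy as np

def pca_dim(X, var=0.9):
    """Number of principal components needed to explain `var` of the variance."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False) ** 2
    frac = np.cumsum(s) / s.sum()
    return int(np.searchsorted(frac, var) + 1)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 5000)

# Toy trajectory: a smooth 2-D loop (global structure) plus fast,
# small-amplitude wiggles spread across 50 extra axes (local tangling).
smooth = np.stack([np.cos(t), np.sin(t)], axis=1)
phases = rng.uniform(0, 2 * np.pi, 50)
wiggles = 0.05 * np.sin(np.outer(t * 40, rng.uniform(1, 3, 50)) + phases)
X = np.hstack([smooth, wiggles])

i = 2500
print("global PCA dimension:", pca_dim(X))             # small: the loop dominates
print("local PCA dimension :", pca_dim(X[i-50:i+50]))  # larger: wiggles dominate locally
```

Globally the smooth loop carries most of the variance, so two components suffice; inside a short window the loop is nearly flat and the many small wiggles dominate, so far more dimensions are needed.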

2. X.-Y. Zhang and C. Tang. PNAS, 2025.
Artificial neural networks can adapt to tasks while freely exploring possible solutions, similar to how humans balance curiosity with goal-driven behavior. We show that during training, such networks naturally operate near a critical state. This state emerges from a balance between randomness and task relevance and leaves measurable signatures, including stable power-law statistics in parameter updates and multiscale patterns in the geometry of the loss landscape. Our findings reveal that neural network learning is a nonequilibrium process shaped by fundamental statistical principles, offering a general explanation for scaling laws in parameter updates and guiding the design of more interpretable and efficient intelligent systems.
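
As an illustration of the kind of measurement involved, the toy sketch below trains a small two-layer network with SGD and estimates the tail exponent of the parameter-update magnitudes from a log-log survival plot. The task, architecture, and hyperparameters are all invented, and a clean power law is not guaranteed in such a toy setting; this only shows how one might look for it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: a two-layer tanh network fitting random binary labels.
X = rng.normal(size=(256, 20))
y = (rng.random(256) < 0.5).astype(float)
W1 = rng.normal(scale=0.3, size=(20, 32))
W2 = rng.normal(scale=0.3, size=(32, 1))
lr, steps = 0.05, 3000
update_norms = []

for _ in range(steps):
    idx = rng.integers(0, 256, size=32)            # minibatch
    h = np.tanh(X[idx] @ W1)
    p = 1 / (1 + np.exp(-(h @ W2)[:, 0]))          # sigmoid output
    err = (p - y[idx])[:, None] / len(idx)         # dL/dlogit for BCE loss
    gW2 = h.T @ err
    gW1 = X[idx].T @ ((err @ W2.T) * (1 - h**2))
    W1 -= lr * gW1
    W2 -= lr * gW2
    update_norms.append(lr * np.sqrt((gW1**2).sum() + (gW2**2).sum()))

# Crude tail check: slope of the log-log survival function of update sizes.
u = np.sort(update_norms)
half = len(u) // 2
surv = 1.0 - np.arange(half, len(u)) / len(u)
slope = np.polyfit(np.log(u[half:]), np.log(surv), 1)[0]
print(f"tail exponent estimate: {slope:.2f}")  # near-linear log-log tail suggests a power law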

3. X.-Y. Zhang*, S. Bobadilla-Suarez, X. Luo, M. Lemonari, S. L. Brincat, M. Siegel, E. K. Miller and B. C. Love*. Nature Communications, 2025. (* corresponding author)
Although effects were most prominent in frontal areas, representations stretched along task-relevant dimensions in all sites considered: V4, MT, lateral PFC, frontal eye fields (FEF), lateral intraparietal cortex (LIP), and inferotemporal cortex (IT). Spike timing was crucial to this code. A deep learning model was trained on the same visual input and rewards as the monkeys. Despite lacking any explicit selective-attention or other control mechanism, the model, simply by minimizing error during learning, developed representations that stretched along task-relevant dimensions, indicating that stretching is an adaptive strategy.
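
One simple way to quantify "stretching" is to compare response variance along a task-relevant axis with variance along random axes. The sketch below does this on synthetic population responses; the stretch_index function and all data are hypothetical illustrations, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def stretch_index(R, labels):
    """Variance along the task-relevant axis (difference of class means)
    relative to the mean variance along random unit axes."""
    axis = R[labels == 1].mean(0) - R[labels == 0].mean(0)
    axis /= np.linalg.norm(axis)
    var_task = np.var(R @ axis)
    rand = rng.normal(size=(200, R.shape[1]))
    rand /= np.linalg.norm(rand, axis=1, keepdims=True)
    var_rand = np.var(R @ rand.T, axis=0).mean()
    return var_task / var_rand

# Fake population responses: 400 trials x 80 units, with the two task
# conditions separated (i.e., "stretched") along one direction.
R = rng.normal(size=(400, 80))
labels = np.repeat([0, 1], 200)
R[labels == 1, 0] += 2.0   # inject task-relevant separation along unit axis 0
print(f"stretch index: {stretch_index(R, labels):.1f}")  # >> 1 when stretched
```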

4. X.-Y. Zhang#,*, Y. Yao#, Z. Han and G. Yan. Communications Physics, 2025. (* corresponding author)
Most real-world complex networks are spatially embedded, with nodes positioned in physical space. In such systems, distance-based connectivity shapes not only the efficiency of information transmission but also the network's robustness and vulnerability. Here, we systematically examined how spatial distance influences network robustness using k-core percolation across two models of long-range connectivity. For structural connected components, we found that long-range connectivity can trigger explosive phase transitions, though with a delayed threshold. For spatial neighborhoods, two core phenomena emerged: spatial diffusion and clustering. Across both connectivity models, spatial neighborhoods consistently exhibited self-organized criticality throughout the percolation process.
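
The sketch below illustrates the general setup rather than the paper's specific models: a random geometric graph augmented with distance-penalized long-range links (a toy kernel), followed by k-core percolation under random node removal, using networkx:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
n, k = 1000, 3

# Spatially embedded network: nodes on the unit square, connected when
# closer than a radius, plus a few distance-penalized long-range links.
G = nx.random_geometric_graph(n, radius=0.06, seed=3)
pos = nx.get_node_attributes(G, "pos")
for _ in range(2 * n):
    u, v = rng.integers(0, n, size=2)
    if u != v and not G.has_edge(u, v):
        d = np.hypot(*(np.array(pos[u]) - np.array(pos[v])))
        if rng.random() < d ** -1.5 / 100:   # P(link) ~ d^-1.5 (toy kernel)
            G.add_edge(u, v)

# k-core percolation: remove a random fraction f of nodes, then measure
# the size of the surviving k-core.
nodes = list(G.nodes)
for f in [0.0, 0.2, 0.4, 0.6, 0.8]:
    removed = rng.choice(nodes, size=int(f * n), replace=False)
    H = G.copy()
    H.remove_nodes_from(removed)
    print(f"f={f:.1f}  k-core size: {nx.k_core(H, k=k).number_of_nodes()}")
```

Sweeping the removal fraction and watching the k-core size collapse is the basic percolation readout; the long-range kernel and its parameters here are placeholders for the two connectivity models studied in the paper.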

5. Geometric Scaling Law in Real Neuronal Networks