The Neuroscience Gateway (NSG) has been serving the computational neuroscience community since early 2013. Its initial goal was to reduce the technical and administrative barriers that neuroscientists face in accessing and using the high performance computing (HPC) resources needed for large-scale neuronal modeling projects. For this purpose, NSG provided tools and software that require, and run efficiently on, HPC resources available through the US XSEDE (Extreme Science and Engineering Discovery Environment) program, which coordinates usage of academic supercomputers. Since around 2017, experimentalists such as cognitive neuroscientists, psychologists, and biomedical researchers have begun to use NSG for neuroscience data processing, analysis, and machine learning work. Data processing workloads are better suited to high throughput computing (HTC) resources, which accommodate the single-core jobs typically run to process the data sets of individual subjects. Machine learning (ML) workloads require GPUs for well-known ML frameworks such as TensorFlow. NSG is adapting to the needs of experimental neuroscientists by providing HTC resources, in addition to the HPC resources with which it has successfully served the computational neuroscience community for many years. The data processing focused work of experimentalists also requires NSG to add various data functionalities, such as the ability to transfer and store large data sets on NSG, validate data, let multiple users process the same data, publish final data products, and visualize and search data. These features are currently being added to NSG. Separately, there is demand from the neuroscience community to make NSG an environment where neuroscience tool developers can test, benchmark, and scale their newly developed tools and eventually disseminate them via NSG to neuroscience users.
The poster will describe NSG from its beginning and how it is evolving to meet the future needs of the neuroscience community: (i) NSG has until now successfully served primarily the computational neuroscience community, as well as some data processing focused neuroscience researchers. (ii) New features are being added to make NSG a suitable and efficient dissemination environment for lab-developed neuroscience tools. These will allow tool developers to disseminate their lab-developed tools on NSG while taking advantage of the capabilities NSG has provided for the last seven years: a growing user base, an easy user interface, an open environment, the ability to access and run jobs on powerful compute resources, free supercomputer time, a well-established training and outreach program, and a functioning user support system. Together, these well-functioning features make NSG an ideal environment for the dissemination and use of lab-developed computational and data processing neuroscience tools. (iii) NSG is being enhanced for more seamless access to HTC resources provided by the Open Science Grid (OSG) and commercial clouds. This will allow data processing and machine learning oriented workloads to take advantage of HTC and cloud resources, including GPUs. (iv) New data management features are being added to NSG, including the ability to transfer and upload large data sets, validate uploaded data, and share and publish data.
Despite the wide variety of available models of the cerebral cortex, a unified understanding of cortical structure, dynamics, and function at different scales is still missing. Key to progress in this endeavor will be bringing the different accounts together into unified models. We aim to provide a stepping stone in this direction by developing large-scale spiking neuronal network models of primate cortex that reproduce a combination of microscopic and macroscopic findings on cortical structure and dynamics. A first model describes resting-state activity in all vision-related areas of one hemisphere of macaque cortex [1,2], representing each of the 32 areas with a 1 mm² microcircuit with the full density of neurons and synapses. Comprising about 4 million leaky integrate-and-fire neurons and 24 billion synapses, it is simulated on the Jülich supercomputers. The model has recently been ported to NEST 3, greatly reducing the construction time. The inter-area connectivity is based on axonal tracing and predictive connectomics. Reproduced findings include the spectrum and rate distribution of V1 spiking activity, feedback propagation of activity across the visual hierarchy, and a pattern of functional connectivity between areas as measured with fMRI. The model is available open-source [ -6.github.io/multi-area-model/] and uses the tool Snakemake for formalizing the workflow from the experimental data to simulation, analysis, and visualization. It serves as a platform for further developments, including an extension with motor areas for studying visuo-motor interactions, incorporating function using a learning-to-learn framework, and creating an analogous model of human cortex. It is our hope that this work will contribute to an increasingly unified understanding of cortical structure, dynamics, and function.
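To illustrate the neuron model underlying the network described above, the following is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron integrated with the forward-Euler method. It is not taken from the multi-area model's code (which uses the NEST simulator at scale); the parameter values and function name are illustrative assumptions chosen for demonstration only.

```python
# Minimal, self-contained LIF neuron sketch (illustrative parameters only;
# the actual multi-area model runs millions of such neurons in NEST).

def simulate_lif(i_ext, t_sim_ms=100.0, dt_ms=0.1,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m_ms=10.0, r_m=0.1):
    """Integrate dV/dt = (-(V - V_rest) + R_m * I_ext) / tau_m
    with forward Euler; return the list of spike times in ms."""
    steps = int(t_sim_ms / dt_ms)
    v = v_rest
    spike_times = []
    for step in range(steps):
        dv = (-(v - v_rest) + r_m * i_ext) * dt_ms / tau_m_ms
        v += dv
        if v >= v_thresh:              # threshold crossing: emit spike, reset
            spike_times.append(step * dt_ms)
            v = v_reset
    return spike_times

# Constant suprathreshold drive produces regular spiking;
# zero drive leaves the membrane at rest and produces no spikes.
spikes = simulate_lif(i_ext=300.0)
print(len(spikes), "spikes in 100 ms")
```

With the steady-state potential V_rest + R_m·I_ext above threshold, the neuron fires periodically; in the full model, the external drive is replaced by synaptic input from tens of thousands of presynaptic neurons.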