Motion correction of fMRI data is a widely used step prior to data analysis. In this study, a comparison of the motion correction tools provided by several leading fMRI analysis software packages was performed, including AFNI, AIR, BrainVoyager, FSL, and SPM2. Comparisons were performed using data from typical human studies as well as phantom data. The identical reconstruction, preprocessing, and analysis steps were used on every data set, except that motion correction was performed using various configurations from each software package. Each package was studied using default parameters, as well as parameters optimized for speed and accuracy. Forty subjects performed a Go/No-go task (an event-related design that investigates inhibitory motor response) and an N-back task (a block-design paradigm investigating working memory). The human data were analyzed by extracting a set of general linear model (GLM)-derived activation results and comparing the effect of motion correction on thresholded activation cluster size and maximum t value. In addition, a series of simulated phantom data sets were created with known activation locations, magnitudes, and realistic motion. Results from the phantom data indicate that AFNI and SPM2 yield the most accurate motion estimation parameters, while AFNI's interpolation algorithm introduces the least smoothing. AFNI is also the fastest of the packages tested. However, these advantages did not produce noticeably better activation results in motion-corrected data from typical human fMRI experiments. Although differences in performance between packages were apparent in the human data, no single software package produced dramatically better results than the others. The "accurate" parameters showed virtually no improvement in cluster t values compared to the standard parameters. While the "fast" parameters did not result in a substantial increase in speed, they did not degrade the cluster results very much either. 
The phantom and human data indicate that motion correction can be a valuable step in the data processing chain, yielding improvements of up to 20% in the magnitude and up to 100% in the cluster size of detected activations, but the choice of software package does not substantially affect this improvement.
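At each voxel, the GLM-based activation analysis described above reduces to fitting a design matrix and computing a contrast t value. A minimal sketch with simulated data (the toy block design and noise model are hypothetical, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-voxel time series: boxcar task regressor plus noise.
n_scans = 100
task = np.tile(np.r_[np.zeros(10), np.ones(10)], 5)   # toy block design
y = 2.0 * task + rng.normal(0, 1, n_scans)            # simulated BOLD signal

# GLM: y = X @ beta + error, with an intercept column.
X = np.column_stack([task, np.ones(n_scans)])
beta, res_ss, *_ = np.linalg.lstsq(X, y, rcond=None)

# t value for the task regressor: contrast estimate / its standard error.
dof = n_scans - X.shape[1]
sigma2 = res_ss[0] / dof
c = np.array([1.0, 0.0])                              # contrast: task effect
se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
t_value = (c @ beta) / se
```

Thresholding such t values across voxels and measuring the surviving cluster sizes is what the comparison above evaluates.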
We present a novel data smoothing and analysis framework for cortical thickness data defined on the cortical manifold of the brain. Gaussian kernel smoothing, which weights neighboring observations according to their 3D Euclidean distance, has been widely used in 3D brain images to increase the signal-to-noise ratio. When the observations lie on a convoluted brain surface, however, it is more natural to assign the weights based on the geodesic distance along the surface. We therefore develop a framework for geodesic distance-based kernel smoothing and statistical analysis on cortical manifolds. As an illustration, we apply our methods to detect regions of abnormal cortical thickness in 16 high-functioning autistic children, using random field-based multiple comparison correction that utilizes the new smoothing technique.
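The geodesic-kernel idea can be sketched on a toy mesh: replace Euclidean distances with graph-geodesic distances before forming the Gaussian weights. A minimal illustration (the 5-vertex mesh, edge lengths, and thickness values are hypothetical; real implementations compute geodesics on the triangulated cortical surface):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Hypothetical 5-vertex surface mesh given as an edge list with edge lengths;
# on a real cortical mesh these come from the triangulation.
n = 5
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (0, 4, 4.0)]
rows, cols, lens = zip(*edges)
adj = csr_matrix((lens, (rows, cols)), shape=(n, n))
adj = adj + adj.T                      # undirected graph

# Geodesic (graph) distances between all vertex pairs.
dist = shortest_path(adj, directed=False)

# Gaussian kernel weights based on geodesic, not Euclidean, distance.
sigma = 1.0
w = np.exp(-dist**2 / (2 * sigma**2))
w /= w.sum(axis=1, keepdims=True)      # normalize rows to sum to 1

thickness = np.array([2.0, 2.5, 3.0, 2.5, 2.0])   # per-vertex measurements
smoothed = w @ thickness               # geodesic-kernel-smoothed thickness
```

Note that vertices 0 and 4 are close in 3D on a folded surface but far along the mesh; the geodesic weights correctly treat them as distant.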
Sensitivity, specificity, and reproducibility are vital to interpret neuroscientific results from functional magnetic resonance imaging (fMRI) experiments. Here we examine the scan-rescan reliability of the percent signal change (PSC) and parameters estimated using Dynamic Causal Modeling (DCM) in scans taken in the same scan session, less than 5 min apart. We find fair to good reliability of PSC in regions that are involved with the task, and fair to excellent reliability with DCM. Also, the DCM analysis uncovers group differences that were not present in the analysis of PSC, which implies that DCM may be more sensitive to the nuances of signal changes in fMRI data.
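Percent signal change itself is a simple quantity: the task-epoch mean relative to the baseline mean. A minimal sketch (the ROI time series and task mask are hypothetical):

```python
import numpy as np

def percent_signal_change(timeseries, task_mask):
    """PSC of an ROI time series relative to its baseline mean."""
    baseline = timeseries[~task_mask].mean()
    task = timeseries[task_mask].mean()
    return 100.0 * (task - baseline) / baseline

# Hypothetical ROI time series: baseline level 100, task blocks at 102.
ts = np.array([100.0, 100.0, 102.0, 102.0, 100.0, 100.0, 102.0, 102.0])
mask = np.array([False, False, True, True, False, False, True, True])
psc = percent_signal_change(ts, mask)   # → 2.0
```

Scan-rescan reliability is then assessed by comparing such PSC values (or DCM parameter estimates) across the two scans.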
Although there are many imaging studies on traditional ROI-based amygdala volumetry, there are very few studies on modeling amygdala shape variations. This paper presents a unified computational and statistical framework for modeling amygdala shape variations in a clinical population. The weighted spherical harmonic representation is used to parameterize, smooth out, and normalize amygdala surfaces. The representation is subsequently used as an input for multivariate linear models accounting for nuisance covariates such as age and brain-size differences, using the SurfStat package, which avoids the complexity of specifying design matrices. The methodology has been applied to quantify abnormal local amygdala shape variations in 22 high-functioning autistic subjects.
We present a new tensor-based morphometric framework that quantifies cortical shape variations using a local area element. The local area element is computed from the Riemannian metric tensors, which are obtained from the smooth functional parametrization of a cortical mesh. For the smooth parametrization, we have developed a novel weighted spherical harmonic (SPHARM) representation, which generalizes the traditional SPHARM as a special case. For a specific choice of weights, the weighted-SPHARM is shown to be the least squares approximation to the solution of an isotropic heat diffusion on a unit sphere. The main aims of this paper are to present the weighted-SPHARM and to show how it can be used in the tensor-based morphometry. As an illustration, the methodology has been applied in the problem of detecting abnormal cortical regions in the group of high functioning autistic subjects.
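For one specific choice of weights, the weighted-SPHARM takes the following form (a sketch of the standard construction, with $\sigma$ the kernel bandwidth; setting $\sigma = 0$ recovers the traditional SPHARM as the special case mentioned above):

```latex
\hat{f}(\theta,\varphi)
  = \sum_{l=0}^{k} \sum_{m=-l}^{l}
    e^{-l(l+1)\sigma}\, f_{lm}\, Y_{lm}(\theta,\varphi),
\qquad
f_{lm} = \int_{S^2} f\, \overline{Y_{lm}}\; d\mu,
```

where the $Y_{lm}$ are the spherical harmonics. The weights $e^{-l(l+1)\sigma}$ are exactly the eigenvalue decay factors of isotropic heat diffusion on the unit sphere, which is why the truncated weighted series is the least squares approximation to the diffusion solution.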
Muscle electrical activity, or "electromyogenic" (EMG) artifact, poses a serious threat to the validity of electroencephalography (EEG) investigations in the frequency domain. EMG is sensitive to a variety of psychological processes and can mask genuine effects or masquerade as legitimate neurogenic effects across the scalp at frequencies at least as low as the alpha band (8-13 Hz). Although several techniques for correcting myogenic activity have been described, most have been subjected to only limited validation attempts. Attempts to gauge the impact of EMG correction on intracerebral source models (source "localization" analyses) are rarer still. Accordingly, we assessed the sensitivity and specificity of one prominent correction tool, independent component analysis (ICA), on the scalp and in the source space using high-resolution EEG. Data were collected from 17 participants while neurogenic and myogenic activity were independently varied. Several protocols for identifying and discarding components classified as myogenic or as non-myogenic artifact (e.g., ocular) were systematically assessed, leading to the exclusion of one-third to as much as three-quarters of the variance in the EEG. Some, but not all, of these protocols showed adequate performance on the scalp. Indeed, performance was superior to previously validated regression-based techniques. Nevertheless, ICA-based EMG correction exhibited low validity in the intracerebral source space, likely owing to incomplete separation of neurogenic from myogenic sources. Taken with prior work, this indicates that EMG artifact can substantially distort estimates of intracerebral spectral activity. Neither regression- nor ICA-based EMG correction techniques provide complete safeguards against such distortions. In light of these results, several practical suggestions and recommendations are made for intelligently using ICA to minimize EMG and other common artifacts.
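Stripped to its essentials, the ICA-based correction strategy unmixes channels into components, discards those classified as myogenic, and projects back. A toy two-channel sketch using scikit-learn's FastICA (the signals, mixing matrix, and the autocorrelation-based "classifier" are all hypothetical stand-ins for real EEG and real component-classification criteria):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000
t = np.linspace(0, 8, n_samples)

# Hypothetical sources: a 10 Hz "neurogenic" alpha rhythm and broadband
# noise standing in for myogenic (EMG) activity.
neural = np.sin(2 * np.pi * 10 * t)
emg = rng.normal(0, 1, n_samples)
S = np.column_stack([neural, emg])

# Mix into two "channels" (a toy stand-in for scalp electrodes).
A = np.array([[1.0, 0.5], [0.7, 1.0]])
X = S @ A.T

# Unmix with ICA, discard the component classified as myogenic,
# and project back to the channel space.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)          # shape (n_samples, 2)

# Toy "classification": the neural rhythm is strongly autocorrelated,
# the noise-like EMG component is not. Real pipelines use spectral
# slope, scalp topography, and similar criteria instead.
lag1 = [np.corrcoef(c[:-1], c[1:])[0, 1] for c in components.T]
emg_idx = int(np.argmin(lag1))             # least autocorrelated component

components[:, emg_idx] = 0.0               # discard the myogenic component
X_clean = ica.inverse_transform(components)
```

The validity question studied above is precisely whether this unmix-discard-remix cycle removes only EMG, since any neurogenic signal leaking into the discarded component is lost with it.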
EEG and EEG source estimation are susceptible to electromyographic (EMG) artifacts generated by the cranial muscles. EMG can mask genuine effects or masquerade as a legitimate effect, even in low frequencies such as alpha (8-13 Hz). Although regression-based correction has been used previously, only cursory attempts at validation exist, and its utility for source-localized data is unknown. To address this, EEG was recorded from 17 participants while neurogenic and myogenic activity were factorially varied. We assessed the sensitivity and specificity of four regression-based techniques: between-subjects, between-subjects using difference scores, within-subject condition-wise, and within-subject epoch-wise, both on the scalp and in data modeled using the LORETA algorithm. Although the within-subject epoch-wise technique showed superior performance on the scalp, no technique succeeded in the source space. Aside from validating the novel epoch-wise methods on the scalp, we highlight methods requiring further development.
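In its within-subject epoch-wise form, regression-based correction amounts to regressing epoch-wise power on a reference EMG measure and keeping the residuals. A minimal sketch with simulated epoch-wise power values (all quantities here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs = 200

# Hypothetical epoch-wise alpha-band power at one electrode: genuine
# activity contaminated additively by myogenic (EMG) power.
true_alpha = rng.normal(10.0, 1.0, n_epochs)
emg_power = rng.gamma(2.0, 1.0, n_epochs)
observed = true_alpha + 0.8 * emg_power      # contaminated measurement

# Within-subject epoch-wise correction: regress observed power on the
# reference EMG measure across epochs and keep the residuals (plus the
# subject's mean level, so corrected values stay on the original scale).
X = np.column_stack([np.ones(n_epochs), emg_power])
beta, *_ = np.linalg.lstsq(X, observed, rcond=None)
corrected = observed - beta[1] * (emg_power - emg_power.mean())
```

By construction the corrected values are uncorrelated with the EMG reference; the open question examined above is whether this scalp-level decontamination survives source localization.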
We present a novel weighted Fourier series (WFS) representation for cortical surfaces. The WFS representation is a data smoothing technique that provides an explicit smooth functional estimate of the unknown cortical boundary as a linear combination of basis functions. The basic properties of the representation are investigated in connection with a self-adjoint partial differential equation and the traditional spherical harmonic (SPHARM) representation. To reduce steep computational requirements, a new iterative residual fitting (IRF) algorithm is developed. Its computational and numerical implementation issues are discussed in detail. The computer codes are also available at http://www.stat.wisc.edu/~mchung/softwares/weighted.SPHARM/weighted-SPHARM.html. As an illustration, the WFS is applied in quantifying the amount of gray matter in a group of high functioning autistic subjects. Within the WFS framework, cortical thickness and gray matter density are computed and compared.
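A one-dimensional analogue conveys the flavor of the WFS: smooth a noisy periodic signal by attenuating its Fourier coefficients with heat-kernel weights exp(-l²σ). The signal, bandwidth, and noise level below are illustrative only; the actual representation uses spherical harmonics on the cortical surface and the IRF algorithm for fitting:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Hypothetical noisy boundary measurement on a circle (a 1D analogue of
# cortical data on the sphere).
signal = np.sin(x) + 0.3 * np.sin(3 * x)
noisy = signal + rng.normal(0, 0.5, n)

# Weighted Fourier series: weight frequency l by exp(-l^2 * sigma),
# the eigenvalue decay of heat diffusion on the circle.
sigma = 0.1
coeffs = np.fft.rfft(noisy)
l = np.arange(coeffs.size)
smoothed = np.fft.irfft(coeffs * np.exp(-l**2 * sigma), n)

# The weighted series should be closer to the true signal than the raw data.
err_raw = np.sqrt(np.mean((noisy - signal) ** 2))
err_wfs = np.sqrt(np.mean((smoothed - signal) ** 2))
```

The bandwidth σ plays the same bias-variance role as the kernel width in surface-based smoothing: larger values suppress more noise but attenuate higher-frequency shape detail.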