<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Dave Warren's Cog Neuro Blog]]></title><description><![CDATA[This is my professional blog covering cog neuro topics and anything else that I think is relevant to what I do.]]></description><link>https://david-e-warren.me/blog/</link><image><url>http://david-e-warren.me/blog/favicon.png</url><title>Dave Warren&apos;s Cog Neuro Blog</title><link>https://david-e-warren.me/blog/</link></image><generator>Ghost 1.16</generator><lastBuildDate>Wed, 22 Apr 2026 06:05:01 GMT</lastBuildDate><atom:link href="https://david-e-warren.me/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Anaconda Python on air-gapped computers]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Installing the Anaconda Python distribution on an air-gapped (i.e., non-networked) computer is possible but not trivial.  These steps worked for me.</p>
<h3 id="1createalocalenvironmentunderanacondathatcontainsthedesiredpackages">1. Create a local environment under Anaconda that contains the desired packages</h3>
<pre><code>$ conda create --name myenv python=3.6.7
$ conda activate myenv
$ conda install pillow pip
</code></pre>
<p>Choice</p></div>]]></description><link>https://david-e-warren.me/blog/anaconda-python-on-air-gapped-computers/</link><guid isPermaLink="false">5dfa81caf1adef0491ee3217</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Wed, 18 Dec 2019 20:15:28 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Installing the Anaconda Python distribution on an air-gapped (i.e., non-networked) computer is possible but not trivial.  These steps worked for me.</p>
<h3 id="1createalocalenvironmentunderanacondathatcontainsthedesiredpackages">1. Create a local environment under Anaconda that contains the desired packages</h3>
<pre><code>$ conda create --name myenv python=3.6.7
$ conda activate myenv
$ conda install pillow pip
</code></pre>
<p>Choice of Python version, packages, etc., should match your needs.  Ensure that your local installation works as desired before proceeding.</p>
<h3 id="2writealistofpackagesinstalledintheenvironmenttoatextfilespecfile">2. Write a list of packages installed in the environment to a text file (spec file)</h3>
<pre><code>$ conda list --explicit &gt; /path/for/spec_file.txt
</code></pre>
<h3 id="3retrieveallofthepackagesfromtheremoteserverstolocalstorageegpkgsdirectory">3. Retrieve all of the packages from the remote servers to local storage (e.g., ./pkgs directory)</h3>
<pre><code>$ cat /path/for/spec_file.txt | grep &quot;^http&quot; | xargs -I xxx wget -P /path/for/pkgs xxx 
</code></pre>
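<p>If <code>wget</code> isn't available on your networked machine, the same retrieval can be done with Python's standard library.  This is just a sketch of an alternative, not part of the original workflow; the function name and layout are mine:</p>

```python
import urllib.request
from pathlib import Path

# Sketch: download every package URL listed in a conda spec file to a local
# directory, mirroring the wget/xargs pipeline above. Comment lines and the
# @EXPLICIT marker are skipped.
def fetch_packages(spec_file, outdir="pkgs"):
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    for line in Path(spec_file).read_text().splitlines():
        line = line.strip()
        if line.startswith("http"):
            name = line.rsplit("/", 1)[-1]  # package filename from the URL
            urllib.request.urlretrieve(line, str(out / name))
```

<p>Calling <code>fetch_packages('/path/for/spec_file.txt')</code> fills the <code>pkgs</code> directory just as the pipeline above does.</p>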
<h3 id="4editthespecfilesothatitreferencesyourlocalpkgsdirectory">4. Edit the spec file so that it references your local pkgs directory</h3>
<p>I found that the following format and directory structure made things simpler:</p>
<pre><code>$ cat spec_file.txt

# This file may be used to create an environment using:
# $ conda create --name &lt;env&gt; --file &lt;this file&gt;
# platform: linux-64
@EXPLICIT
pkgs/pillow-6.2.1-py36hd70f55b_1.tar.bz2
pkgs/python-3.6.7-h357f687_1006.tar.bz2
pkgs/pip-19.3.1-py36_0.tar.bz2
</code></pre>
<pre><code>$ tree .

.
├── pkgs
│   ├── pillow-6.2.1-py36hd70f55b_1.tar.bz2
│   ├── pip-19.3.1-py36_0.tar.bz2
│   └── python-3.6.7-h357f687_1006.tar.bz2
└── spec_file.txt
</code></pre>
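<p>Rather than editing the spec file by hand, the URL-to-path rewrite can be scripted.  Here's a sketch with <code>sed</code>, shown against a one-package version of the example spec above (adjust the pattern if your packages use a different extension):</p>

```shell
# Example spec file (as produced by step 2), then the rewrite:
cat > spec_file.txt <<'EOF'
# platform: linux-64
@EXPLICIT
https://conda.anaconda.org/main/linux-64/pillow-6.2.1-py36hd70f55b_1.tar.bz2
EOF

# Rewrite each remote package URL to a relative pkgs/ path; comment lines
# and the @EXPLICIT marker pass through unchanged.
sed -E 's|^https?://.*/([^/]+\.tar\.bz2)$|pkgs/\1|' spec_file.txt > spec_file_local.txt
cat spec_file_local.txt
```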
<h3 id="5downloadthelatestminiconda">5. Download the latest Miniconda</h3>
<p>From <a href="https://docs.conda.io/en/latest/miniconda.html">https://docs.conda.io/en/latest/miniconda.html</a></p>
<h3 id="6putspecfilepackagesandminicondainstalleronausbdrive">6. Put spec file, packages, and Miniconda installer on a USB drive</h3>
<h3 id="7copyfilesfromusbdrivetoairgappedcomputer">7. Copy files from USB drive to air-gapped computer</h3>
<h3 id="8installminiconda">8. Install Miniconda</h3>
<p>Use the Windows installer or <code>bash Miniconda3-latest-Linux-x86_64.sh</code> on Linux.<br>
Depending on your installation choices, you may need to open the Anaconda prompt (Windows) or run <code>source ~/.bashrc</code> (Linux) to access the <code>conda</code> command in the next step.</p>
<h3 id="9createyournewenvironmentwiththefileoptionpointingtoyourspecfile">9. Create your new environment with the &quot;--file&quot; option pointing to your spec file</h3>
<pre><code>$ conda create --name myenv --file /path/to/spec_file.txt
</code></pre>
<p>Note that the packages you downloaded must be located at the paths indicated in the spec file (relative paths are fine).</p>
<h3 id="10switchtoandtestthenewenvironment">10. Switch to and test the new environment</h3>
<pre><code>$ conda activate myenv
$ python test.py
</code></pre>
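<p>The test script can be anything that exercises your installed packages.  A minimal <code>test.py</code> sketch for the example environment above (this file is my invention, not something Anaconda provides):</p>

```python
# Minimal smoke test for the offline environment: import Pillow and do a
# trivial round-trip through an in-memory image.
from PIL import Image

img = Image.new("RGB", (8, 8), color="red")
assert img.size == (8, 8)
assert img.getpixel((0, 0)) == (255, 0, 0)
print("Environment OK")
```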
</div>]]></content:encoded></item><item><title><![CDATA[Recording physiological signals during fMRI scans]]></title><description><![CDATA[<div class="kg-card-markdown"><p><strong>TL;DR</strong> Attach the physio devices, switch on physio recording in the <a href="https://www.cmrr.umn.edu/multiband/">CMRR Multiband EPI Sequences</a> <code>Sequence/Special</code> card by selecting <code>DICOM</code> from the drop-down menu, record your EPI data, and enjoy.</p>
<h2 id="introduction">Introduction</h2>
<p>Physiological signals such as heart rate and respiration can influence the fMRI BOLD signal (more below), so</p></div>]]></description><link>https://david-e-warren.me/blog/physiological-data-from-mri/</link><guid isPermaLink="false">5d9ce959f1adef0491ee3209</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Thu, 10 Oct 2019 22:28:38 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><strong>TL;DR</strong> Attach the physio devices, switch on physio recording in the <a href="https://www.cmrr.umn.edu/multiband/">CMRR Multiband EPI Sequences</a> <code>Sequence/Special</code> card by selecting <code>DICOM</code> from the drop-down menu, record your EPI data, and enjoy.</p>
<h2 id="introduction">Introduction</h2>
<p>Physiological signals such as heart rate and respiration can influence the fMRI BOLD signal (more below), so recording these signals can be useful for later processing and analysis.  At the suggestion of a <a href="https://www.jonathanpower.net/">friendly expert</a>, I explored how to record physiological signals at our institution's research MRI facility.</p>
<p>Happily, this proved to be pretty straightforward on our Siemens Prisma.  However, it took me a while to find the necessary information online, hence this post.</p>
<h2 id="record">Record</h2>
<p>If you're using a recent build of the <a href="https://www.cmrr.umn.edu/multiband/">CMRR Multiband EPI Sequences</a>, you're 99% of the way there.  The hardworking CMRR experts have baked physiological recording into the sequence software itself; all you need to do is activate it.  At the scanner console, browse to the <code>Sequence/Special</code> card and look to the top-right for a drop-down menu.</p>
<p><img src="https://david-e-warren.me/blog/content/images/2019/10/cmrr_mb_epi_special.png" alt="cmrr_mb_epi_special"></p>
<p>Lifting flagrantly from the <a href="https://www.cmrr.umn.edu/multiband/Multi-Band_C2P_Instructions_R016a.pdf">sequence manual</a>:</p>
<blockquote>
<p>Physio. recording: Control recording of physiological signals (cardiac, respiration, ECG, external) to text or encoded DICOM files. Text files will be stored in C:\MedCom\Log\Physio\ on the host computer (MRC). Legacy uses the classic CPmuSequence/IdeaCmdTool logging facility (this is the only option available on VAxx/VBxx systems and VD11x; it is known to be unreliable on VD11x/VD13x/VE11x). DICOM and File use the new online/ICE logging available since VD13A, which is recommended for use when available. File writes the log data to individual text files for each signal (this option was named Online in previous versions). DICOM embeds these logs in a special DICOM “image” stored in the database (sample Matlab code for reading the special DICOM files is provided in the GitHub repository). Multiple enables both Legacy and File options (not recommended for routine use).</p>
</blockquote>
<p>To test this, we put a volunteer in the scanner, hooked up the wireless respiratory and pulse-ox sensors, selected the <code>DICOM</code> option from the drop-down menu on the <code>Special</code> card, and started recording EPI data.  Lo and behold, we found a DICOM file associated with the study.  Success!<sup>1</sup></p>
<h2 id="extract">Extract</h2>
<p>With the physio data saved to a DICOM, the next step was to extract the recordings.  The CMRR repo has a Matlab <a href="https://github.com/CMRR-C2P/MB/blob/master/extractCMRRPhysio.m">script</a> for this purpose, but I wanted to get a Python version working.  This was due partly to Python chauvinism and partly to wanting a better understanding of what extraction actually does.</p>
<p>My draft implementation is available in a <a href="https://gist.github.com/DavidEWarrenPhD/ceb344408440ea396a1fece839d0c9ce">GitHub gist</a>.  The script takes the path to a physio DICOM as an argument (along with an optional output directory).  The physiological recordings are then extracted to their original filenames in the output directory (the DICOM's directory if none is supplied).</p>
<pre><code class="language-physio_convert.py">usage: physio_convert.py [-h] [--path PATH] [--outdir OUTDIR]

optional arguments:
  -h, --help       show this help message and exit
  --path PATH      path of physio DICOM file
  --outdir OUTDIR  directory for output
</code></pre>
<p>Depending on the data you collected at scan time, running the script should produce two or three of the following (the <code>*</code> is shared date-time and scan information):</p>
<table>
<thead>
<tr>
<th>Signal</th>
<th>Filename</th>
<th style="text-align:left">Sampling Rate (Hz)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pulse</td>
<td><code>Physio_*_PULS.log</code></td>
<td style="text-align:left">200</td>
</tr>
<tr>
<td>Respiration</td>
<td><code>Physio_*_RESP.log</code></td>
<td style="text-align:left">50</td>
</tr>
<tr>
<td>EPI</td>
<td><code>Physio_*_Info.log</code></td>
<td style="text-align:left">Per parameters<sup>2</sup></td>
</tr>
</tbody>
</table>
<h2 id="inspect">Inspect</h2>
<p>The recordings themselves may require additional attention prior to use (this will depend what you want to do with them).  As a first step, I wanted to visualize the data to ensure that we got something reasonable.</p>
<p>All signals are timestamped in clock &quot;tics&quot;, which on our scanner correspond to a 400 Hz clock (2.5 ms per tic). Although every signal is stamped against this shared 400-Hz &quot;heartbeat&quot;, different signals are sampled and recorded at different rates.  On our Prisma, we observed the sampling rates reported in the table above.</p>
<p>Aligning the signals is relatively painless using Python and Pandas.  Plotting the data, we see the expected sinusoids for pulse (blue) and respiration (orange) as well as a steady pulse for EPI volumes (green).  😎</p>
<p><img src="https://david-e-warren.me/blog/content/images/2019/10/physio.png" alt="physio"></p>
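<p>For reference, here's a sketch of the alignment step with Pandas.  The column names are assumptions for illustration, not the exact headers of the CMRR logs, so check them against your extracted files:</p>

```python
import pandas as pd

# Sketch: put pulse and respiration on a common time base using the shared
# TIC clock (400 Hz, i.e. 2.5 ms per tic). Column names are assumptions.
def align(puls: pd.DataFrame, resp: pd.DataFrame) -> pd.DataFrame:
    p = puls.set_index("ACQ_TIME_TICS")["VALUE"].rename("pulse")
    r = resp.set_index("ACQ_TIME_TICS")["VALUE"].rename("resp")
    both = pd.concat([p, r], axis=1).sort_index()  # outer join on tics
    both.index = both.index * 0.0025               # tics -> seconds
    # Interpolate the slower signal onto the merged time grid.
    return both.interpolate(method="index")
```

<p>Interpolating on the tic-derived index puts the 50 Hz respiration trace on the same time base as the 200 Hz pulse trace.</p>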
<p>Regarding use of the physio recordings as covariates for fMRI processing and analysis, I'll defer to the expertise of other investigators.  Some useful resources might include:</p>
<ul>
<li>Glover, G. H., Li, T. Q., &amp; Ress, D. (2000). Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magnetic Resonance in Medicine, 44(1), 162–167.</li>
<li>Birn, R. M., Diamond, J. B., Smith, M. A., &amp; Bandettini, P. A. (2006). Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI. NeuroImage, 31(4), 1536–1548. <a href="https://doi.org/10.1016/j.neuroimage.2006.02.048">https://doi.org/10.1016/j.neuroimage.2006.02.048</a></li>
<li>Power, J. D., Lynch, C. J., Silver, B. M., Dubin, M. J., Martin, A., &amp; Jones, R. M. (2019). Distinctions among real and apparent respiratory motions in human fMRI data. NeuroImage, 201, 116041. <a href="https://doi.org/10.1016/j.neuroimage.2019.116041">https://doi.org/10.1016/j.neuroimage.2019.116041</a></li>
</ul>
<h2 id="acknowledgments">Acknowledgments</h2>
<p>Thanks to the CMRR team for making this easy, to our scanner tech Lisa for her patience, and to Jonathan for reassurance that getting physio recordings from new scanners isn't too arduous.</p>
<p>The Aguirre lab has a <a href="https://cfn.upenn.edu/aguirre/wiki/public:pulse-oximetry_during_fmri_scanning">resource page</a> that provided useful hints.</p>
<h2 id="footnotes">Footnotes</h2>
<p><sup>1</sup> Acquiring physio recording without the CMRR MB-EPI sequence is left as an exercise for the reader.<br>
<sup>2</sup> The timing information in the <code>Info</code> file includes start/stop times for each volume, each slice, and (if used) each echo.  Having this information might be useful for various purposes, including converting DICOM to NIFTI accurately, although tools such as <code>dcm2niix</code> should have the necessary timing information built in.</p>
</div>]]></content:encoded></item><item><title><![CDATA[AFNI Bootcamp: Hardware and Software]]></title><description><![CDATA[<div class="kg-card-markdown"><p>My wife and I were pleased to attend the AFNI Bootcamp held in Lincoln, Nebraska a couple of weeks ago, and I'll write more about that elsewhere.  For now, I'll just provide some notes on hardware and software for prospective campers.</p>
<h3 id="hardware">Hardware</h3>
<p>I wanted to run AFNI natively under Linux</p></div>]]></description><link>https://david-e-warren.me/blog/afni-bootcamp-hardware-and-software/</link><guid isPermaLink="false">59f8043d20cec1059d2f8763</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Sun, 21 Aug 2016 00:12:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>My wife and I were pleased to attend the AFNI Bootcamp held in Lincoln, Nebraska a couple of weeks ago, and I'll write more about that elsewhere.  For now, I'll just provide some notes on hardware and software for prospective campers.</p>
<h3 id="hardware">Hardware</h3>
<p>I wanted to run AFNI natively under Linux rather than on a virtual machine (more on that in the software section), so I bought a pair of older machines.  Happily, they performed quite well.  I purchased two used <a href="http://www.thinkwiki.org/wiki/Category:T420">Lenovo Thinkpad T420</a> laptops on a popular online auction site for about <a href="http://ktgee.net/post/49423737148/thinkpad-guide#intel">$225 each</a>.  These machines aren't speed demons, but they more than met the challenges presented by a week of AFNI<sup>1</sup>.  Here are the rough specs for the laptops:</p>
<ul>
<li>Intel Core i5 (~2.5 GHz, 3MB L3, 1333 MHz FSB)</li>
<li>8 GB RAM</li>
<li>Intel integrated graphics 3000</li>
<li>160 GB SSD</li>
<li>14.0&quot; display, 1600×900 pixels</li>
</ul>
<p>Of course, when you're throwing terabytes of neuroimaging data at AFNI, you'll want as much processing power as possible, but these computers were fine for didactic purposes.</p>
<h3 id="software">Software</h3>
<p>I've never had a great experience running Linux through a VM — it works, but it's clunky and doubly so on a laptop.  So, when the computers arrived, I installed the recently-released Ubuntu 16.04 LTS distribution.  As I'd hoped, Ubuntu worked flawlessly on the Thinkpads, including conveniences such as two-finger scrolling, volume control button functionality, and the patented Thinkpad top-down keyboard light.</p>
<p>Also as expected, installing AFNI presented a few quirks and challenges.  The AFNI guide to installing on Linux is generally quite good, but I found a few inconsistencies that I'll note here.</p>
<ul>
<li><code>libmotif</code> installation: <code>libmotif4</code> is not in the default package repositories, so you'll need to add a repository to the base set.  Here's how to do that from a terminal:</li>
</ul>
<pre><code>sudo su # Become root user
echo 'deb http://cz.archive.ubuntu.com/ubuntu trusty main universe' &gt;&gt; /etc/apt/sources.list.d/extra.list
apt-get update
exit # Exit root user
sudo apt-get install -y tcsh xfonts-base python-qt4                    \
                        libmotif4 libmotif-dev motif-clients           \
                        gsl-bin netpbm gnome-tweak-tool libjpeg62      \
                        libxp6
</code></pre>
<ul>
<li>Library linking: the installation procedure describes symlinking <code>libgsl</code>, but the indicated link doesn't work.  Here's the fix:</li>
</ul>
<pre><code>sudo ln -s /usr/lib/x86_64-linux-gnu/libgsl.so.19 /usr/lib/libgsl.so.0
</code></pre>
<ul>
<li>Bootcamp files: the <a href="https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/bootcamp_stuff.html">files</a> distributed for the bootcamp lectures are intended to be installed in the user's home directory, but the instructions still describe placing the files in a <code>CD</code> subdirectory.  Basically, the following section of the main instructions is outdated:</li>
</ul>
<pre><code>curl -O https://afni.nimh.nih.gov/pub/dist/edu/data/CD.tgz
tar xvzf CD.tgz
cd CD
tcsh s2.cp.files . ~
cd ..
</code></pre>
<p>You can still use the <code>CD</code> subdirectory (I did), but the lecturers will expect things to be installed in the canonical spots.  Depending on your comfort with UNIX-like operating systems, you may appreciate simply having everything you need in your home directory.</p>
<p>Otherwise, everything went swimmingly!</p>
<p><small><sup>1</sup> This may be more of a commentary on the slowing of processor speed gains than on the stunning legacy of these laptops.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Reviewing hiatus]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Just a quick note to indicate that I'm taking a hiatus from new peer-reviewing activity for the remainder of 2016.  My peer-review burden has been substantial so far this year, and I need to reserve some time for my own professional writing.  I look forward to resuming normal peer-review activity</p></div>]]></description><link>https://david-e-warren.me/blog/reviewing-hiatus/</link><guid isPermaLink="false">59f8043d20cec1059d2f8764</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Sat, 20 Aug 2016 23:01:59 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Just a quick note to indicate that I'm taking a hiatus from new peer-reviewing activity for the remainder of 2016.  My peer-review burden has been substantial so far this year, and I need to reserve some time for my own professional writing.  I look forward to resuming normal peer-review activity in 2017.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Job opportunity: Research assistant]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I'm currently seeking well qualified, highly motivated applicants for a research assistant position in my lab at <a href="http://unmc.edu/">UNMC</a> in Omaha, NE.  For more information, visit <a href="https://unmc.peopleadmin.com/postings/27573">https://unmc.peopleadmin.com/postings/27573</a>.  Looking forward to interviewing some exceptional people!</p>
</div>]]></description><link>https://david-e-warren.me/blog/hiring-research-assistant/</link><guid isPermaLink="false">59f8043d20cec1059d2f8762</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Mon, 29 Feb 2016 16:26:43 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I'm currently seeking well qualified, highly motivated applicants for a research assistant position in my lab at <a href="http://unmc.edu/">UNMC</a> in Omaha, NE.  For more information, visit <a href="https://unmc.peopleadmin.com/postings/27573">https://unmc.peopleadmin.com/postings/27573</a>.  Looking forward to interviewing some exceptional people!</p>
</div>]]></content:encoded></item><item><title><![CDATA[New Year, New Job]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Here's the email announcement that I circulated about my new position:</p>
<blockquote>
<p>Dear friends,</p>
<p>It's a new year, and I'm pleased to report that I have accepted and started a new job: as of January 1st, I'm an assistant professor in the Department of Neurological Sciences at the University of Nebraska</p></blockquote></div>]]></description><link>https://david-e-warren.me/blog/new-year-new-job/</link><guid isPermaLink="false">59f8043d20cec1059d2f875f</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Fri, 08 Jan 2016 17:09:26 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Here's the email announcement that I circulated about my new position:</p>
<blockquote>
<p>Dear friends,</p>
<p>It's a new year, and I'm pleased to report that I have accepted and started a new job: as of January 1st, I'm an assistant professor in the Department of Neurological Sciences at the University of Nebraska Medical Center.  I'm very excited to start my career as an independent scientist in a terrific environment here at UNMC with great colleagues including our department chair, Dr. Matt Rizzo.  My new contact information follows:</p>
<blockquote>
<p>Dr. David E. Warren<br><br>
Department of Neurological Sciences<br><br>
University of Nebraska Medical Center<br><br>
988440 Nebraska Medical Center<br><br>
Omaha, NE 68198-8440<br></p>
</blockquote>
<blockquote>
<p>Phone, office: 402-559-5805<br><br>
Email: <a href="mailto:david.warren@unmc.edu">david.warren@unmc.edu</a><br></p>
</blockquote>
<p>I'm in the process of setting up my lab, and my research will continue to explore topics in memory and other cognitive domains in neurological populations using techniques such as neuropsychology, neuroimaging, and eye-tracking.  To help with these efforts, I expect to make several hires (research assistant, post doc, etc.) in the upcoming months.  Formal advertisements for those positions will follow but for now, please do keep me and UNMC in mind for promising undergraduate or graduate students who are looking to take the next step in an academic career.  UNMC and affiliate institutions in the Omaha area offer a broad range of doctoral programs in which I expect to enroll graduate students, while senior graduate students seeking post-doctoral positions should find UNMC's resources attractive.  Nebraska's neuroscience community is growing rapidly, and I'm looking forward to adding outstanding individuals to our team.</p>
<p>In closing, let me say &quot;Thank you!&quot; to everyone who helped me get to this point by contributing mentorship, training, teaching, camaraderie, or other support of any kind.  I look forward to catching up with everyone very soon.  All the best,</p>
<p>Dave</p>
</blockquote>
</div>]]></content:encoded></item><item><title><![CDATA[Volume rendering of neuroimaging data with Python and VTK]]></title><description><![CDATA[<div class="kg-card-markdown"><p><img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_vtk_volume_render_activation_composite.jpg" alt="Volume rendering example"></p>
<p>Pictures of neuroimaging data are incredibly compelling, but creating those images can be a challenge.  While many tools are available, none of them are ideal for easily creating lots of images.  I decided to explore whether my favorite programming language (Python) could be used to quickly create many images of</p></div>]]></description><link>https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/</link><guid isPermaLink="false">59f8043d20cec1059d2f875e</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Fri, 04 Dec 2015 16:13:18 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p><img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_vtk_volume_render_activation_composite.jpg" alt="Volume rendering example"></p>
<p>Pictures of neuroimaging data are incredibly compelling, but creating those images can be a challenge.  While many tools are available, none of them are ideal for easily creating lots of images.  I decided to explore whether my favorite programming language (Python) could be used to quickly create many images of brain data.</p>
<p>If this is interesting, please continue reading.  Or if you'd like to get the demonstration scripts right now, head to the <a href="https://bitbucket.org/dewarrn1/vtk_render_demo">repository</a>.</p>
<h2 id="backgroundwhatsvolumerendering">Background: what's volume rendering?</h2>
<p>Specifically, I wanted to be able to perform <strong>volume rendering</strong> of large batches of neuroimaging data.  Volume rendering is the process of creating a 2D image of 3D (or really n-dimensional) data.  It's very hard to do correctly, but fortunately smart people have done a lot of the difficult work already.  This means that complex multidimensional data can be turned into  human-interpretable 2D images with relatively little effort.</p>
<p><a href="http://www.vtk.org/">VTK</a> is a sophisticated <strong>v</strong>isualization <strong>t</strong>ool<strong>k</strong>it (get it?) that supports rendering of 3D scenes including volume rendering (among many other functions).  It also plays nicely with Python, meaning that you can use VTK's functionality while scripting in a high-level language.  This was ideal for me, so I worked up a demonstration project.</p>
<p>VTK is very powerful and — even with excellent Python support — quite different from any programming interface that I'd ever used.  Critically, some of the most challenging tasks related to volume rendering with VTK had already been worked out by <a href="https://www.twitter.com/somada141">Adamos Kyriakou</a> which he laid out beautifully in a <a href="https://pyscience.wordpress.com/2014/11/16/volume-rendering-with-python-and-vtk/">post</a> on his <a href="https://pyscience.wordpress.com/">PyScience</a> blog.  Adamos's simple approach and clear explanations made my work here tractable.  For an introduction to rendering brain data with Python, check out his stuff.</p>
<p>While rendering a brain volume with VTK was clearly possible, my goal was a bit more complex than single-volume rendering.  Technically, I wanted to blend multiple volumes together.  This meant overlaying colors representing one set of numbers (e.g., brain activation) on colors representing another set of numbers (e.g.,  brain structure).  You see images of this kind frequently in neuroscience journals and even the popular press, but often the images aren't made with what I would consider true volume rendering, instead using <em>surface rendering</em> (a separate topic that I won't get into here).</p>
<p>After investing a bit of time learning about VTK, I was able to produce some decent images.  I go into more detail about how this was accomplished in the repository's <a href="https://bitbucket.org/dewarrn1/vtk_render_demo">README</a>.  Here are a pair of examples:</p>
<p><img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_lesion.jpg" alt="MNI atlas with lesion overlay"></p>
<p>First, a synthetic lesion.  This shows the extent of damage that might follow a bad middle cerebral artery stroke superimposed on a healthy template brain.</p>
<p><img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_rsfc.jpg" alt="MNI atlas with RSFC overlay"></p>
<p>Second, some resting-state functional connectivity data.  Using the same template as a base, I've superimposed functional connectivity of the default mode network (DMN) seeded from the posterior cingulate/precuneus region.</p>
<p>These images are attractive and informative, but they aren't novel in and of themselves.  Similar images can be generated manually using GUI-driven applications; some of these can even be scripted to batch-render images.  However, the means by which these images were generated, and the potential ease with which large numbers of similar images could be generated, appear to be novel.  The scripts and comments in the README file included with the repository lay out the approach(es) that I took, so I'll close this post with broader comments about rationale, some hurdles to clear, and what I'd like to provide soon.</p>
<h2 id="rationalewhyvolumerendering">Rationale: why volume rendering?</h2>
<p>Brain images can be generated using many different techniques, some of which are much simpler and more straightforward than volume rendering.  Why bother?  Some reasons:</p>
<ul>
<li>Clipping: Rendering volumes lets you clip the volume to suit your needs, slicing it where you want to show a given effect.  Here are a few examples showing the results of clipping a solid volume with different planes.<br>
<img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_clip_demo.jpg" alt="Rendered volumes using different clipping planes"></li>
<li>Opacity: Rendering volumes also lets you decide what values are opaque, translucent, and transparent.  Here are examples that make CSF more visible than usual, a typical rendering, and one that makes gray matter more translucent.<br>
<img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_opacity_demo.jpg" alt="Rendered volumes using different opacity settings"></li>
<li>Color: Similarly, you get to decide how values correspond to colors.  Here are examples of rendering the same template and activation map using different color palettes.<br>
<img src="https://david-e-warren.me/blog/volume-rendering-of-neuroimaging-data-with-python-and-vtk/../../../static/img/blog_vtk1_color_demo.jpg" alt="Rendered volumes using different color palettes"></li>
</ul>
<p>The intersection of these advantages makes volume rendering of brain data (or any data) very appealing.</p>
<h2 id="outstandingissues">Outstanding issues</h2>
<p>Overall, I was very impressed with the images rendered using VTK and Python, but there were certainly some frustrations.</p>
<p>First, you can't render multiple overlapping translucent volumes in VTK at the moment.  This prompted the current approach of mixing template and overlay images instead.  It works, but it's a bit clunky and slow.  A native VTK solution for blending two or more volumes would be terrific.</p>
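<p>In spirit, that mixing step is just voxelwise alpha compositing.  Here's a simplified NumPy sketch (the shapes and names are mine, not the repo's exact code):</p>

```python
import numpy as np

# Sketch: composite an overlay onto a template, voxel by voxel.
# template, overlay: (X, Y, Z, 3) RGB arrays with values in [0, 1];
# alpha: (X, Y, Z) opacity of the overlay in [0, 1].
def blend(template, overlay, alpha):
    a = alpha[..., np.newaxis]  # broadcast alpha over the color channel
    return (1.0 - a) * template + a * overlay
```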
<p>Second, the VTK renderer that I used couldn't render to an off-screen window.  This is a minor annoyance, but it would be nice to have images saved directly to disk instead of flashing by on the desktop during the rendering process.</p>
<p>Third, I'm a novice VTK user.  I ended up using Numpy to work with arrays of data, then saving the results back out to NIFTI-formatted data files, then loading those using VTK.  This is slower and certainly less elegant than doing more of the data manipulation with native VTK calls.  Suggestions for improvements would be welcome!</p>
<h2 id="whatsnext">What's next?</h2>
<p>I'd like to build this approach out into a simple, functional tool that can batch-render images combining a template volume with an overlay.  There are complexities that will need to be overcome, but this approach serves my immediate needs and I'd very much like to pass it along to others.  If you think that you'd use such a tool, do please let me know.</p>
</div>]]></content:encoded></item><item><title><![CDATA[vmPFC and Observational Learning]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is a placeholder post for now, but our manuscript describing deficits in observational learning in patients with focal vmPFC damage is out at Cerebral Cortex.  More at the link:</p>
<blockquote class="twitter-tweet" lang="en"><p><a href="http://t.co/JFF5KRoZA0">http://t.co/JFF5KRoZA0</a> CC just posted our manuscript describing deficits in observational learning after vmPFC lesion. <a href="https://twitter.com/dharshsky">@dharshsky</a> <a href="https://twitter.com/hashtag/article?src=hash">#article</a></p>&mdash;</blockquote></div>]]></description><link>https://david-e-warren.me/blog/vmpfc-and-observational-learning/</link><guid isPermaLink="false">59f8043d20cec1059d2f875b</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Mon, 27 Apr 2015 14:17:28 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This is a placeholder post for now, but our manuscript describing deficits in observational learning in patients with focal vmPFC damage is out at Cerebral Cortex.  More at the link:</p>
<blockquote class="twitter-tweet" lang="en"><p><a href="http://t.co/JFF5KRoZA0">http://t.co/JFF5KRoZA0</a> CC just posted our manuscript describing deficits in observational learning after vmPFC lesion. <a href="https://twitter.com/dharshsky">@dharshsky</a> <a href="https://twitter.com/hashtag/article?src=hash">#article</a></p>&mdash; David E. Warren (@DavidEWarrenPhD) <a href="https://twitter.com/DavidEWarrenPhD/status/592692794092855296">April 27, 2015</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script></div>]]></content:encoded></item><item><title><![CDATA[Scientific Python: Anaconda]]></title><description><![CDATA[<div class="kg-card-markdown"><p>As I've mentioned before, I love to use <a href="http://python.org">Python</a> for data processing and statistical analysis.  In this entry, I'll describe my recent experience with the <a href="https://store.continuum.io/cshop/anaconda/">Anaconda Python distribution</a> published by <a href="http://www.continuum.io">Continuum Analytics</a>.  Thus far, using Anaconda has been very straightforward and I'm sufficiently impressed to recommend it.</p>
<h3 id="theproblem">The problem</h3>
<p>A</p></div>]]></description><link>https://david-e-warren.me/blog/scientific-python-anaconda/</link><guid isPermaLink="false">59f8043d20cec1059d2f8759</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Tue, 24 Feb 2015 02:24:03 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>As I've mentioned before, I love to use <a href="http://python.org">Python</a> for data processing and statistical analysis.  In this entry, I'll describe my recent experience with the <a href="https://store.continuum.io/cshop/anaconda/">Anaconda Python distribution</a> published by <a href="http://www.continuum.io">Continuum Analytics</a>.  Thus far, using Anaconda has been very straightforward and I'm sufficiently impressed to recommend it.</p>
<h3 id="theproblem">The problem</h3>
<p>A recent round of computer upgrades at the lab means we're all experiencing the minor trauma of reinstalling software.  Python is one of my most important tools, so getting it up and running was essential.  Unfortunately, while Python's famous maxim about coming with “<a href="https://docs.python.org/2/tutorial/stdlib.html#batteries-included">batteries included</a>” is absolutely true for the base libraries, difficulties with installing and managing third-party libraries can waste a lot of time.  That's because despite a terrific community and tons of great third-party packages, Python's built-in package management software remains rudimentary.<sup>1</sup></p>
<h3 id="strikebatteriesstrikeplutoniumincluded"><strike>Batteries</strike> Plutonium included</h3>
<p>Enter <a href="https://store.continuum.io/cshop/anaconda/">Anaconda</a>.  This  distribution provides a base installation of the Python language along with almost 200 widely-used Python packages.  That includes mainstays such as <a href="http://www.numpy.org/">NumPy</a> and <a href="http://www.scipy.org/">SciPy</a>, relative newcomers such as <a href="http://pandas.pydata.org/">pandas</a> and <a href="http://scikit-learn.org/stable/">scikit-learn</a>, and perhaps lesser known but extremely useful entries such as <a href="https://github.com/ilanschnell/bitarray">bitarray</a> and <a href="https://github.com/python-excel/xlrd">xlrd</a>.  If there's any downside, it's that you probably won't use all of these additional packages.  However, sacrificing a bit of disk space seems a small price to pay for simple Python package management.<sup>2</sup></p>
<p>In our lab I've tried Anaconda on my Windows desktop and on our lab's <a href="https://www.suse.com/">SUSE Linux</a> workstations.  In both cases, installation was painless, even without root access on the Linux boxes.  Compared to my <a href="http://david-e-warren.me/blog/scientific-python-on-opensuse/">previous exertions</a> that got full-blown scientific Python working on those systems, this was a snap.  Notably, Anaconda also  handles non-Python libraries that some third-party packages require (e.g., <code>scikit-learn</code> uses LibSVM for support vector calculations), meaning still fewer installation hassles.  Python's built-in tools, which are getting much better at managing native Python packages, don't yet handle these external dependencies.</p>
<h3 id="ludicrousspeed">Ludicrous speed</h3>
<p>Python's free numerical libraries are arguably as fast as those of the other high-level programming languages<sup>3</sup>, but more speed is always welcome when you're working with large datasets. On that note, another benefit of Anaconda is that it can get you access to CA's proprietary speed-boosting <a href="http://continuum.io/anaconda-addons">extensions</a>.  For most users, access to these features costs money; academics can <a href="https://store.continuum.io/cshop/academicanaconda">apply</a> (painlessly) for a free license.</p>
<p>So how fast is it?  As Barf the Dog might say, “<a href="https://www.youtube.com/watch?v=mk7VWcuVOf0">They've gone to plaid!</a>”  I installed CA's <code>accelerate</code> library, and without changing a line of my scripts, NumPy suddenly recruited multiple cores for time-consuming calculations.  I've found the speed increase to be very significant for calculations using Python's <code>scikit-learn</code> library among others.</p>
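<p>For the curious, this is the sort of call that benefits: plain NumPy handing a large matrix product to whatever BLAS library is linked in, with no Anaconda-specific code (a generic timing sketch, not a benchmark from my machine).</p>

```python
import time
import numpy as np

# A large matrix product is dispatched to whatever BLAS is linked into
# NumPy; with a multithreaded BLAS (e.g. MKL via accelerate), it uses
# multiple cores without any change to this code.
n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a.dot(b)
elapsed = time.perf_counter() - start
print("%dx%d matrix product in %.3f s" % (n, n, elapsed))
```

<p>Watching a system monitor while this runs is the easiest way to see whether your NumPy is using one core or all of them.</p>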
<h3 id="wrappingup">Wrapping up</h3>
<p>Anaconda seems to be a great solution to a common problem — managing many potentially overlapping Python package requirements.  I've been very pleased so far, and will post again with further impressions and tips in the future.  If you have experience with Anaconda or other Python distributions that you'd like to share, please contact me or post below.</p>
<h4 id="disclosures">Disclosures</h4>
<p>Python is an open source programming language, and the base Anaconda distribution is likewise free for use.  However, the publisher (<a href="http://www.continuum.io">Continuum Analytics</a>) appears to be a for-profit enterprise.  I have no financial interest in or other relationship with CA, but as I mentioned earlier, I did take advantage of CA’s open offer of free academic access to their professional tier of products.  I don't believe that this unduly influenced my opinion of CA or Anaconda, but I feel that full disclosure is always the best policy.</p>
<h4 id="notes">Notes</h4>
<p><sup>1</sup> <code>setuptools</code>, <code>easy_install</code>, and <code>pip</code> definitely beat manual installation and may suffice for <a href="https://en.wikipedia.org/wiki/Unix-like">UNIX-like</a> operating systems.  Unfortunately, these tools often aren't enough for Windows users such as me, and it's all too easy to become mired in Python's own <a href="https://en.wikipedia.org/wiki/Dependency_hell">dependency hell</a>.  If you're on Windows but still want to try managing packages manually, I highly recommend <a href="http://www.lfd.uci.edu/~gohlke/">Christophe Gohlke</a>'s amazing <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/">collection</a> of pre-compiled Windows binaries for Python — I relied on these almost exclusively before trying Anaconda.</p>
<p><sup>2</sup> There's also a stripped-down <a href="http://conda.pydata.org/miniconda.html">Miniconda</a> distribution if you want to install packages only as needed.</p>
<p><sup>3</sup> It's my understanding that NumPy, MATLAB, and the rest all rely on math libraries such as BLAS and LAPACK under the hood, making their effective speed very similar.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Scientific Python on openSUSE]]></title><description><![CDATA[<div class="kg-card-markdown"><p>The <a href="http://python.org">Python</a> programming language has a terrific and rapidly growing scientific ecosystem, and I'm currently using some of those tools to apply <a href="https://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> to neuroimaging data.  Our neuroimaging lab uses <a href="http://www.opensuse.org/en/">openSUSE</a> Linux, and getting the necessary Python packages installed proved to be a bit tricky.  With some help</p></div>]]></description><link>https://david-e-warren.me/blog/scientific-python-on-opensuse/</link><guid isPermaLink="false">59f8043d20cec1059d2f8757</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Sat, 08 Nov 2014 04:19:50 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>The <a href="http://python.org">Python</a> programming language has a terrific and rapidly growing scientific ecosystem, and I'm currently using some of those tools to apply <a href="https://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> to neuroimaging data.  Our neuroimaging lab uses <a href="http://www.opensuse.org/en/">openSUSE</a> Linux, and getting the necessary Python packages installed proved to be a bit tricky.  With some help from <a href="http://stackoverflow.com">Stack Overflow</a>, the problems were resolved, and I'm happily fitting models to data as I write this:</p>
<blockquote class="twitter-tweet" lang="en"><p>Ghost town at work, spent the whole day with Python and brain data. Flashback to grad school! <a href="https://twitter.com/hashtag/python?src=hash">#python</a> <a href="https://twitter.com/hashtag/dejavu?src=hash">#dejavu</a> <a href="https://twitter.com/hashtag/aloneatlast?src=hash">#aloneatlast</a></p>&mdash; David E. Warren (@DavidEWarrenPhD) <a href="https://twitter.com/DavidEWarrenPhD/status/530880651038560256">November 8, 2014</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<h4 id="libraryissues">Library issues</h4>
<p>As I mentioned, there were some hurdles to clear before I could get to <a href="https://en.wikipedia.org/wiki/Hacker_%28programmer_subculture%29">hacking</a>.  <a href="http://numpy.org">Numpy</a> and <a href="http://scipy.org">Scipy</a> are the workhorses of any Scientific Python installation, and both take advantage of high-performance math libraries such as <a href="http://www.netlib.org/blas/">BLAS</a> and <a href="http://www.netlib.org/lapack/">LAPACK</a>.  Unfortunately, those libraries aren't installed by default in our lab, and our sysadmin was (reasonably) wary of disrupting systemwide settings by installing new packages.  Happily, I was able to compile and install everything from source with lots of help and a few tweaks.</p>
<p>Here's the system information:</p>
<pre><code>&gt; cat /etc/SuSE-release
openSUSE 12.3 (x86_64)
VERSION = 12.3
CODENAME = Dartmouth

&gt; uname -a
Linux somecomputer 3.7.10-1.40-desktop #1 SMP PREEMPT Thu Jul 10 11:22:12 UTC 2014 (9b06319) x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>And this is the Stack Overflow answer that got me started:  <a href="http://stackoverflow.com/a/9173550/719469">http://stackoverflow.com/a/9173550/719469</a>.</p>
<p>A few changes were necessary.  First, we use a <code>tcsh</code> shell, so environment variables have to be set using a different format.  Second, compile options had to be altered for our system.  Details follow.</p>
<p>For BLAS:</p>
<pre><code>mkdir -p ~/path/to/src/
cd ~/path/to/src/
wget http://www.netlib.org/blas/blas.tgz
tar xzf blas.tgz
cd BLAS

## NOTE: For openSUSE, I needed to edit a couple of lines in BLAS/make.inc before proceeding:
# -OPTS     = -O3
# -NOOPT    = -O2
# +OPTS     = -O2 -fPIC -m64
# +NOOPT    = -O0 -fPIC -m64

## NOTE: The selected fortran compiler must be consistent for BLAS, LAPACK, NumPy, and SciPy.
## For GNU compiler on 32-bit systems:
#g77 -O2 -fno-second-underscore -c *.f                     # with g77
#gfortran -O2 -std=legacy -fno-second-underscore -c *.f    # with gfortran
## OR for GNU compiler on 64-bit systems:
#g77 -O3 -m64 -fno-second-underscore -fPIC -c *.f                     # with g77
gfortran -O3 -std=legacy -m64 -fno-second-underscore -fPIC -c *.f    # with gfortran
## OR for Intel compiler:
#ifort -FI -w90 -w95 -cm -O3 -unroll -c *.f

# Continue below irrespective of compiler:
ar r libfblas.a *.o
ranlib libfblas.a
rm -rf *.o
setenv BLAS ~/path/to/src/BLAS/libfblas.a
</code></pre>
<p>For LAPACK:</p>
<pre><code>mkdir -p ~/path/to/src/
cd ~/path/to/src/
wget http://www.netlib.org/lapack/lapack.tgz
tar xzf lapack.tgz
cd lapack-*/
cp INSTALL/make.inc.gfortran make.inc          # on Linux with lapack-3.2.1 or newer

# Again, for openSUSE the following changes to make.inc were necessary:
# -OPTS     = -O2 -frecursive
# -NOOPT    = -O0 -frecursive
# +OPTS     = -O2 -frecursive -m64 -fPIC
# +NOOPT    = -O0 -frecursive -m64 -fPIC

make lapacklib
make clean
setenv LAPACK ~/path/to/src/lapack-*/liblapack.a
</code></pre>
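<p>Since most tutorials assume bash, here are the bash/zsh equivalents of the <code>tcsh</code> <code>setenv</code> lines above (paths are placeholders, as above; the versioned LAPACK directory name is an invented example, since bash won't expand a <code>lapack-*</code> glob inside an assignment the way tcsh does):</p>

```shell
# tcsh (as used in our lab):
#   setenv BLAS   ~/path/to/src/BLAS/libfblas.a
#   setenv LAPACK ~/path/to/src/lapack-*/liblapack.a
# bash/zsh equivalent -- spell out the real directory name:
export BLAS=~/path/to/src/BLAS/libfblas.a
export LAPACK=~/path/to/src/lapack-3.5.0/liblapack.a   # placeholder version
```
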
<h4 id="pippip">Pip pip</h4>
<p>With BLAS and LAPACK installed, I was able to use a Python <a href="https://virtualenv.pypa.io/en/latest/">virtual environment</a> to get all of the necessary packages installed.  The <code>pip</code> utility is very handy for this purpose if you're on a Linux system.  A few invocations of <code>pip install</code> later, I had Numpy, Scipy, and the rest of the tools that I needed in place to get on with my project.</p>
<p>Big thanks to Stack Overflow user “<a href="http://stackoverflow.com/users/923794/cfi">cfi</a>” for his excellent answer to a complicated question!</p>
</div>]]></content:encoded></item><item><title><![CDATA[vmPFC and false memory]]></title><description><![CDATA[<div class="kg-card-markdown"><h3 id="notrememberingwhatyoushouldnt">Not remembering what you shouldn't</h3>
<p>Are there brain regions that help us to fill in gaps in memory?  What would happen when those regions are damaged?  My co-authors and I recently published a <a href="http://david-e-warren.me/publications#WarrenDRMvmPFC">short report</a> in the <a href="http://www.jneurosci.org/">Journal of Neuroscience</a> describing findings that may begin to address this question.  A</p></div>]]></description><link>https://david-e-warren.me/blog/vmpfc-and-false-memory/</link><guid isPermaLink="false">59f8043d20cec1059d2f8756</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Fri, 17 Oct 2014 03:58:31 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h3 id="notrememberingwhatyoushouldnt">Not remembering what you shouldn't</h3>
<p>Are there brain regions that help us to fill in gaps in memory?  What would happen when those regions are damaged?  My co-authors and I recently published a <a href="http://david-e-warren.me/publications#WarrenDRMvmPFC">short report</a> in the <a href="http://www.jneurosci.org/">Journal of Neuroscience</a> describing findings that may begin to address this question.  A very nice <a href="http://www.jneurosci.org/content/34/41/13569.full">follow-up piece</a> discussed our work last week, and we wrote a brief <a href="http://www.jneurosci.org/content/34/41/13569/suppl/DC1">response</a>.</p>
<p>We know that memory is at least partially reconstructed when recalled.  That is, a remembered experience isn't a perfect reproduction of that experience; instead, a remembered experience is the gist of a particular experience with some of the detail filled in from other, similar experiences.  If you've ever remembered something typically true but wrong about a specific episode (e.g., Aunt Alice being at Thanksgiving last year when she actually didn't make it, or Derek Jeter starting for the Yankees in a game when he was actually on the DL), you've experienced this kind of memory error.</p>
<p>Recent functional neuroimaging data have pointed toward medial prefrontal cortex (mPFC) as a brain region that could potentially support this kind of generalized, or <em>schematic</em>, memory.  Here at <a href="http://uiowa.edu">UIowa</a>, we're very fortunate to have access to an extraordinary resource in the form of a registry of neurological patients with focal brain injuries, including many with damage limited to the ventral mPFC (vmPFC).</p>
<h4 id="howitworked">How it worked</h4>
<p>We tested these individuals with a <a href="https://en.wikipedia.org/wiki/Deese%E2%80%93Roediger%E2%80%93McDermott_paradigm">memory task</a> designed to produce benign false memories for words that were not studied.  The task is simple: participants listened to lists of words such as <em>bed</em>, <em>pillow</em>, <em>blanket</em>, etc., and then recalled as many words as possible.  Our healthy normal comparison participants showed the expected effect by often recalling a critical word that was missing from the list, such as <em>sleep</em>.  People with vmPFC damage recalled these non-studied words <strong>less often</strong> than the participants without brain injuries<sup>1</sup>.  This suggests that the vmPFC may normally play a role in filling gaps in memory with plausible information, and that damage to vmPFC reduces this effect.</p>
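<p>To make the scoring concrete, here's a toy sketch (an invented mini-list, not the actual stimuli): the critical lure is never studied, so any recall of it counts as a false memory.</p>

```python
# Toy DRM scoring: the critical lure ("sleep") is never presented,
# so recalling it counts as a false memory.
studied = {"bed", "pillow", "blanket", "dream", "nap"}
critical_lure = "sleep"

recalls = [
    ["bed", "pillow", "sleep"],      # includes a lure intrusion
    ["blanket", "nap", "dream"],     # accurate recall only
    ["pillow", "sleep", "blanket"],  # another lure intrusion
]

veridical = sum(len(studied & set(r)) for r in recalls) / float(len(studied) * len(recalls))
lure_rate = sum(critical_lure in r for r in recalls) / float(len(recalls))
print("veridical recall: %.2f, lure intrusion rate: %.2f" % (veridical, lure_rate))
```

<p>The comparison of interest is the lure intrusion rate across groups: lower intrusion with intact veridical recall is the vmPFC pattern we observed.</p>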
<h4 id="canbraindamageimprovememory">Can brain damage improve memory?</h4>
<p>We observed that damage to the vmPFC made people less susceptible to false memories while leaving their true memory unaffected.  Does this mean that people with specific patterns of brain damage have <em>better</em> memory?  Not necessarily.  Two key points:</p>
<ol>
<li>Memory is a cognitive process that we define by studying healthy individuals.  That is, if healthy normal people typically remember things in a certain way, that's what we should call normal memory.  From this perspective, false memories for related words would be normal and vmPFC damage made memory abnormal.</li>
<li>The same characteristics of memory that make healthy normal people susceptible to false memories in the DRM task probably help memory performance more often than not.  For example, outside of a laboratory setting, how often would you expect to encounter a dozen words related to <em>sleep</em> and later be penalized for thinking about and recalling <em>sleep</em>?  Most of the time, the influence of generalized world knowledge such as schemas and semantic relations is beneficial to memory performance, and may have evolved to relieve pressure on other memory systems.</li>
</ol>
<h4 id="takehomemessage">Take-home message</h4>
<p>Our memory often takes shortcuts, using general knowledge derived from experience to fill in gaps.  Although often harmless or even useful, this can lead to false memories.  These memory effects may be supported in part by a specific brain region, the vmPFC.  We're continuing to investigate other ways in which vmPFC may influence memory.</p>
<p><small><sup>1</sup> Important note: people with vmPFC damage still had some false recall, but it was substantially reduced.  Critically, their memory was otherwise identical to that of healthy individuals.</small></p>
</div>]]></content:encoded></item><item><title><![CDATA[Brain networks, interrupted]]></title><description><![CDATA[<div class="kg-card-markdown"><h3 id="ournewpaper">Our new paper</h3>
<p>I recently had the good fortune to be the lead author of an <a href="http://david-e-warren.me/publications#WarrenCNN">article</a> published in <a href="http://pnas.org">PNAS</a>.  I'm proud of how the paper turned out, and anyone who's interested should take a look.  If the article is more than you care to tackle, you might try the</p></div>]]></description><link>https://david-e-warren.me/blog/brain-networks-interrupted/</link><guid isPermaLink="false">59f8043d20cec1059d2f8754</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Fri, 26 Sep 2014 03:42:26 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h3 id="ournewpaper">Our new paper</h3>
<p>I recently had the good fortune to be the lead author of an <a href="http://david-e-warren.me/publications#WarrenCNN">article</a> published in <a href="http://pnas.org">PNAS</a>.  I'm proud of how the paper turned out, and anyone who's interested should take a look.  If the article is more than you care to tackle, you might try the very nice UIowa <a href="http://now.uiowa.edu/2014/09/network-measures-predict-neuropsychological-outcome-after-brain-injury">press release</a>.</p>
<h3 id="whatsitabout">What's it about?</h3>
<p>Quick summary: we investigated the consequences of brain damage to locations thought to be important to the function of brain networks – we called them <em>hubs</em>.  Hubs in brain networks are notoriously difficult to define, so we tested the consequences of brain damage to two different types of hubs.  We found that damage to a certain kind of hub caused widespread impairment in thinking and behavior, while similar damage to other hubs had much more limited effects.  We attributed these differences in severity of impairment to the role that the first type of hubs play in the network organization of the brain.  With more research, we think that our findings have the potential to help doctors make decisions and predictions about treatment based on brain imaging.</p>
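<p>For readers who like to see the graph math, here's a toy sketch of two common hub measures: degree (how many connections a node has) and participation coefficient (how evenly those connections span different modules).  This is plain Python on an invented graph, not our actual analysis.</p>

```python
# Two "hub" notions on a toy graph: degree counts a node's edges,
# while the participation coefficient, 1 - sum_s (k_is / k_i)**2,
# is high when those edges are spread across many modules.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"), ("a", "f"),
         ("b", "c"), ("d", "e")]
module = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 2}

neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

def participation(node):
    """Guimera-Amaral participation coefficient of one node."""
    k = len(neighbors[node])
    counts = {}
    for nb in neighbors[node]:
        counts[module[nb]] = counts.get(module[nb], 0) + 1
    return 1.0 - sum((c / float(k)) ** 2 for c in counts.values())

for node in sorted(neighbors):
    print(node, len(neighbors[node]), round(participation(node), 2))
```

<p>In this toy graph, node <code>a</code> is a hub by both definitions, but in real brain networks the two measures can pick out quite different regions, which is exactly why testing them separately matters.</p>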
<h3 id="expandingetal">Expanding et al.</h3>
<p>Before writing more, I should acknowledge the contributions of all of the other authors at <a href="http://www.uiowa.edu/">UIowa</a> and <a href="http://www.wustl.edu/">Wash U</a>, particularly my co-first author, <a href="http://scholar.google.com/citations?user=mlNP7HgAAAAJ&amp;hl=en">Jonathan D. Power</a>.  Jonathan's first take on where the hubs of brain networks might reside (in a great Neuron <a href="http://www.sciencedirect.com/science/article/pii/S0896627311007926">article</a>) laid the theoretical foundation for this project.  Testing that theory in a lesion population required an intensive collaboration between the lab of Jonathan's doctoral adviser, Steve Petersen, and my post-doctoral adviser, Dan Tranel.  Their support was critical to the success of the project.</p>
<p>Lots of other people also deserve credit for their contributions, and they're all appropriately acknowledged in the author list.  To pick out a couple of key contributions, Joel Bruss helped out tremendously with the processing and visual inspection of neuroanatomical data, and he made some lovely figures, too.  Natalie Denburg and Eric Waldron acted as blind raters for the neuropsychological data, and did an admirable job of not asking what all this was about.  Still more contributions deserve mention, but I'll confine myself to saying that it was a great group to work with and I'm looking forward to continuing our collaboration.</p>
<h3 id="nextup">Next up</h3>
<p>Having taken the first step down this road, we're eager to ask and potentially answer more questions.  Specifically, we're planning to:</p>
<ul>
<li>explore the effects of damage to more and different brain hubs</li>
<li>study how damage to brain hubs affects brain activity</li>
<li>work with neurosurgery patients before and after their procedures to monitor any brain network changes</li>
</ul>
<p>I can't wait to get started!  If you've got questions or comments, post them below or feel free to <a href="https://david-e-warren.me/blog/contact">contact</a> me.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Associate what?]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In the fall of 2014, I was very pleased to accept an offer to assume a non-tenure-track junior faculty position in the University of Iowa Neurology Department.  It's a great temporary position while I'm on the job market, and I'm excited to be at Iowa for another year.</p>
<p>One odd</p></div>]]></description><link>https://david-e-warren.me/blog/whats-in-a-name-not-much/</link><guid isPermaLink="false">59f8043d20cec1059d2f8753</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Wed, 10 Sep 2014 21:48:29 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>In the fall of 2014, I was very pleased to accept an offer to assume a non-tenure-track junior faculty position in the University of Iowa Neurology Department.  It's a great temporary position while I'm on the job market, and I'm excited to be at Iowa for another year.</p>
<p>One odd thing about the job is the name.  Other institutions call this kind of position by names such as &quot;Research Scientist&quot;, but the title of my position is &quot;Associate&quot;.  Associate what?  Associate nothing, apparently.  Definitely not associate professor, and I want to make sure that's clear, hence this post.</p>
<p>Whatever the title, it's terrific to be faculty, even if I'm still on the job market this year.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Relational memory, part I]]></title><description><![CDATA[<div class="kg-card-markdown"><h3 id="arbitrarybutrelated">Arbitrary but related</h3>
<p>Many of the things that we need to remember are related only arbitrarily.  Consider:</p>
<ul>
<li>You're at a party, and your wife introduces you to one of her co-workers whom you've never met before.  You smile, shake hands, and try frantically to remember that this new face belongs to</li></ul></div>]]></description><link>https://david-e-warren.me/blog/relational-memory/</link><guid isPermaLink="false">59f8043d20cec1059d2f8752</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Tue, 22 Jul 2014 02:50:29 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><h3 id="arbitrarybutrelated">Arbitrary but related</h3>
<p>Many of the things that we need to remember are related only arbitrarily.  Consider:</p>
<ul>
<li>You're at a party, and your wife introduces you to one of her co-workers whom you've never met before.  You smile, shake hands, and try frantically to remember that this new face belongs to <em>Alice</em>.</li>
<li>You exit the mall, bags in hand, and are confronted with a sea of cars and a burning question – <em>Where did I park?</em></li>
<li>You know that you've got two meetings this afternoon, but were you supposed to get coffee with your friend <em>before</em> or <em>after</em> chatting with your advisor?</li>
</ul>
<p>Memory binds the pieces of these experiences together.  What's less obvious is that they all rely on the same type of memory – relational memory.  As the first real content for this blog, I'll be giving a breezy overview of relational memory theory (RMT) intended for a general audience with a minimum of jargon.  In future posts, I'll elaborate and offer more specifics.  But for now, the basics of relational memory and RMT.</p>
<h3 id="multiplememoriesmultiplesystems">Multiple memories, multiple systems</h3>
<p>The earlier examples describe what we often think of as memory – they focus on events, places, and facts.  However, we can store information about earlier experiences in lots of different ways, all of which count as memory.  Here are a few examples:</p>
<ul>
<li>Your significant other is really excited about riding bikes in the park, so despite being years out of practice you saddle up and find yourself gliding along with no effort.</li>
<li>Your student submitted his report in a bizarre font that you can barely read at first, but by the third or fourth page you're barely noticing it.</li>
<li>You pull into a surprisingly empty parking lot at work only to realize that it's Saturday and you've followed your daily commute instead of running errands.</li>
</ul>
<p>In each case there's memory at work even though what’s been learned is a skill or habit.  However, memories of this kind seem very different from memories for facts and events.</p>
<p>The distinction between these two kinds of memory is a key component of RMT.  The theory suggests that different kinds of memory depend on distinct memory systems, and that memory systems belong to one of two groups.  Non-relational memory systems support the second set of examples above, and are well-suited for acquiring motor skills such as bicycling, tuning perceptual systems to read a difficult font, or habit learning that can lead to commuter's coma.</p>
<p>Non-relational learning is typically very incremental and slow; non-relational memories are usually very durable but inflexible.  These are great properties for lots of memories.  Having learned to ride a bike, you'll retain that motor skill throughout your life with little or no practice.  However, we don't always have time to learn new information by practicing.  You may only hear a new acquaintance's name once; you probably won't have the luxury of parking in the same spot every day for a year; and you aren't going to rehearse your daily schedule until you know it by heart.  Slow, durable learning is great, but sometimes you need to learn <em>fast</em>.</p>
<h3 id="relationalmemory">Relational memory</h3>
<p>Relational memory is used every time you need to bind two or more pieces of information together, especially when those pieces of information were not previously related to each other.  Say you're trying to bind Alice's name to Alice's face during your first meeting: there's nothing in her face that can clue you in to the name, and there's nothing about the name that would prompt you to think of a particular face.  The relation between these two pieces of information is therefore <em>arbitrary</em>.  Non-relational memory systems might eventually help you learn the arbitrary &quot;Alice&quot;-face relationship, but the hundreds of introductions it would take might get on her nerves.  Relational memory offers a shortcut – having met Alice once (or twice or three times), you have a decent chance of remembering her name.</p>
<p>Relational memory is strikingly different from non-relational memory, and not just in terms of learning speed.  Relational memory is also extremely flexible.  Meeting Alice again later on, you might recognize her in a new outfit and a work setting.  One lonely bit of information, such as Alice's face, can be enough to trigger memory for all kinds of related information, including her name, the party where you met, and so on.  And despite the speed with which relational memories are formed, they can be extremely durable, lasting for years.  The strengths of relational memory, its speed and style of learning, complement non-relational memory in important ways.</p>
<h3 id="wheretonext">Where to next?</h3>
<p>I'll pause here for now, but plan to write more about RMT before too long.  Likely topics:</p>
<ul>
<li>RMT in the brain
<ul>
<li>Brain regions supporting relational memory</li>
<li>Brain regions supporting non-relational memory</li>
</ul>
</li>
<li>The continued evolution of RMT
<ul>
<li>Development during my time at UIUC</li>
<li>Contributions by other scientists</li>
</ul>
</li>
<li>My own experience studying RMT
<ul>
<li>Developing RMT tasks</li>
<li>Working with amnesic patients</li>
</ul>
</li>
<li>Challenges for RMT
<ul>
<li>Future directions and how to stay relevant</li>
<li>Alternative accounts of memory function</li>
</ul>
</li>
</ul>
<p>But for now, I'll just close by acknowledging the guys who put it all together.</p>
<h3 id="origins">Origins</h3>
<p>All credit for originating RMT belongs to <a href="http://msl.beckman.illinois.edu/people">Neal J. Cohen</a> and <a href="http://www.bu.edu/cogneuro/about-us/people/">Howard Eichenbaum</a>.  If you're interested in reading an authoritative account, check out their respective publications at the above websites or <a href="http://www.ncbi.nlm.nih.gov/pubmed?term=cohen+nj%5Bau%5D+eichenbaum+h%5Bau%5D">PubMed</a>, or one of their <a href="http://amzn.com/0262531321">two</a> <a href="http://amzn.com/0195178041">books</a> on the topic (non-affiliate links, I promise).</p>
</div>]]></content:encoded></item><item><title><![CDATA[Kickoff]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I'm Dave Warren, and as of this writing, I'm a post-doc at the University of Iowa working with Prof. Dan Tranel.  I'm adding a blog to my website to offer my thoughts on a variety of topics that interest me professionally, including but not limited to:</p>
<ul>
<li>Cognitive neuroscience</li>
<li>Memory research</li></ul></div>]]></description><link>https://david-e-warren.me/blog/kickoff/</link><guid isPermaLink="false">59f8043d20cec1059d2f874f</guid><dc:creator><![CDATA[David E. Warren]]></dc:creator><pubDate>Mon, 16 Jun 2014 04:28:17 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I'm Dave Warren, and as of this writing, I'm a post-doc at the University of Iowa working with Prof. Dan Tranel.  I'm adding a blog to my website to offer my thoughts on a variety of topics that interest me professionally, including but not limited to:</p>
<ul>
<li>Cognitive neuroscience</li>
<li>Memory research</li>
<li>Methodological issues</li>
<li>Programming/scripting</li>
<li>Professional issues</li>
</ul>
<p>I'm planning to update once a week or so, with more frequent short updates posted to my Twitter account (<a href="https://twitter.com/DavidEWarrenPhD">@DavidEWarrenPhD</a>).  More soon!</p>
</div>]]></content:encoded></item></channel></rss>