-Because the full history dumps from the WMF foundation are split into many
-files, it is can be appropriate to parse these dumps in parallel. Although the
-specific ways you choose to do this will vary by the queuing system you use,
-we've included examples of the scripts we used with Condor on the Harvard/MIT
-Data Center (HMDC) in the "examples/" directory. They will not work without
-modification for your computing environment because they have our environment
-hardcoded in but they will give you an idea of where you might want to start.
-
-Additionally, there is a third step `03-assemble_redirect_spells.R` that
-contains R code that will read in all of the separate RData files, assmebles
-the many smaller dataframes into a single data.frame, and then saves that
-unified data.frame into a single RData file.
+Running Code in Parallel
+-----------------------------------------
+
+Because the full history dumps from the Wikimedia Foundation (WMF)
+are split into many files, it is usually appropriate to parse these
+dumps in
+parallel. Although the specific ways you choose to do this will vary
+by the queuing or scheduling system you use, we've included examples
+of the scripts we used with Condor on the Harvard/MIT Data Center
+(HMDC) in the `examples/` directory of the source code. They will not
+work without modification for your computing environment because they
+have configuration options and paths for our environment
+hardcoded. That said, they may give you an idea of where you might
+want to start.
+
+In this parallel code there is a third file,
+`03-assemble_redirect_spells.R`, that contains R code to read in all
+of the separate RData files created in parallel processing, assemble
+the many smaller data.frames into a single data.frame, and then save
+that unified data.frame into a single RData file.
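+
+As a rough sketch (not the actual contents of
+`03-assemble_redirect_spells.R` — the directory name, file pattern,
+and object handling here are illustrative assumptions), the assembly
+step amounts to something like this in R:
+
+    ## bind the per-chunk data.frames back into one data.frame
+    rdata.files <- list.files("spells_output", pattern="\\.RData$",
+                              full.names=TRUE)
+    chunks <- lapply(rdata.files, function (f) {
+        e <- new.env()
+        load(f, envir=e)        # each file holds one data.frame
+        get(ls(e)[1], envir=e)  # fetch it without assuming its name
+    })
+    redirect.spells <- do.call("rbind", chunks)
+    save(redirect.spells, file="redirect_spells.RData")
+
+Loading each file into a fresh environment avoids clobbering objects
+in the global workspace and works even if the saved object's name
+varies between files.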