Category Archives: data visualization

Don’t Know Much About History: Visualizing the Scale of Major 20th Century Conflicts (Details)

Check out the code here while you read the article.

 

Executive Summary:

I used advanced programming features of R to clean and organize the data efficiently and through rapid iteration. I further used closures, function factories, and multiple layers of abstraction to make the code more robust to change and refactoring and much easier to understand.

 

A general overview of the project is available here. (separate blog post)

 

This project was a challenge of programming fundamentals: execute a complicated task reliably, in minimal time, with readable code. Along the way I discovered some of R’s essential tools—the superfast data.table package, unit testing and other features from the “Hadleyverse”/tidyverse, and R’s built-in support for functional programming—and wrote other features myself, such as efficient string editing through factors and conveniences enabled by R’s “non-standard evaluation” (for more on this, see Hadley Wickham’s Advanced R).

 

First, an ode to data.table: it’s arguably the fastest data wrangling package out there, at least among the top data science languages like Python and R. It’s ~2x faster at common tasks than Python’s Pandas (and unspeakably faster than R’s native data.frame). I also like to think it has better syntax, as it’s written more like a query than a series of method calls. It allows value setting by reference, which prevents copying of the whole dataset just for slight modifications. It provides both imperative := and functional `:=`() forms, and even a loop-friendly form, set(), for those few times when one-line solutions aren’t enough. It allows easy control of output format (whether in vector or data.table form) and provides the convenience of non-standard evaluation without removing the flexibility of standard-evaluation forms. It is tremendously flexible, expressive, powerful, and concise, which makes it an excellent choice for any data wrangling project. It really shines with larger (≥1GB) datasets, especially with its parallelized read and write methods, and enabled me to iterate quickly when designing my data cleaning pipeline around edge cases in the full datasets I set out to process.
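For the curious, here is a minimal sketch of the data.table idioms mentioned above (the column names and values are illustrative, not the project’s actual ones):

```r
library(data.table)

dt <- data.table(Unit_Country = c("usa", "usa", "uk"),
                 Weight_Lbs   = c(500, 1000, 250))

dt[, Weight_Tons := Weight_Lbs / 2000]                     # imperative := adds a column by reference
dt[Unit_Country == "usa", .(Total_Lbs = sum(Weight_Lbs))]  # reads like a query: subset + aggregate
set(dt, i = 1L, j = "Unit_Country", value = "USA")         # set(): low-overhead assignment inside loops

# fread()/fwrite() are the parallelized read/write methods referred to above:
# fwrite(dt, "missions.csv"); dt <- fread("missions.csv")
```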

 

Functional programming was essential to staying DRY (Don’t Repeat Yourself—minimizing duplication) in this project. Because I had four similar datasets that were best kept separate, I created function factories that produce closures to process each dataset appropriately, and then bundled the closures together so one can essentially call a single function on a dataset and get the expected results, even though the underlying implementation may differ from dataset to dataset. I also designed the program to scale: I initially planned to start with just the World War Two data and, once that was working, expand to the other datasets you can see in the final product. If my data source releases any more datasets, I’ll easily be able to fit them into the current framework as well.
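A toy sketch of that function-factory pattern (the dataset names, date formats, and cleaning steps here are made up for illustration):

```r
library(data.table)

make_cleaner <- function(date_format, drop_cols) {
  # Returns a closure that remembers this particular dataset's quirks
  function(dt) {
    dt[, (drop_cols) := NULL]                                        # drop dataset-specific junk columns
    dt[, Mission_Date := as.Date(Mission_Date, format = date_format)]
    dt
  }
}

cleaners <- list(
  ww2     = make_cleaner(date_format = "%m/%d/%Y", drop_cols = "Source_ID"),
  vietnam = make_cleaner(date_format = "%Y-%m-%d", drop_cols = "Source_ID")
)
# cleaned_ww2 <- cleaners$ww2(ww2_raw)   # the same call shape works for every dataset
```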

 

The size of the dataset and the intricacy of the code provided an opportunity for automated unit testing to catch issues early. As I had to iterate through various designs of the data cleaning code many times to eliminate each separate issue I noticed, unit testing caught any unintended side-effects as soon as they happened and ensured that obscure data cleanliness issues didn’t rear their heads many steps later. Hadley’s testthat package proved very useful for this.
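A small example of the style of test that guards a cleaning pipeline like this one (title_case() is a stand-in for the project’s actual helpers):

```r
library(testthat)

title_case <- function(x) gsub("\\b([a-z])", "\\U\\1", tolower(x), perl = TRUE)

test_that("string formatting survives refactoring", {
  expect_equal(title_case("BERLIN marshalling YARDS"), "Berlin Marshalling Yards")
  expect_equal(title_case(character(0)), character(0))   # edge case: empty input
})
```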

 

The raw string data arrived in an inconsistent and hideous format. For tooltips to appear well-formatted to a human reader, I made the textual data conform to standard conventions of punctuation and capitalization. The extensive presence of proper nouns in particular required a fine-tuned approach that simple regular expressions couldn’t solve, so I wrote string formatting methods to apply to each row in certain columns. However, initial versions of my string formatting code, though successful, took several hours (half a day) to run, forcing me to rethink the computer science behind my approach.

Some vectorized (or easily vectorizable) methods took only a few minutes each to work through a column of my largest dataset, which has over 4.5 million observations. So I found a way to effectively vectorize even the seemingly most unvectorizable methods. For instance, instead of splitting each row’s text into words and rerunning the word-vectorized proper noun capitalization on every row of the dataset, it proved significantly more efficient (in R) to unlist the per-row word lists into a single large character vector (all words in all rows, one after another, with no special separation between one row’s words and the next’s), run the proper noun capitalization on that one long vector, and then repartition the (now capitalized) words back into a list of words per row. This single change improved the efficiency of the data formatting by nearly a factor of 10.

Still, the formatting took over an hour to run, so I investigated further. I realized in particular that, for the Unit_Country field, I was editing the same ~5 strings in the same way over and over again for each of more than 4.5 million rows, which was horribly inefficient. Much better would be to map each row’s string to a short table of possible values, apply the edits to that reference table, and then look up the updated values as needed. This is (more or less) exactly what factors do! Applying the same string formatting methods to the factor levels brought the total runtime down to a few minutes, which made fast iteration through data cleaning variants easy. Even then the methods took unusually long (over one second per operation) just to update a few string values, and digging deeper I discovered that R was copying the entire dataset each time a factor’s levels were altered (even when using Hadley’s forcats package). I then found a way to update a factor’s levels in place, by reference (thanks again to data.table), which brought the total runtime down to under a minute, for a total efficiency improvement of over 1,000x. That improvement was essential for using iterative design to get the labels just right. Extensive use of the pipe (%>%) made the code substantially clearer to write and to read, often obviating comments.
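Two simplified sketches of the speed-ups described above, using made-up data:

```r
library(data.table)

# 1. "Vectorize the unvectorizable": operate on one long vector of words, then repartition.
rows  <- c("berlin marshalling yards", "osaka dock area")
words <- strsplit(rows, " ")
flat  <- unlist(words)                               # every word from every row, one long vector
flat  <- sub("^(.)", "\\U\\1", flat, perl = TRUE)    # capitalize all words in a single pass
relist(flat, words)                                  # partition back into per-row word lists

# 2. Edit factor levels in place: format the handful of unique strings, not 4.5M rows.
country <- factor(sample(c("usa", "uk", "australia"), 1e6, replace = TRUE))
setattr(country, "levels", toupper(levels(country))) # by reference -- no copy of the vector
```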

 

Additionally, I created a reusable fill_matching_values() method that fixes inconsistencies in one-to-one code-to-value mappings such as those found in the datasets for this project. Many of the values appeared to have been entered through Optical Character Recognition (OCR), and there were plenty of labeling mistakes that needed fixing. The algorithm identifies the most common mapped value for each code and replaces all mapped values with that presumably correct modal (most common) value; analogously, it then fills in missing codes and corrects incorrect ones using the most common code that appears for each value. I first coded a working version of this algorithm using basic data.table syntax and then vastly improved its efficiency (~10-fold) by implementing the lookup table with data.table’s more advanced, speed-optimized join syntax. Furthermore, I easily filtered the intermediate lookup table for duplicate or conflicting mappings, letting my human eyes double-check only the mistaken mappings and their corrections. (And I’m glad I did: I found a few misspellings that were more common than the correct versions, which were easily fixed manually in my data cleaning pipeline.)
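Here is a condensed sketch of the modal-correction idea (the real fill_matching_values() is more general, also corrects codes from values, and surfaces conflicting mappings for review):

```r
library(data.table)

dt <- data.table(Aircraft_Code = c("B17", "B17", "B17", "B29"),
                 Aircraft_Name = c("Flying Fortress", "Flying Fortess",   # OCR-style typo
                                   "Flying Fortress", "Superfortress"))

# Most common (modal) name observed for each code...
lookup <- dt[, .N, by = .(Aircraft_Code, Aircraft_Name)][
  order(-N), .(modal_name = Aircraft_Name[1]), by = Aircraft_Code]

# ...then overwrite every name with its code's modal name via an update join.
dt[lookup, on = "Aircraft_Code", Aircraft_Name := modal_name]
dt
```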

 

Stylistically, the code still had a long way to go. I found myself repeatedly performing the same few operations on each column’s factor levels: dropping levels, renaming levels based on a formula, renaming specific levels with specific new names, renaming levels through regular expressions, grouping rare levels together under an “other” label, and running the same level formatting on multiple different columns. I created helper functions, almost like a separate package, for each of these situations. The formula-based level renaming function is an example of a functional: pass it a function, and it gives you back values (the edited levels). Different operations could be applied to the same column/vector by invisibly returning the results of each operation and using chaining.
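As an example, here is a toy version of the function-based level renamer (the real helpers are more general and are designed for chaining):

```r
map_levels <- function(f, fun, ...) {
  # A functional: apply `fun` to the factor's levels and return the edited factor
  levels(f) <- fun(levels(f), ...)
  invisible(f)
}

target <- factor(c("ploesti OIL refineries", "ruhr VALLEY"))
target <- map_levels(target, tolower)
target <- map_levels(target, gsub, pattern = "oil", replacement = "Oil")
levels(target)   # "ploesti Oil refineries" "ruhr valley"
```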

 

Creating compositions of data cleaning functions greatly simplified the writing process and improved the readability of my code. I defined a functional composition pipe %.% (a period, or “dot”) mimicking the syntax of purely functional programming languages like Haskell, but found that my reverse functional composition pipe %,% (a comma) made the code even more readable. (Instead of f() %.% g(), representing f(g()), which places the first-executed operation last on the line, I sided with f() %,% g(), representing g(f()), which has the functions execute in the same order in which they appear on the line, separated by commas, as if to say “first f(), then g()”.)
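A simplified sketch of these two operators, composing plain function objects (the operators described above are written as f() %.% g(), suggesting they also accept call expressions; this bare-bones version does not):

```r
`%.%` <- function(f, g) function(...) f(g(...))   # f %.% g: apply g first, then f
`%,%` <- function(f, g) function(...) g(f(...))   # f %,% g: apply f first, then g (left to right)

squish <- function(x) gsub("\\s+", " ", x)
clean  <- trimws %,% tolower %,% squish           # "first trim, then lowercase, then squish"
clean("  BERLIN   Marshalling YARDS ")            # "berlin marshalling yards"
```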

 

Other infix functions simplified common tasks in R:

%||% provides a safe default (on the right) when the item on the left is NULL (see the sketch after this list).

%[==]% serves a dual purpose: when a[a == b] is intended, it prevents writing a textually long expression a twice and/or prevents unnecessary double evaluation of a computationally expensive expression a.

The related %[!=]%, %[>]%, %[<]%, %[>=]%, %[<=]% are self-explanatory given the above.

%[]% analogously simplifies the expression of a[a %>% b], using non-standard evaluation and proper control of calling frames (the environment in which to evaluate the expression).

%c% (the c looks like the set-theoretic “is an element of” symbol) creates a safe version of the %in% operator that assumes the left-side operand is a scalar (vector of length 1), warning and using only the first element if given a longer vector.
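For illustration, here is how the first two of these might be defined in their simplest form (the post’s own versions are more elaborate):

```r
`%||%`   <- function(a, b) if (is.null(a)) b else a
`%[==]%` <- function(a, b) a[a == b]   # the caller writes (and evaluates) `a` only once

config <- list()
config$max_points %||% 5000    # 5000 -- safe default, since the setting is NULL
c(3, 7, 3, 9) %[==]% 3         # 3 3
```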

 

I also fixed a subtle bug in the base R tabulate() method as applied to factor vectors: when the last level (or last few levels) is not present in the vector, tabulate() silently drops it from its results. Under the hood, the default method counts integer level codes from 1 up to the highest code actually present in the vector, so levels that happen to be missing from the end of the level list disappear from the counts entirely, while levels missing elsewhere are still reported (as zeros).
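A minimal illustration of the behavior, along with one way to get the full counts (not necessarily the exact fix used in the project):

```r
f <- factor(c("a", "b"), levels = c("a", "b", "c"))

tabulate(as.integer(f))                      # 1 1   -- the trailing level "c" is silently dropped
tabulate(as.integer(f), nbins = nlevels(f))  # 1 1 0 -- every level accounted for
```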

 

I explored using parallel processing for the multicolumn data.table operations using mclapply() instead of lapply(), but the former was slower in every instance across every possible number of execution threads. I gained a small performance benefit from using the just-in-time compiler.
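A sketch of the kind of comparison involved, with a toy function and toy data (mclapply() relies on forking, which is unavailable on Windows):

```r
library(parallel)
library(compiler)

fmt  <- cmpfun(function(x) toupper(trimws(x)))   # byte-compile the hot function
cols <- replicate(8, sample(letters, 1e5, replace = TRUE), simplify = FALSE)

system.time(lapply(cols, fmt))                   # serial over the columns
system.time(mclapply(cols, fmt, mc.cores = 2))   # forked workers
```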

 

Finally, onto the finished product:

On the main (overview) tab, I plot aerial bombing events on a map. Extensive controls allow the user to select which data to plot, filter the data by several criteria, and alter the appearance of the map to highlight various aspects of the data. Above the map, informational boxes display totals for various salient features of the data, allowing quick numerical comparisons between conflicts and data subsets.

 

I allow the user to filter each dataset by six criteria, providing a comprehensive exploratory approach to interacting with the data: dates (Mission_Date), the target country (Target_Country), the target type (Target_Type), the country of origin (Unit_Country), the type of aircraft flown (Aircraft_Type), and the type of weapon used (Weapon_Type). To keep filtering millions of rows by these arbitrary criteria speedy, I set these columns as keys for each dataset, which causes data.table to sort the rows so that binary search can be used rather than scanning the entire dataset. Furthermore, I wrote a general query method that filters the datasets using only the criteria actually specified, saving time. I update the possible choices for each filtration criterion based on the unique choices (levels) that still remain in the filtered datasets. Since this latter step performs an expensive, deterministic operation that returns a small amount of data—and may run the same query multiple times—it is a perfect candidate for memoization, which I implemented effortlessly with the memoise package.
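A sketch of keyed filtering plus a memoised lookup (the table contents and helper names here are illustrative, not the app’s actual code):

```r
library(data.table)
library(memoise)

dt <- data.table(Target_Country = sample(c("Germany", "France", "Japan"), 1e6, replace = TRUE),
                 Unit_Country   = sample(c("USA", "UK"), 1e6, replace = TRUE),
                 Weight_Lbs     = runif(1e6, 100, 4000))

setkey(dt, Target_Country, Unit_Country)   # sort once so filters can use binary search

dt[.("Germany", "USA")]                    # keyed (binary-search) subset on both key columns

# Memoise an expensive but deterministic query so repeated calls hit a cache:
remaining_units <- memoise(function(country) dt[.(country), unique(Unit_Country)])
remaining_units("Germany")                 # computed once
remaining_units("Germany")                 # returned instantly from the cache
```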

 

The maps use a few features to better visualize the points. A few different map types are available (each better at visualizing certain features), and map labels and borders can individually be toggled on or off. If the filtered data contain too many rows to plot, a random sample of the filtered data is plotted instead. The opacity of the points depends logarithmically on the number currently plotted (so the map shows either many, more transparent points or fewer, more opaque ones). The opacity also depends linearly on the zoom level of the map, so points get more opaque as one zooms in. Each point’s radius marks the relative damage radius of the bombing (an approximation based on the weight of the explosives dropped, not on historical damage data). The radii stay the same absolute size on screen—regardless of zoom level—until one zooms in far enough to see the actual approximate damage radius, which I calculated from the weight (or equivalent weight in TNT) of the warhead(s).
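Something along these lines would produce the behavior described (a purely hypothetical formula, not the app’s actual one):

```r
point_opacity <- function(n_points, zoom, max_zoom = 18) {
  count_term <- 1 / log10(n_points + 10)   # more points plotted -> more transparent
  zoom_term  <- zoom / max_zoom            # deeper zoom         -> more opaque
  min(1, 0.2 + 0.6 * count_term + 0.4 * zoom_term)
}

point_opacity(n_points = 50000, zoom = 3)   # many points, zoomed out: quite faint
point_opacity(n_points = 200,   zoom = 12)  # few points, zoomed in: much more opaque
```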

 

As hinted at earlier, each data point has a corresponding tooltip that displays relevant information in an easy-to-read format. (Each tooltip appears only when its dot on the map is clicked, so as not to crowd out the more important, visually plotted data.) I included links to the Wikipedia pages for the aircraft and the city/area of each bombing event for further investigation. (The links, pieced together from the standard Wikipedia URL format and the textual name of the aircraft or city/area, work in the majority of cases thanks to Wikipedia’s extensive use of alternative URLs that redirect to a given page’s standard URL. The few cases in which a tooltip’s link reaches a non-existent page occur either—most commonly—when a page for that aircraft or city/area has not been created, or when the alternative URL does not exist or redirect properly.)
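The link construction amounts to something like the following (wiki_url() is a hypothetical helper, for illustration only):

```r
wiki_url <- function(name) {
  paste0("https://en.wikipedia.org/wiki/", utils::URLencode(gsub(" ", "_", name)))
}

wiki_url("B-17 Flying Fortress")
# "https://en.wikipedia.org/wiki/B-17_Flying_Fortress"
```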

 

The data are also displayable in a columnar format (in the data tab) for inspection of individual points by text (sortable by the data’s most relevant attributes).

 

Each dataset has its own tab (named for the conflict that the dataset represents) that graphs salient features of the data against each other. These graphs include a histogram and a fully customizable “sandbox” plot. The same filters (in the left-most panel) that the user has chosen for the map-based plots *also* apply for the graphical plots on these tabs, allowing the user to investigate the same subset of data spatially (the map plot), temporally (the histogram), and relationally (the “sandbox” plot).

 

The histogram and “sandbox” plot for each dataset are generated through function factories that create the same plots (with some minor dataset-specific differences, such as choices for categorical and continuous variables) for each dataset despite their differences, providing a uniform interface and experience for disparate datasets. Furthermore, the histogram and sandbox plot are responsive to user input, creating the clearest possible graph for each choice of variables and options. For example: choosing a categorical variable for the independent/horizontal axis creates a violin plot with markings for each quartile boundary; choosing a continuous variable creates a scatter plot with a line of best fit.
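A minimal version of that function-factory idea, using ggplot2 and invented names (the app’s actual factories handle many more options and dataset-specific differences):

```r
library(ggplot2)

make_sandbox_plotter <- function(data) {
  # Returns a closure bound to one dataset
  function(xvar, yvar) {
    p <- ggplot(data, aes(x = .data[[xvar]], y = .data[[yvar]]))
    if (is.numeric(data[[xvar]])) {
      p + geom_point(alpha = 0.3) + geom_smooth(method = "lm")   # continuous x: scatter + best fit
    } else {
      p + geom_violin(draw_quantiles = c(0.25, 0.5, 0.75))       # categorical x: violin w/ quartiles
    }
  }
}

plot_iris <- make_sandbox_plotter(iris)
plot_iris("Species", "Sepal.Length")       # violin plot
plot_iris("Sepal.Width", "Sepal.Length")   # scatter plot with a line of best fit
```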

 

For those interested in investigating the historical events themselves (as opposed to their data), I’ve included tabs that relate to these events through three different perspectives: that of a pilot, that of a commander, and that of a civilian. The pilot tab shows interesting historical footage—some documentary-style, and some genuine pilot training videos from the time—related to each conflict. The commander tab shows the progression and major events of each conflict on printed maps. The civilian tab shows, using a heatmap, the intensity of the bombings over the full courses of the wars.

 

Throughout this project, I came to appreciate the utility of multiple layers of abstraction for effective program design. I used these layers of abstraction to mirror, structurally, the way a human would think about organizing and implementing each aspect of such a project. In this way, I saved myself a great amount of time in every stage of the project (writing, refactoring, testing, and deploying), made the program more resilient to change, and saved the reader (including my future self) countless headaches when poring through the code.

 

For one, the UI itself is laid out very simply, in nearly WYSIWYG (what you see is what you get) form, using abstracting placeholder functions that each represent an entire functional graphical component. The code itself is organized extensively into several files beyond the prototypical ui.R, server.R, and global.R used in most Shiny apps. This made (for me) and makes (for the reader) finding the appropriate code sections much easier and more efficient (for instance, by localizing all parameters in a parameters.R file, further grouped by function, instead of having them spread out through multiple other files or—even worse—hard-coding them), and it also has the added benefit of allowing some of the more generalizable code sections to be easily repurposed in other projects in the future.
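To give a flavor of what “WYSIWYG” means here, a toy layout in that style (the helper and output names are invented for this sketch, not taken from the app):

```r
library(shiny)

overviewMapPanel <- function() {
  tabPanel("Overview",
           infoBoxRow(),
           leaflet::leafletOutput("overview_map"))
}

infoBoxRow <- function() {
  fluidRow(column(4, textOutput("n_missions")),
           column(4, textOutput("n_bombs")),
           column(4, textOutput("total_weight")))
}

# ui.R then reads like an outline of the finished page:
ui <- navbarPage("Aerial Bombing Operations",
                 overviewMapPanel(),
                 tabPanel("Data", dataTableOutput("data_table")))
```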

Don’t Know Much About History: Visualizing the Scale of Major 20th Century Conflicts (Overview)

Check out the app here while reading the article.

 

Executive Summary:

I built a comprehensive app to plot, filter, and analyze World War Two and other major 20th century conflict data using R Shiny. Users can investigate these historical events spatially (on maps), temporally (through histogram data), and relationally (through scatter and bar plots), and even animate the progression of these conflicts over time.

 

Skip straight to the technical parts! (separate blog post)

 

I. Introduction

History is about human experiences of the past, and few of us have access to these shared stories anymore.

Sure, my grandfather served in the Pacific Theater in the Navy, but he died before I was born. I did get to hear a few stories from my great uncles, who flew supply runs during World War Two, but I really don’t have a sense of what living through this period of history was like. Neither do most people alive today.

For this first project, I wanted to use the power of data visualization in R through Shiny to give people a sense of the immense scale of the conflicts of the 20th century that I can hardly understand myself.

(Why Shiny? Because Shiny makes “app-ifying” data through R incredibly easy while still providing comprehensive functionality—I’d highly recommend it.)

 

II. Motivations – Why I Chose These Data

I obtained these datasets, which were published just within the past several months, from data.mil. I was frankly shocked to see the US military jumping on the Open Data bandwagon, so I couldn’t help but take a peek. Within the data I found an enormous collection of points, each carrying heavy weight as a human story. I decided to make a tool that lets anyone with a passing curiosity about, say, WW2 history explore that history in an interactive and intuitive way.

The data themselves are records of aerial bombing operations performed by the United States military (reportedly every single bombing run made, with a few caveats), along with a smaller collection of records of aerial bombing operations performed by other nations in conjunction with the US military, surrounding four important conflicts of the 20th century: World War One, World War Two, the Korean War, and the Vietnam War (or “Vietnam Era Conflict,” as the Library of Congress used to classify it, since an official declaration of war was never made). I have no way of proving that these data are exactly representative of the larger conflicts (though data.mil insists they are comprehensive), so I will shy away from making large-scale comparisons; still, the data appear representative and complete enough for me to feel confident in the utility of the tool I’ve created. Each point represents a story that had significant impact on the people and places involved, even if much of the residue of these conflicts has been washed away by the ages, as craters have been filled in and entire cities rebuilt. I implore the user to dive into the individual point data first to best get a sense of the importance of this project.

These datasets also posed significant challenges to me as a programmer: many numerical fields were missing, inaccurate, or incomplete; textual entries were difficult to process; and the sheer volume of the data (nearly 5 million observations in total, across hundreds of columns) required a special level of care to build an exploratory tool that people could enjoy using.

My hope is that anyone—from the amateur historian to the student with a passing curiosity about these histories—will be able to develop a sense of these events through this tool.

 

III. Overview

One of the most important senses I want to give people is a sense of scale, both in terms of space and magnitude or intensity.

To give people a sense of the sheer amount of land (and sea) area affected by these conflicts, I’ve plotted the unique targets of aerial bombing operations for all of these wars on a central “overview” map. Each conflict has its own color to make comparisons easy (compare WW1—when airplanes were just beginning to be used in combat—with WW2, just some 25 years later, for instance). The data are sampled heavily to make plotting (especially on the Shinyapps.io platform) feasible. I have also allowed the opacity to change based on how many points are plotted, for improved visibility: few points appear distinct, while many points blend into a cloud. Even just from a bird’s-eye view, you can see how much territory these conflicts affected, and how each conflict was distributed across space and time. Feel free to adjust the map and label style to your liking as you explore the spatial distributions.

I wanted to give people an otherworldly sense of magnitude by displaying the number of missions flown, bombs dropped, and weight of bombs dropped in the info boxes at the top. These update based on your selections of wars to include and specific date ranges to inspect. You may find a few noticeable distinctions between the different wars based on the magnitude of their bombing campaigns.

It should be noted that the weight shown in pounds is just the weight of the deliverable or the warhead itself—that is, a 10-pound explosive propelled by 90 pounds of rocket fuel would be counted as a 10-pound bomb. Astute observers may have already noticed the relative magnitude of the atomic bombs dropped near the end of the Second World War as well—they have been listed according to their weight in TNT explosive equivalent.

Each point comes with a clickable tooltip that provides a little snippet of the event, as far as the data can show. The text has been heavily processed to appear reasonable and well-formatted to the human eye, and missing or incomplete data have been replaced with general descriptions. I consider this a key feature of the app, as many memorials around the world are made most effective by showing individual names and stories. Often just a small snippet (an amount that’s digestible) is more impactful than a full report, as there’s simply too much information in its entire form; focusing on a few small points that one can grasp (along with an understanding of the scale of the conflict) can provide the best intuitive understanding.

 

IV. Sandbox

I mentioned that I wanted to create a tool for exploring history through data, and while the overview plots are intuitive and informative, I wanted to give any user, regardless of programming skill, the ability to further investigate trends and patterns in the data. I call it the sandbox because it’s a tool that lets the user create new things in an exploratory and creative manner. It lets people explore trends between whatever two variables they’d like, serving as a hypothesis generator.

 

V. Animation

I also wanted to give users a sense of the progression of these conflicts, so I added an animation component to all of the graphs and maps. The app will automatically cycle through the entire date range of the conflicts selected by year, by month, or by week, to show the different stages of each war. For instance, with World War Two, you can see the European Theatre open up before the Pacific Theatre, followed by the liberation of France, followed by the sudden crescendo of bombings throughout Japan, culminating in the atomic bombings at Hiroshima and Nagasaki.

 

VI. Historical Interests

Have you ever wondered where the expression “taking flak” comes from? Watch a “How to Avoid Flak” training video (genuinely used for World War Two pilots) in the Pilot tab. For an overview of major battles and events, see the Commander tab. For an eye on the sky as a civilian, see a heatmap of bombing intensity in the Civilian tab.

 

Feedback:

Needless to say, if you have any suggestions (or find any errors in the app or its code), please don’t hesitate to contact me.

 

Dig Deeper:

That’s it for the surface of the project. The true magic lies in what’s under the hood: how the data were prepared. To learn more about that, see my technical blog post on this project.