For data science, elites use array languages such as J or k. However, Python is ubiquitous, and so one would assume it is capable of the same things.

Here, the task is to compute the excess deaths per person by state. The CDC provides a data set for excess deaths by state; we can couple this with census estimates of population.

To do so:

  1. Filter rows to those that are relevant: unweighted estimates from March 2020 to November 2021 (for which data collection has caught up).

  2. Compute excess deaths by state: find excess deaths by subtracting expected from observed, and then sum, grouping by state.

  3. Join excess deaths by state with state population estimates, and then divide to get excess deaths per person.
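The three steps can be sketched in plain Python on a toy table; the column names and numbers below are illustrative stand-ins, not the CDC's actual headers:

```python
# A toy version of the three-step pipeline, using plain Python
# dicts/lists instead of a dataframe. All data here is made up.
rows = [
    {"state": "AA", "type": "Unweighted", "week": "2020-05-01",
     "observed": 120, "expected": 100},
    {"state": "AA", "type": "Unweighted", "week": "2020-06-01",
     "observed": 130, "expected": 100},
    {"state": "BB", "type": "Unweighted", "week": "2020-05-01",
     "observed": 210, "expected": 200},
    {"state": "AA", "type": "Weighted",   "week": "2020-05-01",
     "observed": 999, "expected": 0},   # excluded by the filter: wrong type
]
population = {"AA": 1_000, "BB": 10_000}

# 1. filter rows to those that are relevant
kept = [r for r in rows
        if r["type"] == "Unweighted"
        and "2020-03-13" < r["week"] < "2021-11-13"]

# 2. sum excess (observed - expected) by state
excess = {}
for r in kept:
    excess[r["state"]] = excess.get(r["state"], 0) + r["observed"] - r["expected"]

# 3. join with population and divide
per_person = {s: excess[s] / population[s] for s in excess}
print(per_person)
```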


excessTable:("DSI I SS "; enlist ",") 0: `:excess.csv
popTable:(" S I";enlist",") 0:`:pop.csv
popTable:`State xkey `State xcol popTable

Then the desired result is given by the expression:

`ObservedNumber xdesc select State,ObservedNumber%POPESTIMATE2019 from ((select sum ObservedNumber-AverageExpectedCount by State from excessTable where Type=`Unweighted, Outcome=`Allcauses, WeekEndingDate > 2020.03.13, WeekEndingDate < 2021.11.13) ij popTable)

We can time it with the builtin \t command; it takes around 430μs.

(to be perfectly rigorous in comparison, we should also include the time it takes to key popTable: around 2.5μs)
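Keying a table amounts to building an index on a column. A rough Python analogue (on made-up data) of what keying buys you: a one-time dict build turns each subsequent join lookup into a constant-time operation instead of a scan.

```python
# Rough analogue of keying a table on State: build a dict index once,
# then lookups are O(1). The population figures here are made up.
pop_rows = [("Alaska", 731545), ("Vermont", 623989)]

# un-keyed: each lookup scans the whole table
def lookup_scan(state):
    for s, p in pop_rows:
        if s == state:
            return p

# keyed: one-time indexing cost (the ~2.5us above), then O(1) lookups
pop_index = dict(pop_rows)
print(pop_index["Vermont"])
```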


import pandas as pd

excess_table = pd.read_csv("excess-j.csv")
pop_table = pd.read_csv("nst-est2019-alldata.csv").rename(columns={"NAME": "State"})

# augment with difference column
excess_table["Excess"] = excess_table["Observed Number"] - excess_table["Average Expected Count"]

filtered = excess_table[
    (excess_table["Type"] == "Unweighted")
    & (excess_table["Outcome"] == "All causes")
    & (excess_table["Week Ending Date"] > "2020-03-13")
    & (excess_table["Week Ending Date"] < "2021-11-13")
]

joined = filtered.groupby("State", as_index=False).sum(numeric_only=True).merge(
    pop_table, on="State", how="inner"
)
joined["Excess Per Person"] = joined["Excess"] / joined["POPESTIMATE2019"]
joined[["State", "Excess Per Person"]].sort_values(by="Excess Per Person", ascending=False)

I benchmarked this in a simpleminded way, viz.

import pandas as pd

from time import time

excess_table = pd.read_csv("excess-j.csv")
pop_table = pd.read_csv("nst-est2019-alldata.csv").rename(columns={"NAME": "State"})

t = time()

for i in range(100):
    ...

delta = time() - t
print('t: %0.2fs' % delta)

Dividing the total by the 100 iterations, this works out to around 12ms per run. This is decidedly slower, but I think the bigger drawback is the overly prolix style that Python programmers are used to.
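A less simpleminded approach would be timeit.repeat, which runs several batches and takes the best one, reducing noise from GC and OS scheduling. A sketch, with a stand-in workload rather than the actual pandas pipeline:

```python
import timeit

# Stand-in workload; the real benchmark would wrap the pandas pipeline.
def workload():
    return sum(i * i for i in range(1000))

# Run 5 batches of 100 calls each and report the best batch,
# which is less sensitive to transient system noise than one loop.
batches = timeit.repeat(workload, number=100, repeat=5)
per_call = min(batches) / 100   # seconds per call, best batch
print("%.6fs per call" % per_call)
```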


jd'csvprobe /replace excess-j.csv'
jd'csvcdefs /replace /h 1 /v 11 excess-j.csv'
jd'csvscan excess-j.csv'
jd'csvrd excess-j.csv excess'

jd'csvprobe /replace nst-est2019-alldata.csv'
jd'csvcdefs /replace /h 1 /v 20 nst-est2019-alldata.csv'
jd'csvscan nst-est2019-alldata.csv'
jd'csvrd nst-est2019-alldata.csv statePop'

jd'reads /table calculated sum "Observed Number", sum "Average Expected Count" by State from excess where "Week Ending Date" > "2020-03-13" and "Week Ending Date" < "2021-11-13" and Type="Unweighted" and Outcome="All causes"'
excess_col =: -/ ,"2 > {: jd'reads "Observed Number", "Average Expected Count" from calculated'
jd'createcol calculated excess int';excess_col

jd'ref calculated State statePop NAME'
excess_pp =: %/ ,"2 > {: jd'reads excess,statePop.POPESTIMATE2019 from calculated,calculated.statePop'
jd'createcol calculated excessPp float';excess_pp
jd'reads State,excessPp from calculated order by excessPp desc'

This is at least a little nicer than Pandas/Python because the database is implemented using the language's high-level constructs, so we can manipulate columns as we would vectors. But in general it is written in the same oblique imperative style.
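The closest analogue in the Python ecosystem is NumPy, where a column pulled out of a table is a vector supporting whole-array arithmetic (toy numbers here):

```python
import numpy as np

# Manipulating columns as vectors: subtracting two columns is a single
# elementwise operation, with no explicit loop over rows.
observed = np.array([120, 130, 210])
expected = np.array([100, 100, 200])
excess = observed - expected   # analogous to -/ over paired columns in J
print(excess.tolist())
```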

Benchmarking is with the builtin (6!:2) verb. Once the data is read, the computation takes around 13ms, i.e. on par with Pandas/Python.


data <- read.csv("excess.csv")

popTable <- read.csv("nst-est2019-alldata.csv")

library(dplyr)

data$Excess <- data$ObservedNumber - data$AverageExpectedCount
data$WeekEndingDate <- as.Date(data$WeekEndingDate, format="%Y-%m-%d")

excessTable <- data |>
    filter(Type == 'Unweighted' & Outcome == 'Allcauses' &
           WeekEndingDate < '2021-11-13' & WeekEndingDate > '2020-03-13') |>
    group_by(State) |>
    summarise(Excess = sum(Excess))

joined <- merge(x = excessTable, y = popTable, by.x = 'State', by.y = 'NAME')

joined$ExcessPP <- joined$Excess / joined$POPESTIMATE2019

select(joined, State, ExcessPP) |> arrange(desc(ExcessPP))

I benchmarked using rbenchmark:


benchmark(
    "r" = { ... },
    replications = 100
)

The R procedure took around 19ms per run. The pipe operator makes much of it reasonably elegant, but as a whole it suffers from the same imperative style as Jd and Pandas.


Q is snappy. My guess is that this is because it is an ordered database; Pandas or Jd will never be as fast without fundamentally changing their approach.
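One concrete consequence of ordering, sketched in Python with illustrative weekly dates: a date-range filter over a time-sorted column is two binary searches plus a contiguous slice, rather than a row-by-row scan of the whole table.

```python
import bisect
from datetime import date, timedelta

# With rows kept sorted by date, the range filter from the query above
# becomes two O(log n) binary searches and one slice. Dates are made up.
weeks = [date(2020, 1, 4) + timedelta(weeks=i) for i in range(100)]

# bisect_right skips any week equal to the bound, so both comparisons
# are strict, matching the > / < in the queries above.
lo = bisect.bisect_right(weeks, date(2020, 3, 13))
hi = bisect.bisect_left(weeks, date(2021, 11, 13))
in_range = weeks[lo:hi]   # the entire filter, no per-row predicate
print(len(in_range))
```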

Jd is about the same speed as Pandas. It has a much smaller userbase, but it is competently developed and thus a suitable alternative.