Reading List 2018


It’s that time of year again! (Man, I really need to blog more.)

Books

I managed to get to most of the books I wanted to read in 2017 (see below for the ones I’m rolling into next year). Here are a few extras I read:

Hillbilly Elegy

J.D. Vance

This book came on my radar not long after the 2016 presidential election. While I did gain some valuable insights into Appalachia through the lens of Vance’s personal experiences, I felt like some of the explanations he provides (basically that hillbillies—his term—have brought their misfortune on themselves through “learned helplessness” and a cultural opposition to education) are unnuanced and overly simplistic.

The Cuckoo’s Egg

Cliff Stoll

I found this one after reading the Wikipedia article about Markus Hess. It was a fascinating look into the state of computer security and the Internet in the 1980s, and the writing and pacing made for a really good detective story.

What Hedge Funds Really Do

Philip J. Romero and Tucker Balch

This book was required reading for Machine Learning for Trading (also part of my master’s degree program at Georgia Tech). It’s a pretty quick read (one afternoon/sitting) and gave me a decent overview of hedge funds’ history, how they work, and the theory behind risk mitigation. (Spoiler: the only decent way to make money with hedge funds is to run them.)

A Random Walk Down Wall Street

Burton G. Malkiel

This is sort of a classic in terms of literature on investing in the stock market, so I figured I’d pick it up. It’s surprisingly long—a little over 450 pages—and I got tired about three quarters of the way through. The general sentiment is to invest in a diverse portfolio early in your career and invest as much as you can over time, adjusting your portfolio to become more conservative as you age. Nothing really groundbreaking, but his asides, anecdotes, and worked examples are worth slogging through the more repetitious parts.
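
Malkiel’s core advice (start early and keep contributing) is really just compounding arithmetic. Here’s a quick sketch of my own, where the flat 7% return is a rough illustrative figure, not a claim from the book:

```python
# Illustrative arithmetic only: a flat 7% annual return is a rough long-run
# figure, not a forecast. Contributions are made at the start of each year.
RATE = 0.07

def future_value(annual_contribution, years):
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + RATE)
    return total

# Same $200,000 of total contributions, very different outcomes:
early = future_value(5_000, 40)   # e.g. investing from age 25 to 65
late = future_value(10_000, 20)   # double the contribution, half the time
print(round(early), round(late))
```

Doubling the contribution doesn’t come close to making up for halving the time in the market.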

Liar’s Poker

Michael Lewis

This one has actually been on my “to-read” list since the 2008 financial crisis. I’d heard a lot about the culture of 1980s Wall Street, and it sounds like most of it was true: unchecked greed, no-neck traders gleefully ripping off anyone they could, little (if any) true oversight. This was a good mix of horrifying and entertaining, but the more I think about it, the more it tips toward the former. (I doubt much, if anything, has really changed in the banking industry over the intervening years. Maybe there’s less cocaine now.)

The Big Short

Michael Lewis

I really enjoyed Liar’s Poker, so I figured I’d give this a shot as well. This book focuses on the forces that led to the 2008 financial crisis (specifically, the proliferation of credit default swaps and collateralized debt obligations). I highly recommend this book (I haven’t seen the film, but I’ve heard it’s good); however, if you just want the details of the crisis without the story elements, this video is a good TL;DR.

The Manager’s Path

Camille Fournier

I’d been meaning to read this since I first heard Camille (who was my CTO when I worked at Rent the Runway) was working on it, and I was not disappointed. I highly, highly recommend this book to managers in all industries, but especially those in tech (I found the chapters on managing more than one team and managing managers particularly relevant).

The First 90 Days

Michael D. Watkins

I moved from Hulu to Fox Networks Group in July, and I figured then was as good a time as any to re-read The First 90 Days. While I think some of the more salient points can be distilled into a blog post (which I’ll plan to tackle in early 2018), I think it’s still worth a full read, especially if you’re joining and/or leading a new team.

Practical Machine Learning

Ted Dunning and Ellen Friedman

This is really more of a long blog post/tutorial on PredictionIO’s Universal Recommender and its correlated cross-occurrence algorithm for making recommendations, but I still enjoyed it. It’s a very quick read (only fifty or sixty pages).

Here are the books I want to read in 2018, starting with the five books I’ve rolled over from 2017.

Type-Driven Development with Idris

Edwin Brady

I did some reading and work on Idris in 2016, so I’m really excited to read Edwin’s book this year.

Mindset: The New Psychology of Success

Carol S. Dweck

I mentioned this book in a talk I gave at RailsConf in May of 2016. I read a couple of Dweck’s papers while preparing for that talk, and I think there’s a tremendous amount I can learn from Mindset for my growth as both an individual contributor and as an engineering leader.

First, Break All the Rules

Marcus Buckingham and Curt Coffman

I’ve heard good things about this book at each of the last four places I’ve worked, so it’s probably high time I read it.

A Billion Voices: China’s Search for a Common Language

David Moser

This book was recommended by my Mandarin teacher, and I think it’ll be really interesting to see how modern Mandarin arose from a highly fragmented set of spoken languages and traditions.

The Three-Body Problem

Cixin Liu

This is the only novel on my reading list, but it’s been recommended to me by three completely separate groups of friends, so I’m expecting to really enjoy it. (It’s not that I don’t enjoy reading novels—I do—but they’ve been crowded out by my interests in poetry, philosophy, and computation over the past handful of years.)

The Practice of Programming

Brian W. Kernighan and Rob Pike

I’ll read pretty much anything by Kernighan and Pike, and I’m particularly hoping for some good insights on the tougher aspects of engineering (balancing tradeoffs, identifying the human roots of technical problems, &c).

English Grammar for Students of German

Cecile Zorach et al.

My German has gone a bit downhill since I first started learning in high school (wer rastet, der rostet, nicht wahr?), and while my pronunciation is still pretty good and vocabulary comes back to me quickly, I don’t feel I have as firm a grammatical footing as I’d like. Hopefully reading this book and practicing my spoken German helps with that!

Working Effectively with Legacy Code

Michael Feathers

This book has been recommended to me by too many people for me to continue not reading it.

High Output Management

Andrew S. Grove

This was recommended to me at a conference I attended earlier this year. Andrew Grove was employee number three at Intel and has built some really impressive teams in the course of his career, so I’m looking forward to reading his story and learning from his mistakes.

Eichmann in Jerusalem

Hannah Arendt

“A report on the banality of evil” seems particularly relevant in our current political climate.

What I Learned Losing a Million Dollars

Brendan Moynihan

I mentioned looking forward to learning from Andrew Grove’s mistakes in High Output Management, and that’s doubly true for this book. Most of the narrative finance books I’ve read are more analyses of other people’s realizations and mistakes; I’m hoping to glean something a bit more qualitative (in terms of the psychology of investing) from this first-person account.

Consciousness Explained

Daniel Dennett

I’ve been a big fan of Dennett’s work since college (I focused on philosophy of physics and theory of mind), and I’m particularly interested in this book and its view of consciousness as a continuum.

Papers

In addition to the papers I set out to read in 2017, I picked up a ton more (grad school will do that to you). Omitting some of the less compelling papers, they were:

The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)

Hovav Shacham

I read this paper for my introduction to information security class, and it was a good introduction to return-to-libc attacks (I got to write my own exploit as part of my coursework!).

On the Effectiveness of Address-Space Randomization

Hovav Shacham et al.

Another InfoSec paper (which is true of the next several, so I’ll just note when we move on from that topic). This one examined how well address-space layout randomization, or ASLR, actually prevents attackers from jumping to known points in memory. The answer, at least on 32-bit systems: not very well, since the limited entropy can be brute-forced.

ASLR-Guard: Stopping Address Space Leakage for Code Reuse Attacks

Kangjie Lu et al.

ASLR is only effective if the address layout is truly random and no information about the randomization or ensuing layout is leaked. This paper discusses the possibility of information leakage and proposes ASLR-Guard as a measure to mitigate it (by rendering leaked pointer information useless).

Code-Pointer Integrity

Volodymyr Kuznetsov et al.

This paper covers Code-Pointer Integrity (CPI) as an augmentation to mitigations like ASLR. By enforcing CPI, it’s possible to prevent any kind of control-flow hijacking by an attacker (such as ROP exploits).

Control-Flow Bending: On the Effectiveness of Control-Flow Integrity

Nicolas Carlini and Mathias Payer

Another method of preventing control-flow hijacking, but with somewhat higher overhead than CPI.

Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices

Nadia Heninger, Zakir Durumeric, Eric Wustrow, and J. Alex Halderman

I found this paper really fascinating. It explores RSA and DSA failures caused by malfunctioning random number generators, particularly devices that generate predictable random numbers due to insufficient entropy in the input pool during boot (see Figure 5 in the paper). As is the case with all good security papers: really interesting, really scary.

Q-Learning in Continuous State and Action Spaces

Chris Gaskett, David Wettergreen, and Alexander Zelinsky

Moving on from InfoSec, we now enter the realm of reinforcement learning! (I took Reinforcement Learning over the summer.) My first exposure to Q-learning was in Machine Learning for Trading (linked above), and that implementation relied on a Q-value table (requiring discrete action and state spaces). This paper covers Q-learning in environments with continuous action and/or state spaces.
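
The tabular flavor that breaks down in continuous spaces fits in a few lines. Here’s a minimal sketch of my own on a toy five-state chain (the environment and all hyperparameters are made up for illustration):

```python
import random

# Toy deterministic chain: states 0..4, actions step left (-1) or right (+1),
# reward 1.0 on reaching the terminal state 4.
N_STATES, ACTIONS = 5, (-1, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# The Q-value table: one entry per (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

random.seed(0)
for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy selection over the (discrete) action set
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # the tabular Q-learning update rule
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
```

The paper’s subject is exactly what to do when `Q` can no longer be a lookup table because the state or action space is continuous.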

Learning to Predict by the Methods of Temporal Differences

Richard S. Sutton

This is another paper I found kind of mind-blowing. The general idea here is to incrementally improve predictions not by comparing predictions to actual outcomes, but by comparing successive predictions. It’s the kind of thing that sounds like it shouldn’t work, but it absolutely does! It’s a bit long (36 pages), but definitely worth the read.
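
The core update is tiny. Here’s a sketch of TD(0) prediction, loosely modeled on the bounded random walk example in the paper (the constants and episode count are my own):

```python
import random

# Bounded random walk: nonterminal states 1..5, terminal states 0 (reward 0)
# and 6 (reward 1). True values for states 1..5 are s/6.
ALPHA, GAMMA = 0.1, 1.0
V = [0.0] + [0.5] * 5 + [0.0]  # value estimates; terminals stay at 0

random.seed(1)
for _ in range(5000):
    s = 3  # every walk starts in the middle
    while 0 < s < 6:
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == 6 else 0.0
        # TD(0): move V[s] toward the *next* prediction, not the final outcome
        target = r + (0.0 if s2 in (0, 6) else GAMMA * V[s2])
        V[s] += ALPHA * (target - V[s])
        s = s2

print([round(v, 2) for v in V[1:6]])
```

Note that the update never sees the final outcome directly except at the terminal step; credit propagates backward through successive predictions.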

Knows What It Knows: A Framework For Self-Aware Learning

Lihong Li, Michael Littman, and Thomas J. Walsh

Knows-What-It-Knows, or KWIK, learning is a pretty cool idea, and the paper is a quick (pun super intended) read. The idea is to apply PAC (probably approximately correct)-learnability to mistake-bound models, resulting in a learner that’s able to determine when it doesn’t know the answer. The example provided in section 2 of the paper is a great way to develop intuition around KWIK learning and is only about a page long.
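
To make the idea concrete, here’s a toy KWIK-style learner of my own (a trivial deterministic memorization problem, not the paper’s example): it’s allowed to answer “I don’t know,” but any confident answer must be correct.

```python
# A minimal KWIK-style learner for a deterministic labeling problem:
# it may answer "I don't know" (None), but it is never allowed to be wrong.
class MemorizingKWIKLearner:
    def __init__(self):
        self.known = {}

    def predict(self, x):
        # Return the label if we've seen x; otherwise admit ignorance.
        return self.known.get(x)  # None means "I don't know"

    def observe(self, x, label):
        # After a "don't know", the environment reveals the true label.
        self.known[x] = label

target = {"a": 0, "b": 1, "c": 1}  # the hidden concept (invented data)
learner = MemorizingKWIKLearner()

dont_knows = 0
for x in ["a", "b", "a", "c", "b", "c", "a"]:
    guess = learner.predict(x)
    if guess is None:
        dont_knows += 1
        learner.observe(x, target[x])
    else:
        assert guess == target[x]  # KWIK: confident answers must be correct
```

The KWIK guarantee is that the number of “I don’t know” answers is bounded (here, by the number of distinct inputs) while mistakes are never permitted.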

Residual Algorithms: Reinforcement Learning with Function Approximation

Leemon Baird

Another cool one. The idea here is that there are reinforcement learning algorithms (such as Q-learning) that perform well using lookup tables, but perform poorly when direct function approximation is used. Baird shows that residual algorithms permit direct approximation while maintaining stability and convergence.

Stable Function Approximation in Dynamic Programming

Geoffrey J. Gordon

This is one of the papers that really underscored for me the interrelation of reinforcement learning and dynamic programming (which each rely on the same mathematical underpinning: the Bellman equations). Though the equations themselves are not referenced, the proofs of convergence for multiple temporal difference learning algorithms that rely on function approximation (see also Baird, above) are really cool.

Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning

Richard S. Sutton, Doina Precup, and Satinder Singh

We read a few different papers on temporal abstraction in reinforcement learning for my RL class, but this one was my favorite. I particularly like the idea of options in this paper, which are temporally variable, closed-loop policies that agents may take in the course of learning a game (e.g. picking up an object, traversing a room). Abstracting time out in this way (rather than thinking in terms of discrete steps, each of uniform spatiotemporal duration) allows us to incorporate more intuitive, semantically valuable actions into Markov processes.

A Polynomial-time Nash Equilibrium Algorithm for Repeated Games

Michael Littman and Peter Stone

This is another cool one: determining a polynomial-time algorithm for finding Nash equilibria in general-sum games (e.g. multiplayer online games).

A Polynomial-time Nash Equilibrium Algorithm for Repeated Stochastic Games

Enrique Munoz de Cote and Michael Littman

Similar to the above, but for stochastic (non-deterministic) games.

Markov Games as a Framework for Multi-Agent Reinforcement Learning

Michael Littman

This paper extends Q-learning beyond MDPs (Markov decision processes) and into Markov games. (Another good paper on this topic, also by Littman, is “Friend-or-Foe Q-Learning in General-Sum Games”.)

Correlated Q-Learning

Amy Greenwald and Keith Hall

This is the last RL paper I’ll list here (and, I think, the last one I read for my class). Greenwald and Hall introduce correlated Q-learning, which is a Q-learning algorithm that relies on correlated equilibria to generalize both Nash-Q and Friend-or-Foe-Q (see Littman’s paper, above). I really enjoyed reading this paper, and it was a great place to end my reading for the course, knitting together Q-learning, Nash equilibria, MDPs, and Markov games.

Practical Network Support for IP Traceback

Stefan Savage, David Wetherall, Anna Karlin, and Tom Anderson

This is the first of a few papers I read for Network Security (as before, I’ll note when we switch gears). This one focuses on identifying the source of packet flooding attacks without the cooperation of the associated ISP (and using a surprisingly elegant sampling algorithm that relies on very little edge data to reconstruct a packet’s path).
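
To see why sampling works at all, here’s a rough simulation of the simpler node-sampling variant the paper describes (the router names and marking probability are mine): each router overwrites a packet’s single mark field with some probability, and the victim orders the path by mark frequency.

```python
import random
from collections import Counter

# Node sampling: each router on the path overwrites the packet's single mark
# field with probability P_MARK. Over many packets, routers nearer the victim
# mark more often, so the victim can order the path by mark frequency.
P_MARK = 0.5
path = ["R1", "R2", "R3", "R4"]  # attacker-side first, victim-side last

def send_packet():
    mark = None
    for router in path:  # the packet traverses the path toward the victim
        if random.random() < P_MARK:
            mark = router  # overwrite any earlier mark
    return mark

random.seed(42)
counts = Counter(m for m in (send_packet() for _ in range(100_000)) if m)

# Reconstruct: the most frequent marker is closest to the victim
reconstructed = [r for r, _ in counts.most_common()]
print(reconstructed)  # expected order: R4, R3, R2, R1
```

Edge sampling, the scheme the paper actually proposes, adds edge and distance fields to make reconstruction more robust, but the sampling intuition is the same.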

A DoS-Limiting Network Architecture

Xiaowei Yang, David Wetherall, and Thomas Anderson

This paper introduced me to the idea of capabilities, which are a method by which routers can identify desired packets (packets encoding this kind of information are said to employ a capability). While the overhead involved in adopting this architecture is nontrivial, implementing capabilities (or their functional equivalent) would be a huge step forward in securing computer networks.

ZMap: Fast Internet-Wide Scanning and Its Security Applications

Zakir Durumeric, Eric Wustrow, and J. Alex Halderman

This paper taught me about ZMap, which is, well, “a modular, open-source network scanner specifically architected to perform Internet-wide scans and capable of surveying the entire IPv4 address space in under 45 minutes from user space on a single machine.” It’s a really cool tool, and I encourage you to check it out.

Anomalous Payload-Based Network Intrusion Detection

Ke Wang and Salvatore J. Stolfo

This was the first of two papers I read for one of my Network Security projects, which involved using PAYL to detect anomalous traffic. I particularly enjoyed this paper because it sits at the intersection of two of my main interests: information security and machine learning.

Polymorphic Blending Attacks

Prahlad Fogla et al.

This was the second of two papers I read for the Network Security project I mention above. In this paper, the authors discuss polymorphic blending attacks, which can be used to hide anomalous traffic from detectors like PAYL. I even got to try my hand at obfuscating malevolent traffic and getting it past a PAYL-based model I’d built!

Dynamic Routing Between Capsules

Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton

These next two papers are on capsule networks, which are an augmentation to more traditional convolutional neural networks. The idea is to encode positioning information in the networks themselves to make them more robust to affine transformations; this series of Medium posts is a really good step-by-step breakdown of the material.

Matrix Capsules with EM Routing

Geoffrey E. Hinton et al.

This paper is also on capsule networks, but focuses on the expectation maximization algorithm used to route signals between layers. This Medium post (from the same series as the post linked above) is a great explanation of (and addition to) the paper.

White Paper: The Universal Recommender

Jérôme Kunegis, Alan Said, and Winfried Umbrath

Finally, I read the white paper on the Universal Recommender (used by PredictionIO, mentioned under Books). The underlying correlated cross-occurrence algorithm is interesting, and I think this is something of a must-read for anyone considering using the Universal Recommender (or honestly, anyone interested in building ML-based recommendation systems).
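
To give a flavor of co-occurrence-based recommendation, here’s a drastically simplified sketch of my own: the real algorithm scores item pairs with a log-likelihood ratio test rather than raw counts, and the data here is invented.

```python
from collections import Counter, defaultdict

# Hypothetical user -> purchased-items data (invented for illustration)
interactions = {
    "alice": {"guitar", "amp", "strings"},
    "bob": {"guitar", "amp"},
    "carol": {"guitar", "strings", "tuner"},
    "dave": {"drums", "sticks"},
}

# Count how often each pair of items appears in the same user's history
cooccur = defaultdict(Counter)
for items in interactions.values():
    for a in items:
        for b in items:
            if a != b:
                cooccur[a][b] += 1

def recommend(history, k=2):
    # Score candidate items by how often they co-occur with the user's history
    scores = Counter()
    for item in history:
        for other, count in cooccur[item].items():
            if other not in history:
                scores[other] += count
    return [item for item, _ in scores.most_common(k)]

print(recommend({"guitar"}))  # amp and strings co-occur with guitar most
```

Swapping the raw counts for log-likelihood ratio scores is what keeps wildly popular items from dominating every recommendation.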

I also read a bunch of cryptocurrency and cryptographic token white/yellow papers (including those for Ethereum (white and yellow), Golem, Basic Attention Token, Civic, WAX, and EOS). Since these aren’t peer-reviewed academic papers, though, I haven’t broken them out as I have for the others (though I do find them extremely interesting).

Finally, here are the papers I want to read in 2018 (starting with two rolled-over ones from 2017):

Mastering the Game of Go with Deep Neural Networks and Tree Search

Google DeepMind

I followed some of the news about AlphaGo’s victory over Lee Sedol last year, but didn’t get a chance to read the paper, so it’s on my to-read list for the new year.

Everything You Always Wanted to Know About Synchronization but Were Afraid to Ask

Tudor David, Rachid Guerraoui, and Vasileios Trigonakis

Synchronization is hard! Hopefully reading this paper makes it less so (for me, anyway).

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm

David Silver et al.

A continuation of the type of work explored in “Mastering the Game of Go…” (above). In this case, AlphaZero attained a superhuman level of play in chess, shogi, and go after only twenty-four hours of self-play, with no prior domain knowledge.

Improving Elevator Performance Using Reinforcement Learning

Robert H. Crites and Andrew G. Barto

This is the only paper I didn’t read during my RL class, so I want to be sure to get to it.

The Max K-Armed Bandit: A New Model of Exploration Applied to Search Heuristic Selection

Vincent A. Cicirello and Stephen F. Smith

I became interested in k-armed bandits when we covered them in my RL class, and given that this won the best AAAI paper award in 2005, I figure this should be well worth the read.

Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

Peter W. Shor

Quantum computing sort of fell off my radar this year, and given all the activity in cryptocurrency markets over the past twelve months, I figure now’s the time to read up on the sorts of technology that would turn those markets upside-down.

Off we go!