I'm currently moving all my old content over from the UMD CS Department webservers. Please bear with me as I fill in missing content, fix broken links, re-design style sheets, etc. Thanks.


Here are some projects — small & large, scientific & artistic — that I've fooled around with in my spare time. I hope you find them to be some combination of useful, interesting, and diverting.

False Leveler:
Artificial histogram matching

False Leveler is a Processing program I created to do histogram matching with random, artificially-created destination histograms. Rather than repeating what I wrote on the project's page, I'll just show you some examples.

"False Leveler (Carnival #01)"
"False Leveler (Rust #01)"

lldist.rb: Calculate the distance between lat/lon pairs

This is a Ruby function to find the distance between two points given their latitude and longitude. Latitude is given in degrees north of the equator (use negatives for the Southern Hemisphere) and longitude is given in degrees east of the Prime Meridian (optionally use negatives for the Western Hemisphere).

include Math
DEG2RAD = PI/180.0
def lldist(lat1, lon1, lat2, lon2)
  rho = 3960.0 # radius of the Earth in miles
  theta1 = lon1*DEG2RAD
  phi1 = (90.0-lat1)*DEG2RAD # colatitude
  theta2 = lon2*DEG2RAD
  phi2 = (90.0-lat2)*DEG2RAD
  psi = acos(sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2))
  return psi*rho
end

A couple of notes:

  1. This returns the distance in miles. If you want some other unit, redefine rho with the appropriate value for the radius of the earth in your desired unit (6371 km, 1137 leagues, 4304730 passus, or what have you).
  2. This assumes the Earth is spherical, which is a decent first approximation, but is still just that: a first approximation.*

I am currently writing a second version to account for the difference between geographic and geocentric latitude, which should handle the Earth's eccentricity well. The math is not hard, but finding ground truth to validate my results against is: the online calculators I've tried to check against do not make their assumptions clear. I did find a promising suite of tools for pilots, and I'd hope that if you're doing something as fraught with consequences as flying, you've accounted for these sorts of things.
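For the curious, the heart of that correction is converting geodetic (map) latitude to geocentric latitude before running the spherical formula. Here's a minimal sketch of the conversion, using the WGS-84 eccentricity value; this is my illustration of the idea, not the code from that second version:

```ruby
include Math

# Square of Earth's first eccentricity (WGS-84 ellipsoid).
E2 = 0.00669437999014

# Convert geodetic (map) latitude, in degrees, to geocentric latitude.
def geocentric_lat(geodetic_lat_deg)
  phi = geodetic_lat_deg * PI / 180.0
  atan((1.0 - E2) * tan(phi)) * 180.0 / PI
end
```

At 45° the two latitudes differ by about 0.19°, roughly 13 miles of arc on the surface, which is why the spherical answer really is only a first approximation.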


I've been putting this to use in a new project, and I've noticed an edge case that didn't crop up before. Due to some floating-point oddities, trying to get the distance between a point and itself will throw an error. In that case the value passed to acos() should be 1.0 but ends up being 1.0000000000000002 on my system. Since the domain of acos() is [-1,1] this is no good.
If you want to be on the safe side you can replace this:
psi = acos(sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2))
with this:
val = sin(phi1)*sin(phi2)*cos(theta1-theta2)+cos(phi1)*cos(phi2)
val = [-1.0, val].max
val = [ val, 1.0].min
psi = acos(val)
and that will take care of things. (Yes, this is a verbose way of doing things, but I often prefer the verbose-and-overt to the laconic-and-recondite.)
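Another way to sidestep the domain problem is to swap in the haversine formula, whose asin argument is exactly 0 for identical points, so no clamping is needed there. This is a stand-in I'm offering for comparison, not code from the original lldist.rb; it uses the same mile radius as above:

```ruby
include Math
RHO = 3960.0        # radius of the Earth in miles
DEG2RAD = PI / 180.0

# Haversine distance between two lat/lon points, in miles.
def lldist_hav(lat1, lon1, lat2, lon2)
  dlat = (lat2 - lat1) * DEG2RAD
  dlon = (lon2 - lon1) * DEG2RAD
  a = sin(dlat / 2.0)**2 +
      cos(lat1 * DEG2RAD) * cos(lat2 * DEG2RAD) * sin(dlon / 2.0)**2
  2.0 * RHO * asin(sqrt(a))
end
```

Haversine is also better conditioned than the law-of-cosines version for very small separations, though it has its own trouble spot near antipodal points.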

Roy's Mirror –or– Through a Glass, Dithered

This is a simple animation I whipped up in Processing based on Roy Lichtenstein's series of mirror paintings.

This page has a little more background, the complete code, and a version which will run in your browser using your webcam. (For certain browsers, anyway.) If the animation doesn't work in your browser, here's how it looks:


TwiGVis: Twitter Mapping

TwiGVis (TWItter Geography VISualizer) is a program I created to visualize all of the tweets my research group collected during Hurricanes Sandy and Irene. It produces both still images, like the one below, and video output.

Later I modified it to also display data from the Red Cross on their mobile donations campaign.

Though I initially created it only for internal use in my lab, we received such good feedback at some talks that I've posted the code online. You can download the code and sample data here; use and modify it as you wish under the GPL. Read more about the project and see more examples here. I hope some of you find this useful.

Example video output of TwiGVis

Happy mapping!


Recently I had to crop a lot of animated gifs down for a project. This isn't hard to do with ImageMagick

$ convert in.gif -coalesce -repage 0x0 -crop WxH+X+Y +repage out.gif

…but it does require some repetitive typing, mental arithmetic, and rather mysterious incantations if you don't grok what all the coalescing and repaging is about. (I don't.) Didn't Paul Graham say that if your coding is repetitive, the problem isn't your project, it's that you're doing it wrong? Specifically, you're operating at the wrong level of abstraction.

Because I found myself repetitively wrangling these ImageMagick commands into shape, I decided it was time to sidestep that problem and write a bash script to do the drudgery for me.

#!/bin/bash
if [ -z "$1" ]; then
  echo "usage: $0 infile left right [top] [bottom]"
  exit 1
fi
echo -e "  opening \t ${1}"
BASE=`echo ${1} | sed 's/\(.*\)\.gif/\1/'`
L=${2-0} #pixels to trim from the left
R=${3-0} #pixels to trim from the right
T=${4-0} #use argv[4] or 0, if undef
B=${5-0} #use argv[5] or 0, if undef
W0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 1`
H0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 2`
echo -e "  current size \t ${W0}x${H0}"

let "W1 = $W0 - ($L + $R)"
let "H1 = $H0 - ($T + $B)"
echo -e "  new size \t ${W1}x${H1}"

NEWNAME=${BASE}crop.gif
echo -e "  saving to \t ${NEWNAME}"

convert ${1} -coalesce -repage 0x0 -crop ${W1}x${H1}+${L}+${T} +repage ${NEWNAME}

Simply save this as something like gifcrop.sh, and then run it like so:

$ gifcrop.sh in.gif 10 20 30 40

That will take 10 pixels off the left, 20 off the right, 30 from the top and 40 from the bottom. The result gets saved as incrop.gif. The final two arguments are optional, since most of the time I found myself adjusting the width but leaving the height alone. So these two commands are identical:

$ gifcrop.sh in.gif 10 20 0 0
$ gifcrop.sh in.gif 10 20

This all depends on the format of the output that ImageMagick's identify command gives you, which the script parses to get the current size of the input image. (Newer versions of ImageMagick can report dimensions directly via identify -format, which would be sturdier, but the parsing approach works for me.) You may need to adjust these two lines:

W0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 1`
H0=`identify ${1} | head -1 | awk '{print $3}' | cut -d 'x' -f 2`

On my machine, identify foo.gif | head -1 gives me this output:

foo.gif[0] GIF 250x286 250x286+0+0 8-bit sRGB 128c 492KB 0.000u 0:00.030

The awk command isolates the 250x286 part, and the cut command pulls out the two dimensions from that.

Reel Shadows: Boid Animations

This is an abstract algorithmic animation project I've been working on. In a nutshell, it's my interpretation of what would happen if you projected a movie onto a flock of birds instead of a screen.

You can read about my methods and the videos that inspired it as well as see some final renders here. This video should give you a flavor.


Monet Blend

Generally speaking, I don't see in color well. Not that I'm color blind. But when I look at a scene I notice shape and form much more than I notice color. When I was a child my art teacher had to coax me into bothering to color anything in; I was satisfied with line drawings. I spent a lot of time designing and making paper models, but always out of plain, unadorned white cardboard.

There are exceptions to this preference for space over color. Monet, Turner and Rothko all make me sit up and notice color. I especially like the various series Monet did of the same scene from the same vantage point under varying conditions. I love seeing multiple works from a series side-by-side in galleries so I can compare them.

This project was born out of the desire to be able to look at multiple pieces in the same series of Monet paintings at the same time. Swiveling my head back and forth rapidly only works so well, and it earns me extra weird looks from other patrons. (Plus, I feel like I need to correct my weakness w.r.t. color, and if I'm going to learn about color I might as well learn from the best.)

What I've done is write a program which loads two Monet images and blends them together. Just averaging the two would create a muddled, uninspired mess. So I use a noise function to decide at each pixel how much to draw from image 1 and how much from image 2.

Blending matrix
Combination of "Sunset, Pink Effect" and "Hazy Sunshine" using the blending matrix.

You can see an example of this blending matrix above. Where the matrix is darker, the output color is closer to "Sunset, Pink Effect"; where it is lighter, the output is closer to "Hazy Sunshine." A pixel which is exactly 50% gray (halfway between white and black) yields a color halfway between the colors of the corresponding pixels in the two images.

The blending matrix is also a function of time, so each source image's influence over the output shifts as the animation runs, letting me see different parts of each painting in turn.

By changing parameters I can control how smooth or muddled the noise is, how bimodal the distribution is, how fast it moves through time/space, etc.

Currently I'm using a simple linear interpolation between the two source images, which is then passed through a sigmoid function. There are at least a dozen other ways I could blend two colors. I need to explore them more thoroughly, but from what I've tried I like this approach. It's conceptually parsimonious and visually pleasing enough.
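A minimal sketch of that per-pixel rule, with a plain logistic sigmoid as the squashing function. The noise source and the exact parameters here are stand-ins I've chosen for illustration, not the real program's values:

```ruby
include Math

# Squash a centered noise value into (0, 1); higher k makes the
# weights more bimodal, pushing them toward 0 or 1.
def sigmoid(x, k = 6.0)
  1.0 / (1.0 + exp(-k * x))
end

# Blend two RGB pixels. noise_val is assumed to lie in [0, 1] and is
# centered before squashing; w = 0 gives color a, w = 1 gives color b.
def blend_pixel(a, b, noise_val)
  w = sigmoid(noise_val - 0.5)
  a.zip(b).map { |ca, cb| (1.0 - w) * ca + w * cb }
end
```

A noise value of exactly 0.5 gives an even mix, so blend_pixel([255, 0, 0], [0, 0, 255], 0.5) returns [127.5, 0.0, 127.5]; cranking k up turns smooth gradients into hard-edged patches.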

The examples above show colors interpolated in RGB space. The results are good, but can get a little wonky if the two source colors are too dissimilar. Interpolating between two colors is a bit of black magic. AFAICT there is no one gold standard way to go about it. I've tried using HSV space but wasn't too pleased with the results. After that I wrote some code which does the interpolation in CIE-Lab color space. I think the results are very slightly better than RGB, but it's difficult for me to tell. I'll render out a couple of examples using that technique and maybe you can judge for yourself.

If I wanted to get sophisticated about this I should also add a method to do image registration on the source images. I have another semi-completed project which could really use that, so once I get around to coding it for that project I'll transfer it over to this one as well. (Although that other project needs to do it on photographs, and this one on paintings, and the latter is a lot trickier.)

Diffusion Patterns

This is an animation process inspired by Leo Villareal's and Jim Campbell's work with LEDs.

Since I have neither the money nor space for hardware, I'm settling for this software version.

A grid of nodes is initialized with random wirings between adjacent nodes. Each node starts with a fixed amount of charge, and charge flows between wired nodes over time. At each time step a random number of wires is created or destroyed, which keeps the system from settling into a fixed state.
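A toy version of one diffusion step, assuming charge moves along each wire in proportion to the difference across it. This is my own illustration; the real wiring scheme and flow rates may differ:

```ruby
# One diffusion step over an undirected wiring.
# charges: array of Floats, one per node.
# wires:   array of [i, j] node-index pairs.
# rate:    fraction of the charge difference that flows per step.
def diffuse_step(charges, wires, rate = 0.1)
  flow = Array.new(charges.size, 0.0)
  wires.each do |i, j|
    d = (charges[i] - charges[j]) * rate
    flow[i] -= d  # charge leaves the higher-charge node...
    flow[j] += d  # ...and arrives at the lower-charge node
  end
  charges.each_with_index.map { |c, i| c + flow[i] }
end
```

One nice property of this scheme is that total charge is conserved at every step, which keeps the overall brightness of the animation stable even as the wiring churns.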

I use a variation of Blinn's metaball algorithm to render each node. I know it isn't a great match for the look of LEDs when it comes to photorealism, but I like it for this purpose, and I've been looking for an excuse to play around with metaballs anyway.

Metaballs are typically coded to have constant mass/charge/whatever and varying location. I've flipped that so their charge is variable and location is constant. Visually I think it's actually a pretty good match for the things Jim Campbell does with LEDs behind semi-opaque plexiglass sheets. (Or could be if I tweaked it with that in mind as a goal.)
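The flipped arrangement is easy to sketch: the field at a pixel sums each node's (varying) charge over its (fixed) squared distance, and thresholding that field is what gives metaballs their merged, blobby look. This sketch uses a simple inverse-square falloff rather than Blinn's exponential, so treat it as a rough illustration:

```ruby
# Field strength at point (x, y) from fixed nodes with varying charges.
# nodes: array of [nx, ny, charge] triples.
def field_at(x, y, nodes, eps = 1e-6)
  nodes.sum do |nx, ny, q|
    q / ((x - nx)**2 + (y - ny)**2 + eps)  # eps avoids division by zero
  end
end

# A pixel lights up when the summed field crosses a threshold.
def lit?(x, y, nodes, threshold = 1.0)
  field_at(x, y, nodes) >= threshold
end
```

Because the positions never move, the distance terms could all be precomputed once per pixel, with only the charges changing frame to frame.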

I'd like to take this same rendering process and use it for a lot of other processes besides the charge-diffusion algorithm that's running in the background here.

Card Matching

As part of my research, I've been building a neural network system to play a memory game you might have played as a child. It's variously known as "Concentration," "Memory," "Pelmanism," "Pairs" and assorted other names. The basic idea is that several pairs of cards are face-down on a table, and you have to find all the pairs by turning over two cards at a time.

I want to compare my system's performance to that of humans, but don't have any information about human performance to use as a benchmark. To clear that hurdle, I built an online version of the game for people to play so I can record their behavior. You can play my card matching game by going here.

In addition to collecting data for my research, coding this game also gave me an excuse to learn jQuery.

WynneWord Crossword Generator

To be added

Celtic Knot Animations

I wrote some code to generate Celtic knotwork patterns.

"Knot Variation #6"
Music by Flatwound, "The Long Goodbye."

Noise Portrait

To be added


To be added


This doesn't even rise to the level of a 'project,' just a snippet of LaTeX I put together so that I could use asterisms (⁂) when writing papers. I use them to mark off sections of text which will need further attention when editing.

As I said, this isn't really a project, but I'm putting it up here because hopefully it will lead to me cleaning up and posting more of the LaTeX macro file I've been piecing together over the last year. (And who knows, maybe there's some other STEM grad student who gets as excited over obscure typographical marks as I do.)

Updated: I've got a much, much simpler solution than the one I gave below, and it appears to get rid of the weird beginning-of-paragraph bug I ran into. I haven't tested it extensively, but it seems to work better than the solution I posted previously, and it's certainly much easier to understand.



There are other macros floating around out there that will create asterisms, but the ones I tried don't work if you're not using single spacing / standard leading. This one does, best I can tell, and it also handles different text sizes.

Jared Sylvester
August, 2015