/* Compute what is computable and make computable what is not so */
Jared Sylvester
About Me
I'm a consultant/machine learning researcher/data scientist at Booz Allen Hamilton. In 2014 I got my doctorate in Computer Science from the University of Maryland, where I focused on the field of biologically-inspired AI. My dissertation was on executive function and cognitive control (working memory, decision making, etc.) using neural-inspired systems rather than rule-based ones — essentially trying to get computers to act in ways that are a little more like our brains and also get artificial neural networks to act a little more like traditional computers.
I also did a lot of work at the Center for Complexity in Business, applying data-driven computational techniques to marketing, finance, and other business domains. Over the years I've worked in fields including data mining, biometrics, circuit design, cognitive psychology, finance, social networks, and marketing.
Donald Knuth said “Science is what we understand well enough to explain to a computer. Art is everything else we do.”* This is a moving boundary, and I'm interested in the application of algorithmic techniques to liminal fields on both sides of the frontier.
My non-scientific interests include algorithmic animation (I've posted some of my work here), calligraphy (ditto), and baking bread (sadly it's a little tough to put my output from this hobby online). I've also been trying to teach myself some woodworking and archery.
I live in Maryland with my wife, two toddlers, and a Westie.
C.V. / Résumé
(Last updated December, 2020.)
Please note that any revision to my résumé requires pre-publication review, an extremely bureaucratic and drawn-out process. So while the list of publications here has been updated, the description of my current job role is about two years out of date.
Research
(Everything I have written below refers to work I did in grad school. I should really get around to writing a synopsis of the work I've done since.)
Dissertation
For my dissertation I worked with Jim Reggia on exploring neural models of cognitive control. Most cognitive control models are built using symbolic, rule-based paradigms. Such systems are biologically implausible and often tend toward the homuncular. The neural models that do exist are typically designed narrowly for a particular task, require a great deal of human intervention to tailor them to the objective, and have trouble scaling to larger problem spaces.
I am exploring a more generalizable, neural model of cognitive control: networks which learn not only memories of environmental stimuli but also the steps necessary for completing the task. The steps are stored in a memory formed by a sequential attractor network I developed, so that they can be visited in order. I call my model GALIS, for "Gated Attractor Learning Instruction Sequences."
Generating behavior from the learned contents of a memory, rather than from the explicit structure of the network itself, makes the model's behavior much easier to change. Rather than rebuilding the ‘hardware’ of the network, you can load different ‘software’ by training the memory on different patterns. Furthermore, making the model's behavior readily mutable opens the door to improving its performance as it gains experience. That, in turn, should allow the model to learn the behavior necessary to complete a task on its own.
Basing behavior on memory contents rather than architecture is not unlike the shift from clockwork automata like Vaucanson's ‘Digesting Duck’ to the Jacquard Loom. The latter was an important step in the history of computation because its behavior could be changed simply by swapping in a different set of punchcards — i.e., by changing the contents of its memory. Of course GALIS surpasses the Jacquard loom because the loom was only able to follow instructions, not conduct any computation of its own. GALIS, on the other hand, determines endogenously when and how to modify its working memory, produce outputs, etc.
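As a toy illustration (this is not GALIS itself, just the underlying idea), a sequential attractor memory can be built with a temporally asymmetric Hebbian rule: each stored pattern is associated with its successor, so the network replays a learned sequence of steps in order. All sizes and patterns below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy sequence of bipolar (+1/-1) patterns, stand-ins for task steps.
n, steps = 200, 5
seq = rng.choice([-1.0, 1.0], size=(steps, n))

# Temporally asymmetric Hebbian learning: associate each pattern with
# its successor, so recalling step t drives the network toward step t+1.
W = sum(np.outer(seq[t + 1], seq[t]) for t in range(steps - 1)) / n

# Replay the sequence starting from its first element.
x = seq[0]
recalled = [x]
for _ in range(steps - 1):
    x = np.sign(W @ x)  # one synchronous update advances one step
    recalled.append(x)

# With few stored patterns relative to n, the replay is exact.
assert all(np.array_equal(r, s) for r, s in zip(recalled, seq))
```

Because the weight matrix is asymmetric (it maps step t onto step t+1 rather than back onto itself), the dynamics march through the sequence instead of settling into a single fixed point.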
Business & Social Network Analysis
In addition to my dissertation research I'm working with Bill Rand at the Center for Complexity in Business in UMD's Smith School of Business. I'm working on a couple of projects, but the main one for me is an effort to model social interactions in an MMORPG with a freemium business model. Our goal is to predict which players will convert from free to paid users based on their location in the in-game social graph and on their characteristics and those of their friends. We're using a variety of techniques, including agent-based modeling, logistic regression, and assorted machine learning methods.
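As a rough sketch of this kind of modeling setup (the features, effect sizes, and data below are entirely invented for illustration), a logistic regression over a player's social-graph features might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features per player: number of friends who already pay,
# and total friend count (degree) in the in-game social graph.
n = 2000
paying_friends = rng.poisson(1.0, n)
degree = paying_friends + rng.poisson(4.0, n)

# Synthetic ground truth: conversion odds rise with paying friends.
true_logit = -2.0 + 1.2 * paying_friends - 0.05 * degree
converted = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit logistic regression by Newton's method (IRLS).
X = np.column_stack([np.ones(n), paying_friends, degree])
w = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))
    H = (X * (p * (1 - p))[:, None]).T @ X   # Hessian of the log-loss
    w += np.linalg.solve(H, X.T @ (converted - p))

print(w)  # fitted coefficients: intercept, paying-friends, degree
```

In the real project the features came from the observed social graph rather than a generator like this, and logistic regression was only one of several techniques alongside agent-based models.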
Other neural networks research
Prior to GALIS I worked with Jim on two other projects. The first was a computational model of working memory formation, done in conjunction with a wide-ranging study at UMD's Center for Advanced Study of Language into the role of working memory in language tasks. That study led into the cognitive control research I am doing now. I have also used machine learning methods to analyze the results of some CASL studies, to see whether it is possible to determine who will benefit from working memory training based on pre-test results. Please see the 2011 tech report below for more.
The second project, begun in Spring 2007, deals with symmetries in topographic Self-Organizing Maps. By limiting the radius of competition and choosing multiple winners for standard Hebbian learning we can generate cortices with global patterns of symmetric maps. Please see the 2009 Neural Computation paper below for details.
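The limited competition radius and multiple winners are specific to that paper, but the baseline they modify, a standard topographic self-organizing map with a single winner and a neighborhood update, can be sketched as follows (map size, schedules, and input distribution are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D map of 20 units learning to cover inputs drawn from [0, 1]^2.
map_size, dim = 20, 2
weights = rng.random((map_size, dim))
positions = np.arange(map_size)

for step in range(2000):
    x = rng.random(dim)  # input stimulus
    winner = np.argmin(((weights - x) ** 2).sum(axis=1))
    # Units near the winner (in map space) move toward the input;
    # neighborhood radius and learning rate decay over time.
    radius = 3.0 * np.exp(-step / 700)
    lr = 0.25 * np.exp(-step / 1000)
    h = np.exp(-((positions - winner) ** 2) / (2 * radius ** 2))
    weights += lr * h[:, None] * (x - weights)

# After training, adjacent map units have nearby weight vectors,
# i.e. the map is topographic.
gaps = np.linalg.norm(np.diff(weights, axis=0), axis=1)
print(gaps.mean())
```

The paper's variant replaces the single argmin winner with several winners chosen within a restricted competition radius, which is what gives rise to the symmetric map patterns.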
Undergrad
At Notre Dame I did machine learning research with Nitesh Chawla, who advised me for my final two years there. I created and tested a system called EVEN, for ‘Evolutionary Ensembles’: a genetic algorithm framework for combining multiple classifiers for machine learning and data mining. It is very flexible, able to combine any type of base classifier under a choice of fitness metrics.
Publications
Journals
- Raff, E., Zak, R., Sylvester, J., Cox, R., Yacci, P., & McLean, M. "An investigation of byte n-gram features for malware classification." Journal of Computer Virology, vol. 14(1), pp. 1–20. February, 2018.
- Sylvester, J. & Reggia, J. "Engineering Neural Systems for High-Level Problem Solving." Neural Networks, vol. 79, pp. 37–52. 2016.
- Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Journal of Consciousness Studies, vol. 21(9–10), pp. 153–178. 2014.
- Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Understanding the Predictive Power of Computational Mechanics and Echo State Networks in Social Media." ASE Human Journal, vol. 2(2), pp. 13–25. 2013.
- Sylvester, J., Reggia, J., Weems, S., and Bunting, M. "Controlling Working Memory with Learned Instructions." Neural Networks, vol. 41, Special Issue on Autonomous Learning, pp. 23–38. 2013.
- Sylvester, J., and Reggia, J. "Plasticity-Induced Symmetry Relationships Between Adjacent Self-Organizing Topographic Maps." Neural Computation, vol. 21(12), pp. 3429–3443. 2009.
Conferences
- Raff, Sylvester, Forsyth & McLean. "Barrage of Random Transforms for Adversarially Robust Defense." IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 16–20 June, 2019.
- Fleshman, Raff, Sylvester, Forsyth & McLean. "Non-Negative Networks Against Adversarial Attacks." AAAI Workshop on Artificial Intelligence for Cyber Security (AICS). 27 January, 2019.
- Raff & Sylvester. "Linear Models with Many Cores and CPUs: A Stochastic Atomic Update Scheme." IEEE Conference on Big Data. 10–13 December, 2018.
- Raff, Sylvester & Nicholas. "Engineering a Simplified 0-Bit Consistent Weighted Sampling." ACM Conference on Information and Knowledge Management (CIKM). 22–26 October, 2018.
- Raff & Sylvester. "Gradient Reversal Against Discrimination: A Fair Neural Network Learning Approach." The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA). 1–4 October, 2018.
- Raff & Sylvester. "Gradient Reversal Against Discrimination." Fairness, Accountability & Transparency in Machine Learning (FAT/ML). 15 July, 2018.
- Sylvester & Raff. "What about applied fairness?" ICML: The Debates. 15 July, 2018.
- Raff, E., Sylvester, J. & Mills, S. "Fair Forests: Regularized Tree Induction to Minimize Model Bias." AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES). February, 2018.
- Raff, E., Barker, J., Sylvester, J., Brandon, R., Catanzaro, B., and Nicholas, C. "Malware detection by eating a whole EXE." AAAI Workshop on Artificial Intelligence for Cyber Security (AICS). February, 2018.
- Raff, E., Sylvester, J., & Nicholas, C. "Learning the PE Header: Malware Detection with Minimal Domain Knowledge." 10th ACM Workshop on Artificial Intelligence and Security (AISec). 3 November, 2017.
- Sylvester, J., Healy, J., Wang, C., & Rand, W. "Space, Time, and Hurricanes: Investigating the Spatiotemporal Relationship among Social Media Use, Donations, and Disasters." ASE Int'l Conf. on Social Computing. May, 2014.
- Rand, W., Darmon, D., Sylvester, J., & Girvan, M. "Will My Followers Tweet? Predicting Twitter Engagement using Machine Learning." European Marketing Academy Conference. June, 2014.
- Sylvester, J., & Rand, W. "Keeping Up with the (Pre-Teen) Joneses: The Effect of Friendship on Freemium Conversion." Proc. of the Winter Conference on Business Intelligence. February, 2014.
- Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." ASE/IEEE Int'l Conf. on Social Computing, pp. 102–107. September, 2013.
- Sylvester, J., & Reggia, J. "The Neural Executive: Can Gated Attractor Networks Account for Cognitive Control?" Ann. Mtg. of the Int'l Assoc. for Computing & Philosophy. July, 2013.
- Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Ann. Mtg. of the Int'l Assoc. for Computing & Philosophy. July, 2013.
- Sylvester, J., Reggia, J., & Weems, S. "Cognitive Control as a Gated Cortical Net." Proc. of the Int'l Conf. on Biologically Inspired Cognitive Architectures, pp. 371–376. Alexandria, VA, August 2011.
- Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "A Temporally Asymmetric Hebbian Network for Sequential Working Memory." Proc. of the Int'l Conf. on Cognitive Modeling, pp. 241–246. Philadelphia, PA, August 2010.
- Reggia, J., Sylvester, J., Weems, S., & Bunting, M. "A Simple Oscillatory Short-term Memory Model." Proc. of the Biologically-Inspired Cognitive Architecture Symposium, AAAI Fall Symposium Series, pp. 103–108. Arlington, VA, 2009.
- Sylvester, J., Weems, S., Reggia, J., Bunting, M., & Harbison, I. "Modeling Interactions Between Interference and Decay During the Serial Recall of Temporal Sequences." Proc. of the Psychonomic Society Annual Meeting, November 2009.
- Chawla, N., & Sylvester, J. "Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets." Proc. of Multiple Classifier Systems, pp. 397–406. 2007.
- Sylvester, J., & Chawla, N. "Evolutionary Ensemble Creation and Thinning." Proc. of IEEE IJCNN/WCCI, pp. 5148–55. 2006.
- Sylvester, J., & Chawla, N. "Evolutionary Ensembles: Combining Learning Agents using Genetic Algorithms." Proc. of AAAI Workshop on Multi-agent Systems, pp. 46–51. 2005.
Reports, working papers, etc.
- Sylvester, J., Reggia, J., & Weems, S. "Predicting improvement on working memory tasks with machine learning techniques." UMD Center for Adv. Study of Languages. Technical Report. 2011.
- Sylvester, J. "Maximizing Diffusion on Dynamic Social Networks." 2009. (Submitted to satisfy the requirements for my Master's in CS. Originally written as a final project report for BMGT 808L (Complex Systems in Business). Currently being reworked for a journal submission.)
Other Talks
- Sylvester & Fleshman. "Resisting Adversarial Attacks on Machine Learning Malware Detectors." Refereed. GPU Technology Conference DC. 22–24 October, 2018.
- "Malware Detection by Eating a Whole EXE." With Edward Raff and Rob Brandon. Refereed. GPU Technology Conference. Washington, DC. 1–2 November 2017.
- "Fighting Malware with Machine Learning." With Edward Raff. Refereed. GPU Technology Conference. Washington, DC. 26–27 October 2016.
- "Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." Invited. AAAI Fall Symposium on Social Networks and Social Contagion. Alexandria, VA. 15–16 November 2013.
- "Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." With David Darmon. Refereed. 5th Ann. Complexity in Business Conference. Washington, DC. 7–8 November 2013.
- "Neurocognitive Architecture Case Study: GALIS." An informal guest lecture in CMSC 727 (Neural Computation). May 2013.
- "Attractor Network Models for Cognitive Control." Given for CASL's Lunch Lecture series. College Park, MD. 13 March 2012.
- "Modeling Cognitive Control of Working Memory as a Gated Cortical Network." Invited talk at the First Int'l Workshop on Cognitive and Working Memory Training. Introduction by Jim Reggia. Hyattsville, MD. 23–25 August 2011.
- "Oscillatory Neural Network Models of Sequential Short-Term Memory." Given for CASL's Lunch Lecture series. Introduction by Scott Weems. College Park, MD. 15 June 2010.