I'm currently moving all my old content over from the UMD CS Department webservers. Please bear with me as I fill in missing content, fix broken links, re-design style sheets, etc. Thanks.
/* Compute what is computable and make computable what is not so */
I'm a consultant/machine learning researcher/data scientist at Booz Allen Hamilton. Last year I got my doctorate in Computer Science from the University of Maryland, where I focused on the field of biologically-inspired AI. My dissertation was on executive function and cognitive control (working memory, decision making, etc.) using neural-inspired systems rather than rule-based ones — essentially trying to get computers to act in ways that are a little more like our brains and also get artificial neural networks to act a little more like traditional computers.
I also did a lot of work at the Center for Complexity in Business, applying data-driven computational techniques to marketing, finance, and other business domains. Over the years I've worked in fields including data mining, biometrics, circuit design, cognitive psychology, and social network analysis.
Donald Knuth said “Science is what we understand well enough to explain to a computer. Art is everything else we do.” This is a moving boundary, and I'm interested in the application of algorithmic techniques to liminal fields on both sides of the frontier.
My non-scientific interests include algorithmic animation (I've posted some of my work here), calligraphy (ditto), and baking bread (sadly it's a little tough to put my output from this hobby online). I've also been trying to teach myself some woodworking and archery.
I live in Maryland with my wife, infant son, and Westie.
C.V. / Résumé
(Last updated December, 2016.)
Dissertation research
For my dissertation I worked with Jim Reggia on exploring neural models of cognitive control. Most cognitive control models are built using symbolic, rule-based paradigms. Such systems are biologically implausible and often tend toward the homuncular. The neural models that do exist are typically designed narrowly for a particular task, require a great deal of human intervention to tailor them to the objective, and have trouble scaling to larger problem spaces.
I am working toward a more generalizable neural model of cognitive control: networks that learn not only memories of environmental stimuli but also the steps necessary to complete a task. The steps are stored in a memory formed by a sequential attractor network I developed, so that they can be visited in order. I call my model GALIS, for "Gated Attractor Learning Instruction Sequences."
By generating behavior from the learned contents of a memory rather than from the explicit structure of the network itself, the model's behavior becomes much easier to change. Rather than rebuilding the ‘hardware’ of the network, you can load different ‘software’ by training the memory on different patterns. Making the model's behavior readily mutable also opens the door to improving its performance with experience. That, in turn, should allow the model to learn on its own the behavior needed to complete a task.
Basing behavior on memory contents rather than architecture is not unlike the shift from clockwork automata like Vaucanson's ‘Digesting Duck’ to the Jacquard Loom. The latter was an important step in the history of computation because its behavior could be changed simply by swapping in a different set of punchcards — i.e., by changing the contents of its memory. Of course GALIS surpasses the Jacquard loom because the loom was only able to follow instructions, not conduct any computation of its own. GALIS, on the other hand, determines endogenously when and how to modify its working memory, produce outputs, etc.
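To give a flavor of the sequence-memory idea, here is a minimal sketch of a temporally asymmetric Hebbian attractor network in Python. This is a generic textbook mechanism, not GALIS itself (whose gating and learning are considerably richer); the pattern count, dimensionality, and Hadamard-based orthogonal encoding are illustrative choices.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Four mutually orthogonal +/-1 patterns act as the stored "instruction" sequence.
N = 16
seq = hadamard(N)[1:5]          # rows 1..4 (row 0 is the all-ones vector)

# Temporally asymmetric Hebbian learning: each pattern points to its successor.
W = np.zeros((N, N))
for t in range(len(seq) - 1):
    W += np.outer(seq[t + 1], seq[t]) / N

# Recall: cue the network with the first pattern and let it step through the rest.
x = seq[0].copy()
recalled = [x]
for _ in range(len(seq) - 1):
    x = np.sign(W @ x).astype(int)
    recalled.append(x)
```

Because the stored patterns are mutually orthogonal, each state maps exactly onto its successor; with correlated patterns, recall degrades gradually as cross-talk between stored transitions grows.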
Business & Social Network Analysis
In addition to my dissertation research I'm also working with Bill Rand in UMD's Smith School of Business's Center for Complexity in Business. I'm working on a couple of projects, but the main one for me is an effort to model social interactions in an MMORPG with a freemium business model. Our goal is to model which users will convert from free to paid based on their location in the in-game social graph and on their own characteristics and those of their friends. We're using a variety of techniques, including agent-based modeling, logistic regression, and assorted machine learning methods.
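As a loose illustration of the modeling approach, here is a self-contained logistic regression fit by plain gradient descent on synthetic data. The feature names and the data-generating rule are hypothetical stand-ins, not our actual MMORPG dataset.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical per-player features: number of paying friends and weekly play hours.
rng = np.random.default_rng(0)
n = 200
paying_friends = rng.poisson(2.0, n)
weekly_hours = rng.exponential(5.0, n)
# Synthetic ground truth: players convert when the combined signal is high enough.
y = (0.8 * paying_friends + 0.3 * weekly_hours - 3.0 > 0).astype(float)

# Standardize the features and prepend a bias column.
X = np.column_stack([paying_friends, weekly_hours]).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n), X])

# Fit by gradient descent on the average log-loss.
w = np.zeros(X.shape[1])
for _ in range(3000):
    grad = X.T @ (sigmoid(X @ w) - y) / n
    w -= 1.0 * grad

accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Standardizing the features keeps the fixed step size stable; in practice one would evaluate on held-out users rather than on the training set as done here.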
Other neural networks research
Prior to GALIS I worked with Jim on two other projects. The first was a computational model of working memory formation, done in conjunction with a wide-ranging study at UMD's Center for Advanced Study of Language (CASL) into the role of working memory in language tasks. That study led into the cognitive control research I am doing now. I have also used machine learning methods to analyze the results of some CASL studies, to see whether it is possible to determine who will benefit from working memory training based on pre-test results. Please see the 2011 tech report below for more.
The second project, begun in Spring 2007, dealt with symmetries in topographic Self-Organizing Maps (SOMs). By limiting the radius of competition and choosing multiple winners for standard Hebbian learning, we could generate cortices with global patterns of symmetric maps. Please see the 2009 Neural Computation paper below for details.
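For readers unfamiliar with SOMs, here is a minimal single-winner, one-dimensional Kohonen map in Python. This is the standard algorithm, not the limited-radius, multi-winner variant from the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D map of 20 units learns to cover a 1-D uniform input distribution.
n_units = 20
weights = rng.random(n_units)      # codebook initialized randomly in [0, 1]
data = rng.random(500)

for epoch in range(10):
    lr = 0.5 * (1.0 - epoch / 10)                 # decaying learning rate
    sigma = max(0.5, 5.0 * (1.0 - epoch / 10))    # shrinking neighborhood width
    for x in data:
        winner = np.argmin(np.abs(weights - x))   # single best-matching unit
        dist = np.arange(n_units) - winner
        h = np.exp(-dist**2 / (2 * sigma**2))     # Gaussian neighborhood function
        weights += lr * h * (x - weights)         # pull the winner and its neighbors
```

After training, nearby units tend to encode nearby inputs, which is the topographic property the symmetry work builds on.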
Undergrad
At Notre Dame I did machine learning research. I worked on creating and testing a system called EVEN, for ‘Evolutionary Ensembles’: a genetic algorithm framework for combining multiple classifiers for machine learning and data mining. It is very flexible, able to combine any type of base classifier using different fitness metrics. This work was done with Nitesh Chawla, who advised me for my final two years at Notre Dame.
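The flavor of the approach can be sketched with a toy genetic algorithm that evolves a bitmask selecting which classifiers join a majority-vote ensemble. Stub classifiers (noisy copies of synthetic labels) stand in for real base learners; this is an illustration, not the EVEN codebase.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: 12 stub classifiers, each a noisy copy of the true labels.
n_samples, n_clf = 300, 12
y_true = rng.integers(0, 2, n_samples)
error_rates = rng.uniform(0.1, 0.45, n_clf)
preds = np.array([np.where(rng.random(n_samples) < e, 1 - y_true, y_true)
                  for e in error_rates])           # shape (n_clf, n_samples)

def fitness(mask):
    """Majority-vote accuracy of the classifiers selected by the bitmask."""
    if mask.sum() == 0:
        return 0.0
    vote = preds[mask.astype(bool)].mean(axis=0) > 0.5
    return np.mean(vote == y_true)

# Tiny generational GA with elitism over bitstring genomes.
pop = rng.integers(0, 2, (20, n_clf))
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[:4]]                          # keep the 4 best unchanged
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = pop[order[rng.integers(0, 10, 2)]]   # parents from the top half
        cut = rng.integers(1, n_clf)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(n_clf) < 0.05             # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite, np.array(children)])

best = max(pop, key=fitness)
best_acc = fitness(best)
```

Here fitness is simple ensemble accuracy; swapping in other fitness metrics (as EVEN allows) only requires changing the `fitness` function.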
- Raff, E., Zak, R., Sylvester, J., Cox, R., Yacci, P., & McLean, M. "An investigation of byte n-gram features for malware classification." Journal of Computer Virology. pp. 1–20. September, 2016.
- Sylvester, J. & Reggia, J. "Engineering Neural Systems for High-Level Problem Solving." Neural Networks, vol. 79, pp. 37–52. 2016.
- Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Journal of Consciousness Studies, vol. 21(9–10), pp. 153–178. 2014.
- Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Understanding the Predictive Power of Computational Mechanics and Echo State Networks in Social Media." ASE Human Journal, vol. 2(2), pp.13–25. 2013.
- Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "Controlling Working Memory with Learned Instructions." Neural Networks, vol. 41, Special Issue on Autonomous Learning, pp. 23–38. 2013.
- Sylvester, J., & Reggia, J. "Plasticity-Induced Symmetry Relationships Between Adjacent Self-Organizing Topographic Maps." Neural Computation, vol. 21(12), pp. 3429–3443. 2009.
- Sylvester, J., Healy, J., Wang, C., & Rand, W. "Space, Time, and Hurricanes: Investigating the Spatiotemporal Relationship among Social Media Use, Donations, and Disasters." ASE Int'l Conf. on Social Computing. (Forthcoming). May, 2014.
- Rand, W., Darmon, D., Sylvester, J., & Girvan, M. "Will My Followers Tweet? Predicting Twitter Engagement using Machine Learning." European Marketing Academy Conference (Forthcoming). June, 2014.
- Sylvester, J., & Rand, W. "Keeping Up with the (Pre-Teen) Joneses: The Effect of Friendship on Freemium Conversion." Proc. of the Winter Conference on Business Intelligence. February, 2014.
- Darmon, D., Sylvester, J., Girvan, M., & Rand, W. "Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling." ASE/IEEE Int'l Conf. on Social Computing, pp. 102–107. September, 2013.
- Sylvester, J., & Reggia, J. "The Neural Executive: Can Gated Attractor Networks Account for Cognitive Control?" Ann. Mtg. of the Int'l Assoc. for Computing & Philosophy. July, 2013.
- Reggia, J., Monner, D., & Sylvester, J. "The Computational Explanatory Gap." Ann. Mtg. of the Int'l Assoc. for Computing & Philosophy. July, 2013.
- Sylvester, J., Reggia, J., & Weems, S. "Cognitive Control as a Gated Cortical Net." Proc. of the Int'l Conf. on Biologically Inspired Cognitive Architectures, pp. 371–376. Alexandria, VA, August 2011.
- Sylvester, J., Reggia, J., Weems, S., & Bunting, M. "A Temporally Asymmetric Hebbian Network for Sequential Working Memory." Proc. of the Int'l Conf. on Cognitive Modeling, pp. 241–246. Philadelphia, PA, August 2010.
- Reggia, J., Sylvester, J., Weems, S., & Bunting, M. "A Simple Oscillatory Short-term Memory Model." Proc. of the Biologically-Inspired Cognitive Architecture Symposium, AAAI Fall Symposium Series, pp. 103–108. Arlington, VA, 2009.
- Sylvester, J., Weems, S., Reggia, J., Bunting, M., & Harbison, I. "Modeling Interactions Between Interference and Decay During the Serial Recall of Temporal Sequences." Proc. of the Psychonomic Society Annual Meeting, November 2009.
- Chawla, N., & Sylvester, J. "Exploiting Diversity in Ensembles: Improving the Performance on Unbalanced Datasets." Proc. of Multiple Classifier Systems, pp. 397–406. 2007.
- Sylvester, J., & Chawla, N. "Evolutionary Ensemble Creation and Thinning." Proc. of IEEE IJCNN/WCCI, pp. 5148–55. 2006.
- Sylvester, J., & Chawla, N. "Evolutionary Ensembles: Combining Learning Agents using Genetic Algorithms." Proc. of AAAI Workshop on Multi-agent Systems, pp. 46–51. 2005.
Reports, working papers, etc.
- Sylvester, J., Reggia, J., & Weems, S. "Predicting improvement on working memory tasks with machine learning techniques." UMD Center for Adv. Study of Languages. Technical Report. 2011.
- Sylvester, J. "Maximizing Diffusion on Dynamic Social Networks." 2009. (Submitted to satisfy the requirements for my Master's in CS. Originally written as a final project report for BMGT 808L (Complex Systems in Business). Currently being reworked for a journal submission.)