Daniel (Lin-Kit) Wong

Office: GHC 9015

Hello! I am a third-year PhD student in the Computer Science Department at Carnegie Mellon.
I am advised by Professor Greg Ganger and am a member of the Parallel Data Laboratory.

I am a systems builder and hacker, interested in systems design and distributed systems.

I spent the last few years at CMU building machine learning systems. Last summer, I worked on scheduling for model parallelism in TensorFlow at Google. Before my PhD, I worked on everything from graph clustering to systems security.

Spring ‘20: I'm interested in topics in (but not limited to) ML for Systems, distributed systems, and dimensionality reduction on time series (both systems & neural datasets). Reach out if you have problems, insights, or data (especially on correlated failures) to share!

Résumé (Feb ‘20) | Publications
Research during my PhD at CMU
  • Ongoing (Spring ‘20):
    • ML-based cache admission policies. Spring ‘20 - Present

      Daniel Wong, Daniel Berger, Nathan Beckmann, Greg Ganger (and multiple Facebook collaborators)

    • CANDStore: Cheap replication for distributed NVM storage. Spring ‘19 - Present

      Thomas Kim, Daniel Wong, Anuj Kalia, Rajat Kateja, Michael Kaminsky, Greg Ganger, David G. Andersen

    • 10-708 (Probabilistic Graphical Models) course project: Dimensionality reduction on neuroscience datasets.
      Stitching neural population recordings (electrophysiological) from different days.
  • Keen to explore (Spring ‘20):
    • Applications of clustering & dimensionality reduction for time series and graphs.

      Many system problems don't look like CIFAR-10. I'm keen to explore interpretable machine learning methods that find correlations in time series and graphs, with a particular interest in visualizations and causality.
      Sequential and graph structure. Data (e.g., traces) and tasks often have a temporal aspect and a complex, non-linear graph structure (e.g., from task dependencies or distributed nodes).
      Unsupervised learning. Dimensionality reduction and clustering provide insight (e.g., understanding root causes of correlated failures), or can serve as preprocessing that makes a problem more tractable by removing noise and reducing the decision space (e.g., when optimizing dataflow graphs).
      Interpretability. Systems design and optimization choices are about tradeoffs. Interpretability aids debuggability and increases practitioners' faith in the decisions and findings of ML methods.

    • ML for Systems: Learnt Heuristics.

      Systems often depend on hand-crafted heuristics for good performance. How can we replace these with automatically generated heuristics that are customized for each workload? (See the toy sketch after the project list below.)

    • Areas I have a soft spot for / past background: neuroscience, physiology, visualisations, clustering, systems security, HCI, psychology.
  • Past projects:
    • Co-optimizing scheduling and device placement in TensorFlow with deep RL for automatic model parallelism.
      Google Summer ‘19 intern, Fall ‘19 Student Researcher

      Daniel Wong, Peter Ma^, Sudip Roy*, Yanqi Zhou*
      ^Google Platforms Performance, *Google Brain (ML for Systems)

    • Selective-Backprop: Accelerating Deep Learning by Focusing on the Biggest Losers. Fall ‘18 - Fall ‘19

      Angela H. Jiang, Daniel L.-K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai [preprint]

    • Mainstream: Dynamic Stem-Sharing for Multi-Tenant Video Processing. Fall ‘17 - Spring ‘18

      Angela Jiang, Daniel Lin-Kit Wong, Christopher Canel, Ishan Misra, Michael Kaminsky, Michael A. Kozuch, Padmanabhan Pillai, David G. Andersen, Gregory R. Ganger. USENIX ATC 2018. [PDF]
      Part of the Intel Science and Technology Center for Visual Cloud Systems (ISTC-VCS).

  • Past explorations (keen to revisit if things change):
    • Transient failures (grey failures). Fall ‘19

      How can we balance initiating recovery quickly against overreacting to transient failures?

    • Affordable robustness to failures in distributed storage. Spring ‘19 - Fall ‘19

      3-way cross-region replication is expensive and slow. It helps mitigate rare risks like a hurricane taking out a data center, but why pay that price for common events like equipment failures? Can we detect and predict correlated failures?

      Outcome: I ran simulations based on theoretical modelling and presented a poster on transient failures at PDL Retreat 2019. Although there was strong interest from industry collaborators grappling with the same problem, the project was put on hold for lack of real-world data to model the failures. I would be keen to revisit it; hit me up if you can offer any datasets!
    • Speeding up evolutionary neural architecture search with adaptive weight sharing. Summer ‘18 - Fall ‘18
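
To make the "learnt heuristics" idea above concrete, here is a minimal, hypothetical Python sketch (my illustration, not code from any project listed): a hand-crafted size-threshold cache admission heuristic alongside a per-workload learned one. The features, labels, and thresholds are all made-up assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def handcrafted_admit(size_bytes):
        # Classic hand-tuned heuristic: only admit objects below a fixed size.
        return size_bytes < 64 * 1024

    # Hypothetical logged features per request: object size, time since last
    # access, and past access count. Labels mark whether admitting the object
    # would have produced a hit within some future window (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.random((1000, 3))
    y = (X[:, 2] > 0.5).astype(int)  # stand-in labels, not real trace data

    model = LogisticRegression().fit(X, y)  # retrained per workload

    def learned_admit(features):
        # Learned heuristic: admit when the predicted hit probability is high.
        return model.predict_proba(features.reshape(1, -1))[0, 1] > 0.5

In a real system the labels would come from trace replay or cache simulation, and the model would be retrained as the workload drifts.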
Teaching & Coursework at CMU
Highlights from life before starting my PhD

I'm a tinkerer at heart, and am always on the lookout for novel challenges to work on. In seeking opportunities, I aim to optimise for learning and to do meaningful, impactful work. I bask in the energy of synergistic collaborations, and the opportunity they give me to wade into new domains and learn from cool people.

I'm a software engineer and have a relentless urge to automate and optimize all parts of my work process.

I enjoy cooking, musicals, singing, Singaporean food, skiing & snowboarding, gliding, long scenic drives (and walks), waterfalls, baking, rock climbing, ice skating, scuba diving, and last but not least, good nigiri. I did my undergraduate studies at the University of Cambridge and am a member of Churchill College. I grew up in Singapore, am a son of Hwa Chong (华中子弟), and am a proud alumnus of my high school computer club EC3 (where I learnt to code and hack stuff together).

Get in touch: [same username]@cmu.edu | LinkedIn | Facebook | Keybase | PGP key

My stuff: Quora | GitHub

More about me: Publications | Biography