Alex Ratner

PhD Candidate in CS @ Stanford

Github | Google | Twitter
Email: ajratner at stanford.edu

Latest News

The key bottleneck in deploying ML systems today is labeling training data. My research focuses on weak supervision: the idea of using higher-level, noisier input from domain experts to train complex state-of-the-art models. On the applications side, I’m interested in text relation extraction and image classification, particularly for biomedical applications, which I focus on as a Bio-X SIG Fellow. I’m very fortunate to work with Chris Ré and many other talented people in the Hazy, Info, StatsML, and DAWN labs.

Research Projects
Data Programming + Snorkel

Snorkel is a new system for quickly and cheaply generating training data based on user-provided labeling functions, which encode weak supervision signals like heuristics, patterns, and distant supervision sources. Snorkel automatically synthesizes and models these signals using our data programming approach, and is currently focused on training models for structured data extraction from text, PDFs, images, and more. Check out more tutorials and blog posts at snorkel.stanford.edu!
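To make the labeling-function idea concrete, here is a minimal sketch in plain Python. All names here (`lf_contains_cause`, `majority_vote`, the label constants) are hypothetical for illustration, not the actual Snorkel API; Snorkel's data programming approach learns the accuracies and correlations of the labeling functions rather than taking a simple majority vote, which this sketch uses only to show how noisy votes can be combined.

```python
# Hypothetical illustration of labeling functions (not the Snorkel API).
# Each function votes POSITIVE, NEGATIVE, or ABSTAIN on an example.
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_contains_cause(text):
    # Heuristic: the word "causes" suggests a positive relation mention.
    return POSITIVE if "causes" in text.lower() else ABSTAIN

def lf_contains_not(text):
    # Heuristic: negation suggests a negative label.
    return NEGATIVE if "not" in text.lower() else ABSTAIN

def majority_vote(text, lfs):
    # Combine noisy votes; Snorkel instead models LF accuracies
    # via data programming, but majority vote shows the basic idea.
    votes = [lf(text) for lf in lfs]
    pos, neg = votes.count(POSITIVE), votes.count(NEGATIVE)
    if pos > neg:
        return POSITIVE
    if neg > pos:
        return NEGATIVE
    return ABSTAIN

lfs = [lf_contains_cause, lf_contains_not]
print(majority_vote("Smoking causes cancer.", lfs))         # 1 (POSITIVE)
print(majority_vote("Aspirin does not cause cancer.", lfs)) # 0 (NEGATIVE)
```

The noisy labels produced this way can then be used to train a discriminative end model, which is where the "higher-level, noisier input from domain experts" replaces hand-labeling individual examples.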

Publications

Selected highlights in bold.


Older News