NLP has traditionally mapped words to discrete symbols with no underlying structure. Recent research replaces these models with vector-based representations, efficiently learned using neural networks. The resulting embeddings not only improve performance on a variety of tasks, but also exhibit surprising algebraic structure. I will give a gentle introduction to these exciting developments.
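To make the "algebraic structure" concrete: in learned embedding spaces, word analogies can often be solved with vector arithmetic, e.g. vector("king") − vector("man") + vector("woman") lands near vector("queen"). Below is a minimal sketch using hand-made toy vectors (the values are invented for illustration, not learned embeddings), finding the nearest word by cosine similarity:

```python
import numpy as np

# Toy 2-D "embeddings" -- hypothetical, hand-chosen values for illustration.
# Axis 0 loosely encodes gender, axis 1 loosely encodes royalty.
vectors = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
    "apple": np.array([ 0.1, -0.5]),
}

def analogy(a, b, c):
    """Return the word whose vector is closest (by cosine similarity)
    to vector(b) - vector(a) + vector(c)."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, v in vectors.items():
        if word in (a, b, c):
            continue  # exclude the query words themselves
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman"))  # -> queen
```

With real embeddings (e.g. loaded into gensim's `KeyedVectors`) the same arithmetic works over a vocabulary of hundreds of thousands of words, which is the surprising empirical finding the talk describes.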