Giannis Karamanolakis

Ph.D. Candidate in CS @ Columbia

Hello, world!


I am a fourth-year Ph.D. Candidate in Computer Science at Columbia University, advised by Prof. Luis Gravano and Prof. Daniel Hsu. My research interests lie at the intersection of Machine Learning, Information Extraction, and Natural Language Processing.

I am currently developing weakly supervised learning frameworks for knowledge extraction from text. I am interested in training deep neural networks for real-world tasks with limited or no training labels, using alternative supervision signals such as noisy/proxy labels and logical rules. We have applied these techniques to knowledge graph construction, sentiment analysis, topic extraction, and mining social media for rare events related to public health.

In the past, I have worked on human emotion recognition from conversational speech data, music information retrieval, and multimodal word embeddings grounded to the visual and auditory sense modalities.

In addition to doing research, I love playing the bass guitar, windsurfing, taking photos and (of course) traveling. Update: Last summer in Seattle I got my sailing certificate!

I grew up in Sitia, a small town in eastern Crete with excellent tsikoudia and many wonderful places nearby! Here are a few drone shots: Vai Palm Beach, Tenda Beach. If you'd like to visit Crete, you should ask me for recommendations.

For more information about me, see my CV or contact me.



Education

2017-present
Columbia University

Ph.D. in Computer Science

2017-2018
Columbia University

M.Sc. in Computer Science

2011-2017
National Technical University of Athens

M.Eng. in Electrical and Computer Engineering

Professional Experience

2019, 2020
Amazon (Product Graph Team)

Applied/Machine Learning Scientist Intern

2016-2017
Behavioral Signals

Machine Learning Engineer


For more details, please see my full CV (PDF).

News



09/2020
Our CLTS paper on low-resource cross-lingual transfer was accepted to Findings of EMNLP '20

08/2020
Served as a PC member at the TrueFact '20 workshop

07/2020
Presented our TXtract paper with Amazon at ACL '20

05/2020
Gave a talk at Relational AI on large-scale extraction of product attributes from text descriptions

04/2020
Passed my candidacy exam on Minimally-Supervised Learning from Text


Projects



A Yelp restaurant review discussing food poisoning.

Information Extraction from Social Media for Public Health

Joint work with Tom Effland and Lampros Flokas

Advised by Luis Gravano and Daniel Hsu

We have been collaborating with the health departments of New York City and Los Angeles on processing social media data for public health applications. We have deployed systems that help the NYC Department of Health and Mental Hygiene (DOHMH) track user complaints on social media (e.g., Yelp reviews, tweets) and detect foodborne illness outbreaks in restaurants.

Our weakly supervised attention network (HSAN) highlights important sentences in "Sick" reviews to make inspection easier (see Papers).

[project page] [WNUT@EMNLP '19 paper] [EMNLP '19 paper] [AT&T slides] [LOUHI@EMNLP '20 paper]

There are more than 4,000 written languages in the world!

Training Neural Networks Across Languages With Minimal Resources

Advised by Luis Gravano and Daniel Hsu

While most NLP models and datasets have been developed for English, it is important to support more of the world's 4,000+ written languages. However, obtaining labeled training data for deep learning in every language is expensive and sometimes impossible. In our recent work, we show how to train neural networks for a target language without labeled data in that language. We developed CLTS, a method for transferring weak supervision across languages using minimal resources. CLTS sometimes outperforms more expensive approaches and can be applied even to low-resource languages!

[LOUHI@EMNLP '20 paper] [Findings of EMNLP '20 paper]

A hierarchical taxonomy of Amazon's product categories.

Knowledge Graph Construction for Products from 10K+ Categories

Work with Amazon's Product Graph Team

Advised by Xin Luna Dong and Jun Ma

Product understanding is crucial for product search at Amazon.com and for answering users' questions through Amazon's Alexa personal assistant: "Alexa, add a family-size chocolate ice cream to my shopping list." During my internship at Amazon, we worked on the construction of a knowledge graph of products, or "product graph." To scale up to a taxonomy of thousands of product categories without manual labeling, we developed TXtract, a taxonomy-aware deep neural network that extracts product attributes from the text of product titles and descriptions (ACL '20 paper). TXtract is an important component of "AutoKnow," Amazon's large-scale knowledge graph of products (KDD '20 paper).

[blog] [ACL '20 paper] [KDD '20 paper]

A product review with manual aspect annotations.

Training Classifiers with Keywords Via Weakly Supervised Co-Training

Advised by Luis Gravano and Daniel Hsu

We are developing deep learning models that annotate online reviews (e.g., Amazon product reviews, Yelp restaurant reviews) with aspects (e.g., price, image, food quality). Manually collecting aspect labels for training is expensive, so we propose a weakly supervised learning framework that only requires the user to provide a few descriptive keywords (seed words) for each aspect (e.g., 'price', 'value', and 'money' for the Price aspect). To leverage keywords in neural networks, we developed "Weakly Supervised Co-Training," a teacher-student approach that uses the keywords in a teacher classifier to train a student neural network (similar to knowledge distillation) and iteratively updates both the teacher and the student (EMNLP '19 paper).

[LLD@ICLR '19 paper] [EMNLP '19 paper]
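To give a flavor of the teacher-student idea, here is a minimal, self-contained sketch with hypothetical seed words and toy reviews. The actual framework trains a neural student and iterates between teacher and student; this toy version uses word counts as a stand-in.

```python
# Toy sketch of keyword-based teacher-student training (not the paper's code):
# the "teacher" labels a review by counting seed-word occurrences; the
# "student" aggregates word statistics from teacher-labeled reviews and can
# then generalize to words that are not seed words.
from collections import Counter

SEED_WORDS = {            # hypothetical seed words per aspect
    "Price": {"price", "value", "money"},
    "Food": {"food", "taste", "delicious"},
}

def teacher_predict(review):
    """Label a review with the aspect whose seed words occur most often."""
    tokens = review.lower().split()
    scores = {a: sum(t in seeds for t in tokens) for a, seeds in SEED_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # abstain if no seed word found

def train_student(reviews):
    """Collect per-aspect word counts from teacher-labeled reviews
    (a stand-in for training a neural student on teacher labels)."""
    counts = {a: Counter() for a in SEED_WORDS}
    for review in reviews:
        label = teacher_predict(review)
        if label is not None:                    # skip teacher abstentions
            counts[label].update(review.lower().split())
    return counts

def student_predict(counts, review):
    """Score each aspect by summed word counts; unlike the teacher, the
    student can use non-seed words it saw co-occurring with seed words."""
    tokens = review.lower().split()
    return max(counts, key=lambda a: sum(counts[a][t] for t in tokens))

reviews = [
    "great value for the money",
    "the price was too high",
    "delicious food and good taste",
]
model = train_student(reviews)
# The teacher would abstain here (no seed words), but the student has seen
# "high" in a Price-labeled review:
print(student_predict(model, "too high for what you get"))  # → Price
```

The key point the sketch illustrates: the student generalizes beyond the seed-word vocabulary, which is why iterating (student predictions refine the teacher) can keep improving both.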

Deep Learning for Personalized Item Recommendation

Joint work with Kevin Cherian and Ananth Narayan

Advised by Tony Jebara

We developed deep learning models for recommending items (e.g., restaurants, movies) to users of online platforms. In our recent paper, we show how to extend Variational Autoencoders (VAEs) for collaborative filtering with side information in the form of user reviews. We incorporate user preferences into the VAE model as user-dependent priors.

[link] [DLRS@RecSys '18 paper] [slides]
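As a small illustration of what a user-dependent prior changes, here is the closed-form KL divergence between univariate Gaussians, which appears (per latent dimension) in the VAE regularizer; all numbers are made up and this is not the paper's implementation.

```python
# Illustrative sketch: in a standard VAE the regularizer is
# KL(N(mu_q, sigma_q^2) || N(0, 1)); with a user-dependent prior it becomes
# KL(N(mu_q, sigma_q^2) || N(mu_u, sigma_u^2)), so posteriors near the
# user's prior are penalized less.
import math

def kl_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL divergence KL(N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2))."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Against the standard prior N(0, 1):
print(kl_gaussians(0.5, 1.0, 0.0, 1.0))    # 0.125
# Against a hypothetical user-dependent prior centered nearer the posterior:
print(kl_gaussians(0.5, 1.0, 0.25, 1.0))   # 0.03125 — a smaller penalty
```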

Transfer Learning for Style-Specific Text Generation

Joint work with Katy Ilonka Gero

We trained deep language models (LSTMs) to generate text in a specific literary style (e.g., poetry). Training these models is challenging because most stylistic literary datasets are very small. In our paper, we demonstrate that generic pre-trained language models can be effectively fine-tuned on small stylistic corpora to generate coherent and expressive text.

[link] [ML4CD@NIPS '18 paper]

"Sobrite" Mobile Health App

Joint work with John Bosco, Mark Chu, Lampros Flokas, and Fatima Koli

We are developing a mobile app, powered by machine learning, that provides holistic tools to patients receiving treatment for opioid addiction, in an effort to help them maintain sobriety beyond formal treatment. We were one of the winning teams in the "Addressing the Opioid Epidemic" challenge (Columbia Engineering, 12/2017).

[link] [Android app] [iOS app]

The NAO humanoid robot demonstrating dance skills.

NAO Dance! CNNs for Real-time Beat Tracking

Joint work with Myrto Damianou, Christos Palivos, and Stelios Stavroulakis

Advised by Aggelos Gkiokas and Vassilis Katsouros

We embedded real-time beat tracking and music genre classification algorithms into the NAO humanoid robot. While music plays, NAO's choreography dynamically adapts to the genre, and the dance moves are synchronized with the output of the beat tracking system. We submitted our system to the Signal Processing Cup Challenge 2017.

[demo] [ISMIR '17 paper]

Automatically Tagging Audio/Music Clips with Descriptive Tags

Advised by Alexandros Potamianos

We embedded audio clips and the corresponding descriptive tags into the same multimodal vector space by representing tags and clips as bags-of-audio-words. In this way, we can easily (1) annotate audio clips with descriptive tags (by comparing audio vectors to tag vectors), or (2) estimate the similarity between audio clips or music songs (by optionally enhancing audio vectors with semantic information).

[Multi-Learn@EUSIPCO '17 paper]
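The shared vector space makes both tasks a nearest-neighbor lookup. Here is a toy sketch with a hypothetical 3-entry audio-word codebook and made-up histogram counts, just to show the mechanics:

```python
# Toy sketch of the bag-of-audio-words idea (hypothetical codebook, made-up
# counts): clips and tags are both L2-normalized histograms over the same
# codebook of quantized audio frames, so cosine similarity compares any pair.
import math

def normalize(v):
    """Scale a vector to unit length so the dot product equals cosine similarity."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine(u, v):
    """Dot product of unit vectors = cosine similarity."""
    return sum(a * b for a, b in zip(u, v))

# Histograms over audio-words [w0, w1, w2] (invented counts):
clip = normalize([8, 1, 1])            # a clip dominated by audio-word w0
tags = {
    "guitar": normalize([9, 2, 0]),    # "guitar" = average of clips tagged guitar
    "rain":   normalize([0, 1, 9]),    # "rain"   = average of clips tagged rain
}

# (1) Annotate the clip with its most similar tag vector:
best = max(tags, key=lambda t: cosine(clip, tags[t]))
print(best)  # → guitar
```

The same `cosine` call between two clip vectors gives clip-to-clip (song) similarity, which is point (2) above.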

What comes to your mind when you read the word 'guitar'?

Grounding Natural Language to Perceptual Modalities

Advised by Alexandros Potamianos

We created multimodal word embeddings as an attempt to ground word semantics to the acoustic and visual sensory modalities. We modeled the acoustic and visual properties of words by associating words to audio clips and images, respectively. We fused textual, acoustic, and visual features into a joint semantic vector space in which vector similarities correlate with human judgements of semantic word similarity.

[INTERSPEECH '16 paper] [Multi-Learn@EUSIPCO '17 paper]

Urban Soundscape Event Detection and Quality Estimation

Advised by Theodoros Giannakopoulos

We collected hundreds of recordings of urban soundscapes, i.e., sounds produced by mixed sound sources within a given urban area. We developed Machine Learning algorithms that analyze audio recordings to (1) detect acoustic events (e.g., car horns, human voices, birds), and (2) estimate the soundscape quality in different urban areas.

Papers




2020


Cross-Lingual Text Classification with Minimal Resources By Transferring a Sparse Teacher

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
Findings of EMNLP 2020
[PDF] [Code] [Slides]

Detecting Foodborne Illness Complaints in Multiple Languages Using English Annotations Only

Ziyi Liu, Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
EMNLP 2020, 11th Workshop on Health Text Mining and Information Analysis (LOUHI 2020)
[PDF] [Slides]

AutoKnow: Self-Driving Knowledge Collection for Products of Thousands of Types

Xin Luna Dong, Xiang He, Andrey Kan, Xian Li, Yan Liang, Jun Ma, Yifan Ethan Xu, Chenwei Zhang, Tong Zhao, Gabriel Blanco Saldana, Saurabh Deshpande, Alexandre Michetti Manduca, Jay Ren, Surender Pal Singh, Fan Xiao, Haw-Shiuan Chang, Giannis Karamanolakis, Yuning Mao, Yaqing Wang, Christos Faloutsos, Andrew McCallum, Jiawei Han
KDD 2020, San Diego, CA
[PDF] [Talk]

TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories

Giannis Karamanolakis, Jun Ma, and Xin Luna Dong
ACL 2020, Seattle, WA (Oral Presentation)
[PDF] [Slides]

2019

Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
EMNLP-IJCNLP 2019, Hong Kong, China (Oral Presentation)
[PDF] [Talk] [Slides]

Weakly Supervised Attention Networks for Fine-Grained Opinion Mining and Public Health

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
EMNLP-IJCNLP 2019, 5th Workshop on Noisy User-generated Text (W-NUT 2019), Hong Kong, China (Oral Presentation)
[PDF] [Poster] [Slides]

Training Neural Networks for Aspect Extraction Using Descriptive Keywords Only

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
ICLR 2019, 2nd Workshop on Learning from Limited Labeled Data (LLD 2019), New Orleans, LA
[PDF] [Poster]

2018


Transfer Learning for Style-Specific Text Generation

Katy Ilonka Gero, Giannis Karamanolakis, and Lydia Chilton
NeurIPS 2018, Workshop on Machine Learning for Creativity and Design, Montreal, QC, Canada
[PDF] [Poster]

Item Recommendation with Variational Autoencoders and Heterogenous Priors

Giannis Karamanolakis, Kevin Cherian, Ananth Narayan, Jie Yuan, Da Tang, and Tony Jebara
RecSys 2018, 3rd Workshop on Deep Learning for Recommender Systems (DLRS 2018), Vancouver, BC, Canada (Oral Presentation)
[PDF] [slides]

2017


Audio-Based Distributional Semantic Models for Music Auto-tagging and Similarity Measurement

Giannis Karamanolakis, Elias Iosif, Athanasia Zlatintsi, Aggelos Pikrakis, and Alexandros Potamianos
EUSIPCO 2017, Multimodal processing, modeling and learning approaches for human-computer/robot interaction (Multi-Learn) workshop, Kos island, Greece (Oral Presentation)
[PDF]

Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation

Georgios Paraskevopoulos, Giannis Karamanolakis, Elias Iosif, Aggelos Pikrakis, and Alexandros Potamianos
EUSIPCO 2017, Multimodal processing, modeling and learning approaches for human-computer/robot interaction (Multi-Learn) workshop, Kos island, Greece (Oral Presentation)
[PDF]

2016


Audio-Based Distributional Representations of Meaning Using a Fusion of Feature Encodings

Giannis Karamanolakis, Elias Iosif, Athanasia Zlatintsi, Aggelos Pikrakis, and Alexandros Potamianos
INTERSPEECH 2016, San Francisco, California (Oral Presentation)
[PDF] [slides]

Contact


Γιάννης Καραμανωλάκης
E-mail: <x>@cs.columbia.edu, where x=gkaraman.
Office: Mudd 406, Data Science Institute (map).

Extra: My first name (Giannis) is pronounced y aa n ih s.