This instrument is called a guitalele
Giannis Karamanolakis

Applied Scientist @ Amazon Alexa AI

Hello, world!

I am an Applied Scientist at Amazon Alexa AI in NYC. I received my Ph.D. in Computer Science from Columbia University, where I was fortunate to be advised by Prof. Luis Gravano and Prof. Daniel Hsu. My research interests lie at the intersection of Machine Learning, Information Extraction, and Natural Language Processing (NLP).

The main focus of my research has been to assist humans in teaching machine learning models via more flexible types of interaction than standard data labeling, which is expensive and time-consuming. Towards this goal, I have developed minimally-supervised learning frameworks for training neural networks using alternative types of supervision, such as domain-specific keywords, coarse labels, taxonomies, and labeling rules. I have demonstrated the benefits of high-level supervision for scaling NLP across domains, languages, and tasks, including knowledge graph construction, sentiment analysis, news document classification, topic extraction, cross-lingual learning, and mining social media for rare events related to public health.

In the past, I have worked on human emotion recognition from conversational speech data, music information retrieval, and training multimodal word embeddings to ground language to the visual and auditory sense modalities.

Aside from research, I love playing the bass guitar, windsurfing, taking photos, and (of course) traveling. Updates: in summer 2019, I got my sailing certificate in Seattle; in summer 2020, I got into drone videography.

I grew up in Sitia, a small town in eastern Crete that produces delicious tsikoudia and is surrounded by wonderful places! Here are a few drone shots: Vai Palm Beach, Tenda Beach. If you'd like to visit Crete, you should ask me for recommendations.

For more information about me, see my CV or contact me.


Columbia University

Ph.D. in Computer Science

Columbia University

M.Phil. in Computer Science

Columbia University

M.Sc. in Computer Science

National Technical University of Athens

M.Eng. in Electrical and Computer Engineering

Professional Experience

Amazon Alexa AI

Applied Scientist

Amazon (Product Graph Team)

Applied / Machine Learning Scientist Intern

Behavioral Signals

Machine Learning Engineer


For more details, please see my full CV (PDF).


Our Super-NaturalInstructions benchmark for in-context learning got accepted into EMNLP '22

Joined Amazon Alexa AI in New York City as an Applied Scientist

Successfully defended my PhD dissertation on Efficient Machine Teaching Frameworks for NLP

Honored to receive the Davide Giri Memorial Prize by Columbia CS

Our WALNUT benchmark for semi-weakly supervised learning got accepted into NAACL '22

Check out Natural Instructions v2, a benchmark with 1600+ NLP tasks and their natural language instructions.

Gave a talk at the Alan Turing Institute on weakly-supervised learning for NLP

You can find a Q&A about my research experience on the Columbia CS website.


Presented our ASTRA paper with Microsoft Research at NAACL '21

Served on the Program Committee for ACL '21, DeeLIO '21, ICML '21 (expert reviewer), EMNLP '21, NeurIPS '21

Our work with Microsoft Research on self-training with weak supervision got accepted into NAACL '21

Passed my Ph.D. thesis proposal!

Gave a talk at Two Sigma on weakly supervised neural networks for text mining

Ranked within the top 10% of NeurIPS '20 reviewers

Our CLTS paper on low-resource cross-lingual transfer got accepted into the Findings of EMNLP '20

Served on the Program Committee for TrueFact '20, NeurIPS '20

Presented our TXtract paper with Amazon at ACL '20


Gave a talk at Relational AI on large-scale extraction of product attributes from text descriptions

Passed my candidacy exam on Minimally-Supervised Learning from Text


A Yelp restaurant review discussing food poisoning.

Information Extraction from Social Media for Public Health

Joint work with Tom Effland and Lampros Flokas

Advised by Luis Gravano and Daniel Hsu

We have been collaborating with health departments in NYC and LA on processing social media data for public health applications. We have deployed systems that help the NYC Department of Health and Mental Hygiene (DOHMH) track user complaints on social media (e.g., Yelp reviews, tweets) and detect foodborne illness outbreaks in restaurants.

We have developed a weakly supervised network, HSAN, which highlights important sentences in "Sick" reviews in an effort to facilitate restaurant inspections by health departments. We have also built models for foodborne illness detection in languages beyond English and have analyzed how Yelp reviews changed during the COVID-19 pandemic. For more information, check out our papers.

[project page] [papers: WNUT@EMNLP '19, EMNLP '19, LOUHI@EMNLP '20, SocialNLP@NAACL '21] [slides: AT&T] [news: The New York Times, The Washington Post, Science Daily, Yelp Blog]

ASTRA leverages domain-specific rules via self-training.

Training Neural Networks with Domain-Specific Rules

Work with Microsoft's Language and Information Technologies (LIT) team

Advised by Subhabrata Mukherjee, Guoqing Zheng, and Ahmed Awadallah

State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain. During my internship at Microsoft Research, we developed ASTRA, a semi-supervised learning framework for training neural networks using domain-specific labeling rules (e.g., regular expression patterns). ASTRA leverages multiple heuristic rules through a Rule Attention Network (RAN Teacher) and automatically generates weakly-labeled data for training any classifier (Student) via iterative self-training.

[Microsoft page] [NAACL '21 paper] [ASTRA Code]
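The rule-aggregation idea above can be illustrated with a toy sketch. This is a hypothetical simplification in the spirit of ASTRA's Rule Attention Network, not the paper's implementation: here the per-rule "attention" weights are fixed constants rather than learned per instance by a network.

```python
import numpy as np

def aggregate_rule_votes(rule_votes, attention, num_classes):
    """Combine per-rule votes into a soft label distribution.

    rule_votes: list of class ids (or -1 if the rule abstains)
    attention:  per-rule trust weights in [0, 1]
    """
    soft = np.zeros(num_classes)
    for vote, a in zip(rule_votes, attention):
        if vote >= 0:  # the rule fired on this instance
            soft[vote] += a
    if soft.sum() == 0:  # all rules abstained: fall back to uniform
        return np.ones(num_classes) / num_classes
    return soft / soft.sum()

# Two trusted rules vote class 1; one weaker rule votes class 0.
print(aggregate_rule_votes([1, 1, 0], [0.9, 0.8, 0.3], num_classes=2))
```

The resulting soft labels would then supervise the Student classifier, whose predictions in turn refine the Teacher in the next self-training iteration.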

There are more than 4,000 written languages in the world!

Document Classification Across Languages With Minimal Resources

Advised by Luis Gravano and Daniel Hsu

While most NLP models and training datasets have been developed for English, it is important to support more of the world's 4,000+ written languages. However, obtaining labeled training data for deep learning in every language would be expensive, and sometimes impossible. In our recent work, we show how to train neural networks for a target language without labeled data in that language. We developed CLTS, a method for transferring weak supervision across languages using minimal resources. CLTS sometimes outperforms more expensive approaches and can be applied even to low-resource languages!

[LOUHI@EMNLP '20 paper] [Findings of EMNLP '20 paper] [Slides]
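The transfer idea can be sketched in a few lines. This is an illustrative toy, not CLTS itself: the weights, the small bilingual dictionary, and the Spanish words are all invented for the example. The gist is that only a handful of the source classifier's strongest features need translation.

```python
# Toy source-language classifier weights and a minimal bilingual dictionary.
SOURCE_WEIGHTS = {"good": 1.2, "bad": -1.1, "the": 0.01}
DICTIONARY = {"good": "bueno", "bad": "malo"}

def transfer_teacher(weights, dictionary, k=2):
    """Translate only the k strongest source features into a target-language teacher."""
    top = sorted(weights, key=lambda w: abs(weights[w]), reverse=True)[:k]
    return {dictionary[w]: weights[w] for w in top if w in dictionary}

teacher = transfer_teacher(SOURCE_WEIGHTS, DICTIONARY)
print(teacher)  # -> {'bueno': 1.2, 'malo': -1.1}

def score(text, teacher):
    """Score a target-language text with the transferred sparse teacher."""
    return sum(teacher.get(tok, 0.0) for tok in text.lower().split())

print(score("el servicio es bueno", teacher))  # -> 1.2
```

The sparse teacher's noisy scores on unlabeled target-language text would then be used to train a stronger student model.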

A hierarchical taxonomy of Amazon's product categories.

Knowledge Graph Construction for Products from 10K+ Categories

Work with Amazon's Product Graph Team

Advised by Xin Luna Dong and Jun Ma

Product understanding is crucial for product search at Amazon and for answering users' questions through Amazon's Alexa (personal assistant): "Alexa, add a family-size chocolate ice cream to my shopping list." During my internship at Amazon, we worked on the construction of a knowledge graph of products, or "product graph". To scale up to a taxonomy of thousands of product categories without manual labeling, we developed TXtract, a taxonomy-aware deep neural network that extracts product attributes from the text of product titles and descriptions (ACL '20 paper). TXtract is an important component of "AutoKnow", Amazon's large-scale knowledge graph of products (KDD '20 paper).

[blog] [TWIML podcast] [ACL '20 paper] [KDD '20 paper] [Slides]

A product review with manual aspect annotations.

Training Classifiers with Keywords Via Weakly Supervised Co-Training

Advised by Luis Gravano and Daniel Hsu

We have been developing deep learning models that annotate online reviews (e.g., Amazon product reviews, Yelp restaurant reviews) with aspects (e.g., price, image, food quality). Manually collecting aspect labels for training is expensive, so we propose a weakly supervised learning framework that only requires the user to provide a few descriptive keywords (seed words) for each aspect (e.g., 'price', 'value', and 'money' for the Price aspect). To leverage keywords in neural networks, we developed "Weakly-Supervised Co-Training", a teacher-student approach that uses keywords in a teacher classifier to train a student neural network (similar to knowledge distillation) and iteratively updates the teacher and student (EMNLP '19 paper).

[LLD@ICLR '19 paper] [EMNLP '19 paper] [slides]
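A minimal, hypothetical sketch of the keyword "teacher" step: the teacher labels a segment with whichever aspect's seed words it mentions most, and those pseudo-labels would then supervise the student network (not shown). The seed-word lists below are illustrative, not the paper's exact lists.

```python
SEED_WORDS = {  # illustrative seed words per aspect
    "price": {"price", "value", "money", "cheap", "expensive"},
    "food":  {"food", "delicious", "tasty", "meal"},
}

def teacher_label(text):
    """Label text with the aspect whose seed words match most; abstain otherwise."""
    tokens = set(text.lower().split())
    hits = {aspect: len(tokens & seeds) for aspect, seeds in SEED_WORDS.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else None  # abstain when no seed word fires

print(teacher_label("Great value for the money"))  # -> price
```

In the full framework, the student generalizes beyond the seed words (e.g., learning that "overpriced" also signals Price), and its predictions update the teacher in the next iteration.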

Deep Learning for Personalized Item Recommendation

Joint work with Kevin Cherian and Ananth Narayan

Advised by Tony Jebara

We developed deep learning models for recommending items (e.g., restaurants, movies) to users in online platforms. In our recent paper, we show how to extend Variational Autoencoders (VAEs) for collaborative filtering with side information in the form of user reviews. We incorporate user preferences into the VAE model as user-dependent priors.

[link] [DLRS@RecSys '18 paper] [slides]
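The user-dependent prior can be made concrete with a small sketch. This is an illustrative toy under simplifying assumptions (diagonal unit-variance priors, made-up numbers), not the paper's model: in a standard VAE the KL term regularizes the encoder toward N(0, I), whereas here it is computed against a per-user prior mean (e.g., derived from the user's reviews).

```python
import numpy as np

def kl_to_user_prior(mu, logvar, prior_mu):
    """KL( N(mu, diag(exp(logvar))) || N(prior_mu, I) ), summed over dimensions."""
    var = np.exp(logvar)
    return 0.5 * np.sum(var + (mu - prior_mu) ** 2 - 1.0 - logvar)

mu = np.array([0.5, -0.2])
logvar = np.zeros(2)  # unit posterior variance

print(kl_to_user_prior(mu, logvar, np.zeros(2)))        # standard N(0, I) prior
print(kl_to_user_prior(mu, logvar, np.array([0.5, -0.2])))  # matched user prior: KL is 0
```

Intuitively, a user-specific prior mean stops the model from pulling every user's latent representation toward the same origin.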

Transfer Learning for Style-Specific Text Generation

Joint work with Katy Ilonka Gero

We trained deep language models (LSTMs) for generating text of a specific literary style (e.g., poetry). Training these models is challenging, because most stylistic literary datasets are very small. In our paper, we demonstrate that generic pre-trained language models can be effectively fine-tuned on small stylistic corpora to generate coherent and expressive text.

[link] [ML4CD@NIPS '18 paper]

"Sobrite" Mobile Health App

Joint work with John Bosco, Mark Chu, Lampros Flokas, and Fatima Koli

We developed a mobile app that is powered by Machine Learning and provides holistic tools to patients receiving treatment for opioid addiction, in an effort to help them maintain sobriety beyond formal treatment. We were one of the winning teams in the "Addressing the Opioid Epidemic" challenge (Columbia Engineering, 12/2017).

[link] [Android app] [iOS app]

The NAO humanoid robot demonstrating dance skills.

NAO Dance! CNNs for Real-time Beat Tracking

Joint work with Myrto Damianou, Christos Palivos, and Stelios Stavroulakis

Advised by Aggelos Gkiokas and Vassilis Katsouros

We embedded real-time beat tracking and music genre classification algorithms into the NAO humanoid robot. While music plays, NAO's choreography dynamically adapts to the genre, and the dance moves are synchronized with the output of the beat tracking system. We submitted our system to the Signal Processing Cup Challenge 2017.

[demo] [ISMIR '17 paper]

Automatically Tagging Audio/Music Clips with Descriptive Tags

Advised by Alexandros Potamianos

We embedded audio clips and the corresponding descriptive tags into the same multimodal vector space by representing tags and clips as bags-of-audio-words. In this way, we can easily (1) annotate audio clips with descriptive tags (by comparing audio vectors to tag vectors), or (2) estimate the similarity between audio clips or music songs (by optionally enhancing audio vectors with semantic information).

[Multi-Learn@EUSIPCO '17 paper]
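The bag-of-audio-words idea can be sketched as follows. This is a hypothetical toy, with made-up 3-dimensional codebook histograms and tags: a tag's vector is the mean histogram of the clips annotated with it, and a new clip is tagged by cosine similarity to those tag vectors.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_vectors(clip_histograms, clip_tags):
    """Average the audio-word histograms of all clips carrying each tag."""
    vecs = {}
    for hist, tags in zip(clip_histograms, clip_tags):
        for t in tags:
            vecs.setdefault(t, []).append(hist)
    return {t: np.mean(h, axis=0) for t, h in vecs.items()}

clips = [np.array([0.9, 0.1, 0.0]), np.array([0.0, 0.2, 0.8])]
tags = [["guitar"], ["drums"]]
vecs = tag_vectors(clips, tags)

query = np.array([0.8, 0.2, 0.0])  # a new clip's audio-word histogram
best = max(vecs, key=lambda t: cosine(query, vecs[t]))
print(best)  # -> guitar
```

The same vectors support clip-to-clip similarity directly, since tags and clips live in one shared space.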

What comes to your mind when you read the word 'guitar'?

Grounding Natural Language to Perceptual Modalities

Advised by Alexandros Potamianos

We created multimodal word embeddings as an attempt to ground word semantics to the acoustic and visual sensory modalities. We modeled the acoustic and visual properties of words by associating words to audio clips and images, respectively. We fused textual, acoustic, and visual features into a joint semantic vector space in which vector similarities correlate with human judgements of semantic word similarity.

[INTERSPEECH '16 paper] [Multi-Learn@EUSIPCO '17 paper]

Urban Soundscape Event Detection and Quality Estimation

Advised by Theodoros Giannakopoulos

We collected hundreds of recordings of urban soundscapes, i.e., sounds produced by mixed sound sources within a given urban area. We developed Machine Learning algorithms that analyze audio recordings to (1) detect acoustic events (e.g., car horns, human voices, birds), and (2) estimate the soundscape quality in different urban areas.



Efficient Machine Teaching Frameworks for Natural Language Processing

Giannis Karamanolakis
PhD Dissertation, Columbia University

Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, Noah A. Smith, Daniel Khashabi
EMNLP 2022, Abu Dhabi
[PDF] [Web]

WALNUT: A Benchmark on Semi-weakly Supervised Learning for Natural Language Understanding

Guoqing Zheng, Giannis Karamanolakis, Kai Shu, Ahmed Hassan Awadallah
NAACL 2022, Seattle, Washington
[PDF] [Code]


Quantifying the Effects of COVID-19 on Restaurant Reviews

Ivy Cao, Zizhou Liu, Giannis Karamanolakis, Daniel Hsu, Luis Gravano
NAACL 2021, 9th International Workshop on Natural Language Processing for Social Media (SocialNLP 2021), Virtual (Oral Presentation)

Self-Training with Weak Supervision

Giannis Karamanolakis, Subhabrata Mukherjee, Guoqing Zheng, Ahmed Hassan Awadallah
NAACL 2021, Virtual (Oral Presentation)


Cross-Lingual Text Classification with Minimal Resources By Transferring a Sparse Teacher

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
Findings of EMNLP 2020, Virtual
[PDF] [Code] [Slides]

Detecting Foodborne Illness Complaints in Multiple Languages Using English Annotations Only

Ziyi Liu, Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
EMNLP 2020, 11th Workshop on Health Text Mining and Information Analysis (LOUHI 2020), Virtual (Oral Presentation)
[PDF] [Slides]

AutoKnow: Self-Driving Knowledge Collection for Products of Thousands of Types

Xin Luna Dong, Xiang He, Andrey Kan, Xian Li, Yan Liang, Jun Ma, Yifan Ethan Xu, Chenwei Zhang, Tong Zhao, Gabriel Blanco Saldana, Saurabh Deshpande, Alexandre Michetti Manduca, Jay Ren, Surender Pal Singh, Fan Xiao, Haw-Shiuan Chang, Giannis Karamanolakis, Yuning Mao, Yaqing Wang, Christos Faloutsos, Andrew McCallum, Jiawei Han
KDD 2020, Virtual (Oral Presentation)
[PDF] [Talk] [Blog]

TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories

Giannis Karamanolakis, Jun Ma, and Xin Luna Dong
ACL 2020, Virtual (Oral Presentation)
[PDF] [Slides]


Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
EMNLP-IJCNLP 2019, Hong Kong, China (Oral Presentation)
[PDF] [Talk] [Slides]

Weakly Supervised Attention Networks for Fine-Grained Opinion Mining and Public Health

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
EMNLP-IJCNLP 2019, 5th Workshop on Noisy User-generated Text (W-NUT 2019), Hong Kong, China (Oral Presentation)
[PDF] [Poster] [Slides]

Training Neural Networks for Aspect Extraction Using Descriptive Keywords Only

Giannis Karamanolakis, Daniel Hsu, and Luis Gravano
ICLR 2019, 2nd Workshop on Learning from Limited Labeled Data (LLD 2019), New Orleans, LA
[PDF] [Poster]


Transfer Learning for Style-Specific Text Generation

Katy Ilonka Gero, Giannis Karamanolakis, and Lydia Chilton
NeurIPS 2018, Workshop on Machine Learning for Creativity and Design, Montreal, QC, Canada
[PDF] [Poster]

Item Recommendation with Variational Autoencoders and Heterogeneous Priors

Giannis Karamanolakis, Kevin Cherian, Ananth Narayan, Jie Yuan, Da Tang, and Tony Jebara
RecSys 2018, 3rd Workshop on Deep Learning for Recommender Systems (DLRS 2018), Vancouver, BC, Canada (Oral Presentation)
[PDF] [slides]


Audio-Based Distributional Semantic Models for Music Auto-tagging and Similarity Measurement

Giannis Karamanolakis, Elias Iosif, Athanasia Zlatintsi, Aggelos Pikrakis, and Alexandros Potamianos
EUSIPCO 2017, Multimodal processing, modeling and learning approaches for human-computer/robot interaction (Multi-Learn) workshop, Kos Island, Greece (Oral Presentation)

Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation

Georgios Paraskevopoulos, Giannis Karamanolakis, Elias Iosif, Aggelos Pikrakis, and Alexandros Potamianos
EUSIPCO 2017, Multimodal processing, modeling and learning approaches for human-computer/robot interaction (Multi-Learn) workshop, Kos Island, Greece (Oral Presentation)


Audio-Based Distributional Representations of Meaning Using a Fusion of Feature Encodings

Giannis Karamanolakis, Elias Iosif, Athanasia Zlatintsi, Aggelos Pikrakis, and Alexandros Potamianos
INTERSPEECH 2016, San Francisco, California (Oral Presentation)
[PDF] [slides]


Γιάννης Καραμανωλάκης
E-mail: <x>, where x=gkaraman.
Office: Mudd 406, Data Science Institute (map).

Extra: My first name (Giannis) is pronounced as y aa n ih s.