Wenye Lin

Hello! I’m Wenye Lin, a third-year graduate student at Tsinghua University, majoring in computer science.
My research interests span Machine Learning, Natural Language Processing, and Computer Vision, with a focus on Representation Learning and improving the generalization of neural networks.

Experience

I was a research intern at Tencent AI Lab, NLP Center from Sept. 2021 to Aug. 2022.

Community Service

I am a regular reviewer for EMNLP, AAAI, NLPCC, and other venues.

Publications

Wenye Lin, Yifeng Ding, Zhixiong Cao, Hai-tao Zheng (2022). Establishing a stronger baseline for lightweight contrastive models. In arXiv.

In this work, we aim to establish a stronger baseline for lightweight contrastive models without using a pretrained teacher model.


Wenye Lin, Yangning Li, Yifeng Ding, Hai-tao Zheng (2022). Tree-Supervised Auxiliary Online Knowledge Distillation. In IJCNN.

We design tree-structured auxiliary (TSA) online knowledge distillation to perform one-stage distillation when a pretrained teacher is unavailable. With TSA, we gain an average of 3% to 4% improvement in accuracy on CIFAR-100. On ImageNet, ResNet-34 obtains 74.97% accuracy, 1.8% higher than the vanilla baseline. On IWSLT translation tasks, we gain an average of 0.9 BLEU improvement over the vanilla Transformer across three datasets. A minimal sketch of the underlying online-distillation idea appears below.
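The sketch below illustrates the general one-stage online-distillation idea the abstract describes: several auxiliary branches are trained jointly and each branch distills from the (detached) ensemble of all branches, so no pretrained teacher is needed. The function name, branch/ensemble setup, and hyperparameters are hypothetical illustrations, not the exact TSA formulation from the paper.

```python
import torch
import torch.nn.functional as F

def online_distillation_loss(branch_logits, labels, temperature=4.0, alpha=0.5):
    """Minimal online-KD sketch (assumed shapes, not the paper's TSA loss).

    branch_logits: list of [batch, num_classes] tensors, one per auxiliary branch.
    labels: [batch] ground-truth class indices.
    """
    # The detached mean of all branches plays the role of the teacher.
    ensemble = torch.stack(branch_logits).mean(dim=0).detach()
    soft_targets = F.softmax(ensemble / temperature, dim=-1)

    total = 0.0
    for logits in branch_logits:
        ce = F.cross_entropy(logits, labels)  # ordinary task loss per branch
        kd = F.kl_div(
            F.log_softmax(logits / temperature, dim=-1),
            soft_targets,
            reduction="batchmean",
        ) * temperature ** 2  # standard temperature scaling for KD
        total = total + (1 - alpha) * ce + alpha * kd
    return total / len(branch_logits)
```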


Wenye Lin, Yangming Li, Lemao Liu, Shuming Shi, Hai-tao Zheng (2022). Efficient Structured Knowledge Distillation. In arXiv.

Performing knowledge distillation for structured prediction models is not trivial due to their exponentially large output space. In this work, we propose an approach that is much simpler in its formulation and far more efficient to train than existing approaches. Specifically, we transfer knowledge from a teacher model to its student by locally matching their predictions on all sub-structures, instead of over the whole output space, as sketched below.
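The following sketch illustrates the local-matching idea in a sequence-labeling setting, where teacher and student are matched per token position rather than over whole label sequences. The function name, tensor shapes, and the choice of token positions as the sub-structures are assumptions for illustration; the actual sub-structures and loss in the paper may differ.

```python
import torch
import torch.nn.functional as F

def local_structured_kd_loss(student_logits, teacher_logits, mask, temperature=1.0):
    """Sketch of locally matching teacher and student on sub-structures.

    student_logits, teacher_logits: [batch, seq_len, num_labels]
    mask: [batch, seq_len], 1 for real tokens, 0 for padding.
    Instead of matching distributions over entire label sequences
    (exponentially many), match the two models at each position.
    """
    t = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(s, t, reduction="none").sum(-1)   # per-token KL: [batch, seq_len]
    kl = (kl * mask).sum() / mask.sum()              # average over real tokens only
    return kl * temperature ** 2
```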