Emre Can Acikgoz

I am a PhD fellow in Computer Science at UIUC, advised by Prof. Dilek Hakkani-Tür and Prof. Gokhan Tur. My research focuses on Conversational and Generative AI.


I obtained my MSc in Computer Science at Koc University, where I worked on Large Language Models and Multimodal Learning under the supervision of Deniz Yuret and Aykut Erdem.


Prior to my MSc, I received my BSc in Electrical and Electronics Engineering (AI focus) from Koc University, where I worked under the supervision of Deniz Yuret on supervised and unsupervised morphological analysis.


Email  /  GitHub  /  HuggingFace  /  Google Scholar  /  LinkedIn


Research

My research is in Conversational Agents and Large Language Models.

2024


ReSpAct: Harmonizing Reasoning, Speaking, and Acting
Vardhan Dongre, Xiaocheng Yang, Emre Can Acikgoz, Suvodip Dey, Gokhan Tur, Dilek Hakkani-Tür
arXiv, 2024
arxiv / website / code

ReSpAct is a framework that enables LLM agents to engage in interactive, user-aligned task-solving. It enhances agents' ability to clarify, adapt, and act on feedback.

Bridging the Bosphorus: Advancing Turkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking
Emre Can Acikgoz, Mete Erdoğan, Deniz Yuret
EMNLP Workshop, 2024
arxiv / website / code / poster

This study evaluates the effectiveness of training strategies for large language models in low-resource languages like Turkish, focusing on model adaptation, development, and fine-tuning to enhance reasoning skills and address challenges such as data scarcity and catastrophic forgetting.

Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare
Emre Can Acikgoz, Osman Batur İnce, Rayene Bech, Arda Anıl Boz, Ilker Kesen, Aykut Erdem, Erkut Erdem
NeurIPS Workshop (Oral), 2024
arxiv / website / poster

We present Hippocrates, an open-source LLM framework developed specifically for the medical domain. We also introduce Hippo, a family of 7B models tailored for medicine, fine-tuned from Mistral and LLaMA2 through continual pre-training, instruction tuning, and reinforcement learning from human and AI feedback.

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models
Ilker Kesen, Andrea Pedrotti, Mustafa Dogan, Michele Cafagna, Emre Can Acikgoz, Letitia Parcalabescu, Iacer Calixto, Anette Frank, Albert Gatt, Aykut Erdem, Erkut Erdem
ICLR, 2024
arxiv / website / code

ViLMA (Video Language Model Assessment) is a comprehensive benchmark for Video-Language Models, beginning with a fundamental comprehension test and followed by a more advanced evaluation of temporal reasoning skills.

2022


Transformers on Multilingual Clause-Level Morphology
Emre Can Acikgoz, Tilek Chubakov, Müge Kural, Gözde Gül Şahin, Deniz Yuret
EMNLP Workshop, 2022
arxiv / code / slides

This paper describes the winning approaches at MRL: The 1st Shared Task on Multilingual Clause-level Morphology. Our submission won first prize in all three parts of the shared task: inflection, reinflection, and analysis.

Teaching


Comp547: Deep Unsupervised Learning (Spring'24)
Comp541: Deep Learning (Fall'23)
Comp542: Natural Language Processing (Spring'23)
Comp541: Deep Learning (Fall'22)
Comp547: Deep Unsupervised Learning (Spring'22)

Talks


Huawei NLP/ML Community Seminar Series: Morphological Analysis with Large Language Models (2022, Virtual)
EMNLP MRL: Winning Paper Presentation (2022, Abu Dhabi)


(website template credits)