Portrait

I am a third-year computer science student at UCLA. I work closely with Yijia Xiao and am advised by Professor Wei Wang and Professor Yuchen Cui. I also collaborate actively with Yuchen Wu at the University of Washington and with Professor Jindong Wang at William & Mary.

I have worked on multimodal large language models, reasoning, and agentic systems, with applications in biology, finance, and robotics. While working on agents and reasoning for finance, I co-founded the open-source research community Tauric Research with my amazing mentor Yijia Xiao. I am currently interested in safety, robustness, and interpretability in foundation models and their applications.

I was named a Goldwater Scholar in 2025.

This site hosts my writings, research, and ways to get in touch.

Highlights

Personalized Safety in LLMs: A Benchmark and A Planning-Based Agent Approach

Yuchen Wu, Edward Sun, Kaijie Zhu, Jianxun Lian, Jose Hernandez-Orallo, Aylin Caliskan, Jindong Wang

NeurIPS 2025

Introduced personalized safety as a problem for LLM alignment: safety should not be purely global but tailored to a user's background, since risk profiles vary across users. Benchmarked where current models fall short and proposed an inference-time mitigation using LLM-guided Monte Carlo Tree Search.

Trading-R1: Financial Trading with LLM Reasoning via Reinforcement Learning

Yijia Xiao, Edward Sun, T. Chen, F. Wu, D. Luo, Wei Wang

arXiv preprint arXiv:2509.11420

Trading-R1 is an RL-trained reasoning LLM that transforms heterogeneous financial signals into structured, auditable investment theses and volatility-aware trade ratings. It learns multi-perspective financial reasoning through reverse chain-of-thought distillation and an easy-to-hard RL curriculum, achieving superior risk-adjusted backtest performance over generic instruction-following and reasoning models.

ProteinGPT: Multimodal LLM for protein property prediction and structure understanding

Yijia Xiao, Edward Sun, Y. Jin, Q. Wang, Wei Wang

ICLR 2025 MLGenX Spotlight

ProteinGPT is a multimodal LLM that accelerates protein research by enabling natural language interaction with protein sequences and structures. Trained on over 130,000 proteins, it uses evolutionary-scale protein folding and inverse folding models (ESM-2) to encode sequence and structural information, aligns these modalities with language, and is instruction-tuned to deliver an interactive, high-performance protein-focused chat experience.

Created: 2026-01-05 Mon 06:55
