Hao Liu


I'm a research scientist at Google DeepMind.

Previously, I was a PhD student in EECS at Berkeley, advised by Pieter Abbeel, and spent two years part-time at Google Brain.

I'm an incoming Assistant Professor at Carnegie Mellon University.

I'm interested in building general superintelligence. Toward this goal, I research scalable training objectives and architectures, developing methods in areas such as world models, reasoning, large-scale language models, and reinforcement learning.

News:

  • ElasticTok: adapts computation to represent images and videos based on available information (paper'24, code).

  • Large World Models enable modeling of text and video with context lengths of millions of tokens (code).

  • Agentic Transformer for learning decision-making from trial-and-error sequences at scale (paper'23).

  • Near-infinite context with Blockwise Transformer and RingAttention (paper'24, paper'23, code).

  • Open language models Koala and OpenLLaMA (blog'23, models).

  • Reinforcement learning from unsupervised exploration (paper'22, paper'22, paper'21).



  • Publications
  • Teaching
  • Email: haoxyliu@gmail.com