Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

About me

Posts

2023 Rewind

less than 1 minute read

Published:

Hello everyone, the following is my rewind of 2023.

Rebuild Ubuntu 22.04

1 minute read

Published:

This is a post about how to rebuild Ubuntu 22.04 and install some useful tools.

New York

less than 1 minute read

Published:

"The Big Apple. The dream of every lad that ever threw a leg over a thoroughbred and the goal of all horsemen. There’s only one Big Apple. That’s New York." (John J. Fitz Gerald, The New York Morning Telegraph, 1921)

My experience of watching the NBA

less than 1 minute read

Published:

"The great river flows east, its waves washing away the heroic figures of a thousand ages." The NBA's long history has produced many legends that basketball fans love to talk about, so I won't go into detail here; I'll just share my own journey from these past few years of watching.

Niagara Falls

less than 1 minute read

Published:

"Awe-inspiring and stunning: countless rivers converging, surging like ten thousand galloping horses." That is how I would describe Niagara Falls. I have visited twice so far, the first time a brief glimpse, the second a fuller appreciation. Together they let me take in the countless sprays of water and the hazy mist all around, and that wonderful memory is recorded in my mind.

Experiences

Teaching Assistant

Part-time, National Yang Ming Chiao Tung University, Department of Communication Engineering, 2023

Duration: Sep. 2023 - Jan. 2024 (5 months)

Teaching Assistant

Part-time, National Yang Ming Chiao Tung University, Department of Communication Engineering, 2024

Duration: Feb. 2024 - Present

Portfolio

Posture

Published:

College graduation project

Publications

Talks

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

Published:

Previous methods rely heavily on on-policy experience, limiting their sample efficiency.

They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems.

This paper develops an off-policy meta-RL algorithm that disentangles task inference from control.

  1. It achieves excellent sample efficiency during meta-training and enables fast adaptation by accumulating experience online.
  2. It performs structured exploration by reasoning about uncertainty over tasks (a minimal sketch of this idea follows the list).
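
To make the separation of task inference and control concrete, here is a minimal Python sketch of the general idea, not the paper's actual algorithm: a latent task variable is sampled from a posterior inferred over collected context, and the policy conditions on that sample. All names, shapes, and the toy reward are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def infer_posterior(context):
    """Stand-in for an amortized Gaussian posterior q(z | context); it just
    summarizes collected (state, action, reward) tuples by mean and spread."""
    if len(context) == 0:                      # prior before any experience
        return np.zeros(2), np.ones(2)
    c = np.asarray(context)
    return c.mean(axis=0)[:2], c.std(axis=0)[:2] + 1e-3

def policy(state, z):
    """Task-conditioned policy pi(a | s, z); a linear stand-in."""
    return np.tanh(state @ np.ones(3) + z.sum())

context = []                                   # off-policy transitions for this task
for step in range(5):
    mu, sigma = infer_posterior(context)
    z = rng.normal(mu, sigma)                  # posterior sampling drives structured exploration
    state = rng.normal(size=3)
    action = policy(state, z)
    reward = -abs(action)                      # placeholder reward
    context.append([state[0], action, reward])

print("final posterior mean:", infer_posterior(context)[0])
```

The point of the sketch is only the control flow: the posterior over the task narrows as context accumulates, while the policy itself never changes between tasks.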

BEiT: BERT Pre-Training of Image Transformers

Published:

Motivated by BERT, they turn to the denoising auto-encoding idea to pretrain vision transformers, which has not been well studied by the vision community.
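
As a rough illustration of that denoising auto-encoding idea, and not the paper's actual discrete visual-token objective, the toy sketch below masks a fraction of patch embeddings and measures how well a stand-in model recovers them from the visible ones. All shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

patches = rng.normal(size=(16, 8))                  # 16 patch embeddings of dim 8
mask = np.zeros(16, dtype=bool)
mask[rng.choice(16, size=6, replace=False)] = True  # hide roughly 40% of the patches

corrupted = patches.copy()
corrupted[mask] = 0.0                               # placeholder for masked patches

def reconstruct(x):
    """Stand-in encoder/decoder: fill masked slots with the mean of visible ones."""
    filled = x.copy()
    filled[mask] = x[~mask].mean(axis=0)
    return filled

# The pre-training loss is computed only on the masked positions.
loss = np.mean((reconstruct(corrupted)[mask] - patches[mask]) ** 2)
print("masked reconstruction error:", round(float(loss), 4))
```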

Off-Policy Deep Reinforcement Learning without Exploration

Published:

This paper proposes a new algorithm for off-policy reinforcement learning that combines state-of-the-art deep Q-learning algorithms with a state-conditioned generative model for producing only previously seen actions.
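
A minimal sketch of that action-constraint idea, with hypothetical stand-ins rather than the paper's networks: candidate actions come from a generator fitted to the batch, and the Q-function only chooses among those in-distribution candidates instead of maximizing over all actions.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_value(state, action):
    """Stand-in learned Q-function."""
    return -(action - 0.3 * state.sum()) ** 2

def generative_model(state, n_candidates=10):
    """Stand-in for a state-conditioned generator (e.g. a VAE) trained on the
    batch; here it just perturbs actions that actually appeared in the data."""
    seen_actions = np.array([-0.5, 0.1, 0.4])           # actions present in the batch
    base = rng.choice(seen_actions, size=n_candidates)
    return base + 0.05 * rng.normal(size=n_candidates)  # small perturbations only

state = rng.normal(size=4)
candidates = generative_model(state)
best = candidates[np.argmax([q_value(state, a) for a in candidates])]
print("selected in-distribution action:", best)
```

Restricting the argmax to generated candidates is what keeps the agent from picking actions the batch never contained.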

Active Retrieval Augmented Generation

Published:

Most existing retrieval-augmented LMs employ a retrieve-and-generate setup that only retrieves information once based on the input.
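
The toy sketch below illustrates that retrieve-once setup with a hypothetical corpus, retriever, and generator (none of these come from the paper): documents are fetched a single time from the input question, and generation never triggers another lookup.

```python
# Toy corpus standing in for an external document store.
CORPUS = {
    "niagara": "Niagara Falls straddles the border of Ontario and New York.",
    "nyc": "New York City is nicknamed the Big Apple.",
}

def retrieve(query, k=1):
    """Toy lexical retriever: rank documents by word overlap with the query."""
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: -len(set(query.lower().split()) & set(doc.lower().split())),
    )
    return scored[:k]

def generate(query, docs):
    """Stand-in for an LM call that conditions on the retrieved context."""
    return f"Answer to '{query}' using context: {docs[0]}"

question = "What is New York City nicknamed?"
context = retrieve(question)        # retrieval happens exactly once, up front
print(generate(question, context))  # generation never issues another retrieval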