I have been an Assistant Professor in the Computer Science and Engineering (CSE) Department at the University of California, Riverside (UCR), since 2020.
Prior to that, I was a postdoctoral associate at MIT CSAIL in 2019, working with Julian Shun. Before that, I received my PhD from Carnegie Mellon University in 2018, advised by Guy Blelloch, and before that, my Bachelor’s degree in Computer Science from Tsinghua University in 2012. You can find my [CV] here.
My current research interest is designing efficient (usually parallel) algorithms for large-scale data that also perform well in practice. You can find more information about our research group here.
We always welcome highly self-motivated students to join our group. However, demand is high, and we only have a very limited number of slots per year for PhD students, Master’s projects/theses, undergraduate research, and summer internships. If you are currently at UCR (whether an undergraduate or a graduate student), make sure you have taken CS 214: Parallel Algorithms (offered yearly) and done well in it. If you are not at UCR but want to join our research group, please fill out this form and drop me an email. This is particularly important for PhD applications, since we do not necessarily check the general applicant pool (which is very large). You can find more information about [recruiting], and instructions on requesting my [recommendation letter].
I am teaching CS 119L in Spring 2024 (https://www.cs.ucr.edu/~ygu/teaching/119/W24/index.html).
This is the homepage of my lovely wife.
A full list of my publications can be found here.
Research Interests and Selected Topics
My research aims at designing efficient algorithms and software for the most fundamental problems in computer science (e.g., on graphs and in geometry). The two main approaches are (1) understanding emerging hardware and developing algorithm design principles on top of it, and (2) designing new algorithms that are faster and/or have better theoretical guarantees than the best previous ones. In many cases the two overlap: algorithms designed for specific hardware often work well in a wide variety of settings, because the hardware forces a new angle on these problems.
Parallel Algorithms:
Shared-memory parallel algorithms have been studied for over 50 years. Still, quite surprisingly, there is a lot of room for improvement, both theoretically and practically, even for the most fundamental problems. The new algorithms I have designed fall into the following categories.
Graph algorithms [Related Papers]
The new parallel graph algorithms I have designed include a variety of stepping algorithms (rho-stepping, radius-stepping, delta*-stepping) for single-source shortest paths (SSSP), the BGSS algorithm for strongly connected components (SCC), BC labeling for biconnectivity, and many other distance-based or connectivity-based algorithms; a small sketch of the stepping idea is given below.
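To give a flavor of the stepping idea, here is a minimal, sequential sketch in the spirit of delta-stepping: vertices are grouped into buckets by tentative distance, and the closest non-empty bucket is relaxed as a batch. This is only an illustration under simplifying assumptions (non-negative weights, sequential execution); the actual rho-/radius-/delta*-stepping algorithms use different batch-selection rules and run in parallel, and all names below are mine for this sketch.

```cpp
// Bucket-based "stepping" SSSP sketch (sequential, non-negative weights).
// Vertices are bucketed by floor(tentative distance / delta); the closest
// bucket is relaxed as a batch. In the parallel algorithms, relaxing a
// batch is the data-parallel step, with concurrent distance updates
// resolved by priority updates (write-min) instead of plain stores.
#include <cstdio>
#include <limits>
#include <map>
#include <vector>

struct Edge { int to; double w; };
using Graph = std::vector<std::vector<Edge>>;

std::vector<double> stepping_sssp(const Graph& g, int src, double delta) {
  const double INF = std::numeric_limits<double>::infinity();
  std::vector<double> dist(g.size(), INF);
  std::map<long, std::vector<int>> buckets;  // bucket id -> frontier vertices
  dist[src] = 0.0;
  buckets[0].push_back(src);
  while (!buckets.empty()) {
    auto it = buckets.begin();               // closest non-empty bucket
    long b = it->first;
    std::vector<int> frontier = std::move(it->second);
    buckets.erase(it);
    for (int u : frontier) {
      if ((long)(dist[u] / delta) < b) continue;  // stale entry, already settled
      for (const Edge& e : g[u]) {
        double nd = dist[u] + e.w;
        if (nd < dist[e.to]) {               // relax; may re-fill bucket b
          dist[e.to] = nd;
          buckets[(long)(nd / delta)].push_back(e.to);
        }
      }
    }
  }
  return dist;
}

int main() {
  Graph g(4);
  g[0] = {{1, 1.0}, {2, 4.0}};
  g[1] = {{2, 2.0}, {3, 6.0}};
  g[2] = {{3, 1.0}};
  std::vector<double> d = stepping_sssp(g, 0, 2.0);
  for (size_t v = 0; v < d.size(); v++) std::printf("dist[%zu] = %.1f\n", v, d[v]);
}
```

The choice of delta trades off extra work (re-relaxations) against parallelism (batch size), which is exactly the knob the stepping algorithms above tune in different ways.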
Geometric algorithms [Related Papers]
The new parallel geometric algorithms I have designed include the first theoretically efficient parallel algorithms for incremental convex hull and Delaunay triangulation, several algorithms for constructing spatial-partitioning data structures, and several spatial clustering algorithms that are efficient both theoretically and practically; a sketch of where the parallelism in spatial partitioning comes from is given below.
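As a tiny example of where the parallelism in spatial partitioning comes from, here is a minimal (sequential) kd-tree builder: the two recursive builds at each node work on disjoint ranges of points and are independent, so a parallel builder can simply fork them. This is a generic textbook-style sketch, not the constructions from the papers above.

```cpp
// Minimal 2D kd-tree construction: split at the median along alternating
// axes. The two recursive calls touch disjoint subarrays, so they can be
// forked in a parallel builder (the partition step can be parallelized too).
#include <algorithm>
#include <cstdio>
#include <vector>

struct Pt { double x[2]; };
struct KdNode { Pt p; KdNode* lo = nullptr; KdNode* hi = nullptr; };

KdNode* build(std::vector<Pt>& pts, size_t l, size_t r, int axis) {
  if (l >= r) return nullptr;
  size_t mid = (l + r) / 2;
  // Place the median element at pts[mid], smaller elements on its left
  // (O(n) expected time).
  std::nth_element(pts.begin() + l, pts.begin() + mid, pts.begin() + r,
                   [axis](const Pt& a, const Pt& b) { return a.x[axis] < b.x[axis]; });
  KdNode* t = new KdNode{pts[mid]};
  t->lo = build(pts, l, mid, 1 - axis);      // independent of the call below:
  t->hi = build(pts, mid + 1, r, 1 - axis);  // a parallel version forks them
  return t;
}

int main() {
  std::vector<Pt> pts = {{{2, 3}}, {{5, 4}}, {{9, 6}}, {{4, 7}}, {{8, 1}}, {{7, 2}}};
  KdNode* root = build(pts, 0, pts.size(), 0);
  std::printf("root splits at x = %.1f\n", root->p.x[0]);
}
```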
Data structures [Related Papers]
I have been working on parallel tree structures, both in theory and in practice. Interesting results include parallel priority queues, compressed parallel search trees, and parallel data structures for range and nearest-neighbor search in low dimensions; a sketch of the join-based pattern often used for such trees follows below.
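Below is a minimal sketch of the join-based pattern for parallel search trees, shown for set union: split one tree by the other's root key, solve the two sides independently (this is where the parallelism comes from), and join the results. To stay short, the sketch uses an unbalanced BST, runs sequentially, and leaks nodes; real implementations use balancing joins (e.g., on AVL trees or treaps) and fork-join parallelism.

```cpp
// Join-based set union on a plain BST (sequential sketch).
#include <cstdio>
#include <utility>

struct Node { int key; Node* left; Node* right; };
static Node* mk(int k, Node* l, Node* r) { return new Node{k, l, r}; }

// Split t into (keys < k, keys > k); a node holding k itself is dropped,
// which is what we want for set union.
static std::pair<Node*, Node*> split(Node* t, int k) {
  if (!t) return {nullptr, nullptr};
  if (k < t->key) {
    auto [l, r] = split(t->left, k);
    return {l, mk(t->key, r, t->right)};
  }
  if (k > t->key) {
    auto [l, r] = split(t->right, k);
    return {mk(t->key, t->left, l), r};
  }
  return {t->left, t->right};
}

static Node* set_union(Node* t1, Node* t2) {
  if (!t1) return t2;
  if (!t2) return t1;
  auto [l2, r2] = split(t2, t1->key);
  Node* l = set_union(t1->left, l2);    // these two recursive calls are
  Node* r = set_union(t1->right, r2);   // independent: fork them in parallel
  return mk(t1->key, l, r);             // a balanced tree would rebalance here
}

static void inorder(Node* t) {
  if (!t) return;
  inorder(t->left); std::printf("%d ", t->key); inorder(t->right);
}

int main() {
  Node* a = mk(2, mk(1, nullptr, nullptr), mk(5, nullptr, nullptr));  // {1,2,5}
  Node* b = mk(3, mk(2, nullptr, nullptr), mk(7, nullptr, nullptr));  // {2,3,7}
  inorder(set_union(a, b));  // prints: 1 2 3 5 7
  std::printf("\n");
}
```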
Parallel incremental algorithms [Related Papers]
We study parallel algorithms under the randomized incremental construction (RIC) framework, and show that many incremental algorithms in this framework are inherently parallel: the dependence structure among insertions is typically shallow (polylogarithmic depth), so the insertions can be executed in a small number of parallel rounds.
Algorithm design for emerging computer architectures: [Related Papers]
Write-efficient algorithms [Related Papers]
This line of work aims at better utilizing new non-volatile main memories (NVRAMs) for computing. NVRAMs provide significantly larger capacity than regular DRAM, but writing to them is significantly more costly than reading, so how to reduce the number of writes an algorithm performs is interesting both theoretically and practically; a small illustration is given below.
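A one-line example of the write-avoiding flavor of these algorithms: when computing a running minimum, writing only on strict improvement performs about O(log n) expected writes on a randomly ordered input instead of n. The snippet below just counts writes under that asymmetric cost model (reads cheap, writes expensive); it is a demonstration of the idea, not an algorithm from the papers above.

```cpp
// Count expensive writes for two ways of computing a running minimum.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
  int n = 1'000'000;
  std::vector<int> a(n);
  std::iota(a.begin(), a.end(), 0);       // 0, 1, ..., n-1
  std::mt19937 rng(42);
  std::shuffle(a.begin(), a.end(), rng);  // random order

  long blind = 0, checked = 0;
  int m1 = n, m2 = n;
  for (int x : a) { m1 = std::min(m1, x); blind++; }      // stores every step
  for (int x : a) { if (x < m2) { m2 = x; checked++; } }  // stores on improvement
  // On a random permutation the checked version does only about
  // H_n = ln(n) + O(1) writes in expectation, versus n blind writes.
  std::printf("blind writes: %ld, checked writes: %ld\n", blind, checked);
}
```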
Space-efficient algorithms [Related Papers]
Space-efficient algorithms consider computing with a limited amount of fast memory (i.e., DRAM). Nowadays, DRAM capacity is sufficient for many applications that previously ran only in distributed settings, on supercomputers, or on disks, which greatly improves performance. However, DRAM is still limited and relatively small (e.g., compared to NVRAM), so how to use it for efficient computing remains an interesting topic to explore.
PIM-balanced algorithms [Related Papers]
Processing-in-Memory (PIM) is an architectural idea that puts processors inside memory chips to reduce data movement between the CPU and main memory. Algorithmically, this architecture can be viewed as a hybrid of shared-memory computing (the CPU side) and distributed computing (the PIM side). Since this setting is very different from previous models, how the CPU and the PIM modules should cooperate to achieve a balanced overall computation remains an open problem.
Other Topics
I have also been working on other interesting research topics, such as computer graphics and image processing. See the full list of my papers for more details.