My research interests lie in the broad area of Parallel & Distributed Computing, including programmability, performance, scalability, and fault tolerance. My specific current research interests include:
LARGE SCALE GRAPH PROCESSING
Efficiently processing very large graphs is a challenging task. In a cluster-based environment, inter-node communication latency becomes a significant contributing factor in the overall performance of graph processing applications. Moreover, it is important to tolerate machine failures during processing in order to prevent loss of computed results. In an out-of-core processing environment, the overhead incurred by disk reads and writes dominates the execution time and hence degrades overall performance.
I am looking at different ways to leverage asynchronous parallelism in order to reduce communication dependencies and hence speed up overall processing. To reduce the overheads incurred by traditional fault tolerance mechanisms, I am interested in developing custom lightweight fault tolerance mechanisms that can handle machine failures with minimal or no rollback of execution state. For disk-based processing, I am investigating techniques to dynamically capture the subset of data that contributes to the overall progress of the algorithm, thereby eliminating unnecessary disk reads and writes. I am also interested in developing novel graph representations that are GPU-friendly, i.e., ones that allow coalesced memory accesses and do not result in GPU under-utilization during processing.
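As a small illustration of the last point, the sketch below shows a compressed sparse row (CSR) layout, one well-known example of a representation whose flat arrays let threads that process consecutive vertices or edges touch consecutive memory locations, which is what coalesced GPU access requires. The struct and its field names are illustrative only, not taken from any particular system.

```cpp
#include <cstdint>
#include <vector>

// Illustrative CSR sketch: all edges live in two flat arrays, so edge data
// for consecutive vertices is stored contiguously. On a GPU, threads mapped
// to consecutive edges then issue coalesced memory accesses.
struct CSRGraph {
    std::vector<uint32_t> row_offsets;     // size = num_vertices + 1
    std::vector<uint32_t> column_indices;  // size = num_edges; neighbor ids

    // Neighbors of vertex v occupy the contiguous range
    // column_indices[row_offsets[v] .. row_offsets[v + 1]).
    uint32_t degree(uint32_t v) const {
        return row_offsets[v + 1] - row_offsets[v];
    }
};
```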
PROGRAMMING PARALLEL ALGORITHMS
Various runtime systems (graph processing systems, dataflow execution based systems, etc.) require users to express algorithms in particular forms, such as vertex-centric computations or map-reduce tasks. These computations are executed in parallel by the runtime framework and hence are typically expected to exhibit properties like commutativity and associativity. I am interested in defining such properties of computations, which affect the runtime behavior and the correctness of algorithms in different scenarios (asynchronous processing, streaming graph processing, etc.). Using these properties, I am interested in developing alternative algorithms that solve the same problem, in order to enable specific runtime optimizations and to make the algorithms easier to understand and verify for correctness.
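To make the role of these properties concrete, here is a hypothetical vertex-centric sketch of single-source shortest paths. The combine/apply interface and its names are assumptions for illustration, not any specific framework's API; the key point is that the aggregation operator (min) is commutative and associative, so a runtime may combine, reorder, or asynchronously apply incoming updates without changing the final result.

```cpp
#include <algorithm>
#include <limits>

// Hypothetical vertex-centric program (interface names are illustrative):
// single-source shortest paths expressed as per-vertex combine/apply steps.
struct SSSPVertexProgram {
    // Initial distance of every vertex except the source.
    static constexpr double kInf = std::numeric_limits<double>::infinity();

    // Combine two candidate distances; min is commutative and associative,
    // so the order in which messages are combined does not matter.
    static double combine(double a, double b) { return std::min(a, b); }

    // Apply the combined message to a vertex's distance.
    // Returns true if the value changed and neighbors must be re-activated.
    static bool apply(double& distance, double combined_message) {
        if (combined_message < distance) {
            distance = combined_message;
            return true;
        }
        return false;
    }
};
```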
MEMORY CONSISTENCY MODELS
The performance of shared memory systems depends heavily on the consistency models they provide. Many applications do not require the strong consistency guarantees that most memory systems provide. I am interested in developing relaxed consistency protocols that relax read-write dependencies in order to accelerate memory operations. Beyond this, I am also interested in identifying algorithms that can benefit from relaxed consistency protocols while still ensuring correctness of results.
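As an example of the kind of relaxation some algorithms can tolerate, the sketch below (illustrative name, standard C++ atomics) implements a monotone "fetch-min" update using relaxed memory ordering. Assuming the shared value is only ever lowered, as in shortest-path style relaxations, a stale read can only trigger redundant work; it cannot change the fixed point the computation converges to.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative sketch: lower a shared cell to 'candidate' if it improves it.
// Because the update is monotone, the loads and the CAS can use relaxed
// ordering; no additional synchronization is needed for the final result.
inline void relaxed_fetch_min(std::atomic<uint64_t>& cell, uint64_t candidate) {
    uint64_t observed = cell.load(std::memory_order_relaxed);
    while (candidate < observed &&
           !cell.compare_exchange_weak(observed, candidate,
                                       std::memory_order_relaxed,
                                       std::memory_order_relaxed)) {
        // On failure, 'observed' is refreshed with the latest value;
        // retry only while 'candidate' still improves on it.
    }
}
```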