We accelerate sparse matrix-vector multiplication (SpMV) on GPUs by substantially reducing memory traffic. SpMV is a dominant kernel in many sparse algorithms. Its performance is strongly limited by memory bandwidth, and the low locality of accesses to the input vector further degrades performance. We propose a new sparse matrix format that alleviates these memory-bound problems through adaptive multi-level blocking and compression of the matrix indices. Performance evaluations of SpMV on 40 matrix datasets show speedups of up to 2.91x, and 1.81x on average, over NVIDIA's cuSPARSE library. We also show that the memory traffic of SpMV can be estimated, and that SpMV performance strongly depends on this traffic.