Huge pages are an attractive technique that can improve performance by reducing the number of TLB (Translation Lookaside Buffer) misses and the overhead of address translation. However, many memory-intensive applications such as Redis, Hadoop, and MongoDB recommend disabling this technique due to performance anomalies. To address this issue, this paper proposes a novel analytic framework, called HPanal, that can quantitatively evaluate the benefit and cost of huge pages. The benefit is estimated by three parameters, namely TLB misses, page walk overhead, and page faults, while the cost is assessed by the page allocation overhead. These parameters are affected not only by application characteristics such as working set size and access pattern but also by system conditions such as available memory and the degree of fragmentation. HPanal also provides runtime capabilities for measuring these parameters while dynamically varying application characteristics and system conditions. Experimental results from a real implementation reveal that our framework can explore the tradeoffs appropriately and that the fragmentation degree plays a key role in the performance of huge pages.