In this paper, we modeled eye movements on tiled Large High-Resolution Displays (LHRDs) as a Markov decision process (MDP). We collected eye-movement data from users who participated in free-viewing experiments on an LHRD and examined two different inverse reinforcement learning (IRL) algorithms. The presented approach used information about the possible eye-movement positions. We showed that IRL can automatically extract a reward function based on effective features from users' eye-movement behavior. We found that the Itti and HoG features show the highest positive reward weights for both algorithms. Most interestingly, the learned reward function captured enough expert-behavior information to predict eye movements. This is valuable for estimating the internal states of users, for example for intention recognition, adapting visual interfaces, or placing important information.
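To make the feature-weight finding concrete, the sketch below shows the linear reward model commonly used in feature-based IRL, where the reward of a gaze position is a weighted sum of its visual features. The feature names and weight values are illustrative assumptions, not the weights learned in this work; only the sign pattern (positive weights on Itti saliency and HoG) mirrors the reported result.

```python
# Minimal sketch of a linear reward R(s) = w . phi(s) for a gaze position s,
# a common assumption in feature-based IRL. All numbers are hypothetical.

def linear_reward(features, weights):
    """Weighted sum of feature values for one candidate gaze position."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature vector phi(s) for one candidate gaze position:
# 'itti' = Itti saliency response, 'hog' = HoG edge-orientation response.
phi = {"itti": 0.8, "hog": 0.5, "luminance": 0.1}

# Hypothetical learned weights; the positive weights on 'itti' and 'hog'
# reflect the finding that these features attract fixations.
w = {"itti": 1.2, "hog": 0.9, "luminance": -0.2}

print(linear_reward(phi, w))  # -> approximately 1.39
```

Under this model, predicting the next fixation amounts to scoring each candidate position with `linear_reward` and letting the MDP policy favor high-reward positions.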