To accelerate learning in high-dimensional reinforcement learning problems, temporal-difference (TD) methods such as Q-learning and Sarsa are often combined with eligibility traces. The recently introduced DQN (Deep Q-Network) algorithm, which combines Q-learning with a deep neural network, has achieved strong performance on several games in the Atari 2600 domain. However, DQN training is slow and requires a large number of time steps to converge. Here, we incorporate the eligibility-trace mechanism into DQN and propose the deep Q(λ) network algorithm, which learns faster than DQN. Empirical results on a range of games show that this approach significantly reduces learning time.
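To make the eligibility-trace mechanism concrete, the following is a minimal sketch of tabular Q(λ) with accumulating traces (the naive variant that does not cut traces on exploratory actions). The chain environment, hyperparameters, and function name are illustrative assumptions for this sketch, not details from the paper; the paper's contribution is extending this idea to deep networks.

```python
import numpy as np

def q_lambda(n_states=6, n_actions=2, episodes=200,
             alpha=0.1, gamma=0.9, lam=0.8, epsilon=0.1, seed=0):
    """Tabular Q(lambda) on an assumed toy chain: action 1 moves right,
    action 0 moves left, and reaching the rightmost state yields reward 1."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        E = np.zeros_like(Q)              # eligibility traces, reset per episode
        s = 0
        while s < n_states - 1:           # rightmost state is terminal
            if rng.random() < epsilon:    # epsilon-greedy exploration
                a = int(rng.integers(n_actions))
            else:                         # greedy with random tie-breaking
                best = np.flatnonzero(Q[s] == Q[s].max())
                a = int(rng.choice(best))
            s2 = s + 1 if a == 1 else max(s - 1, 0)
            done = s2 == n_states - 1
            r = 1.0 if done else 0.0
            target = r + (0.0 if done else gamma * Q[s2].max())
            delta = target - Q[s, a]      # one-step TD error
            E[s, a] += 1.0                # accumulate trace for the visited pair
            Q += alpha * delta * E        # credit all recently visited pairs at once
            E *= gamma * lam              # decay all traces
            s = s2
    return Q

Q = q_lambda()
```

The key line is `Q += alpha * delta * E`: the trace matrix spreads each TD error backward over recently visited state-action pairs, so credit propagates in one update rather than one step per episode, which is the source of the speedup the abstract claims for the deep variant.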