Recent years have seen enormous improvements in event reconstruction and signal identification for neutrino experiments through deep learning methods. Although these approaches achieve considerable accuracy, they are also very time- and power-consuming. This poster presents the first attempt at accelerating deep learning methods for muon event energy and zenith-angle reconstruction on Tensor Processing Units (TPUs). These units, commonly deployed on mobile devices for general-purpose real-time deep learning inference, are extremely power-efficient and fast, at the cost of a minor accuracy loss compared with GPUs. By modifying the DNN architectures and restricting them to TPU-compatible operations, we demonstrate the advantage of TPUs in event-reconstruction tasks for neutrino experiments.