Run:AI aims to abstract hardware accelerators away from data scientists and AI engineers to fast-track the development and deployment of AI projects. We have built a virtualization software platform that lets IT administrators manage resource allocation more efficiently, reduce infrastructure idle time, and increase cluster utilization. Data scientists and AI engineers, in turn, can consume more compute power, either to run more experiments or to run distributed training across multiple AI accelerators. This improves their productivity and the quality of their science, shortens training times, and enables the training of very large DL models. These improvements translate into a 10x increase in hardware utilization, 10x faster model training, and the use of 10x larger DL models.