Oracle has a portfolio of Machine Learning (ML) offerings for monitoring time-series telemetry signals for anomaly detection. The product suite, called the Multivariate State Estimation Technique (MSET2), integrates an advanced prognostic pattern-recognition technique with a collection of intelligent data preprocessing (IDP) innovations for high-sensitivity prognostic applications. One important application is monitoring dynamic computer power and catching the early incipience of mechanisms that cause servers to fail, using the servers' telemetry signals. Telemetry signals in computing servers typically include many physical variables (e.g., voltages, currents, temperatures, fan speeds, and power levels) that correlate with system I/O traffic, memory utilization, and system throughput. By utilizing these telemetry signals, MSET2 improves power efficiency by monitoring, reporting, and forecasting the energy consumption, cooling requirements, and load utilization of servers. However, a common challenge across the computing-server industry is that telemetry signals are never perfect. For example, enterprise-class servers have disparate sampling rates and are often not synchronized in time, resulting in lead-lag phase changes among the various signals. In addition, the enterprise computing industry often uses 8-bit A/D conversion chips for physical sensors, making it difficult to discern small variations in physical variables that are severely quantized by these low-resolution chips. Moreover, missing values often appear in the streaming telemetry signals, caused by, for example, a saturated system bus or data-transmission errors. This paper describes key IDP algorithms that provide optimal ML solutions to the aforementioned challenges across the enterprise computing industry, assuring optimal ML performance for prognostics, optimal energy efficiency of enterprise servers, and streaming analytics.
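To make the quantization challenge concrete, the following minimal sketch (not Oracle's IDP implementation; all signal parameters and the 0-100 unit sensor range are illustrative assumptions) simulates a smooth telemetry signal containing a subtle anomaly and shows that an 8-bit A/D converter's quantization step is larger than the anomaly itself, so the fault signature is largely lost in the quantized stream:

```python
import numpy as np

# Hypothetical "temperature" telemetry signal: a slow oscillation around
# 50 units, with a small 0.05-unit bump emulating an incipient fault.
t = np.linspace(0.0, 10.0, 1000)
signal = 50.0 + 0.2 * np.sin(2 * np.pi * 0.1 * t)
signal[600:650] += 0.05  # subtle incipient-fault signature

# 8-bit quantization over an assumed 0-100 unit sensor range:
# 2**8 = 256 levels, so one quantization step is ~0.39 units.
levels = 2 ** 8
step = 100.0 / (levels - 1)
quantized = np.round(signal / step) * step

# The 0.05-unit anomaly is well below one quantization step, so it is
# essentially invisible in the low-resolution signal.
print(f"quantization step:        {step:.3f} units")
print(f"anomaly amplitude:        0.050 units")
print(f"max quantization error:   {np.max(np.abs(signal - quantized)):.3f} units")
```

The same framing applies to the other two data-quality issues the paper addresses: disparate, unsynchronized sampling rates would appear here as two such signals sampled on different `t` grids, and missing values as NaN gaps in the stream.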