The increasing popularity of high-volume, performance-critical Internet applications calls for a scalable server design that meets individual response-time guarantees. Because most Internet applications can tolerate a small percentage of deadline misses, we define the delay constraint as a statistical guarantee so as to relax server resource requirements. A recent decay function model characterizes the relationship among the request delay constraint, the deadline miss rate, and the server capacity as a transfer-function-based filter system. This paper extends the model and develops a time-variant scheduling policy that minimizes system load variance and the capacity requirement. The scheduler assumes no a priori knowledge of the input request distribution or correlation structure. The resulting server capacity bound is further tightened by exploiting information about the request arrival distribution. Simulation results validate the extended decay function model and demonstrate the superiority of the scheduler over other scheduling algorithms.