Dynamic Tuning of Online Data Migration Policies in Hierarchical Storage Systems using Reinforcement Learning*

David Vengerov

19 June 2006

Multi-tier storage systems are becoming increasingly widespread in industry. To minimize request response time in such systems, the most frequently accessed ("hot") files should reside in the fastest storage tiers, which are usually smaller and more expensive than the others. Unfortunately, it is impossible to know ahead of time which files will be "hot", especially since file access patterns change over time. This report presents an approach to this problem in which each tier uses Reinforcement Learning (RL) to learn its own cost function predicting its future request response time; files are then migrated between tiers so as to decrease the sum of the costs of the tiers involved.
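The migration criterion can be sketched as follows. This is a minimal illustration, not the report's implementation: the linear cost model is a hypothetical stand-in for each tier's RL-trained cost function, and all names, state features, and weights are invented for the example.

```python
# Sketch of the migration criterion described above: each tier predicts its
# future response time with a learned cost function, and a file is moved only
# if the move lowers the summed predicted cost of the two tiers involved.
# The linear cost model is a toy stand-in for the RL-learned approximator.

def tier_cost(utilization, access_rate, weights):
    """Toy stand-in for a tier's learned cost function: predicted
    response time as a weighted sum of its state features."""
    return weights[0] * utilization + weights[1] * access_rate

def should_migrate(src_state, dst_state, file_load, src_w, dst_w):
    """Return True iff moving the file reduces the sum of the two
    tiers' predicted costs (the criterion stated in the abstract)."""
    util_s, rate_s = src_state
    util_d, rate_d = dst_state
    cost_before = (tier_cost(util_s, rate_s, src_w)
                   + tier_cost(util_d, rate_d, dst_w))
    # After migration, the file's access load shifts from source to destination.
    cost_after = (tier_cost(util_s, rate_s - file_load, src_w)
                  + tier_cost(util_d, rate_d + file_load, dst_w))
    return cost_after < cost_before

# Example: a heavily loaded slow tier (high cost per access) considers
# pushing a hot file to a lightly loaded fast tier (low cost per access).
migrate = should_migrate(src_state=(0.9, 50.0), dst_state=(0.3, 10.0),
                         file_load=20.0, src_w=(1.0, 0.10), dst_w=(1.0, 0.01))
```

In the report's actual system the cost functions are learned online by RL rather than fixed, so the same decision rule adapts as access patterns drift.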

A multi-tier storage system simulator was used to evaluate the migration policies tuned by RL, and such policies were shown to achieve a significant performance improvement over the best hand-crafted policies found for this domain.

*This material is based upon work supported by DARPA under Contract No. NBCH3039002.
