|author||Wu Fengguang <firstname.lastname@example.org>||2009-06-16 15:31:30 -0700|
|committer||Linus Torvalds <email@example.com>||2009-06-16 19:47:29 -0700|
readahead: record mmap read-around states in file_ra_state
Mmap read-around now shares the same code style and data structure with the readahead code.

This also removes do_page_cache_readahead(). Its last user, mmap read-around, has been changed to call ra_submit().

The no-readahead-if-congested logic is dropped along the way: users are pretty sensitive to slow loading of executables, so it is unfavorable to disable mmap read-around on a congested queue.

[firstname.lastname@example.org: coding-style fixes]
Cc: Nick Piggin <email@example.com>
Signed-off-by: Fengguang Wu <firstname.lastname@example.org>
Cc: Ying Han <email@example.com>
Signed-off-by: Andrew Morton <firstname.lastname@example.org>
Signed-off-by: Linus Torvalds <email@example.com>
Diffstat (limited to 'mm/readahead.c')
1 file changed, 2 insertions(+), 21 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index d7c6e143a12..a7f01fcce9e 100644
@@ -133,15 +133,12 @@ out:
- * do_page_cache_readahead actually reads a chunk of disk.  It allocates all
+ * __do_page_cache_readahead() actually reads a chunk of disk.  It allocates all
  * the pages first, then submits them all for I/O.  This avoids the very bad
  * behaviour which would occur if page allocations are causing VM writeback.
  * We really don't want to intermingle reads and writes like that.
  * Returns the number of pages requested, or the maximum amount of I/O allowed.
- * do_page_cache_readahead() returns -1 if it encountered request queue
- * congestion.
 __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
@@ -232,22 +229,6 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
- * This version skips the IO if the queue is read-congested, and will tell the
- * block layer to abandon the readahead if request allocation would block.
- * force_page_cache_readahead() will ignore queue congestion and will block on
- * request queues.
-int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
-		pgoff_t offset, unsigned long nr_to_read)
-	if (bdi_read_congested(mapping->backing_dev_info))
-		return -1;
-	return __do_page_cache_readahead(mapping, filp, offset, nr_to_read, 0);
  * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
  * sensible upper limit.
@@ -260,7 +241,7 @@ unsigned long max_sane_readahead(unsigned long nr)
  * Submit IO for the read-ahead request in file_ra_state.
-static unsigned long ra_submit(struct file_ra_state *ra,
+unsigned long ra_submit(struct file_ra_state *ra,
 		       struct address_space *mapping, struct file *filp)