author      Adam Litke <agl@us.ibm.com>                 2008-01-15 10:11:03 -0600
committer   Nishanth Aravamudan <nacc@us.ibm.com>       2008-01-30 22:13:10 -0800
commit      61083df614efd97cef227ea622f84d96d1324c7e (patch)
tree        98ec007a42d7b2c8efb82d94be1ca07e7d275a45 /ldscripts
parent      64ef0c05ebd2b6c79111ebc32c415a8fc9a73b09 (diff)
download    libhugetlbfs-61083df614efd97cef227ea622f84d96d1324c7e.tar.gz
elf64ppc.xB: flexible BSS alignment
glibc makes certain assumptions about the layout of the text, data, and bss segments of shared objects and executables. One of those assumptions is that all segments of an object will be mapped consecutively. The current elf64ppc.xB linker script unconditionally begins the BSS at 1.5T, which virtually guarantees that something will be mapped between the data segment and the bss of the executable. This breaks the consecutive-mapping assumption and can cause application failures.

In many cases the text, data, and bss can all fit below 4G. In that case, BSS alignment can be reduced to 256M and the executable's segments can be mapped much closer together. This patch contains a conditional ALIGN statement so that the case of a data segment extending beyond 4G can still be handled. It resolves the dlopen problems we have been seeing lately with some benchmark suites.

Signed-off-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
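For reference, a minimal sketch (not part of the patch itself) of the conditional ALIGN this message describes, with the hex constants spelled out; it mirrors the expression added in the diff below:

    /* Illustration of the conditional alignment, using the constants from the patch:
     *   0x10000000    = 256M  -- boundary used while the location counter is below 4G
     *   0x100000000   = 4G    -- threshold tested by the conditional
     *   0x10000000000 = 1TB   -- boundary used at or above 4G
     *   0x18000000000 = 1.5TB -- the old unconditional BSS start being replaced
     */
    . = (. < 0x100000000) ? ALIGN(0x10000000) : ALIGN(0x10000000000);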
Diffstat (limited to 'ldscripts')
-rw-r--r--   ldscripts/elf64ppc.xB   15
1 file changed, 8 insertions, 7 deletions
diff --git a/ldscripts/elf64ppc.xB b/ldscripts/elf64ppc.xB
index 030840e..b05915b 100644
--- a/ldscripts/elf64ppc.xB
+++ b/ldscripts/elf64ppc.xB
@@ -179,13 +179,14 @@ SECTIONS
. = ALIGN(64 / 8);
. = ALIGN(64 / 8);
. = DATA_SEGMENT_END (.);
- /* Hugepage area */
- /* Saving hugepages is more important than saving executable size, so
- * we don't attempt to maintain congruence here */
- . = ALIGN(0x18000000000); /* Move into next 1TB area, but use 1.5TB
- * instead of 1TB for compatibility with
- * old kernels that have a fixed hugepage
- * range */
+ /* Hugepage area:
+ * Saving hugepages is more important than saving executable size, so
+ * we don't attempt to maintain congruence here.
+ * In order to map hugepages into the address space, we must advance the
+ * location counter to a segment boundary. If the address is < 4G, the
+ * next segment will be on a 256M boundary. For higher areas, we have a
+ * 1TB granularity. */
+ . = (. < 0x100000000) ? ALIGN(0x10000000) : ALIGN(0x10000000000);
/* HACK: workaround fact that kernel may not cope with segments with zero
* filesize */
.hugetlb.data : { LONG(1) } :htlb