Discussion:
[RFC] Re-write LTO type merging again, do tree merging
Richard Biener
2013-06-12 14:40:18 UTC
The following patch re-writes LTO type merging completely, with the
goal of moving as much work as possible from WPA time to the
compile step. At the same time the merging itself becomes very
conservative but also more general - it now merges arbitrary trees,
not only types, but only if they are bit-identical and have the
same outgoing tree pointers.

The latter requirement in particular means that we now have to merge
SCCs of trees as a unit, either taking the whole SCC as prevailing
or throwing it away. Moving work to the compile step means that we
compute SCCs and their hashes there, re-organizing streaming so that
tree bodies are streamed as SCC blocks together with the computed hash.

When we ask the streamer to output a tree T, it now has to
DFS-walk all tree pointers, collecting SCCs of not-yet-streamed
trees, and outputs them like the following:

{ LTO_tree_scc, N, hash, entry_len,
{ header1, header2, ... headerN },
{ bits1, refs1, bits2, refs2, ... bitsN, refsN } }
{ LTO_tree_scc, 1, hash, header, bits, refs }
{ LTO_tree_scc, M, hash, entry_len,
{ header1, header2, ... headerM },
{ bits1, refs1, bits2, refs2, ... bitsM, refsM } }
LTO_tree_pickle_reference to T

with tree references in refsN always being LTO_tree_pickle_references
instead of starting a new tree inline. That results in at most
N extra LTO_tree_pickle_references for N streamed trees; together
with the LTO_tree_scc wrapping overhead this causes a slight
increase in LTO object size (around 10% the last time I measured,
which was before some additional optimization went in).

The overhead also occurs on the LTRANS file-producing side,
which now has to do the DFS walk and stream the extra data.
It does not do the hashing, though, as no merging is performed
on the LTRANS consumer side.

The patch preserves the core of the old merging code in order to
compare against the new code and output some statistics. That means
that if you build with -flto-report[-wpa] you incur additional
compile-time and memory overhead.

For reference here are the stats when LTO bootstrapping for
stage2 cc1:

WPA statistics
[WPA] read 2494507 SCCs of average size 2.380067
[WPA] 5937095 tree bodies read in total
[WPA] tree SCC table: size 524287, 286280 elements, collision ratio:
0.806376
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 403361 SCCs, 6226 collisions (0.015435)
[WPA] Merged 399980 SCCs
[WPA] Merged 2438250 tree bodies
[WPA] Merged 192475 types
[WPA] 195422 types prevailed
[WPA] Old merging code merges an additional 54582 types of which 21083 are
in the same SCC with their prevailing variant

this says that we streamed in 5937095 tree bodies in
2494507 SCCs (so the average SCC size is small); of those
we were able to immediately ggc_free 399980 SCCs because they
already existed in identical form (16% of the SCCs, 41% of the trees
and 49% of the types). The old merging code forced the merging
of an additional 54582 types (but 21083 of those it merged with
a type that is in the same SCC, that is, it changed the shape
of the SCC and collapsed parts of it - something that is
suspicious).

The patch was LTO bootstrapped (testing currently running) on
x86_64-unknown-linux-gnu, and I have built SPEC2k6 with -Ofast -g -flto
and did a test run of the binaries, which shows that
currently 471.omnetpp, 483.xalancbmk and 447.dealII fail
(471.omnetpp segfaults in __cxxabiv1::__dynamic_cast). These
failures were introduced quite recently, likely by the improved
FUNCTION_DECL and VAR_DECL merging and the cgraph fixup Honza did.

Comments?

Thanks,
Richard.


* lto-streamer.h (enum LTO_tags): Add LTO_tree_scc.
(lto_input_scc): Declare.
(lto_input_tree_1): Likewise.
* lto-streamer-in.c (lto_read_body): Use streamer_tree_cache_get_tree.
(lto_read_tree_1): Split out from ...
(lto_read_tree): ... this.
(lto_input_scc): New function.
(lto_input_tree_1): Split out from ...
(lto_input_tree): ... this. Handle LTO_tree_scc.
(lto_data_in_create): Create the streamer cache without hashes.
* lto-streamer-out.c (create_output_block): Create the streamer
cache with hashes when not doing WPA.
(lto_write_tree_1): Split out from ...
(lto_write_tree): ... this.
(get_symbol_initial_value): New function.
(lto_output_tree_1): Split out from ...
(lto_output_tree): ... this. Write trees as series of SCCs
using a DFS walk via DFS_write_tree.
(struct sccs, struct scc_entry): New types.
(next_dfs_num, sccstack, sccstate, sccstate_obstack): New globals.
(DFS_write_tree_body): New function.
(DFS_write_tree): Likewise.
(hash_tree): Likewise.
(scc_entry_compare): Likewise.
(hash_scc): Likewise.
* tree-streamer-in.c (lto_input_ts_list_tree_pointers): Stream
TREE_CHAIN as regular reference.
(streamer_read_integer_cst): Remove.
(streamer_get_pickled_tree): Adjust.
* tree-streamer-out.c (streamer_write_chain): Disable streaming
of DECL_EXTERNALs in BLOCK_VARS for now.
(write_ts_list_tree_pointers): Stream TREE_CHAIN as regular
reference.
* tree-streamer.c (streamer_tree_cache_add_to_node_array):
Add hash value argument and record that if hashes are recorded
in the cache.
(streamer_tree_cache_insert_1): Adjust.
(streamer_tree_cache_insert): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_append): Adjust.
(record_common_node): Likewise.
(streamer_tree_cache_create): Add argument whether to
record hash values together with trees.
(streamer_tree_cache_delete): Adjust.
* tree-streamer.h (struct streamer_tree_cache_d): Add
vector of hashes.
(streamer_read_integer_cst): Remove.
(streamer_tree_cache_insert): Adjust.
(streamer_tree_cache_append): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_create): Add argument whether to record hashes.
(streamer_tree_cache_get): Rename to ...
(streamer_tree_cache_get_tree): ... this.
(streamer_tree_cache_get_hash): New function.
* tree.c (cache_integer_cst): New function.
* tree.h (cache_integer_cst): Declare.

lto/
* Make-lang.in (lto.o): Add $(DATA_STREAMER_H) dependency.
* lto.c: Include data-streamer.h.
(lto_read_in_decl_state): Use streamer_tree_cache_get_tree.
(gimple_type_leader_entry_s, gimple_type_leader,
gimple_lookup_type_leader): Remove.
(gtc_visit): Simplify.
(gimple_types_compatible_p): Likewise.
(gimple_register_type_1): Likewise. Merge into ...
(gimple_register_type): ... this. Keep it as legacy for
statistics purposes for now.
(fixup_integer_cst): Remove.
(LTO_FIXUP_TREE, lto_fixup_types, lto_ft_*): Simplify and
rename to ...
(MAYBE_REMEMBER_WITH_VARS, maybe_remember_with_vars,
maybe_remember_with_vars_*): ... these.
(uniquify_nodes): Remove.
(lto_fixup_prevailing_type): New function.
(struct tree_scc, struct tree_scc_hasher): New type and hasher.
(tree_scc_hash, tree_scc_hash_obstack): New globals.
(num_merged_types, num_prevailing_types, num_not_merged_types,
num_not_merged_types_in_same_scc, total_scc_size, num_sccs_read,
total_scc_size_merged, num_sccs_merged, num_scc_compares,
num_scc_compare_collisions): New global counters.
(compare_tree_sccs_1): New function.
(compare_tree_sccs): Likewise.
(unify_scc): Likewise.
(lto_read_decls): Stream in tree SCCs and unify them on the
way in. Finalize prevailing SCC tree members.
(read_cgraph_and_symbols): Do not initialize or free gimple_type_leader.
Allocate and free tree_scc_hash_obstack and tree_scc_hash, do not bother
to ggc-collect during merging.
(print_lto_report_1): Adjust for new merging code.

Index: trunk/gcc/lto-streamer.h
===================================================================
*** trunk.orig/gcc/lto-streamer.h 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/lto-streamer.h 2013-06-12 13:52:57.867348800 +0200
*************** enum LTO_tags
*** 199,204 ****
--- 199,207 ----
/* EH try/catch node. */
LTO_eh_catch,

+ /* Special for global streamer. A blob of unnamed tree nodes. */
+ LTO_tree_scc,
+
/* References to indexable tree nodes. These objects are stored in
tables that are written separately from the function bodies that
reference them. This way they can be instantiated even when the
*************** tree lto_input_tree_ref (struct lto_inpu
*** 854,859 ****
--- 857,866 ----
struct function *, enum LTO_tags);
void lto_tag_check_set (enum LTO_tags, int, ...);
void lto_init_eh (void);
+ hashval_t lto_input_scc (struct lto_input_block *, struct data_in *,
+ unsigned *, unsigned *);
+ tree lto_input_tree_1 (struct lto_input_block *, struct data_in *,
+ enum LTO_tags, hashval_t hash);
tree lto_input_tree (struct lto_input_block *, struct data_in *);


Index: trunk/gcc/lto-streamer-in.c
===================================================================
*** trunk.orig/gcc/lto-streamer-in.c 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/lto-streamer-in.c 2013-06-12 13:52:57.865348776 +0200
*************** lto_read_body (struct lto_file_decl_data
*** 1016,1022 ****
unsigned i;
for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
if (t == NULL_TREE)
continue;

--- 1016,1022 ----
unsigned i;
for (i = len; i-- > from;)
{
! tree t = streamer_tree_cache_get_tree (cache, i);
if (t == NULL_TREE)
continue;

*************** lto_input_function_body (struct lto_file
*** 1056,1067 ****
}


/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
--- 1056,1098 ----
}


+ /* Read the physical representation of a tree node EXPR from
+ input block IB using the per-file context in DATA_IN. */
+
+ static void
+ lto_read_tree_1 (struct lto_input_block *ib, struct data_in *data_in, tree expr)
+ {
+ /* Read all the bitfield values in EXPR. Note that for LTO, we
+ only write language-independent bitfields, so no more unpacking is
+ needed. */
+ streamer_read_tree_bitfields (ib, data_in, expr);
+
+ /* Read all the pointer fields in EXPR. */
+ streamer_read_tree_body (ib, data_in, expr);
+
+ /* Read any LTO-specific data not read by the tree streamer. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ DECL_INITIAL (expr) = stream_read_tree (ib, data_in);
+
+ /* We should never try to instantiate an MD or NORMAL builtin here. */
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ gcc_assert (!streamer_handle_as_builtin_p (expr));
+
+ #ifdef LTO_STREAMER_DEBUG
+ /* Remove the mapping to RESULT's original address set by
+ streamer_alloc_tree. */
+ lto_orig_address_remove (expr);
+ #endif
+ }
+
/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
*************** lto_read_tree (struct lto_input_block *i
*** 1069,1103 ****
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result);

! /* Read all the bitfield values in RESULT. Note that for LTO, we
! only write language-independent bitfields, so no more unpacking is
! needed. */
! streamer_read_tree_bitfields (ib, data_in, result);

! /* Read all the pointer fields in RESULT. */
! streamer_read_tree_body (ib, data_in, result);

! /* Read any LTO-specific data not read by the tree streamer. */
! if (DECL_P (result)
! && TREE_CODE (result) != FUNCTION_DECL
! && TREE_CODE (result) != TRANSLATION_UNIT_DECL)
! DECL_INITIAL (result) = stream_read_tree (ib, data_in);

- /* We should never try to instantiate an MD or NORMAL builtin here. */
- if (TREE_CODE (result) == FUNCTION_DECL)
- gcc_assert (!streamer_handle_as_builtin_p (result));

! /* end_marker = */ streamer_read_uchar (ib);

! #ifdef LTO_STREAMER_DEBUG
! /* Remove the mapping to RESULT's original address set by
! streamer_alloc_tree. */
! lto_orig_address_remove (result);
! #endif

! return result;
}


--- 1100,1169 ----
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result, hash);

! lto_read_tree_1 (ib, data_in, result);

! /* end_marker = */ streamer_read_uchar (ib);

! return result;
! }


! /* Populate the reader cache with trees materialized from the SCC
! following in the IB, DATA_IN stream. */

! hashval_t
! lto_input_scc (struct lto_input_block *ib, struct data_in *data_in,
! unsigned *len, unsigned *entry_len)
! {
! /* A blob of unnamed tree nodes, fill the cache from it and
! recurse. */
! unsigned size = streamer_read_uhwi (ib);
! hashval_t scc_hash = streamer_read_uhwi (ib);
! unsigned scc_entry_len = 1;

! if (size == 1)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! lto_input_tree_1 (ib, data_in, tag, scc_hash);
! }
! else
! {
! unsigned int first = data_in->reader_cache->nodes.length ();
! tree result;
!
! scc_entry_len = streamer_read_uhwi (ib);
!
! /* Materialize size trees by reading their headers. */
! for (unsigned i = 0; i < size; ++i)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! if (tag == LTO_null
! || (tag >= LTO_field_decl_ref && tag <= LTO_global_decl_ref)
! || tag == LTO_tree_pickle_reference
! || tag == LTO_builtin_decl
! || tag == LTO_integer_cst
! || tag == LTO_tree_scc)
! gcc_unreachable ();
!
! result = streamer_alloc_tree (ib, data_in, tag);
! streamer_tree_cache_append (data_in->reader_cache, result, 0);
! }
!
! /* Read the tree bitpacks and references. */
! for (unsigned i = 0; i < size; ++i)
! {
! result = streamer_tree_cache_get_tree (data_in->reader_cache,
! first + i);
! lto_read_tree_1 (ib, data_in, result);
! /* end_marker = */ streamer_read_uchar (ib);
! }
! }
!
! *len = size;
! *entry_len = scc_entry_len;
! return scc_hash;
}


*************** lto_read_tree (struct lto_input_block *i
*** 1106,1117 ****
to previously read nodes. */

tree
! lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
{
- enum LTO_tags tag;
tree result;

- tag = streamer_read_record_start (ib);
gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
--- 1172,1182 ----
to previously read nodes. */

tree
! lto_input_tree_1 (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
tree result;

gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
*************** lto_input_tree (struct lto_input_block *
*** 1137,1155 ****
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants we only need the type and its hi/low
! words. */
! result = streamer_read_integer_cst (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag);
}

return result;
}


/* Input toplevel asms. */

--- 1202,1240 ----
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants in singletons we can use the existing
! tree integer constant merging code. */
! tree type = stream_read_tree (ib, data_in);
! unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
! HOST_WIDE_INT high = streamer_read_hwi (ib);
! result = build_int_cst_wide (type, low, high);
! streamer_tree_cache_append (data_in->reader_cache, result, hash);
! }
! else if (tag == LTO_tree_scc)
! {
! unsigned len, entry_len;
!
! /* Input and skip the SCC. */
! lto_input_scc (ib, data_in, &len, &entry_len);
!
! /* Recurse. */
! return lto_input_tree (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag, hash);
}

return result;
}

+ tree
+ lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
+ {
+ return lto_input_tree_1 (ib, data_in, streamer_read_record_start (ib), 0);
+ }
+

/* Input toplevel asms. */

*************** lto_data_in_create (struct lto_file_decl
*** 1220,1226 ****
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create ();

return data_in;
}
--- 1305,1311 ----
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create (false);

return data_in;
}
Index: trunk/gcc/lto-streamer-out.c
===================================================================
*** trunk.orig/gcc/lto-streamer-out.c 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/lto-streamer-out.c 2013-06-12 13:52:57.866348788 +0200
*************** create_output_block (enum lto_section_ty
*** 71,77 ****
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create ();

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
--- 71,77 ----
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create (!flag_wpa);

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
*************** lto_is_streamable (tree expr)
*** 293,319 ****
}


/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
{
- struct bitpack_d bp;
-
- if (!lto_is_streamable (expr))
- internal_error ("tree code %qs is not supported in LTO streams",
- tree_code_name[TREE_CODE (expr)]);
-
- /* Write the header, containing everything needed to materialize
- EXPR on the reading side. */
- streamer_write_tree_header (ob, expr);
-
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

--- 293,339 ----
}


+ /* For EXPR lookup and return what we want to stream to OB as DECL_INITIAL. */
+
+ static tree
+ get_symbol_initial_value (struct output_block *ob, tree expr)
+ {
+ gcc_checking_assert (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL);
+
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = DECL_INITIAL (expr);
+ if (TREE_CODE (expr) == VAR_DECL
+ && (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
+ && initial)
+ {
+ lto_symtab_encoder_t encoder;
+ struct varpool_node *vnode;
+
+ encoder = ob->decl_state->symtab_node_encoder;
+ vnode = varpool_get_node (expr);
+ if (!vnode
+ || !lto_symtab_encoder_encode_initializer_p (encoder,
+ vnode))
+ initial = error_mark_node;
+ }
+
+ return initial;
+ }
+
+
/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree_1 (struct output_block *ob, tree expr, bool ref_p)
{
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bitpack_d bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

*************** lto_write_tree (struct output_block *ob,
*** 345,366 ****

stream_write_tree (ob, initial, ref_p);
}

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

-
/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */

! void
! lto_output_tree (struct output_block *ob, tree expr,
! bool ref_p, bool this_ref_p)
{
unsigned ix;
- bool existed_p;

if (expr == NULL_TREE)
{
--- 365,403 ----

stream_write_tree (ob, initial, ref_p);
}
+ }
+
+ /* Write a physical representation of tree node EXPR to output block
+ OB. If REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. IX is the index into the streamer cache
+ where EXPR is stored. */
+
+ static void
+ lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
+ {
+ if (!lto_is_streamable (expr))
+ internal_error ("tree code %qs is not supported in LTO streams",
+ tree_code_name[TREE_CODE (expr)]);
+
+ /* Write the header, containing everything needed to materialize
+ EXPR on the reading side. */
+ streamer_write_tree_header (ob, expr);
+
+ lto_write_tree_1 (ob, expr, ref_p);

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */

! static void
! lto_output_tree_1 (struct output_block *ob, tree expr, hashval_t hash,
! bool ref_p, bool this_ref_p)
{
unsigned ix;

if (expr == NULL_TREE)
{
*************** lto_output_tree (struct output_block *ob
*** 374,391 ****
return;
}

! /* Shared INTEGER_CST nodes are special because they need their original type
! to be materialized by the reader (to implement TYPE_CACHED_VALUES). */
! if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
{
streamer_write_integer_cst (ob, expr, ref_p);
return;
}

! existed_p = streamer_tree_cache_insert (ob->writer_cache, expr, &ix);
if (existed_p)
{
/* If a node has already been streamed out, make sure that
we don't write it more than once. Otherwise, the reader
will instantiate two different nodes for the same object. */
--- 411,1333 ----
return;
}

! gcc_checking_assert (!streamer_tree_cache_lookup (ob->writer_cache, expr, &ix));
!
! streamer_tree_cache_insert (ob->writer_cache, expr, hash, &ix);
! if (streamer_handle_as_builtin_p (expr))
! {
! /* MD and NORMAL builtins do not need to be written out
! completely as they are always instantiated by the
! compiler on startup. The only builtins that need to
! be written out are BUILT_IN_FRONTEND. For all other
! builtins, we simply write the class and code. */
! streamer_write_builtin (ob, expr);
! }
! else if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
{
+ /* Shared INTEGER_CST nodes are special because they need their
+ original type to be materialized by the reader (to implement
+ TYPE_CACHED_VALUES). */
streamer_write_integer_cst (ob, expr, ref_p);
+ }
+ else
+ {
+ /* This is the first time we see EXPR, write its fields
+ to OB. */
+ lto_write_tree (ob, expr, ref_p);
+ }
+ }
+
+ struct sccs
+ {
+ unsigned int dfsnum;
+ unsigned int low;
+ bool on_sccstack;
+ };
+
+ struct scc_entry
+ {
+ tree t;
+ hashval_t hash;
+ };
+
+ static unsigned int next_dfs_num;
+ static vec<scc_entry> sccstack;
+ static struct pointer_map_t *sccstate;
+ static struct obstack sccstate_obstack;
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p);
+
+ /* Handle the tree EXPR in the DFS walk with SCC state EXPR_STATE and
+ DFS recurse for all tree edges originating from it. */
+
+ static void
+ DFS_write_tree_body (struct output_block *ob,
+ tree expr, sccs *expr_state, bool ref_p)
+ {
+ #define DFS_follow_tree_edge(DEST) \
+ DFS_write_tree (ob, expr_state, DEST, ref_p, ref_p)
+
+ enum tree_code code;
+
+ code = TREE_CODE (expr);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (TREE_CODE (expr) != IDENTIFIER_NODE)
+ DFS_follow_tree_edge (TREE_TYPE (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ {
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (expr); ++i)
+ DFS_follow_tree_edge (VECTOR_CST_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ DFS_follow_tree_edge (TREE_REALPART (expr));
+ DFS_follow_tree_edge (TREE_IMAGPART (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ DFS_follow_tree_edge (DECL_NAME (expr));
+ DFS_follow_tree_edge (DECL_CONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ DFS_follow_tree_edge (DECL_SIZE (expr));
+ DFS_follow_tree_edge (DECL_SIZE_UNIT (expr));
+
+ /* Note, DECL_INITIAL is not handled here. Since DECL_INITIAL needs
+ special handling in LTO, it must be handled by streamer hooks. */
+
+ DFS_follow_tree_edge (DECL_ATTRIBUTES (expr));
+
+ /* Do not follow DECL_ABSTRACT_ORIGIN. We cannot handle debug information
+ for early inlining so drop it on the floor instead of ICEing in
+ dwarf2out.c. */
+
+ if ((TREE_CODE (expr) == VAR_DECL
+ || TREE_CODE (expr) == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (expr))
+ DFS_follow_tree_edge (DECL_VALUE_EXPR (expr));
+ if (TREE_CODE (expr) == VAR_DECL)
+ DFS_follow_tree_edge (DECL_DEBUG_EXPR (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ {
+ for (tree t = DECL_ARGUMENTS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (DECL_RESULT (expr));
+ }
+ else if (TREE_CODE (expr) == TYPE_DECL)
+ DFS_follow_tree_edge (DECL_ORIGINAL_TYPE (expr));
+ DFS_follow_tree_edge (DECL_VINDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ /* Make sure we don't inadvertently set the assembler name. */
+ if (DECL_ASSEMBLER_NAME_SET_P (expr))
+ DFS_follow_tree_edge (DECL_ASSEMBLER_NAME (expr));
+ DFS_follow_tree_edge (DECL_SECTION_NAME (expr));
+ DFS_follow_tree_edge (DECL_COMDAT_GROUP (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FIELD_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_TYPE (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_REPRESENTATIVE (expr));
+ DFS_follow_tree_edge (DECL_FIELD_BIT_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_FCONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FUNCTION_PERSONALITY (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_TARGET (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ DFS_follow_tree_edge (TYPE_SIZE (expr));
+ DFS_follow_tree_edge (TYPE_SIZE_UNIT (expr));
+ DFS_follow_tree_edge (TYPE_ATTRIBUTES (expr));
+ DFS_follow_tree_edge (TYPE_NAME (expr));
+ /* Do not follow TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
+ reconstructed during fixup. */
+ /* Do not follow TYPE_NEXT_VARIANT, we reconstruct the variant lists
+ during fixup. */
+ DFS_follow_tree_edge (TYPE_MAIN_VARIANT (expr));
+ DFS_follow_tree_edge (TYPE_CONTEXT (expr));
+ /* TYPE_CANONICAL is re-computed during type merging, so no need
+ to follow it here. */
+ DFS_follow_tree_edge (TYPE_STUB_DECL (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == ENUMERAL_TYPE)
+ DFS_follow_tree_edge (TYPE_VALUES (expr));
+ else if (TREE_CODE (expr) == ARRAY_TYPE)
+ DFS_follow_tree_edge (TYPE_DOMAIN (expr));
+ else if (RECORD_OR_UNION_TYPE_P (expr))
+ for (tree t = TYPE_FIELDS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ else if (TREE_CODE (expr) == FUNCTION_TYPE
+ || TREE_CODE (expr) == METHOD_TYPE)
+ DFS_follow_tree_edge (TYPE_ARG_TYPES (expr));
+
+ if (!POINTER_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_MINVAL (expr));
+ DFS_follow_tree_edge (TYPE_MAXVAL (expr));
+ if (RECORD_OR_UNION_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_BINFO (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ DFS_follow_tree_edge (TREE_PURPOSE (expr));
+ DFS_follow_tree_edge (TREE_VALUE (expr));
+ DFS_follow_tree_edge (TREE_CHAIN (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ {
+ for (int i = 0; i < TREE_VEC_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_VEC_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ for (int i = 0; i < TREE_OPERAND_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_OPERAND (expr, i));
+ DFS_follow_tree_edge (TREE_BLOCK (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
+ {
+ for (tree t = BLOCK_VARS (expr); t; t = TREE_CHAIN (t))
+ /* ??? FIXME. See also streamer_write_chain. */
+ if (!(VAR_OR_FUNCTION_DECL_P (t)
+ && DECL_EXTERNAL (t)))
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BLOCK_SUPERCONTEXT (expr));
+
+ /* Follow BLOCK_ABSTRACT_ORIGIN for the limited cases we can
+ handle - those that represent inlined function scopes.
+ For the rest, drop them on the floor instead of ICEing
+ in dwarf2out.c. */
+ if (inlined_function_outer_scope_p (expr))
+ {
+ tree ultimate_origin = block_ultimate_origin (expr);
+ DFS_follow_tree_edge (ultimate_origin);
+ }
+ /* Do not follow BLOCK_NONLOCALIZED_VARS. We cannot handle debug
+ information for early inlined BLOCKs so drop it on the floor instead
+ of ICEing in dwarf2out.c. */
+
+ /* BLOCK_FRAGMENT_ORIGIN and BLOCK_FRAGMENT_CHAIN is not live at LTO
+ streaming time. */
+
+ /* Do not output BLOCK_SUBBLOCKS. Instead on streaming-in this
+ list is re-constructed from BLOCK_SUPERCONTEXT. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree t;
+
+ /* Note that the number of BINFO slots has already been emitted in
+ EXPR's header (see streamer_write_tree_header) because this length
+ is needed to build the empty BINFO node on the reader side. */
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (expr), i, t)
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (BINFO_OFFSET (expr));
+ DFS_follow_tree_edge (BINFO_VTABLE (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_FIELD (expr));
+
+ /* The number of BINFO_BASE_ACCESSES has already been emitted in
+ EXPR's bitfield section. */
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (expr), i, t)
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BINFO_INHERITANCE_CHAIN (expr));
+ DFS_follow_tree_edge (BINFO_SUBVTT_INDEX (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_INDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (expr), i, index, value)
+ {
+ DFS_follow_tree_edge (index);
+ DFS_follow_tree_edge (value);
+ }
+ }
+
+ #undef DFS_follow_tree_edge
+ }
+
+ /* Return a hash value for the tree T. */
+
+ static hashval_t
+ hash_tree (struct streamer_tree_cache_d *cache, tree t)
+ {
+ #define visit(SIBLING) \
+ do { \
+ unsigned ix; \
+ if (SIBLING && streamer_tree_cache_lookup (cache, SIBLING, &ix)) \
+ v = iterative_hash_hashval_t (streamer_tree_cache_get_hash (cache, ix), v); \
+ } while (0)
+
+ /* Hash TS_BASE. */
+ enum tree_code code = TREE_CODE (t);
+ hashval_t v = iterative_hash_host_wide_int (code, 0);
+ if (!TYPE_P (t))
+ {
+ v = iterative_hash_host_wide_int (TREE_SIDE_EFFECTS (t)
+ | (TREE_CONSTANT (t) << 1)
+ | (TREE_READONLY (t) << 2)
+ | (TREE_PUBLIC (t) << 3), v);
+ }
+ v = iterative_hash_host_wide_int (TREE_ADDRESSABLE (t)
+ | (TREE_THIS_VOLATILE (t) << 1), v);
+ if (DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_UNSIGNED (t), v);
+ else if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_UNSIGNED (t), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_ARTIFICIAL (t), v);
+ else
+ v = iterative_hash_host_wide_int (TREE_NO_WARNING (t), v);
+ v = iterative_hash_host_wide_int (TREE_USED (t)
+ | (TREE_NOTHROW (t) << 1)
+ | (TREE_STATIC (t) << 2)
+ | (TREE_PRIVATE (t) << 3)
+ | (TREE_PROTECTED (t) << 4)
+ | (TREE_DEPRECATED (t) << 5), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_SATURATING (t)
+ | (TYPE_ADDR_SPACE (t) << 1), v);
+ else if (code == SSA_NAME)
+ v = iterative_hash_host_wide_int (SSA_NAME_IS_DEFAULT_DEF (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
+ {
+ v = iterative_hash_host_wide_int (TREE_INT_CST_LOW (t), v);
+ v = iterative_hash_host_wide_int (TREE_INT_CST_HIGH (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
+ {
+ REAL_VALUE_TYPE r = TREE_REAL_CST (t);
+ v = iterative_hash_host_wide_int (r.cl, v);
+ v = iterative_hash_host_wide_int (r.decimal
+ | (r.sign << 1)
+ | (r.signalling << 2)
+ | (r.canonical << 3), v);
+ v = iterative_hash_host_wide_int (r.uexp, v);
+ for (unsigned i = 0; i < SIGSZ; ++i)
+ v = iterative_hash_host_wide_int (r.sig[i], v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
+ {
+ FIXED_VALUE_TYPE f = TREE_FIXED_CST (t);
+ v = iterative_hash_host_wide_int (f.mode, v);
+ v = iterative_hash_host_wide_int (f.data.low, v);
+ v = iterative_hash_host_wide_int (f.data.high, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ v = iterative_hash_host_wide_int (DECL_MODE (t), v);
+ v = iterative_hash_host_wide_int (DECL_NONLOCAL (t)
+ | (DECL_VIRTUAL_P (t) << 1)
+ | (DECL_IGNORED_P (t) << 2)
+ | (DECL_ABSTRACT (t) << 3)
+ | (DECL_ARTIFICIAL (t) << 4)
+ | (DECL_USER_ALIGN (t) << 5)
+ | (DECL_PRESERVE_P (t) << 6)
+ | (DECL_EXTERNAL (t) << 7)
+ | (DECL_GIMPLE_REG_P (t) << 8), v);
+ v = iterative_hash_host_wide_int (DECL_ALIGN (t), v);
+ if (code == LABEL_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_ERROR_ISSUED (t), v);
+ v = iterative_hash_host_wide_int (EH_LANDING_PAD_NR (t), v);
+ v = iterative_hash_host_wide_int (LABEL_DECL_UID (t), v);
+ }
+ else if (code == FIELD_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_PACKED (t)
+ | (DECL_NONADDRESSABLE_P (t) << 1),
+ v);
+ v = iterative_hash_host_wide_int (DECL_OFFSET_ALIGN (t), v);
+ }
+ else if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HAS_DEBUG_EXPR_P (t)
+ | (DECL_NONLOCAL_FRAME (t) << 1),
+ v);
+ }
+ if (code == RESULT_DECL
+ || code == PARM_DECL
+ || code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_BY_REFERENCE (t), v);
+ if (code == VAR_DECL
+ || code == PARM_DECL)
+ v = iterative_hash_host_wide_int (DECL_HAS_VALUE_EXPR_P (t), v);
+ }
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
+ v = iterative_hash_host_wide_int (DECL_REGISTER (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ v = iterative_hash_host_wide_int (DECL_DEFER_OUTPUT (t)
+ | (DECL_COMMON (t) << 1)
+ | (DECL_DLLIMPORT_P (t) << 2)
+ | (DECL_WEAK (t) << 3)
+ | (DECL_SEEN_IN_BIND_EXPR_P (t) << 4)
+ | (DECL_COMDAT (t) << 5)
+ | (DECL_VISIBILITY_SPECIFIED (t) << 6),
+ v);
+ v = iterative_hash_host_wide_int (DECL_VISIBILITY (t), v);
+ if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HARD_REGISTER (t)
+ | (DECL_IN_TEXT_SECTION (t) << 1)
+ | (DECL_IN_CONSTANT_POOL (t) << 2),
+ v);
+ v = iterative_hash_host_wide_int (DECL_TLS_MODEL (t), v);
+ }
+ if (VAR_OR_FUNCTION_DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_INIT_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ v = iterative_hash_host_wide_int (DECL_BUILT_IN_CLASS (t), v);
+ v = iterative_hash_host_wide_int (DECL_STATIC_CONSTRUCTOR (t)
+ | (DECL_STATIC_DESTRUCTOR (t) << 1)
+ | (DECL_UNINLINABLE (t) << 2)
+ | (DECL_POSSIBLY_INLINED (t) << 3)
+ | (DECL_IS_NOVOPS (t) << 4)
+ | (DECL_IS_RETURNS_TWICE (t) << 5)
+ | (DECL_IS_MALLOC (t) << 6)
+ | (DECL_IS_OPERATOR_NEW (t) << 7)
+ | (DECL_DECLARED_INLINE_P (t) << 8)
+ | (DECL_STATIC_CHAIN (t) << 9)
+ | (DECL_NO_INLINE_WARNING_P (t) << 10)
+ | (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT (t) << 11)
+ | (DECL_NO_LIMIT_STACK (t) << 12)
+ | (DECL_DISREGARD_INLINE_LIMITS (t) << 13)
+ | (DECL_PURE_P (t) << 14)
+ | (DECL_LOOPING_CONST_OR_PURE_P (t) << 15), v);
+ if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
+ v = iterative_hash_host_wide_int (DECL_FUNCTION_CODE (t), v);
+ if (DECL_STATIC_DESTRUCTOR (t))
+ v = iterative_hash_host_wide_int (DECL_FINI_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ v = iterative_hash_host_wide_int (TYPE_MODE (t), v);
+ v = iterative_hash_host_wide_int (TYPE_STRING_FLAG (t)
+ | (TYPE_NO_FORCE_BLK (t) << 1)
+ | (TYPE_NEEDS_CONSTRUCTING (t) << 2)
+ | (TYPE_PACKED (t) << 3)
+ | (TYPE_RESTRICT (t) << 4)
+ | (TYPE_CONTAINS_PLACEHOLDER_INTERNAL (t) << 5)
+ | (TYPE_USER_ALIGN (t) << 6)
+ | (TYPE_READONLY (t) << 7), v);
+ if (RECORD_OR_UNION_TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_TRANSPARENT_AGGR (t), v);
+ else if (code == ARRAY_TYPE)
+ v = iterative_hash_host_wide_int (TYPE_NONALIASED_COMPONENT (t), v);
+ v = iterative_hash_host_wide_int (TYPE_PRECISION (t), v);
+ v = iterative_hash_host_wide_int (TYPE_ALIGN (t), v);
+ v = iterative_hash_host_wide_int (TYPE_ALIAS_SET (t) == 0 ? 0 : -1, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
+ v = iterative_hash (TRANSLATION_UNIT_LANGUAGE (t),
+ strlen (TRANSLATION_UNIT_LANGUAGE (t)), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
+ v = iterative_hash (t, sizeof (struct cl_target_option), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
+ v = iterative_hash (t, sizeof (struct cl_optimization), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
+ v = iterative_hash_host_wide_int (IDENTIFIER_HASH_VALUE (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_STRING))
+ v = iterative_hash (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (POINTER_TYPE_P (t))
+ {
+ /* For pointers factor in the pointed-to type recursively, as
+ otherwise we cannot distinguish pointers whose pointed-to
+ type is part of the same SCC.
+ ??? We can generalize this by keeping track of the
+ in-SCC edges for each tree (or arbitrarily the first
+ such edge) and hashing that in during a second stage
+ (instead of the quadratic mixing of the SCC we do now). */
+ hashval_t x;
+ unsigned ix;
+ if (streamer_tree_cache_lookup (cache, TREE_TYPE (t), &ix))
+ x = streamer_tree_cache_get_hash (cache, ix);
+ else
+ x = hash_tree (cache, TREE_TYPE (t));
+ v = iterative_hash_hashval_t (x, v);
+ }
+ else if (code != IDENTIFIER_NODE)
+ visit (TREE_TYPE (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (t); ++i)
+ visit (VECTOR_CST_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ visit (TREE_REALPART (t));
+ visit (TREE_IMAGPART (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ visit (DECL_NAME (t));
+ if (DECL_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (DECL_CONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ visit (DECL_SIZE (t));
+ visit (DECL_SIZE_UNIT (t));
+ visit (DECL_ATTRIBUTES (t));
+ if ((code == VAR_DECL
+ || code == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (t))
+ visit (DECL_VALUE_EXPR (t));
+ if (code == VAR_DECL
+ && DECL_HAS_DEBUG_EXPR_P (t))
+ visit (DECL_DEBUG_EXPR (t));
+ /* ??? Hash DECL_INITIAL as streamed. Needs the output-block to
+ be able to call get_symbol_initial_value. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (code == FUNCTION_DECL)
+ {
+ for (tree a = DECL_ARGUMENTS (t); a; a = DECL_CHAIN (a))
+ visit (a);
+ visit (DECL_RESULT (t));
+ }
+ else if (code == TYPE_DECL)
+ visit (DECL_ORIGINAL_TYPE (t));
+ visit (DECL_VINDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ if (DECL_ASSEMBLER_NAME_SET_P (t))
+ visit (DECL_ASSEMBLER_NAME (t));
+ visit (DECL_SECTION_NAME (t));
+ visit (DECL_COMDAT_GROUP (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ visit (DECL_FIELD_OFFSET (t));
+ visit (DECL_BIT_FIELD_TYPE (t));
+ visit (DECL_BIT_FIELD_REPRESENTATIVE (t));
+ visit (DECL_FIELD_BIT_OFFSET (t));
+ visit (DECL_FCONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ visit (DECL_FUNCTION_PERSONALITY (t));
+ visit (DECL_FUNCTION_SPECIFIC_TARGET (t));
+ visit (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ visit (TYPE_SIZE (t));
+ visit (TYPE_SIZE_UNIT (t));
+ visit (TYPE_ATTRIBUTES (t));
+ visit (TYPE_NAME (t));
+ visit (TYPE_MAIN_VARIANT (t));
+ if (TYPE_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (TYPE_CONTEXT (t));
+ visit (TYPE_STUB_DECL (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (code == ENUMERAL_TYPE)
+ visit (TYPE_VALUES (t));
+ else if (code == ARRAY_TYPE)
+ visit (TYPE_DOMAIN (t));
+ else if (RECORD_OR_UNION_TYPE_P (t))
+ for (tree f = TYPE_FIELDS (t); f; f = TREE_CHAIN (f))
+ visit (f);
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ visit (TYPE_ARG_TYPES (t));
+ if (!POINTER_TYPE_P (t))
+ visit (TYPE_MINVAL (t));
+ visit (TYPE_MAXVAL (t));
+ if (RECORD_OR_UNION_TYPE_P (t))
+ visit (TYPE_BINFO (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ visit (TREE_PURPOSE (t));
+ visit (TREE_VALUE (t));
+ visit (TREE_CHAIN (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ for (int i = 0; i < TREE_VEC_LENGTH (t); ++i)
+ visit (TREE_VEC_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ v = iterative_hash_host_wide_int (TREE_OPERAND_LENGTH (t), v);
+ for (int i = 0; i < TREE_OPERAND_LENGTH (t); ++i)
+ visit (TREE_OPERAND (t, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree b;
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t), i, b)
+ visit (b);
+ visit (BINFO_OFFSET (t));
+ visit (BINFO_VTABLE (t));
+ visit (BINFO_VPTR_FIELD (t));
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t), i, b)
+ visit (b);
+ visit (BINFO_INHERITANCE_CHAIN (t));
+ visit (BINFO_SUBVTT_INDEX (t));
+ visit (BINFO_VPTR_INDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+ v = iterative_hash_host_wide_int (CONSTRUCTOR_NELTS (t), v);
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t), i, index, value)
+ {
+ visit (index);
+ visit (value);
+ }
+ }
+
+ return v;
+
+ #undef visit
+ }
+
+ /* Compare two SCC entries by their hash value for qsorting them. */
+
+ static int
+ scc_entry_compare (const void *p1_, const void *p2_)
+ {
+ const scc_entry *p1 = (const scc_entry *) p1_;
+ const scc_entry *p2 = (const scc_entry *) p2_;
+ if (p1->hash < p2->hash)
+ return -1;
+ else if (p1->hash > p2->hash)
+ return 1;
+ return 0;
+ }
+
+ /* Return a hash value for the SCC on the SCC stack from FIRST with
+ size SIZE. */
+
+ static hashval_t
+ hash_scc (struct streamer_tree_cache_d *cache, unsigned first, unsigned size)
+ {
+ /* Compute hash values for the SCC members. */
+ for (unsigned i = 0; i < size; ++i)
+ sccstack[first+i].hash = hash_tree (cache, sccstack[first+i].t);
+
+ if (size == 1)
+ return sccstack[first].hash;
+
+ /* Sort the SCC of (tree, hash) pairs so that when we mix in
+ all members of the SCC the hash value becomes independent of
+ the order in which we visited the SCC. Disregard hashes equal to
+ the hash of the tree we mix into because we cannot guarantee
+ a stable sort for those across different TUs. */
+ qsort (&sccstack[first], size, sizeof (scc_entry), scc_entry_compare);
+ hashval_t *tem = XALLOCAVEC (hashval_t, size);
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ hashval_t orig_hash = hash;
+ unsigned j;
+ /* Skip same hashes. */
+ for (j = i + 1;
+ j < size && sccstack[first+j].hash == orig_hash; ++j)
+ ;
+ for (; j < size; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ for (j = 0; sccstack[first+j].hash != orig_hash; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ tem[i] = hash;
+ }
+ hashval_t scc_hash = 0;
+ for (unsigned i = 0; i < size; ++i)
+ {
+ sccstack[first+i].hash = tem[i];
+ scc_hash = iterative_hash_hashval_t (tem[i], scc_hash);
+ }
+ return scc_hash;
+ }
+
+ /* DFS walk EXPR and stream SCCs of tree bodies if they are not
+ already in the streamer cache. Main routine called for
+ each visit of EXPR. */
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ sccs **slot;
+
+ /* Handle special cases. */
+ if (expr == NULL_TREE)
+ return;
+
+ /* Do not DFS walk into indexable trees. */
+ if (this_ref_p && tree_is_indexable (expr))
+ return;
+
+ /* Check if we already streamed EXPR. */
+ if (streamer_tree_cache_lookup (ob->writer_cache, expr, &ix))
+ return;
+
+ slot = (sccs **)pointer_map_insert (sccstate, expr);
+ sccs *cstate = *slot;
+ if (!cstate)
+ {
+ scc_entry e = { expr, 0 };
+ /* Not yet visited. DFS recurse and push it onto the stack. */
+ *slot = cstate = XOBNEW (&sccstate_obstack, struct sccs);
+ sccstack.safe_push (e);
+ cstate->dfsnum = next_dfs_num++;
+ cstate->low = cstate->dfsnum;
+ cstate->on_sccstack = true;
+
+ if (streamer_handle_as_builtin_p (expr))
+ ;
+ else if (TREE_CODE (expr) == INTEGER_CST
+ && !TREE_OVERFLOW (expr))
+ DFS_write_tree (ob, cstate, TREE_TYPE (expr), ref_p, ref_p);
+ else
+ {
+ DFS_write_tree_body (ob, expr, cstate, ref_p);
+
+ /* Walk any LTO-specific edges. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ {
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = get_symbol_initial_value (ob, expr);
+ DFS_write_tree (ob, cstate, initial, ref_p, ref_p);
+ }
+ }
+
+ /* See if we found an SCC. */
+ if (cstate->low == cstate->dfsnum)
+ {
+ unsigned first, size;
+ tree x;
+
+ /* Pop the SCC and compute its size. */
+ first = sccstack.length ();
+ do
+ {
+ x = sccstack[--first].t;
+ sccs *s = (sccs *)*pointer_map_contains (sccstate, x);
+ s->on_sccstack = false;
+ }
+ while (x != expr);
+ size = sccstack.length () - first;
+
+ /* No need to compute hashes for LTRANS units; we don't perform
+ any merging there. */
+ hashval_t scc_hash = 0;
+ unsigned scc_entry_len = 0;
+ if (!flag_wpa)
+ {
+ scc_hash = hash_scc (ob->writer_cache, first, size);
+
+ /* Put the entries with the least number of collisions first. */
+ unsigned entry_start = 0;
+ scc_entry_len = size + 1;
+ for (unsigned i = 0; i < size;)
+ {
+ unsigned from = i;
+ for (i = i + 1; i < size
+ && (sccstack[first + i].hash
+ == sccstack[first + from].hash); ++i)
+ ;
+ if (i - from < scc_entry_len)
+ {
+ scc_entry_len = i - from;
+ entry_start = from;
+ }
+ }
+ for (unsigned i = 0; i < scc_entry_len; ++i)
+ {
+ scc_entry tem = sccstack[first + i];
+ sccstack[first + i] = sccstack[first + entry_start + i];
+ sccstack[first + entry_start + i] = tem;
+ }
+ }
+
+ /* Write LTO_tree_scc. */
+ streamer_write_record_start (ob, LTO_tree_scc);
+ streamer_write_uhwi (ob, size);
+ streamer_write_uhwi (ob, scc_hash);
+
+ /* Write size-1 SCCs without wrapping them inside SCC bundles.
+ All INTEGER_CSTs need to be handled this way as we need
+ their type to materialize them. Builtins are also handled
+ this way.
+ ??? We still wrap these in LTO_tree_scc so at the
+ input side we can properly identify the tree we want
+ to ultimately return. */
+ size_t old_len = ob->writer_cache->nodes.length ();
+ if (size == 1)
+ lto_output_tree_1 (ob, expr, scc_hash, ref_p, this_ref_p);
+ else
+ {
+ /* Write the size of the SCC entry candidates. */
+ streamer_write_uhwi (ob, scc_entry_len);
+
+ /* Write all headers and populate the streamer cache. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ tree t = sccstack[first+i].t;
+ bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
+ t, hash, &ix);
+ gcc_assert (!exists_p);
+
+ if (!lto_is_streamable (t))
+ internal_error ("tree code %qs is not supported "
+ "in LTO streams",
+ tree_code_name[TREE_CODE (t)]);
+
+ if (streamer_handle_as_builtin_p (t))
+ gcc_unreachable ();
+
+ /* Write the header, containing everything needed to
+ materialize EXPR on the reading side. */
+ streamer_write_tree_header (ob, t);
+ }
+
+ /* Write the bitpacks and tree references. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ lto_write_tree_1 (ob, sccstack[first+i].t, ref_p);
+
+ /* Mark the end of the tree. */
+ streamer_write_zero (ob);
+ }
+ }
+ gcc_assert (old_len + size == ob->writer_cache->nodes.length ());
+
+ /* Finally truncate the vector. */
+ sccstack.truncate (first);
+ }
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ if (!cstate->on_sccstack)
+ return;
+ }
+ gcc_checking_assert (from_state);
+ if (cstate->dfsnum < from_state->dfsnum
+ && cstate->on_sccstack)
+ from_state->low = MIN (cstate->dfsnum, from_state->low);
+ }
+
+ extern unsigned long num_trees_output;
+ extern unsigned long num_grefs_output;
+ extern unsigned long num_lrefs_output;
+
+ /* Emit the physical representation of tree node EXPR to output block
+ OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */
+
+ void
+ lto_output_tree (struct output_block *ob, tree expr,
+ bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ bool existed_p;
+
+ num_trees_output++;
+
+ if (expr == NULL_TREE)
+ {
+ streamer_write_record_start (ob, LTO_null);
+ return;
+ }
+
+ if (this_ref_p && tree_is_indexable (expr))
+ {
+ num_grefs_output++;
+ lto_output_tree_ref (ob, expr);
return;
}

! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
if (existed_p)
{
+ num_lrefs_output++;
/* If a node has already been streamed out, make sure that
we don't write it more than once. Otherwise, the reader
will instantiate two different nodes for the same object. */
*************** lto_output_tree (struct output_block *ob
*** 394,413 ****
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
- else if (streamer_handle_as_builtin_p (expr))
- {
- /* MD and NORMAL builtins do not need to be written out
- completely as they are always instantiated by the
- compiler on startup. The only builtins that need to
- be written out are BUILT_IN_FRONTEND. For all other
- builtins, we simply write the class and code. */
- streamer_write_builtin (ob, expr);
- }
else
{
! /* This is the first time we see EXPR, write its fields
! to OB. */
! lto_write_tree (ob, expr, ref_p);
}
}

--- 1336,1366 ----
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
else
{
! /* This is the first time we see EXPR, write all reachable
! trees to OB. */
!
! /* Start the DFS walk. */
! sccstate = pointer_map_create ();
! gcc_obstack_init (&sccstate_obstack);
! next_dfs_num = 1;
! DFS_write_tree (ob, NULL, expr, ref_p, this_ref_p);
! sccstack.release ();
! pointer_map_destroy (sccstate);
! obstack_free (&sccstate_obstack, NULL);
!
! /* Finally append a reference to the tree we were writing.
! ??? If expr ended up as a singleton we could have
! inlined it here and avoided outputting a reference. */
! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
! gcc_assert (existed_p);
! streamer_write_record_start (ob, LTO_tree_pickle_reference);
! streamer_write_uhwi (ob, ix);
! streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
! lto_tree_code_to_tag (TREE_CODE (expr)));
}
}

Index: trunk/gcc/tree-streamer-in.c
===================================================================
*** trunk.orig/gcc/tree-streamer-in.c 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/tree-streamer-in.c 2013-06-12 13:52:57.866348788 +0200
*************** lto_input_ts_list_tree_pointers (struct
*** 811,817 ****
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = streamer_read_chain (ib, data_in);
}


--- 811,817 ----
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = stream_read_tree (ib, data_in);
}


*************** streamer_read_tree_body (struct lto_inpu
*** 1021,1039 ****
}


- /* Read and INTEGER_CST node from input block IB using the per-file
- context in DATA_IN. */
-
- tree
- streamer_read_integer_cst (struct lto_input_block *ib, struct data_in *data_in)
- {
- tree type = stream_read_tree (ib, data_in);
- unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
- HOST_WIDE_INT high = streamer_read_hwi (ib);
- return build_int_cst_wide (type, low, high);
- }
-
-
/* Read an index IX from input block IB and return the tree node at
DATA_IN->FILE_DATA->GLOBALS_INDEX[IX]. */

--- 1021,1026 ----
*************** streamer_get_pickled_tree (struct lto_in
*** 1047,1053 ****
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

--- 1034,1040 ----
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

*************** streamer_get_builtin_tree (struct lto_in
*** 1091,1097 ****
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result);

return result;
}
--- 1078,1084 ----
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result, 0);

return result;
}
Index: trunk/gcc/lto/lto.c
===================================================================
*** trunk.orig/gcc/lto/lto.c 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/lto/lto.c 2013-06-12 14:39:20.630802568 +0200
*************** along with GCC; see the file COPYING3.
*** 45,50 ****
--- 45,51 ----
#include "tree-streamer.h"
#include "splay-tree.h"
#include "lto-partition.h"
+ #include "data-streamer.h"

static GTY(()) tree first_personality_decl;

*************** lto_read_in_decl_state (struct data_in *
*** 254,260 ****
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
--- 255,261 ----
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
*************** lto_read_in_decl_state (struct data_in *
*** 268,274 ****
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
--- 269,275 ----
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get_tree (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
*************** lto_read_in_decl_state (struct data_in *
*** 280,285 ****
--- 281,289 ----



+ /* ??? Old hashing and merging code follows; we keep it for
+ statistics purposes for now. */
+
/* Global type table. FIXME, it should be possible to re-use some
of the type hashing routines in tree.c (type_hash_canon, type_hash_lookup,
etc), but those assume that types were built with the various
*************** struct sccs
*** 366,398 ****
static unsigned int next_dfs_num;
static unsigned int gtc_next_dfs_num;

- /* GIMPLE type merging cache. A direct-mapped cache based on TYPE_UID. */
-
- typedef struct GTY(()) gimple_type_leader_entry_s {
- tree type;
- tree leader;
- } gimple_type_leader_entry;
-
- #define GIMPLE_TYPE_LEADER_SIZE 16381
- static GTY((length("GIMPLE_TYPE_LEADER_SIZE")))
- gimple_type_leader_entry *gimple_type_leader;
-
- /* Lookup an existing leader for T and return it or NULL_TREE, if
- there is none in the cache. */
-
- static inline tree
- gimple_lookup_type_leader (tree t)
- {
- gimple_type_leader_entry *leader;
-
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type != t)
- return NULL_TREE;
-
- return leader->leader;
- }
-
-
/* Return true if T1 and T2 have the same name. If FOR_COMPLETION_P is
true then if any type has no name return false, otherwise return
true if both types have no names. */
--- 370,375 ----
*************** gtc_visit (tree t1, tree t2,
*** 450,456 ****
struct sccs *cstate = NULL;
type_pair_t p;
void **slot;
- tree leader1, leader2;

/* Check first for the obvious case of pointer identity. */
if (t1 == t2)
--- 427,432 ----
*************** gtc_visit (tree t1, tree t2,
*** 507,521 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 483,488 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 879,885 ****
struct obstack sccstate_obstack;
type_pair_t p = NULL;
bool res;
- tree leader1, leader2;

/* Before starting to set up the SCC machinery handle simple cases. */

--- 846,851 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 938,952 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 904,909 ----
*************** gimple_type_eq (const void *p1, const vo
*** 1335,1402 ****
CONST_CAST_TREE (t2));
}

!
! /* Worker for gimple_register_type.
! Register type T in the global type table gimple_types.
! When REGISTERING_MV is false first recurse for the main variant of T. */

static tree
! gimple_register_type_1 (tree t, bool registering_mv)
{
void **slot;
- gimple_type_leader_entry *leader;
-
- /* If we registered this type before return the cached result. */
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type == t)
- return leader->leader;
-
- /* Always register the main variant first. This is important so we
- pick up the non-typedef variants as canonical, otherwise we'll end
- up taking typedef ids for structure tags during comparison.
- It also makes sure that main variants will be merged to main variants.
- As we are operating on a possibly partially fixed up type graph
- do not bother to recurse more than once, otherwise we may end up
- walking in circles.
- If we are registering a main variant it will either remain its
- own main variant or it will be merged to something else in which
- case we do not care for the main variant leader. */
- if (!registering_mv
- && TYPE_MAIN_VARIANT (t) != t)
- gimple_register_type_1 (TYPE_MAIN_VARIANT (t), true);

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! {
! tree new_type = (tree) *((tree *) slot);
! leader->type = t;
! leader->leader = new_type;
! return new_type;
! }

/* If not, insert it to the cache and the hash. */
- leader->type = t;
- leader->leader = t;
*slot = (void *) t;
return t;
}

! /* Register type T in the global type table gimple_types.
! If another type T', compatible with T, already existed in
! gimple_types then return T', otherwise return T. This is used by
! LTO to merge identical types read from different TUs. */
!
! static tree
! gimple_register_type (tree t)
! {
! gcc_assert (TYPE_P (t));
! return gimple_register_type_1 (t, false);
! }
!
! #define GIMPLE_REGISTER_TYPE(tt) \
! (TREE_VISITED (tt) ? gimple_register_type (tt) : tt)



--- 1292,1316 ----
CONST_CAST_TREE (t2));
}

! /* Register type T in the global type table gimple_types. */

static tree
! gimple_register_type (tree t)
{
void **slot;

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! return (tree) *((tree *) slot);

/* If not, insert it to the cache and the hash. */
*slot = (void *) t;
return t;
}

! /* End of old merging code. */



*************** remember_with_vars (tree t)
*** 1413,1632 ****
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define LTO_FIXUP_TREE(tt) \
do \
{ \
if (tt) \
{ \
- if (TYPE_P (tt)) \
- (tt) = GIMPLE_REGISTER_TYPE (tt); \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
- if (TREE_CODE (tt) == INTEGER_CST) \
- (tt) = fixup_integer_cst (tt); \
} \
} while (0)

- static void lto_fixup_types (tree);
-
- /* Return integer_cst T with updated type. */
-
- static tree
- fixup_integer_cst (tree t)
- {
- tree type = GIMPLE_REGISTER_TYPE (TREE_TYPE (t));
-
- if (type == TREE_TYPE (t))
- return t;
-
- /* If overflow was set, streamer_read_integer_cst
- produced local copy of T. */
- if (TREE_OVERFLOW (t))
- {
- TREE_TYPE (t) = type;
- return t;
- }
- else
- /* Otherwise produce new shared node for the new type. */
- return build_int_cst_wide (type, TREE_INT_CST_LOW (t),
- TREE_INT_CST_HIGH (t));
- }
-
/* Fix up fields of a tree_typed T. */

static void
! lto_ft_typed (tree t)
{
! LTO_FIXUP_TREE (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! lto_ft_common (tree t)
{
! lto_ft_typed (t);
! LTO_FIXUP_TREE (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! lto_ft_decl_minimal (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (DECL_NAME (t));
! LTO_FIXUP_TREE (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! lto_ft_decl_common (tree t)
{
! lto_ft_decl_minimal (t);
! LTO_FIXUP_TREE (DECL_SIZE (t));
! LTO_FIXUP_TREE (DECL_SIZE_UNIT (t));
! LTO_FIXUP_TREE (DECL_INITIAL (t));
! LTO_FIXUP_TREE (DECL_ATTRIBUTES (t));
! LTO_FIXUP_TREE (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! lto_ft_decl_with_vis (tree t)
{
! lto_ft_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! LTO_FIXUP_TREE (t->decl_with_vis.assembler_name);
! LTO_FIXUP_TREE (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_decl_non_common (tree t)
{
! lto_ft_decl_with_vis (t);
! LTO_FIXUP_TREE (DECL_ARGUMENT_FLD (t));
! LTO_FIXUP_TREE (DECL_RESULT_FLD (t));
! LTO_FIXUP_TREE (DECL_VINDEX (t));
! /* The C frontends may create exact duplicates for DECL_ORIGINAL_TYPE
! like for 'typedef enum foo foo'. We have no way of avoiding to
! merge them and dwarf2out.c cannot deal with this,
! so fix this up by clearing DECL_ORIGINAL_TYPE in this case. */
! if (TREE_CODE (t) == TYPE_DECL
! && DECL_ORIGINAL_TYPE (t) == TREE_TYPE (t))
! DECL_ORIGINAL_TYPE (t) = NULL_TREE;
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_function (tree t)
{
! lto_ft_decl_non_common (t);
! LTO_FIXUP_TREE (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! lto_ft_field_decl (tree t)
{
! lto_ft_decl_common (t);
! LTO_FIXUP_TREE (DECL_FIELD_OFFSET (t));
! LTO_FIXUP_TREE (DECL_BIT_FIELD_TYPE (t));
! LTO_FIXUP_TREE (DECL_QUALIFIER (t));
! LTO_FIXUP_TREE (DECL_FIELD_BIT_OFFSET (t));
! LTO_FIXUP_TREE (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! lto_ft_type (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (TYPE_CACHED_VALUES (t));
! LTO_FIXUP_TREE (TYPE_SIZE (t));
! LTO_FIXUP_TREE (TYPE_SIZE_UNIT (t));
! LTO_FIXUP_TREE (TYPE_ATTRIBUTES (t));
! LTO_FIXUP_TREE (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! LTO_FIXUP_TREE (TYPE_MINVAL (t));
! LTO_FIXUP_TREE (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! LTO_FIXUP_TREE (t->type_non_common.binfo);

! LTO_FIXUP_TREE (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! lto_ft_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;
- tree base, saved_base;

! lto_ft_common (t);
! LTO_FIXUP_TREE (BINFO_VTABLE (t));
! LTO_FIXUP_TREE (BINFO_OFFSET (t));
! LTO_FIXUP_TREE (BINFO_VIRTUALS (t));
! LTO_FIXUP_TREE (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_ACCESS (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_ACCESSES (t))[i] = base;
! }
! LTO_FIXUP_TREE (BINFO_INHERITANCE_CHAIN (t));
! LTO_FIXUP_TREE (BINFO_SUBVTT_INDEX (t));
! LTO_FIXUP_TREE (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_BINFO (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_BINFOS (t))[i] = base;
! }
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! lto_ft_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! lto_ft_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! LTO_FIXUP_TREE (ce->index);
! LTO_FIXUP_TREE (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! lto_ft_expr (tree t)
{
int i;
! lto_ft_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! LTO_FIXUP_TREE (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
--- 1327,1499 ----
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define MAYBE_REMEMBER_WITH_VARS(tt) \
do \
{ \
if (tt) \
{ \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
} \
} while (0)

/* Fix up fields of a tree_typed T. */

static void
! maybe_remember_with_vars_typed (tree t)
{
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! maybe_remember_with_vars_common (tree t)
{
! maybe_remember_with_vars_typed (t);
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! maybe_remember_with_vars_decl_minimal (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_NAME (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! maybe_remember_with_vars_decl_common (tree t)
{
! maybe_remember_with_vars_decl_minimal (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_INITIAL (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! maybe_remember_with_vars_decl_with_vis (tree t)
{
! maybe_remember_with_vars_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! MAYBE_REMEMBER_WITH_VARS (t->decl_with_vis.assembler_name);
! MAYBE_REMEMBER_WITH_VARS (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! maybe_remember_with_vars_decl_non_common (tree t)
{
! maybe_remember_with_vars_decl_with_vis (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_ARGUMENT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_RESULT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_VINDEX (t));
}

/* Fix up fields of a function decl T. */

static void
! maybe_remember_with_vars_function (tree t)
{
! maybe_remember_with_vars_decl_non_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! maybe_remember_with_vars_field_decl (tree t)
{
! maybe_remember_with_vars_decl_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_BIT_FIELD_TYPE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_QUALIFIER (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_BIT_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! maybe_remember_with_vars_type (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (TYPE_CACHED_VALUES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! MAYBE_REMEMBER_WITH_VARS (TYPE_MINVAL (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! MAYBE_REMEMBER_WITH_VARS (t->type_non_common.binfo);

! MAYBE_REMEMBER_WITH_VARS (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! maybe_remember_with_vars_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;

! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (BINFO_VTABLE (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VIRTUALS (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_ACCESS (t, i));
! MAYBE_REMEMBER_WITH_VARS (BINFO_INHERITANCE_CHAIN (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_SUBVTT_INDEX (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_BINFO (t, i));
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! maybe_remember_with_vars_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! maybe_remember_with_vars_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! MAYBE_REMEMBER_WITH_VARS (ce->index);
! MAYBE_REMEMBER_WITH_VARS (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! maybe_remember_with_vars_expr (tree t)
{
int i;
! maybe_remember_with_vars_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! MAYBE_REMEMBER_WITH_VARS (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
*************** lto_ft_expr (tree t)
*** 1635,1641 ****
for instance integer or string constants. */

static void
! lto_fixup_types (tree t)
{
switch (TREE_CODE (t))
{
--- 1502,1508 ----
for instance integer or string constants. */

static void
! maybe_remember_with_vars (tree t)
{
switch (TREE_CODE (t))
{
*************** lto_fixup_types (tree t)
*** 1643,1655 ****
break;

case TREE_LIST:
! LTO_FIXUP_TREE (TREE_VALUE (t));
! LTO_FIXUP_TREE (TREE_PURPOSE (t));
! LTO_FIXUP_TREE (TREE_CHAIN (t));
break;

case FIELD_DECL:
! lto_ft_field_decl (t);
break;

case LABEL_DECL:
--- 1510,1522 ----
break;

case TREE_LIST:
! MAYBE_REMEMBER_WITH_VARS (TREE_VALUE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_PURPOSE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
break;

case FIELD_DECL:
! maybe_remember_with_vars_field_decl (t);
break;

case LABEL_DECL:
*************** lto_fixup_types (tree t)
*** 1657,1683 ****
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! lto_ft_decl_common (t);
break;

case VAR_DECL:
! lto_ft_decl_with_vis (t);
break;

case TYPE_DECL:
! lto_ft_decl_non_common (t);
break;

case FUNCTION_DECL:
! lto_ft_function (t);
break;

case TREE_BINFO:
! lto_ft_binfo (t);
break;

case PLACEHOLDER_EXPR:
! lto_ft_common (t);
break;

case BLOCK:
--- 1524,1550 ----
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! maybe_remember_with_vars_decl_common (t);
break;

case VAR_DECL:
! maybe_remember_with_vars_decl_with_vis (t);
break;

case TYPE_DECL:
! maybe_remember_with_vars_decl_non_common (t);
break;

case FUNCTION_DECL:
! maybe_remember_with_vars_function (t);
break;

case TREE_BINFO:
! maybe_remember_with_vars_binfo (t);
break;

case PLACEHOLDER_EXPR:
! maybe_remember_with_vars_common (t);
break;

case BLOCK:
*************** lto_fixup_types (tree t)
*** 1686,1706 ****
case TARGET_OPTION_NODE:
break;

default:
if (TYPE_P (t))
! lto_ft_type (t);
! else if (TREE_CODE (t) == CONSTRUCTOR)
! lto_ft_constructor (t);
else if (CONSTANT_CLASS_P (t))
! LTO_FIXUP_TREE (TREE_TYPE (t));
else if (EXPR_P (t))
! {
! lto_ft_expr (t);
! }
else
! {
! remember_with_vars (t);
! }
}
}

--- 1553,1571 ----
case TARGET_OPTION_NODE:
break;

+ case CONSTRUCTOR:
+ maybe_remember_with_vars_constructor (t);
+ break;
+
default:
if (TYPE_P (t))
! maybe_remember_with_vars_type (t);
else if (CONSTANT_CLASS_P (t))
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
else if (EXPR_P (t))
! maybe_remember_with_vars_expr (t);
else
! remember_with_vars (t);
}
}

*************** lto_register_function_decl_in_symtab (st
*** 1786,2006 ****
}
}

- static unsigned long num_merged_types = 0;

! /* Given a streamer cache structure DATA_IN (holding a sequence of trees
! for one compilation unit) go over all trees starting at index FROM until the
! end of the sequence and replace fields of those trees, and the trees
! themself with their canonical variants as per gimple_register_type. */

static void
! uniquify_nodes (struct data_in *data_in, unsigned from)
{
! struct streamer_tree_cache_d *cache = data_in->reader_cache;
! unsigned len = cache->nodes.length ();
! unsigned i;

! /* Go backwards because children streamed for the first time come
! as part of their parents, and hence are created after them. */

! /* First register all the types in the cache. This makes sure to
! have the original structure in the type cycles when registering
! them and computing hashes. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t && TYPE_P (t))
! {
! tree newt = gimple_register_type (t);
! /* Mark non-prevailing types so we fix them up. No need
! to reset that flag afterwards - nothing that refers
! to those types is left and they are collected. */
! if (newt != t)
! {
! num_merged_types++;
! TREE_VISITED (t) = 1;
! }
}
}

! /* Second fixup all trees in the new cache entries. */
! for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
! tree oldt = t;
! if (!t)
! continue;
!
! /* First fixup the fields of T. */
! lto_fixup_types (t);
!
! if (!TYPE_P (t))
! continue;
!
! /* Now try to find a canonical variant of T itself. */
! t = GIMPLE_REGISTER_TYPE (t);
!
! if (t == oldt)
! {
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */
! tree tem, mv;
!
! #ifdef ENABLE_CHECKING
! /* Remove us from our main variant list if we are not the
! variant leader. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tem = TYPE_MAIN_VARIANT (t);
! while (tem && TYPE_NEXT_VARIANT (tem) != t)
! tem = TYPE_NEXT_VARIANT (tem);
! gcc_assert (!tem && !TYPE_NEXT_VARIANT (t));
! }
! #endif

! /* Query our new main variant. */
! mv = GIMPLE_REGISTER_TYPE (TYPE_MAIN_VARIANT (t));

! /* If we were the variant leader and we get replaced ourselves drop
! all variants from our list. */
! if (TYPE_MAIN_VARIANT (t) == t
! && mv != t)
! {
! tem = t;
! while (tem)
! {
! tree tem2 = TYPE_NEXT_VARIANT (tem);
! TYPE_NEXT_VARIANT (tem) = NULL_TREE;
! tem = tem2;
! }
! }

! /* If we are not our own variant leader link us into our new leaders
! variant list. */
! if (mv != t)
! {
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! if (RECORD_OR_UNION_TYPE_P (t))
! TYPE_BINFO (t) = TYPE_BINFO (mv);
! /* Preserve the invariant that type variants share their
! TYPE_FIELDS. */
! if (RECORD_OR_UNION_TYPE_P (t)
! && TYPE_FIELDS (mv) != TYPE_FIELDS (t))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (mv), f2 = TYPE_FIELDS (t);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2
! && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! /* ??? Not sure the above is all relevant in this
! path canonicalizing TYPE_FIELDS to that of the
! main variant. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! TYPE_FIELDS (t) = TYPE_FIELDS (mv);
! }
! }

! /* Finally adjust our main variant and fix it up. */
! TYPE_MAIN_VARIANT (t) = mv;

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
}

! else
{
! if (RECORD_OR_UNION_TYPE_P (t))
! {
! tree f1, f2;
! if (TYPE_FIELDS (t) != TYPE_FIELDS (oldt))
! for (f1 = TYPE_FIELDS (t), f2 = TYPE_FIELDS (oldt);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2 && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! }

! /* If we found a tree that is equal to oldt replace it in the
! cache, so that further users (in the various LTO sections)
! make use of it. */
! streamer_tree_cache_insert_at (cache, t, i);
}
}

! /* Finally compute the canonical type of all TREE_TYPEs and register
! VAR_DECL and FUNCTION_DECL nodes in the symbol table.
! From this point there are no longer any types with
! TYPE_STRUCTURAL_EQUALITY_P and its type-based alias problems.
! This step requires the TYPE_POINTER_TO lists being present, so
! make sure it is done last. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t == NULL_TREE)
! continue;
!
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! else if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! else if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
}
}


--- 1651,2376 ----
}
}


! /* Re-materialize the type T in its type variant list and in the
!    pointer/reference-to chains. */

static void
! lto_fixup_prevailing_type (tree t)
{
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */

! /* If we are not our own variant leader, link us into our new leader's
!    variant list. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tree mv = TYPE_MAIN_VARIANT (t);
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! }

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
! }
!
!
! /* We keep prevailing tree SCCs in a hashtable with manual collision
! handling (in case all hashes compare the same) and keep the colliding
! entries in the tree_scc->next chain. */
!
! struct tree_scc
! {
! tree_scc *next;
! /* Hash of the whole SCC. */
! hashval_t hash;
! /* Number of trees in the SCC. */
! unsigned len;
! /* Number of possible entries into the SCC (tree nodes [0..entry_len-1]
! which share the same individual tree hash). */
! unsigned entry_len;
! /* The members of the SCC.
! We only need to remember the first entry node candidate for prevailing
! SCCs (but of course have access to all entries for SCCs we are
! processing).
! ??? For prevailing SCCs we really only need hash and the first
! entry candidate, but that's too awkward to implement. */
! tree entries[1];
! };
!
! struct tree_scc_hasher : typed_noop_remove <tree_scc>
! {
! typedef tree_scc value_type;
! typedef tree_scc compare_type;
! static inline hashval_t hash (const value_type *);
! static inline bool equal (const value_type *, const compare_type *);
! };
!
! hashval_t
! tree_scc_hasher::hash (const value_type *scc)
! {
! return scc->hash;
! }
!
! bool
! tree_scc_hasher::equal (const value_type *scc1, const compare_type *scc2)
! {
! if (scc1->hash != scc2->hash
! || scc1->len != scc2->len
! || scc1->entry_len != scc2->entry_len)
! return false;
! return true;
! }
!
! static hash_table <tree_scc_hasher> tree_scc_hash;
! static struct obstack tree_scc_hash_obstack;
!
! static unsigned long num_merged_types;
! static unsigned long num_prevailing_types;
! static unsigned long num_not_merged_types;
! static unsigned long num_not_merged_types_in_same_scc;
! static unsigned long total_scc_size;
! static unsigned long num_sccs_read;
! static unsigned long total_scc_size_merged;
! static unsigned long num_sccs_merged;
! static unsigned long num_scc_compares;
! static unsigned long num_scc_compare_collisions;
!
!
! /* Compare the two entries T1 and T2 of two SCCs that are possibly equal,
! recursing through in-SCC tree edges. Returns true if the SCCs entered
! through T1 and T2 are equal and fills in *MAP with the pairs of
! SCC entries we visited, starting with (*MAP)[0] = T1 and (*MAP)[1] = T2. */
!
! static bool
! compare_tree_sccs_1 (tree t1, tree t2, tree **map)
! {
! enum tree_code code;
!
! /* Handle combinations where at least one of t1 or t2 is NULL_TREE. */
! if (!t1 && !t2)
! return true;
! if (!t1 || !t2)
! return false;
!
!   /* Outside of the SCC, trees have to compare pointer equal. */
! if (!TREE_VISITED (t2))
! return t1 == t2;
!
!   /* Inside of the SCC, trees cannot be pointer equal. */
! gcc_assert (t1 != t2);
!
!   /* If we have already visited t2, we reached a cycle.  Optimistically
! return equal. */
! if (TREE_ASM_WRITTEN (t2))
! return true;
!
! /* Mark already visited nodes. */
! TREE_ASM_WRITTEN (t2) = 1;
!
! /* Push the pair onto map. */
! (*map)[0] = t1;
! (*map)[1] = t2;
! *map = *map + 2;
!
! /* Compare value-fields. */
! #define compare_values(X) \
! do { \
! if (X(t1) != X(t2)) \
! return false; \
! } while (0)
!
! compare_values (TREE_CODE);
! code = TREE_CODE (t1);
!
! if (!TYPE_P (t1))
! {
! compare_values (TREE_SIDE_EFFECTS);
! compare_values (TREE_CONSTANT);
! compare_values (TREE_READONLY);
! compare_values (TREE_PUBLIC);
! }
! compare_values (TREE_ADDRESSABLE);
! compare_values (TREE_THIS_VOLATILE);
! if (DECL_P (t1))
! compare_values (DECL_UNSIGNED);
! else if (TYPE_P (t1))
! compare_values (TYPE_UNSIGNED);
! if (TYPE_P (t1))
! compare_values (TYPE_ARTIFICIAL);
! else
! compare_values (TREE_NO_WARNING);
! compare_values (TREE_USED);
! compare_values (TREE_NOTHROW);
! compare_values (TREE_STATIC);
! compare_values (TREE_PRIVATE);
! compare_values (TREE_PROTECTED);
! compare_values (TREE_DEPRECATED);
! if (TYPE_P (t1))
! {
! compare_values (TYPE_SATURATING);
! compare_values (TYPE_ADDR_SPACE);
! }
! else if (code == SSA_NAME)
! compare_values (SSA_NAME_IS_DEFAULT_DEF);
!
! if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
! {
! compare_values (TREE_INT_CST_LOW);
! compare_values (TREE_INT_CST_HIGH);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
! {
! /* ??? No suitable compare routine available. */
! REAL_VALUE_TYPE r1 = TREE_REAL_CST (t1);
! REAL_VALUE_TYPE r2 = TREE_REAL_CST (t2);
! if (r1.cl != r2.cl
! || r1.decimal != r2.decimal
! || r1.sign != r2.sign
! || r1.signalling != r2.signalling
! || r1.canonical != r2.canonical
! || r1.uexp != r2.uexp)
! return false;
! for (unsigned i = 0; i < SIGSZ; ++i)
! if (r1.sig[i] != r2.sig[i])
! return false;
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
! if (!fixed_compare (EQ_EXPR,
! TREE_FIXED_CST_PTR (t1), TREE_FIXED_CST_PTR (t2)))
! return false;
!
!
!   /* We don't want to compare locations, so there is nothing to compare
! for TS_DECL_MINIMAL. */
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_values (DECL_MODE);
! compare_values (DECL_NONLOCAL);
! compare_values (DECL_VIRTUAL_P);
! compare_values (DECL_IGNORED_P);
! compare_values (DECL_ABSTRACT);
! compare_values (DECL_ARTIFICIAL);
! compare_values (DECL_USER_ALIGN);
! compare_values (DECL_PRESERVE_P);
! compare_values (DECL_EXTERNAL);
! compare_values (DECL_GIMPLE_REG_P);
! compare_values (DECL_ALIGN);
! if (code == LABEL_DECL)
! {
! compare_values (DECL_ERROR_ISSUED);
! compare_values (EH_LANDING_PAD_NR);
! compare_values (LABEL_DECL_UID);
! }
! else if (code == FIELD_DECL)
! {
! compare_values (DECL_PACKED);
! compare_values (DECL_NONADDRESSABLE_P);
! compare_values (DECL_OFFSET_ALIGN);
! }
! else if (code == VAR_DECL)
! {
! compare_values (DECL_HAS_DEBUG_EXPR_P);
! compare_values (DECL_NONLOCAL_FRAME);
! }
! if (code == RESULT_DECL
! || code == PARM_DECL
! || code == VAR_DECL)
! {
! compare_values (DECL_BY_REFERENCE);
! if (code == VAR_DECL
! || code == PARM_DECL)
! compare_values (DECL_HAS_VALUE_EXPR_P);
! }
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
! compare_values (DECL_REGISTER);
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! compare_values (DECL_DEFER_OUTPUT);
! compare_values (DECL_COMMON);
! compare_values (DECL_DLLIMPORT_P);
! compare_values (DECL_WEAK);
! compare_values (DECL_SEEN_IN_BIND_EXPR_P);
! compare_values (DECL_COMDAT);
! compare_values (DECL_VISIBILITY);
! compare_values (DECL_VISIBILITY_SPECIFIED);
! if (code == VAR_DECL)
! {
! compare_values (DECL_HARD_REGISTER);
! compare_values (DECL_IN_TEXT_SECTION);
! compare_values (DECL_IN_CONSTANT_POOL);
! compare_values (DECL_TLS_MODEL);
! }
! if (VAR_OR_FUNCTION_DECL_P (t1))
! compare_values (DECL_INIT_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_values (DECL_BUILT_IN_CLASS);
! compare_values (DECL_STATIC_CONSTRUCTOR);
! compare_values (DECL_STATIC_DESTRUCTOR);
! compare_values (DECL_UNINLINABLE);
! compare_values (DECL_POSSIBLY_INLINED);
! compare_values (DECL_IS_NOVOPS);
! compare_values (DECL_IS_RETURNS_TWICE);
! compare_values (DECL_IS_MALLOC);
! compare_values (DECL_IS_OPERATOR_NEW);
! compare_values (DECL_DECLARED_INLINE_P);
! compare_values (DECL_STATIC_CHAIN);
! compare_values (DECL_NO_INLINE_WARNING_P);
! compare_values (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT);
! compare_values (DECL_NO_LIMIT_STACK);
! compare_values (DECL_DISREGARD_INLINE_LIMITS);
! compare_values (DECL_PURE_P);
! compare_values (DECL_LOOPING_CONST_OR_PURE_P);
! if (DECL_BUILT_IN_CLASS (t1) != NOT_BUILT_IN)
! compare_values (DECL_FUNCTION_CODE);
! if (DECL_STATIC_DESTRUCTOR (t1))
! compare_values (DECL_FINI_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_values (TYPE_MODE);
! compare_values (TYPE_STRING_FLAG);
! compare_values (TYPE_NO_FORCE_BLK);
! compare_values (TYPE_NEEDS_CONSTRUCTING);
! if (RECORD_OR_UNION_TYPE_P (t1))
! compare_values (TYPE_TRANSPARENT_AGGR);
! else if (code == ARRAY_TYPE)
! compare_values (TYPE_NONALIASED_COMPONENT);
! compare_values (TYPE_PACKED);
! compare_values (TYPE_RESTRICT);
! compare_values (TYPE_CONTAINS_PLACEHOLDER_INTERNAL);
! compare_values (TYPE_USER_ALIGN);
! compare_values (TYPE_READONLY);
! compare_values (TYPE_PRECISION);
! compare_values (TYPE_ALIGN);
! compare_values (TYPE_ALIAS_SET);
! }
!
!   /* We don't want to compare locations, so there is nothing to compare
! for TS_EXP. */
!
! /* BLOCKs are function local and we don't merge anything there, so
! simply refuse to merge. */
! if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
! if (strcmp (TRANSLATION_UNIT_LANGUAGE (t1),
! TRANSLATION_UNIT_LANGUAGE (t2)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
! if (memcmp (TREE_TARGET_OPTION (t1), TREE_TARGET_OPTION (t2),
! sizeof (struct cl_target_option)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
! if (memcmp (TREE_OPTIMIZATION (t1), TREE_OPTIMIZATION (t2),
! sizeof (struct cl_optimization)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! if (vec_safe_length (BINFO_BASE_ACCESSES (t1))
! != vec_safe_length (BINFO_BASE_ACCESSES (t2)))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! compare_values (CONSTRUCTOR_NELTS);
!
! if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
! if (IDENTIFIER_LENGTH (t1) != IDENTIFIER_LENGTH (t2)
! || memcmp (IDENTIFIER_POINTER (t1), IDENTIFIER_POINTER (t2),
! IDENTIFIER_LENGTH (t1)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_STRING))
! if (TREE_STRING_LENGTH (t1) != TREE_STRING_LENGTH (t2)
! || memcmp (TREE_STRING_POINTER (t1), TREE_STRING_POINTER (t2),
! TREE_STRING_LENGTH (t1)) != 0)
! return false;
!
! #undef compare_values
!
!
! /* Compare pointer fields. */
!
! /* Recurse. Search & Replaced from DFS_write_tree_body. */
! #define compare_tree_edges(E1, E2) \
! do { \
! if (!compare_tree_sccs_1 ((E1), (E2), map)) \
! return false; \
! } while (0)
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
! {
! if (code != IDENTIFIER_NODE)
! compare_tree_edges (TREE_TYPE (t1), TREE_TYPE (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
! {
! unsigned i;
! /* Note that the number of elements for EXPR has already been emitted
! in EXPR's header (see streamer_write_tree_header). */
! for (i = 0; i < VECTOR_CST_NELTS (t1); ++i)
! compare_tree_edges (VECTOR_CST_ELT (t1, i), VECTOR_CST_ELT (t2, i));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
! {
! compare_tree_edges (TREE_REALPART (t1), TREE_REALPART (t2));
! compare_tree_edges (TREE_IMAGPART (t1), TREE_IMAGPART (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
! {
! compare_tree_edges (DECL_NAME (t1), DECL_NAME (t2));
! /* ??? Global decls from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Only consider a small set of
! decls equivalent, we should not end up merging others. */
! if ((code == TYPE_DECL
! || code == NAMESPACE_DECL
! || code == IMPORTED_DECL
! || code == CONST_DECL
! || ((code == VAR_DECL || code == FUNCTION_DECL)
! && (TREE_PUBLIC (t1) || DECL_EXTERNAL (t1))))
! && DECL_FILE_SCOPE_P (t1) && DECL_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (DECL_CONTEXT (t1), DECL_CONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_tree_edges (DECL_SIZE (t1), DECL_SIZE (t2));
! compare_tree_edges (DECL_SIZE_UNIT (t1), DECL_SIZE_UNIT (t2));
! compare_tree_edges (DECL_ATTRIBUTES (t1), DECL_ATTRIBUTES (t2));
! if ((code == VAR_DECL
! || code == PARM_DECL)
! && DECL_HAS_VALUE_EXPR_P (t1))
! compare_tree_edges (DECL_VALUE_EXPR (t1), DECL_VALUE_EXPR (t2));
! if (code == VAR_DECL
! && DECL_HAS_DEBUG_EXPR_P (t1))
! compare_tree_edges (DECL_DEBUG_EXPR (t1), DECL_DEBUG_EXPR (t2));
! /* LTO specific edges. */
! if (code != FUNCTION_DECL
! && code != TRANSLATION_UNIT_DECL)
! compare_tree_edges (DECL_INITIAL (t1), DECL_INITIAL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
! {
! if (code == FUNCTION_DECL)
! {
! tree a1, a2;
! for (a1 = DECL_ARGUMENTS (t1), a2 = DECL_ARGUMENTS (t2);
! a1 || a2;
! a1 = TREE_CHAIN (a1), a2 = TREE_CHAIN (a2))
! compare_tree_edges (a1, a2);
! compare_tree_edges (DECL_RESULT (t1), DECL_RESULT (t2));
! }
! else if (code == TYPE_DECL)
! compare_tree_edges (DECL_ORIGINAL_TYPE (t1), DECL_ORIGINAL_TYPE (t2));
! compare_tree_edges (DECL_VINDEX (t1), DECL_VINDEX (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! /* Make sure we don't inadvertently set the assembler name. */
! if (DECL_ASSEMBLER_NAME_SET_P (t1))
! compare_tree_edges (DECL_ASSEMBLER_NAME (t1),
! DECL_ASSEMBLER_NAME (t2));
! compare_tree_edges (DECL_SECTION_NAME (t1), DECL_SECTION_NAME (t2));
! compare_tree_edges (DECL_COMDAT_GROUP (t1), DECL_COMDAT_GROUP (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
! {
! compare_tree_edges (DECL_FIELD_OFFSET (t1), DECL_FIELD_OFFSET (t2));
! compare_tree_edges (DECL_BIT_FIELD_TYPE (t1), DECL_BIT_FIELD_TYPE (t2));
! compare_tree_edges (DECL_BIT_FIELD_REPRESENTATIVE (t1),
! DECL_BIT_FIELD_REPRESENTATIVE (t2));
! compare_tree_edges (DECL_FIELD_BIT_OFFSET (t1),
! DECL_FIELD_BIT_OFFSET (t2));
! compare_tree_edges (DECL_FCONTEXT (t1), DECL_FCONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_tree_edges (DECL_FUNCTION_PERSONALITY (t1),
! DECL_FUNCTION_PERSONALITY (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_TARGET (t1),
! DECL_FUNCTION_SPECIFIC_TARGET (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t1),
! DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_tree_edges (TYPE_SIZE (t1), TYPE_SIZE (t2));
! compare_tree_edges (TYPE_SIZE_UNIT (t1), TYPE_SIZE_UNIT (t2));
! compare_tree_edges (TYPE_ATTRIBUTES (t1), TYPE_ATTRIBUTES (t2));
! compare_tree_edges (TYPE_NAME (t1), TYPE_NAME (t2));
! /* Do not compare TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
! reconstructed during fixup. */
! /* Do not compare TYPE_NEXT_VARIANT, we reconstruct the variant lists
! during fixup. */
! compare_tree_edges (TYPE_MAIN_VARIANT (t1), TYPE_MAIN_VARIANT (t2));
! /* ??? Global types from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Still merge them if they are otherwise
! equal. */
! if (TYPE_FILE_SCOPE_P (t1) && TYPE_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (TYPE_CONTEXT (t1), TYPE_CONTEXT (t2));
! /* TYPE_CANONICAL is re-computed during type merging, so do not
! compare it here. */
! compare_tree_edges (TYPE_STUB_DECL (t1), TYPE_STUB_DECL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
! {
! if (code == ENUMERAL_TYPE)
! compare_tree_edges (TYPE_VALUES (t1), TYPE_VALUES (t2));
! else if (code == ARRAY_TYPE)
! compare_tree_edges (TYPE_DOMAIN (t1), TYPE_DOMAIN (t2));
! else if (RECORD_OR_UNION_TYPE_P (t1))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (t1), f2 = TYPE_FIELDS (t2);
! f1 || f2;
! f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! compare_tree_edges (f1, f2);
! compare_tree_edges (TYPE_BINFO (t1), TYPE_BINFO (t2));
}
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ compare_tree_edges (TYPE_ARG_TYPES (t1), TYPE_ARG_TYPES (t2));
+ if (!POINTER_TYPE_P (t1))
+ compare_tree_edges (TYPE_MINVAL (t1), TYPE_MINVAL (t2));
+ compare_tree_edges (TYPE_MAXVAL (t1), TYPE_MAXVAL (t2));
}

! if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
! compare_tree_edges (TREE_PURPOSE (t1), TREE_PURPOSE (t2));
! compare_tree_edges (TREE_VALUE (t1), TREE_VALUE (t2));
! compare_tree_edges (TREE_CHAIN (t1), TREE_CHAIN (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_VEC))
! for (int i = 0; i < TREE_VEC_LENGTH (t1); i++)
! compare_tree_edges (TREE_VEC_ELT (t1, i), TREE_VEC_ELT (t2, i));

! if (CODE_CONTAINS_STRUCT (code, TS_EXP))
! {
! for (int i = 0; i < TREE_OPERAND_LENGTH (t1); i++)
! compare_tree_edges (TREE_OPERAND (t1, i),
! TREE_OPERAND (t2, i));

! /* BLOCKs are function local and we don't merge anything there. */
! if (TREE_BLOCK (t1) || TREE_BLOCK (t2))
! return false;
! }

! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! {
! unsigned i;
! tree t;
! /* Lengths have already been compared above. */
! FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_BINFO (t2, i));
! FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_ACCESS (t2, i));
! compare_tree_edges (BINFO_OFFSET (t1), BINFO_OFFSET (t2));
! compare_tree_edges (BINFO_VTABLE (t1), BINFO_VTABLE (t2));
! compare_tree_edges (BINFO_VPTR_FIELD (t1), BINFO_VPTR_FIELD (t2));
! compare_tree_edges (BINFO_INHERITANCE_CHAIN (t1),
! BINFO_INHERITANCE_CHAIN (t2));
! compare_tree_edges (BINFO_SUBVTT_INDEX (t1),
! BINFO_SUBVTT_INDEX (t2));
! compare_tree_edges (BINFO_VPTR_INDEX (t1),
! BINFO_VPTR_INDEX (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! {
! unsigned i;
! tree index, value;
! /* Lengths have already been compared above. */
! FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t1), i, index, value)
! {
! compare_tree_edges (index, CONSTRUCTOR_ELT (t2, i)->index);
! compare_tree_edges (value, CONSTRUCTOR_ELT (t2, i)->value);
}
+ }

! #undef compare_tree_edges
!
! return true;
! }
!
! /* Compare the tree scc SCC to the prevailing candidate PSCC, filling
! out MAP if they are equal. */
!
! static bool
! compare_tree_sccs (tree_scc *pscc, tree_scc *scc,
! tree *map)
! {
! /* Assume SCC entry hashes are sorted after their cardinality. Which
! means we can simply take the first n-tuple of equal hashes
! (which is recorded as entry_len) and do n SCC entry candidate
! comparisons. */
! for (unsigned i = 0; i < pscc->entry_len; ++i)
! {
! tree *mapp = map;
! num_scc_compare_collisions++;
! if (compare_tree_sccs_1 (pscc->entries[0], scc->entries[i], &mapp))
{
! /* Equal - no need to reset TREE_VISITED or TREE_ASM_WRITTEN
! on the scc as all trees will be freed. */
! return true;
! }
! /* Reset TREE_ASM_WRITTEN on scc for the next compare or in case
! the SCC prevails. */
! for (unsigned j = 0; j < scc->len; ++j)
! TREE_ASM_WRITTEN (scc->entries[j]) = 0;
! }
!
! return false;
! }
!
! /* Try to unify the SCC with nodes FROM to FROM + LEN in CACHE and
! hash value SCC_HASH with an already recorded SCC. Return true if
! that was successful, otherwise return false. */

! static bool
! unify_scc (struct streamer_tree_cache_d *cache, unsigned from,
! unsigned len, unsigned scc_entry_len, hashval_t scc_hash)
! {
! bool unified_p = false;
! tree_scc *scc
! = (tree_scc *) alloca (sizeof (tree_scc) + (len - 1) * sizeof (tree));
! scc->next = NULL;
! scc->hash = scc_hash;
! scc->len = len;
! scc->entry_len = scc_entry_len;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (cache, from + i);
! scc->entries[i] = t;
! /* Do not merge SCCs with local entities inside them. Also do
! not merge TRANSLATION_UNIT_DECLs. */
! if (TREE_CODE (t) == TRANSLATION_UNIT_DECL
! || ((TREE_CODE (t) == VAR_DECL || TREE_CODE (t) == FUNCTION_DECL)
! && !(TREE_PUBLIC (t) || DECL_EXTERNAL (t)))
! || TREE_CODE (t) == LABEL_DECL)
! {
! /* Avoid doing any work for these cases and do not worry to
! record the SCCs for further merging. */
! return false;
}
}

! /* Look for the list of candidate SCCs to compare against. */
! tree_scc **slot;
! slot = tree_scc_hash.find_slot_with_hash (scc, scc_hash, INSERT);
! if (*slot)
! {
! /* Try unifying against each candidate. */
! num_scc_compares++;
!
! /* Set TREE_VISITED on the scc so we can easily identify tree nodes
! outside of the scc when following tree edges. Make sure
! that TREE_ASM_WRITTEN is unset so we can use it as 2nd bit
! to track whether we visited the SCC member during the compare.
! We cannot use TREE_VISITED on the pscc members as the extended
! scc and pscc can overlap. */
! for (unsigned i = 0; i < scc->len; ++i)
! {
! TREE_VISITED (scc->entries[i]) = 1;
! gcc_assert (!TREE_ASM_WRITTEN (scc->entries[i]));
! }
!
! tree *map = XALLOCAVEC (tree, 2 * len);
! for (tree_scc *pscc = *slot; pscc; pscc = pscc->next)
! {
! if (!compare_tree_sccs (pscc, scc, map))
! continue;
!
! /* Found an equal SCC. */
! unified_p = true;
! num_scc_compare_collisions--;
! num_sccs_merged++;
! total_scc_size_merged += len;
!
! /* Fixup the streamer cache with the prevailing nodes according
! to the tree node mapping computed by compare_tree_sccs. */
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = map[2*i+1];
! enum tree_code code = TREE_CODE (t);
! unsigned ix;
! bool r;
! /* IDENTIFIER_NODEs should be singletons and are merged by the
! streamer. The others should be singletons, too, and we
! should not merge them in any way. */
! gcc_assert (code != TRANSLATION_UNIT_DECL
! && code != IDENTIFIER_NODE
! && !streamer_handle_as_builtin_p (t));
! r = streamer_tree_cache_lookup (cache, t, &ix);
! gcc_assert (r && ix >= from);
! streamer_tree_cache_replace_tree (cache, map[2 * i], ix);
! if (TYPE_P (t))
! num_merged_types++;
! }
! /* Free the tree nodes from the read SCC. */
! for (unsigned i = 0; i < len; ++i)
! ggc_free (scc->entries[i]);
! break;
! }
!
! /* Reset TREE_VISITED if we didn't unify the SCC with another. */
! if (!unified_p)
! for (unsigned i = 0; i < scc->len; ++i)
! TREE_VISITED (scc->entries[i]) = 0;
! }
!
! /* If we didn't unify it to any candidate duplicate the relevant
! pieces to permanent storage and link it into the chain. */
! if (!unified_p)
! {
! tree_scc *pscc
! = XOBNEWVAR (&tree_scc_hash_obstack, tree_scc, sizeof (tree_scc));
! memcpy (pscc, scc, sizeof (tree_scc));
! pscc->next = (*slot);
! *slot = pscc;
}
+ return unified_p;
}


*************** lto_read_decls (struct lto_file_decl_dat
*** 2036,2044 ****
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! t = stream_read_tree (&ib_main, data_in);
! gcc_assert (t && ib_main.p <= ib_main.len);
! uniquify_nodes (data_in, from);
}

/* Read in lto_in_decl_state objects. */
--- 2406,2511 ----
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! /* Read and uniquify SCCs as in the input stream. */
! enum LTO_tags tag = streamer_read_record_start (&ib_main);
! if (tag == LTO_tree_scc)
! {
! unsigned len_;
! unsigned scc_entry_len;
! hashval_t scc_hash = lto_input_scc (&ib_main, data_in, &len_,
! &scc_entry_len);
! unsigned len = data_in->reader_cache->nodes.length () - from;
! gcc_assert (len == len_);
!
! total_scc_size += len;
! num_sccs_read++;
!
! /* We have the special case of size-1 SCCs that are pre-merged
! by means of identifier and string sharing for example.
! ??? Maybe we should avoid streaming those as SCCs. */
! tree first = streamer_tree_cache_get_tree (data_in->reader_cache,
! from);
! if (len == 1
! && (TREE_CODE (first) == IDENTIFIER_NODE
! || TREE_CODE (first) == INTEGER_CST
! || TREE_CODE (first) == TRANSLATION_UNIT_DECL
! || streamer_handle_as_builtin_p (first)))
! continue;
!
! /* Try to unify the SCC with already existing ones. */
! if (!flag_ltrans
! && unify_scc (data_in->reader_cache, from,
! len, scc_entry_len, scc_hash))
! continue;
!
! /* Do remaining fixup tasks for prevailing nodes. */
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (data_in->reader_cache,
! from + i);
! /* For statistics, see if the old code would have merged
! the type. */
! if (TYPE_P (t)
! && (flag_lto_report || (flag_wpa && flag_lto_report_wpa)))
! {
! tree newt = gimple_register_type (t);
! if (newt != t)
! {
! num_not_merged_types++;
! /* Check if we can never merge the types because
! they are in the same SCC and thus the old
! code was broken. */
! for (unsigned j = 0; j < len; ++j)
! if (i != j
! && streamer_tree_cache_get_tree
! (data_in->reader_cache, from + j) == newt)
! {
! num_not_merged_types_in_same_scc++;
! break;
! }
! }
! }
! /* Reconstruct the type variant and pointer-to/reference-to
! chains. */
! if (TYPE_P (t))
! {
! num_prevailing_types++;
! lto_fixup_prevailing_type (t);
! }
! /* Compute the canonical type of all types.
! ??? Should be able to assert that !TYPE_CANONICAL. */
! if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
! /* Link shared INTEGER_CSTs into TYPE_CACHED_VALUEs of its
! type which is also member of this SCC. */
! if (TREE_CODE (t) == INTEGER_CST
! && !TREE_OVERFLOW (t))
! cache_integer_cst (t);
! /* Register TYPE_DECLs with the debuginfo machinery. */
! if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! if (!flag_ltrans)
! {
! /* Register variables and functions with the
! symbol table. */
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL
! && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! /* Scan the tree for references to global functions or
! variables and record those for later fixup. */
! maybe_remember_with_vars (t);
! }
! }
! }
! else
! {
! /* Pickle stray references. */
! t = lto_input_tree_1 (&ib_main, data_in, tag, 0);
! gcc_assert (t && data_in->reader_cache->nodes.length () == from);
! }
}

/* Read in lto_in_decl_state objects. */
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2897,2905 ****
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
- gimple_type_leader = ggc_alloc_cleared_vec_gimple_type_leader_entry_s
- (GIMPLE_TYPE_LEADER_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
--- 3364,3372 ----
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);
+ gcc_obstack_init (&tree_scc_hash_obstack);
+ tree_scc_hash.create (4096);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2932,2938 ****
lto_obj_file_close (current_lto_file);
free (current_lto_file);
current_lto_file = NULL;
- ggc_collect ();
}

lto_flatten_files (decl_data, count, last_file_ix);
--- 3399,3404 ----
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2954,2960 ****
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! gimple_type_leader = NULL;
free_gimple_type_tables ();
ggc_collect ();

--- 3420,3427 ----
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! tree_scc_hash.dispose ();
! obstack_free (&tree_scc_hash_obstack, NULL);
free_gimple_type_tables ();
ggc_collect ();

*************** print_lto_report_1 (void)
*** 3120,3146 ****
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! if (gimple_types)
! fprintf (stderr, "[%s] GIMPLE type table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (gimple_types),
! (long) htab_elements (gimple_types),
! (long) gimple_types->searches,
! (long) gimple_types->collisions,
! htab_collisions (gimple_types));
! else
! fprintf (stderr, "[%s] GIMPLE type table is empty\n", pfx);
! if (type_hash_cache)
! fprintf (stderr, "[%s] GIMPLE type hash cache table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (type_hash_cache),
! (long) htab_elements (type_hash_cache),
! (long) type_hash_cache->searches,
! (long) type_hash_cache->collisions,
! htab_collisions (type_hash_cache));
! else
! fprintf (stderr, "[%s] GIMPLE type hash cache table is empty\n", pfx);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);

print_gimple_types_stats (pfx);
print_lto_report (pfx);
--- 3587,3632 ----
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! fprintf (stderr, "[%s] read %lu SCCs of average size %f\n",
! pfx, num_sccs_read, total_scc_size / (double)num_sccs_read);
! fprintf (stderr, "[%s] %lu tree bodies read in total\n", pfx, total_scc_size);
! if (flag_wpa && tree_scc_hash.is_created ())
! {
! fprintf (stderr, "[%s] tree SCC table: size %ld, %ld elements, "
! "collision ratio: %f\n", pfx,
! (long) tree_scc_hash.size (),
! (long) tree_scc_hash.elements (),
! tree_scc_hash.collisions ());
! hash_table<tree_scc_hasher>::iterator hiter;
! tree_scc *scc, *max_scc = NULL;
! unsigned max_length = 0;
! FOR_EACH_HASH_TABLE_ELEMENT (tree_scc_hash, scc, x, hiter)
! {
! unsigned length = 0;
! tree_scc *s = scc;
! for (; s; s = s->next)
! length++;
! if (length > max_length)
! {
! max_length = length;
! max_scc = scc;
! }
! }
! fprintf (stderr, "[%s] tree SCC max chain length %u (size %u)\n",
! pfx, max_length, max_scc->len);
! fprintf (stderr, "[%s] Compared %lu SCCs, %lu collisions (%f)\n", pfx,
! num_scc_compares, num_scc_compare_collisions,
! num_scc_compare_collisions / (double) num_scc_compares);
! fprintf (stderr, "[%s] Merged %lu SCCs\n", pfx, num_sccs_merged);
! fprintf (stderr, "[%s] Merged %lu tree bodies\n", pfx,
! total_scc_size_merged);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);
! fprintf (stderr, "[%s] %lu types prevailed\n", pfx, num_prevailing_types);
! fprintf (stderr, "[%s] Old merging code merges an additional %lu types"
! " of which %lu are in the same SCC with their "
! "prevailing variant\n",
! pfx, num_not_merged_types, num_not_merged_types_in_same_scc);
! }

print_gimple_types_stats (pfx);
print_lto_report (pfx);
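The hash-based unification that `unify_scc` and `lto_read_decls` perform above can be sketched in miniature (Python; `scc_table`, `unify_scc` and the tuple representation are illustrative stand-ins, not GCC's API): an SCC read from the stream either unifies with an identical, previously recorded SCC under the same hash, or is recorded as the new prevailing copy.

```python
# Miniature model of hash-based SCC unification: look up candidate SCCs
# by the hash computed at compile time, compare structurally, and either
# merge with an existing copy or record this one as prevailing.
from collections import defaultdict

scc_table = defaultdict(list)        # scc_hash -> list of prevailing SCCs


def unify_scc(entries, scc_hash):
    """Return (prevailing_entries, merged_p) for the SCC ENTRIES."""
    for prevailing in scc_table[scc_hash]:
        # Stand-in for compare_tree_sccs: here SCC members are plain
        # tuples, so structural equality is just tuple equality.
        if prevailing == entries:
            return prevailing, True  # merged: the read copy can be freed
    scc_table[scc_hash].append(entries)
    return entries, False            # no match: this SCC prevails
```

On a match the real code then fixes up the streamer cache to point at the prevailing nodes and ggc_frees the freshly read ones, as the loop in `lto_read_decls` above does.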
Index: trunk/gcc/lto/Make-lang.in
===================================================================
*** trunk.orig/gcc/lto/Make-lang.in 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/lto/Make-lang.in 2013-06-12 13:52:57.861348728 +0200
*************** lto/lto.o: lto/lto.c $(CONFIG_H) $(SYSTE
*** 85,91 ****
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
--- 85,91 ----
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) $(DATA_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
Index: trunk/gcc/tree-streamer-out.c
===================================================================
*** trunk.orig/gcc/tree-streamer-out.c 2013-06-12 13:51:51.000000000 +0200
--- trunk/gcc/tree-streamer-out.c 2013-06-12 13:52:57.868348812 +0200
*************** streamer_write_chain (struct output_bloc
*** 491,499 ****
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! stream_write_tree_shallow_non_ref (ob, t, ref_p);
else
stream_write_tree (ob, t, ref_p);

--- 491,500 ----
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
+ /* ??? FIXME wrt SCC streaming. Drop these for now. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! ; /* stream_write_tree_shallow_non_ref (ob, t, ref_p); */
else
stream_write_tree (ob, t, ref_p);

*************** write_ts_list_tree_pointers (struct outp
*** 716,722 ****
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! streamer_write_chain (ob, TREE_CHAIN (expr), ref_p);
}


--- 717,723 ----
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! stream_write_tree (ob, TREE_CHAIN (expr), ref_p);
}


*************** write_ts_constructor_tree_pointers (stru
*** 834,839 ****
--- 835,842 ----
}
}

+ extern unsigned long num_tree_bodies_written;
+
/* Write all pointer fields in EXPR to output block OB. If REF_P is true,
the leaves of EXPR are emitted as references. */

*************** streamer_write_tree_body (struct output_
*** 842,847 ****
--- 845,852 ----
{
enum tree_code code;

+ num_tree_bodies_written++;
+
code = TREE_CODE (expr);

if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
Index: trunk/gcc/tree-streamer.c
===================================================================
*** trunk.orig/gcc/tree-streamer.c 2013-06-12 13:51:50.000000000 +0200
--- trunk/gcc/tree-streamer.c 2013-06-12 13:52:57.867348800 +0200
*************** streamer_check_handled_ts_structures (vo
*** 92,107 ****

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! cache->nodes.safe_push (t);
else
! cache->nodes[ix] = t;
}


--- 92,115 ----

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t, hashval_t hash)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! {
! cache->nodes.safe_push (t);
! if (cache->hashes.exists ())
! cache->hashes.safe_push (hash);
! }
else
! {
! cache->nodes[ix] = t;
! if (cache->hashes.exists ())
! cache->hashes[ix] = hash;
! }
}


*************** streamer_tree_cache_add_to_node_array (s
*** 117,123 ****

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
--- 125,131 ----

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
*************** streamer_tree_cache_insert_1 (struct str
*** 136,142 ****
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t);

/* Indicate that the item was not present in the cache. */
existed_p = false;
--- 144,150 ----
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);

/* Indicate that the item was not present in the cache. */
existed_p = false;
*************** streamer_tree_cache_insert_1 (struct str
*** 151,157 ****
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t);
*slot = (void *)(size_t) (ix + 1);
}

--- 159,165 ----
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);
*slot = (void *)(size_t) (ix + 1);
}

*************** streamer_tree_cache_insert_1 (struct str
*** 174,203 ****

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, ix_p, true);
}


! /* Insert tree node T in CACHE at slot IX. If T already
! existed in the cache return true. Otherwise, return false. */

! bool
! streamer_tree_cache_insert_at (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! return streamer_tree_cache_insert_1 (cache, t, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache, tree t)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
--- 182,214 ----

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! hashval_t hash, unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, hash, ix_p, true);
}


! /* Replace the tree node with T in CACHE at slot IX. */

! void
! streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! hashval_t hash = 0;
! if (cache->hashes.exists ())
! hash = streamer_tree_cache_get_hash (cache, ix);
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
*************** record_common_node (struct streamer_tree
*** 257,263 ****
if (!node)
node = error_mark_node;

! streamer_tree_cache_append (cache, node);

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
--- 268,277 ----
if (!node)
node = error_mark_node;

! /* ??? FIXME, devise a better hash value. But the hash needs to be equal
! for all frontend and lto1 invocations. So just use the position
! in the cache as hash value. */
! streamer_tree_cache_append (cache, node, cache->nodes.length ());

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
*************** preload_common_nodes (struct streamer_tr
*** 305,317 ****
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (void)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
--- 319,334 ----
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (bool with_hashes)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();
+ cache->nodes.create (165);
+ if (with_hashes)
+ cache->hashes.create (165);

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
*************** streamer_tree_cache_delete (struct strea
*** 332,336 ****
--- 349,354 ----

pointer_map_destroy (c->node_map);
c->nodes.release ();
+ c->hashes.release ();
free (c);
}
Index: trunk/gcc/tree-streamer.h
===================================================================
*** trunk.orig/gcc/tree-streamer.h 2013-06-12 13:51:50.000000000 +0200
--- trunk/gcc/tree-streamer.h 2013-06-12 13:52:57.867348800 +0200
*************** along with GCC; see the file COPYING3.
*** 43,48 ****
--- 43,49 ----
T. The reconstructed T is inserted in some array so that when
the reference index for T is found in the input stream, it can be
used to look up into the array to get the reconstructed T. */
+
struct streamer_tree_cache_d
{
/* The mapping between tree nodes and slots into the nodes array. */
*************** struct streamer_tree_cache_d
*** 50,55 ****
--- 51,58 ----

/* The nodes pickled so far. */
vec<tree> nodes;
+ /* The node hashes (if available). */
+ vec<hashval_t> hashes;
};

/* Return true if tree node EXPR should be streamed as a builtin. For
*************** tree streamer_alloc_tree (struct lto_inp
*** 71,77 ****
void streamer_read_tree_body (struct lto_input_block *, struct data_in *, tree);
tree streamer_get_pickled_tree (struct lto_input_block *, struct data_in *);
tree streamer_get_builtin_tree (struct lto_input_block *, struct data_in *);
- tree streamer_read_integer_cst (struct lto_input_block *, struct data_in *);
struct bitpack_d streamer_read_tree_bitfields (struct lto_input_block *,
struct data_in *, tree);

--- 74,79 ----
*************** void streamer_write_builtin (struct outp
*** 89,110 ****
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! unsigned *);
! bool streamer_tree_cache_insert_at (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (void);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}


#endif /* GCC_TREE_STREAMER_H */
--- 91,121 ----
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! hashval_t, unsigned *);
! void streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree,
! hashval_t);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (bool);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get_tree (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}

+ /* Return the tree hash value at slot IX in CACHE. */
+
+ static inline hashval_t
+ streamer_tree_cache_get_hash (struct streamer_tree_cache_d *cache, unsigned ix)
+ {
+ return cache->hashes[ix];
+ }
+

#endif /* GCC_TREE_STREAMER_H */
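The `nodes`/`hashes` pairing declared above amounts to two parallel arrays indexed by the same pickle slot; a toy model (class and method names are illustrative, not the real API):

```python
# Toy streamer tree cache with an optional parallel hash array: nodes
# and hashes are kept in lock-step, indexed by the same slot.
class StreamerCache:
    def __init__(self, with_hashes):
        self.nodes = []
        self.hashes = [] if with_hashes else None

    def append(self, node, node_hash=0):
        self.nodes.append(node)
        if self.hashes is not None:
            self.hashes.append(node_hash)

    def replace_tree(self, node, ix):
        # Mirrors streamer_tree_cache_replace_tree: the node is swapped
        # for the prevailing one, but the recorded hash stays valid.
        self.nodes[ix] = node

    def get_tree(self, ix):
        return self.nodes[ix]

    def get_hash(self, ix):
        return self.hashes[ix]
```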
Index: trunk/gcc/lto-streamer.c
===================================================================
*** trunk.orig/gcc/lto-streamer.c 2013-06-12 13:51:50.000000000 +0200
--- trunk/gcc/lto-streamer.c 2013-06-12 13:52:57.868348812 +0200
*************** lto_get_section_name (int section_type,
*** 176,181 ****
--- 176,185 ----
return concat (LTO_SECTION_NAME_PREFIX, sep, add, post, NULL);
}

+ unsigned long num_tree_bodies_written = 0;
+ unsigned long num_trees_output = 0;
+ unsigned long num_grefs_output = 0;
+ unsigned long num_lrefs_output = 0;

/* Show various memory usage statistics related to LTO. */

*************** print_lto_report (const char *s)
*** 227,232 ****
--- 231,245 ----
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_output_symtab_nodes);

+ fprintf (stderr, "[%s] # of output trees: %lu\n",
+ s, num_trees_output);
+ fprintf (stderr, "[%s] # of output references to globals: %lu\n",
+ s, num_grefs_output);
+ fprintf (stderr, "[%s] # of output local references: %lu\n",
+ s, num_lrefs_output);
+ fprintf (stderr, "[%s] # of output tree bodies: %lu\n",
+ s, num_tree_bodies_written);
+
fprintf (stderr, "[%s] # callgraph partitions: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_cgraph_partitions);
Index: trunk/gcc/tree.c
===================================================================
*** trunk.orig/gcc/tree.c 2013-06-12 13:51:50.000000000 +0200
--- trunk/gcc/tree.c 2013-06-12 13:52:57.870348836 +0200
*************** build_int_cst_wide (tree type, unsigned
*** 1266,1271 ****
--- 1266,1364 ----
return t;
}

+ void
+ cache_integer_cst (tree t)
+ {
+ tree type = TREE_TYPE (t);
+ HOST_WIDE_INT hi = TREE_INT_CST_HIGH (t);
+ unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (t);
+ int ix = -1;
+ int limit = 0;
+
+ gcc_assert (!TREE_OVERFLOW (t));
+
+ switch (TREE_CODE (type))
+ {
+ case NULLPTR_TYPE:
+ gcc_assert (hi == 0 && low == 0);
+ /* Fallthru. */
+
+ case POINTER_TYPE:
+ case REFERENCE_TYPE:
+ /* Cache NULL pointer. */
+ if (!hi && !low)
+ {
+ limit = 1;
+ ix = 0;
+ }
+ break;
+
+ case BOOLEAN_TYPE:
+ /* Cache false or true. */
+ limit = 2;
+ if (!hi && low < 2)
+ ix = low;
+ break;
+
+ case INTEGER_TYPE:
+ case OFFSET_TYPE:
+ if (TYPE_UNSIGNED (type))
+ {
+ /* Cache 0..N */
+ limit = INTEGER_SHARE_LIMIT;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low;
+ }
+ else
+ {
+ /* Cache -1..N */
+ limit = INTEGER_SHARE_LIMIT + 1;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low + 1;
+ else if (hi == -1 && low == -(unsigned HOST_WIDE_INT)1)
+ ix = 0;
+ }
+ break;
+
+ case ENUMERAL_TYPE:
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ if (ix >= 0)
+ {
+ /* Look for it in the type's vector of small shared ints. */
+ if (!TYPE_CACHED_VALUES_P (type))
+ {
+ TYPE_CACHED_VALUES_P (type) = 1;
+ TYPE_CACHED_VALUES (type) = make_tree_vec (limit);
+ }
+
+ gcc_assert (TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) == NULL_TREE);
+ TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) = t;
+ }
+ else
+ {
+ /* Use the cache of larger shared ints. */
+ void **slot;
+
+ slot = htab_find_slot (int_cst_hash_table, t, INSERT);
+ /* If there is already an entry for the number verify it's the
+ same. */
+ if (*slot)
+ {
+ gcc_assert (TREE_INT_CST_LOW ((tree)*slot) == low
+ && TREE_INT_CST_HIGH ((tree)*slot) == hi);
+ return;
+ }
+ /* Otherwise insert this one into the hash table. */
+ *slot = t;
+ }
+ }
+
+
/* Builds an integer constant in TYPE such that lowest BITS bits are ones
and the rest are zeros. */

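The slot computation in `cache_integer_cst` above can be sketched for the INTEGER_TYPE/OFFSET_TYPE cases (pointer and boolean types are analogous); the limit of 256 is an assumption matching GCC's usual INTEGER_SHARE_LIMIT:

```python
# Sketch of the small shared-int slot computation: small constants get
# a slot in the per-type TYPE_CACHED_VALUES vector, everything else
# falls back to the large shared-int hash table (returned as None here).
INTEGER_SHARE_LIMIT = 256            # assumed value


def cached_slot(value, unsigned):
    """Return the per-type cache slot for VALUE, or None for the hash table."""
    if unsigned:
        # Unsigned types cache 0..N-1 at slots 0..N-1.
        return value if 0 <= value < INTEGER_SHARE_LIMIT else None
    if value == -1:
        return 0                     # signed types cache -1 at slot 0
    if 0 <= value < INTEGER_SHARE_LIMIT:
        return value + 1             # ...and 0..N-1 shifted up by one
    return None
```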
Index: trunk/gcc/tree.h
===================================================================
*** trunk.orig/gcc/tree.h 2013-06-12 13:51:50.000000000 +0200
--- trunk/gcc/tree.h 2013-06-12 13:52:57.871348848 +0200
*************** extern const_tree strip_invariant_refs (
*** 5636,5641 ****
--- 5636,5642 ----
extern tree lhd_gcc_personality (void);
extern void assign_assembler_name_if_neeeded (tree);
extern void warn_deprecated_use (tree, tree);
+ extern void cache_integer_cst (tree);


/* In cgraph.c */
Richard Biener
2013-06-13 10:08:26 UTC
Permalink
Post by Richard Biener
The following patch re-writes LTO type merging completely with the
goal to move as much work as possible to the compile step, away
from WPA time. At the same time the merging itself gets very
conservative but also more general - it now merges arbitrary trees,
not only types, but only if they are bit-identical and have the
same outgoing tree pointers.
Especially the latter means that we now have to merge SCCs of trees
together and either take the whole SCC as prevailing or throw it
away. Moving work to the compile step means that we compute
SCCs and their hashes there, re-organizing streaming to stream
tree bodies as SCC blocks together with the computed hash.
When we ask the streamer to output a tree T then it now has
to DFS walk all tree pointers, collecting SCCs of not yet
streamed trees and output them like the following:
{ LTO_tree_scc, N, hash, entry_len,
{ header1, header2, ... headerN },
{ bits1, refs1, bits2, refs2, ... bitsN, refsN } }
{ LTO_tree_scc, 1, hash, header, bits, refs }
{ LTO_tree_scc, M, hash, entry_len,
{ header1, header2, ... headerM },
{ bits1, refs1, bits2, refs2, ... bitsM, refsM } }
LTO_tree_pickle_reference to T
with tree references in refsN always being LTO_tree_pickle_references
instead of starting a new tree inline. That results in at most
N extra LTO_tree_pickle_references for N streamed trees, together
with the LTO_tree_scc wrapping overhead this causes a slight
increase in LTO object size (around 10% last time I measured, which
was before some additional optimization went in).
The overhead also happens on the LTRANS file producing side
which now has to do the DFS walk and stream the extra data.
It doesn't do the hashing though as on the LTRANS consumer
side no merging is performed.
The patch preserves the core of the old merging code to compare
with the new code and output some statistics. That means that
if you build with -flto-report[-wpa] you get an additional
compile-time and memory overhead.
For reference here are the stats when LTO bootstrapping:
WPA statistics
[WPA] read 2494507 SCCs of average size 2.380067
[WPA] 5937095 tree bodies read in total
[WPA] tree SCC table: ... collision ratio: 0.806376
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 403361 SCCs, 6226 collisions (0.015435)
[WPA] Merged 399980 SCCs
[WPA] Merged 2438250 tree bodies
[WPA] Merged 192475 types
[WPA] 195422 types prevailed
[WPA] Old merging code merges an additional 54582 types of which 21083 are
in the same SCC with their prevailing variant
This says that we've streamed in 5937095 tree bodies in
2494507 SCCs (so the average SCC size is small), of those
we were able to immediately ggc_free 399980 SCCs because they
already existed in identical form (16% of the SCCs, 41% of the trees
and 49% of the types). The old merging code forced the merge
of an additional 54582 types (but 21083 of them it merged with
a type that is in the same SCC, that is, it changed the shape
of the SCC and collapsed parts of it - something that is
suspicious).
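As a quick sanity check, the quoted percentages follow directly from the counters above (with merged types measured against merged plus prevailed):

```c
#include <assert.h>

/* Counters from the WPA statistics quoted above.  */
static const double sccs_read = 2494507, sccs_merged = 399980;
static const double trees_read = 5937095, trees_merged = 2438250;
static const double types_merged = 192475, types_prevailed = 195422;

/* Percentage of PART in WHOLE.  */
static double pct (double part, double whole)
{
  return 100.0 * part / whole;
}
```

This reproduces the 16% / 41% / 49% figures in the text.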
The patch was LTO bootstrapped (testing currently running) on
x86_64-unknown-linux-gnu and I've built SPEC2k6 with -Ofast -g -flto
and did a test run of the binaries, which shows that
currently 471.omnetpp, 483.xalancbmk and 447.dealII fail
(471.omnetpp segfaults in __cxxabiv1::__dynamic_cast) - these
failures were introduced quite recently, likely due to the improved
FUNCTION_DECL and VAR_DECL merging and the cgraph fixup Honza did.
The following incremental patch fixes that.

Index: trunk/gcc/lto-symtab.c
===================================================================
--- trunk.orig/gcc/lto-symtab.c 2013-06-12 16:47:38.000000000 +0200
+++ trunk/gcc/lto-symtab.c 2013-06-12 17:00:12.664126423 +0200
@@ -96,9 +96,6 @@ lto_varpool_replace_node (struct varpool

ipa_clone_referring ((symtab_node)prevailing_node,
&vnode->symbol.ref_list);

- /* Be sure we can garbage collect the initializer. */
- if (DECL_INITIAL (vnode->symbol.decl))
- DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
}
Index: trunk/gcc/varpool.c
===================================================================
--- trunk.orig/gcc/varpool.c 2013-06-12 13:13:06.000000000 +0200
+++ trunk/gcc/varpool.c 2013-06-12 17:01:46.088248807 +0200
@@ -77,15 +77,8 @@ varpool_remove_node (struct varpool_node

/* Renove node initializer when it is no longer needed. */
void
-varpool_remove_initializer (struct varpool_node *node)
+varpool_remove_initializer (struct varpool_node *)
{
- if (DECL_INITIAL (node->symbol.decl)
- && !DECL_IN_CONSTANT_POOL (node->symbol.decl)
- /* Keep vtables for BINFO folding. */
- && !DECL_VIRTUAL_P (node->symbol.decl)
- /* FIXME: http://gcc.gnu.org/PR55395 */
- && debug_info_level == DINFO_LEVEL_NONE)
- DECL_INITIAL (node->symbol.decl) = error_mark_node;
}

/* Dump given cgraph node. */


Here are some updated numbers on cc1 disk, memory and compile-time
usage when built in stage3 with LTO and release checking:

Input object file size unpatched: 324509482 bytes
Input object file size patched: 373225406 bytes (115%)

WPA time/maxrss unpatched:

10.83user 0.67system 0:11.53elapsed 99%CPU (0avgtext+0avgdata
575108maxresident)k
0inputs+778264outputs (0major+397754minor)pagefaults 0swaps

ipa lto decl in : 3.39 (32%) usr
ipa lto decl out : 3.88 (36%) usr

WPA time/maxrss patched:

18.35user 2.12system 0:20.56elapsed 99%CPU (0avgtext+0avgdata
648800maxresident)k
16inputs+1263088outputs (0major+606347minor)pagefaults 0swaps

ipa lto decl in : 3.30 (18%) usr
ipa lto decl out : 11.46 (63%) usr

LTRANS file size unpatched: 398407935 bytes
LTRANS file size patched: 645106888 bytes (162%)

Unpatched WPA statistics:

[WPA] GIMPLE type table: size 262139, 134158 elements, 365200 searches,
427507 collisions (ratio: 1.170611)
[WPA] GIMPLE type hash cache table: size 524287, 200149 elements, 4042887
searches, 3624678 collisions (ratio: 0.896557)
[WPA] Merged 229106 types
[WPA] GIMPLE canonical type table: size 16381, 6221 elements, 134279
searches, 4370 collisions (ratio: 0.032544)
[WPA] GIMPLE canonical type hash table: size 262139, 134223 elements,
574837 searches, 506302 collisions (ratio: 0.880775)
[WPA] # of input files: 422
...
[WPA] Size of mmap'd section decls: 36618988 bytes
[WPA] Size of mmap'd section function_body: 40566108 bytes
[WPA] Size of mmap'd section statics: 0 bytes
[WPA] Size of mmap'd section symtab: 0 bytes
[WPA] Size of mmap'd section refs: 138315 bytes
[WPA] Size of mmap'd section asm: 0 bytes
[WPA] Size of mmap'd section jmpfuncs: 665899 bytes
[WPA] Size of mmap'd section pureconst: 59901 bytes
[WPA] Size of mmap'd section reference: 0 bytes
[WPA] Size of mmap'd section profile: 8845 bytes
[WPA] Size of mmap'd section symbol_nodes: 1144120 bytes
[WPA] Size of mmap'd section opts: 0 bytes
[WPA] Size of mmap'd section cgraphopt: 0 bytes
[WPA] Size of mmap'd section inline: 844004 bytes
[WPA] Size of mmap'd section ipcp_trans: 0 bytes

Patched WPA statistics:

[WPA] read 2386084 SCCs of average size 2.421782
[WPA] 5778576 tree bodies read in total
[WPA] tree SCC table: size 524287, 267346 elements, collision ratio:
0.855240
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 362074 SCCs, 5417 collisions (0.014961)
[WPA] Merged 358942 SCCs
[WPA] Merged 2384340 tree bodies
[WPA] Merged 175201 types
[WPA] 188264 types prevailed
[WPA] Old merging code merges an additional 53081 types of which 21094 are
in the same SCC
[WPA] GIMPLE canonical type table: size 16381, 6234 elements, 188399
searches, 5850 collisions (ratio: 0.031051)
[WPA] GIMPLE canonical type hash table: size 262139, 188343 elements,
1217265 searches, 817652 collisions (ratio: 0.671712)
[WPA] # of input files: 422
...
[WPA] Size of mmap'd section decls: 68171523 bytes
[WPA] Size of mmap'd section function_body: 56454939 bytes
[WPA] Size of mmap'd section statics: 0 bytes
[WPA] Size of mmap'd section symtab: 0 bytes
[WPA] Size of mmap'd section refs: 138463 bytes
[WPA] Size of mmap'd section asm: 0 bytes
[WPA] Size of mmap'd section jmpfuncs: 1185254 bytes
[WPA] Size of mmap'd section pureconst: 59943 bytes
[WPA] Size of mmap'd section reference: 0 bytes
[WPA] Size of mmap'd section profile: 8845 bytes
[WPA] Size of mmap'd section symbol_nodes: 1145897 bytes
[WPA] Size of mmap'd section opts: 0 bytes
[WPA] Size of mmap'd section cgraphopt: 0 bytes
[WPA] Size of mmap'd section inline: 895848 bytes
[WPA] Size of mmap'd section ipcp_trans: 0 bytes


That leaves me with mixed feelings despite the overall far superior
design (and correctness). The WPA LTRANS output side has a hard
time with both writing more trees (we merged fewer types) and
probably performing the DFS walks.

As you can see from the compile object file sizes, the overhead
of streaming in SCCs is manageable, so the doubling in LTRANS
file size has to come from extra trees we stream there (I'm
going to add some more counters and try doing stats per LTRANS
file we output).

Richard.
Richard Biener
2013-06-13 14:20:45 UTC
Post by Richard Biener
Post by Richard Biener
The following patch re-writes LTO type merging completely with the
goal to move as much work as possible to the compile step, away
from WPA time. At the same time the merging itself gets very
conservative but also more general - it now merges arbitrary trees,
not only types, but only if they are bit-identical and have the
same outgoing tree pointers.
Especially the latter means that we now have to merge SCCs of trees
together and either take the whole SCC as prevailing or throw it
away. Moving work to the compile step means that we compute
SCCs and their hashes there, re-organizing streaming to stream
tree bodies as SCC blocks together with the computed hash.
When we ask the streamer to output a tree T then it now has
to DFS walk all tree pointers, collecting SCCs of not yet
streamed trees and output them like the following:
{ LTO_tree_scc, N, hash, entry_len,
{ header1, header2, ... headerN },
{ bits1, refs1, bits2, refs2, ... bitsN, refsN } }
{ LTO_tree_scc, 1, hash, header, bits, refs }
{ LTO_tree_scc, M, hash, entry_len,
{ header1, header2, ... headerM },
{ bits1, refs1, bits2, refs2, ... bitsM, refsM } }
LTO_tree_pickle_reference to T
with tree references in refsN always being LTO_tree_pickle_references
instead of starting a new tree inline. That results in at most
N extra LTO_tree_pickle_references for N streamed trees, together
with the LTO_tree_scc wrapping overhead this causes a slight
increase in LTO object size (around 10% last time I measured, which
was before some additional optimization went in).
The overhead also happens on the LTRANS file producing side
which now has to do the DFS walk and stream the extra data.
It doesn't do the hashing though as on the LTRANS consumer
side no merging is performed.
The patch preserves the core of the old merging code to compare
with the new code and output some statistics. That means that
if you build with -flto-report[-wpa] you get an additional
compile-time and memory overhead.
For reference, here are the stats from LTO bootstrapping:
WPA statistics
[WPA] read 2494507 SCCs of average size 2.380067
[WPA] 5937095 tree bodies read in total
[WPA] tree SCC table: ... collision ratio: 0.806376
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 403361 SCCs, 6226 collisions (0.015435)
[WPA] Merged 399980 SCCs
[WPA] Merged 2438250 tree bodies
[WPA] Merged 192475 types
[WPA] 195422 types prevailed
[WPA] Old merging code merges an additional 54582 types of which 21083 are
in the same SCC with their prevailing variant
This says that we've streamed in 5937095 tree bodies in
2494507 SCCs (so the average SCC size is small), of those
we were able to immediately ggc_free 399980 SCCs because they
already existed in identical form (16% of the SCCs, 41% of the trees
and 49% of the types). The old merging code forced the merge
of an additional 54582 types (but 21083 of them it merged with
a type that is in the same SCC, that is, it changed the shape
of the SCC and collapsed parts of it - something that is
suspicious).
The patch was LTO bootstrapped (testing currently running) on
x86_64-unknown-linux-gnu and I've built SPEC2k6 with -Ofast -g -flto
and did a test run of the binaries, which shows that
currently 471.omnetpp, 483.xalancbmk and 447.dealII fail
(471.omnetpp segfaults in __cxxabiv1::__dynamic_cast) - these
failures were introduced quite recently, likely due to the improved
FUNCTION_DECL and VAR_DECL merging and the cgraph fixup Honza did.
The following incremental patch fixes that.
Index: trunk/gcc/lto-symtab.c
===================================================================
--- trunk.orig/gcc/lto-symtab.c 2013-06-12 16:47:38.000000000 +0200
+++ trunk/gcc/lto-symtab.c 2013-06-12 17:00:12.664126423 +0200
@@ -96,9 +96,6 @@ lto_varpool_replace_node (struct varpool
ipa_clone_referring ((symtab_node)prevailing_node,
&vnode->symbol.ref_list);
- /* Be sure we can garbage collect the initializer. */
- if (DECL_INITIAL (vnode->symbol.decl))
- DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
}
Index: trunk/gcc/varpool.c
===================================================================
--- trunk.orig/gcc/varpool.c 2013-06-12 13:13:06.000000000 +0200
+++ trunk/gcc/varpool.c 2013-06-12 17:01:46.088248807 +0200
@@ -77,15 +77,8 @@ varpool_remove_node (struct varpool_node
/* Renove node initializer when it is no longer needed. */
void
-varpool_remove_initializer (struct varpool_node *node)
+varpool_remove_initializer (struct varpool_node *)
{
- if (DECL_INITIAL (node->symbol.decl)
- && !DECL_IN_CONSTANT_POOL (node->symbol.decl)
- /* Keep vtables for BINFO folding. */
- && !DECL_VIRTUAL_P (node->symbol.decl)
- /* FIXME: http://gcc.gnu.org/PR55395 */
- && debug_info_level == DINFO_LEVEL_NONE)
- DECL_INITIAL (node->symbol.decl) = error_mark_node;
}
/* Dump given cgraph node. */
Here are some updated numbers on cc1 disk, memory and compile-time
OK, not streaming and comparing TREE_USED improves things to:
Post by Richard Biener
Input object file size unpatched: 324509482 bytes
Input object file size patched: 373225406 bytes (115%)
10.83user 0.67system 0:11.53elapsed 99%CPU (0avgtext+0avgdata
575108maxresident)k
0inputs+778264outputs (0major+397754minor)pagefaults 0swaps
ipa lto decl in : 3.39 (32%) usr
ipa lto decl out : 3.88 (36%) usr
18.35user 2.12system 0:20.56elapsed 99%CPU (0avgtext+0avgdata
648800maxresident)k
16inputs+1263088outputs (0major+606347minor)pagefaults 0swaps
13.01user 0.68system 0:13.72elapsed 99%CPU (0avgtext+0avgdata
520092maxresident)k
0inputs+8outputs (0major+389794minor)pagefaults 0swaps
Post by Richard Biener
ipa lto decl in : 3.30 (18%) usr
ipa lto decl out : 11.46 (63%) usr
ipa lto decl in : 3.31 (25%) usr
ipa lto decl out : 6.27 (48%) usr
Post by Richard Biener
LTRANS file size unpatched: 398407935 bytes
LTRANS file size patched: 645106888 bytes (162%)
LTRANS file size patched: 451821845 bytes (113%)
Post by Richard Biener
[WPA] GIMPLE type table: size 262139, 134158 elements, 365200 searches,
427507 collisions (ratio: 1.170611)
[WPA] GIMPLE type hash cache table: size 524287, 200149 elements, 4042887
searches, 3624678 collisions (ratio: 0.896557)
[WPA] Merged 229106 types
[WPA] GIMPLE canonical type table: size 16381, 6221 elements, 134279
searches, 4370 collisions (ratio: 0.032544)
[WPA] GIMPLE canonical type hash table: size 262139, 134223 elements,
574837 searches, 506302 collisions (ratio: 0.880775)
[WPA] # of input files: 422
...
[WPA] Size of mmap'd section decls: 36618988 bytes
[WPA] Size of mmap'd section function_body: 40566108 bytes
[WPA] Size of mmap'd section statics: 0 bytes
[WPA] Size of mmap'd section symtab: 0 bytes
[WPA] Size of mmap'd section refs: 138315 bytes
[WPA] Size of mmap'd section asm: 0 bytes
[WPA] Size of mmap'd section jmpfuncs: 665899 bytes
[WPA] Size of mmap'd section pureconst: 59901 bytes
[WPA] Size of mmap'd section reference: 0 bytes
[WPA] Size of mmap'd section profile: 8845 bytes
[WPA] Size of mmap'd section symbol_nodes: 1144120 bytes
[WPA] Size of mmap'd section opts: 0 bytes
[WPA] Size of mmap'd section cgraphopt: 0 bytes
[WPA] Size of mmap'd section inline: 844004 bytes
[WPA] Size of mmap'd section ipcp_trans: 0 bytes
[WPA] read 2386084 SCCs of average size 2.421782
[WPA] 5778576 tree bodies read in total
[WPA] tree SCC table: size 524287, 267346 elements, collision ratio:
0.855240
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 362074 SCCs, 5417 collisions (0.014961)
[WPA] Merged 358942 SCCs
[WPA] Merged 2384340 tree bodies
[WPA] Merged 175201 types
[WPA] 188264 types prevailed
[WPA] Old merging code merges an additional 53081 types of which 21094 are
in the same SCC
[WPA] GIMPLE canonical type table: size 16381, 6234 elements, 188399
searches, 5850 collisions (ratio: 0.031051)
[WPA] GIMPLE canonical type hash table: size 262139, 188343 elements,
1217265 searches, 817652 collisions (ratio: 0.671712)
[WPA] # of input files: 422
...
[WPA] Size of mmap'd section decls: 68171523 bytes
[WPA] Size of mmap'd section function_body: 56454939 bytes
[WPA] Size of mmap'd section statics: 0 bytes
[WPA] Size of mmap'd section symtab: 0 bytes
[WPA] Size of mmap'd section refs: 138463 bytes
[WPA] Size of mmap'd section asm: 0 bytes
[WPA] Size of mmap'd section jmpfuncs: 1185254 bytes
[WPA] Size of mmap'd section pureconst: 59943 bytes
[WPA] Size of mmap'd section reference: 0 bytes
[WPA] Size of mmap'd section profile: 8845 bytes
[WPA] Size of mmap'd section symbol_nodes: 1145897 bytes
[WPA] Size of mmap'd section opts: 0 bytes
[WPA] Size of mmap'd section cgraphopt: 0 bytes
[WPA] Size of mmap'd section inline: 895848 bytes
[WPA] Size of mmap'd section ipcp_trans: 0 bytes
[WPA] read 2386075 SCCs of average size 2.421786
[WPA] 5778564 tree bodies read in total
[WPA] tree SCC table: size 524287, 255520 elements, collision ratio:
0.944901
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 373893 SCCs, 5706 collisions (0.015261)
[WPA] Merged 370688 SCCs
[WPA] Merged 3165669 tree bodies
[WPA] Merged 188791 types
[WPA] 174670 types prevailed (547208 associated trees)
[WPA] Old merging code merges an additional 40013 types of which 21620 are
in the same SCC with their prevailing variant (411743 and 324175
associated trees)
[WPA] GIMPLE canonical type table: size 16381, 6233 elements, 174805
searches, 5484 collisions (ratio: 0.031372)
[WPA] GIMPLE canonical type hash table: size 262139, 174753 elements,
862715 searches, 755816 collisions (ratio: 0.876090)
[WPA] # of input files: 422
...
[WPA] Size of mmap'd section decls: 67911573 bytes
[WPA] Size of mmap'd section function_body: 0 bytes
[WPA] Size of mmap'd section statics: 0 bytes
[WPA] Size of mmap'd section symtab: 0 bytes
[WPA] Size of mmap'd section refs: 0 bytes
[WPA] Size of mmap'd section asm: 0 bytes
[WPA] Size of mmap'd section jmpfuncs: 0 bytes
[WPA] Size of mmap'd section pureconst: 0 bytes
[WPA] Size of mmap'd section reference: 0 bytes
[WPA] Size of mmap'd section profile: 0 bytes
[WPA] Size of mmap'd section symbol_nodes: 0 bytes
[WPA] Size of mmap'd section opts: 0 bytes
[WPA] Size of mmap'd section cgraphopt: 0 bytes
[WPA] Size of mmap'd section inline: 0 bytes
[WPA] Size of mmap'd section ipcp_trans: 0 bytes


That's now much more reasonable.

Complete fixed patch below, for reference (without adjusted
changelog).

Richard.


* lto-streamer.h (enum LTO_tags): Add LTO_tree_scc.
(lto_input_scc): Declare.
(lto_input_tree_1): Likewise.
* lto-streamer-in.c (lto_read_body): Use streamer_tree_cache_get_tree.
(lto_read_tree_1): Split out from ...
(lto_read_tree): ... this.
(lto_input_scc): New function.
(lto_input_tree_1): Split out from ...
(lto_input_tree): ... this. Handle LTO_tree_scc.
(lto_data_in_create): Create the streamer cache without hashes.
* lto-streamer-out.c (create_output_block): Create the streamer
cache with hashes when not doing WPA.
(lto_write_tree_1): Split out from ...
(lto_write_tree): ... this.
(get_symbol_initial_value): New function.
(lto_output_tree_1): Split out from ...
(lto_output_tree): ... this. Write trees as series of SCCs
using a DFS walk via DFS_write_tree.
(struct sccs, struct scc_entry): New types.
(next_dfs_num, sccstack, sccstate, sccstate_obstack): New globals.
(DFS_write_tree_body): New function.
(DFS_write_tree): Likewise.
(hash_tree): Likewise.
(scc_entry_compare): Likewise.
(hash_scc): Likewise.
* tree-streamer-in.c (lto_input_ts_list_tree_pointers): Stream
TREE_CHAIN as regular reference.
(streamer_read_integer_cst): Remove.
(streamer_get_pickled_tree): Adjust.
* tree-streamer-out.c (streamer_write_chain): Disable streaming
of DECL_EXTERNALs in BLOCK_VARS for now.
(write_ts_list_tree_pointers): Stream TREE_CHAIN as regular
reference.
* tree-streamer.c (streamer_tree_cache_add_to_node_array):
Add hash value argument and record that if hashes are recorded
in the cache.
(streamer_tree_cache_insert_1): Adjust.
(streamer_tree_cache_insert): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_append): Adjust.
(record_common_node): Likewise.
(streamer_tree_cache_create): Add argument whether to
record hash values together with trees.
(streamer_tree_cache_delete): Adjust.
* tree-streamer.h (struct streamer_tree_cache_d): Add
vector of hashes.
(streamer_read_integer_cst): Remove.
(streamer_tree_cache_insert): Adjust.
(streamer_tree_cache_append): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_create): Add argument whether to record hashes.
(streamer_tree_cache_get): Rename to ...
(streamer_tree_cache_get_tree): ... this.
(streamer_tree_cache_get_hash): New function.
* tree.c (cache_integer_cst): New function.
* tree.h (cache_integer_cst): Declare.

lto/
* Make-lang.in (lto.o): Add $(DATA_STREAMER_H) dependency.
* lto.c: Include data-streamer.h.
(lto_read_in_decl_state): Use streamer_tree_cache_get_tree.
(gimple_type_leader_entry_s, gimple_type_leader,
gimple_lookup_type_leader): Remove.
(gtc_visit): Simplify.
(gimple_types_compatible_p): Likewise.
(gimple_register_type_1): Likewise. Merge into ...
(gimple_register_type): ... this. Keep it as legacy for
statistics purposes for now.
(fixup_integer_cst): Remove.
(LTO_FIXUP_TREE, lto_fixup_types, lto_ft_*): Simplify and
rename to ...
(MAYBE_REMEMBER_WITH_VARS, maybe_remember_with_vars,
maybe_remember_with_vars_*): ... these.
(uniquify_nodes): Remove.
(lto_fixup_prevailing_type): New function.
(struct tree_scc, struct tree_scc_hasher): New type and hasher.
(tree_scc_hash, tree_scc_hash_obstack): New globals.
(num_merged_types, num_prevailing_types, num_not_merged_types,
num_not_merged_types_in_same_scc, total_scc_size, num_sccs_read,
total_scc_size_merged, num_sccs_merged, num_scc_compares,
num_scc_compare_collisions): New global counters.
(compare_tree_sccs_1): New function.
(compare_tree_sccs): Likewise.
(unify_scc): Likewise.
(lto_read_decls): Stream in tree SCCs and unify them on the
way in. Finalize prevailing SCC tree members.
(read_cgraph_and_symbols): Do not initialize or free gimple_type_leader.
Allocate and free tree_scc_hash_obstack and tree_scc_hash, do not bother
to ggc-collect during merging.
(print_lto_report_1): Adjust for new merging code.
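The hash_scc / scc_entry_compare machinery in the ChangeLog can be illustrated with a generic scheme (an assumption-laden sketch, not GCC's actual hash function): hash each member locally, mix in successor hashes for one round so the hash reflects the SCC's shape, then combine the per-member hashes commutatively so the result does not depend on which member the DFS entered first:

```c
#include <assert.h>
#include <stdint.h>

#define MAX 8

/* Hypothetical SCC member: a local hash plus edges within the SCC.  */
struct snode { uint32_t local; int nsucc; int succ[2]; };

/* One propagation round mixes each member's hash with its successors'
   previous-round hashes; the final fold uses + and ^ (both commutative)
   so the member order in SCC[] does not matter.  */
static uint32_t hash_scc_sketch (const struct snode *scc, int n)
{
  uint32_t h0[MAX], h1[MAX];
  for (int i = 0; i < n; i++)
    h0[i] = scc[i].local * 2654435761u;
  for (int i = 0; i < n; i++)
    {
      h1[i] = h0[i];
      for (int j = 0; j < scc[i].nsucc; j++)
        h1[i] = h1[i] * 31 + h0[scc[i].succ[j]];
    }
  uint32_t sum = 0, x = 0;
  for (int i = 0; i < n; i++)
    {
      sum += h1[i];
      x ^= h1[i] * 0x9e3779b9u;
    }
  return sum ^ x;
}
```

With an order-independent hash, two compile units that discovered the same SCC from different entry points still land in the same hash bucket, and only then does the expensive member-wise comparison run.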

Index: trunk/gcc/lto-streamer.h
===================================================================
*** trunk.orig/gcc/lto-streamer.h 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/lto-streamer.h 2013-06-13 14:35:53.249107976 +0200
*************** enum LTO_tags
*** 199,204 ****
--- 199,207 ----
/* EH try/catch node. */
LTO_eh_catch,

+ /* Special for global streamer. A blob of unnamed tree nodes. */
+ LTO_tree_scc,
+
/* References to indexable tree nodes. These objects are stored in
tables that are written separately from the function bodies that
reference them. This way they can be instantiated even when the
*************** tree lto_input_tree_ref (struct lto_inpu
*** 854,859 ****
--- 857,866 ----
struct function *, enum LTO_tags);
void lto_tag_check_set (enum LTO_tags, int, ...);
void lto_init_eh (void);
+ hashval_t lto_input_scc (struct lto_input_block *, struct data_in *,
+ unsigned *, unsigned *);
+ tree lto_input_tree_1 (struct lto_input_block *, struct data_in *,
+ enum LTO_tags, hashval_t hash);
tree lto_input_tree (struct lto_input_block *, struct data_in *);


Index: trunk/gcc/lto-streamer-in.c
===================================================================
*** trunk.orig/gcc/lto-streamer-in.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-streamer-in.c 2013-06-13 14:35:53.250107988 +0200
*************** lto_read_body (struct lto_file_decl_data
*** 1016,1022 ****
unsigned i;
for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
if (t == NULL_TREE)
continue;

--- 1016,1022 ----
unsigned i;
for (i = len; i-- > from;)
{
! tree t = streamer_tree_cache_get_tree (cache, i);
if (t == NULL_TREE)
continue;

*************** lto_input_function_body (struct lto_file
*** 1056,1067 ****
}


/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
--- 1056,1098 ----
}


+ /* Read the physical representation of a tree node EXPR from
+ input block IB using the per-file context in DATA_IN. */
+
+ static void
+ lto_read_tree_1 (struct lto_input_block *ib, struct data_in *data_in, tree expr)
+ {
+ /* Read all the bitfield values in EXPR. Note that for LTO, we
+ only write language-independent bitfields, so no more unpacking is
+ needed. */
+ streamer_read_tree_bitfields (ib, data_in, expr);
+
+ /* Read all the pointer fields in EXPR. */
+ streamer_read_tree_body (ib, data_in, expr);
+
+ /* Read any LTO-specific data not read by the tree streamer. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ DECL_INITIAL (expr) = stream_read_tree (ib, data_in);
+
+ /* We should never try to instantiate an MD or NORMAL builtin here. */
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ gcc_assert (!streamer_handle_as_builtin_p (expr));
+
+ #ifdef LTO_STREAMER_DEBUG
+ /* Remove the mapping to RESULT's original address set by
+ streamer_alloc_tree. */
+ lto_orig_address_remove (expr);
+ #endif
+ }
+
/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
*************** lto_read_tree (struct lto_input_block *i
*** 1069,1103 ****
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result);

! /* Read all the bitfield values in RESULT. Note that for LTO, we
! only write language-independent bitfields, so no more unpacking is
! needed. */
! streamer_read_tree_bitfields (ib, data_in, result);

! /* Read all the pointer fields in RESULT. */
! streamer_read_tree_body (ib, data_in, result);

! /* Read any LTO-specific data not read by the tree streamer. */
! if (DECL_P (result)
! && TREE_CODE (result) != FUNCTION_DECL
! && TREE_CODE (result) != TRANSLATION_UNIT_DECL)
! DECL_INITIAL (result) = stream_read_tree (ib, data_in);

- /* We should never try to instantiate an MD or NORMAL builtin here. */
- if (TREE_CODE (result) == FUNCTION_DECL)
- gcc_assert (!streamer_handle_as_builtin_p (result));

! /* end_marker = */ streamer_read_uchar (ib);

! #ifdef LTO_STREAMER_DEBUG
! /* Remove the mapping to RESULT's original address set by
! streamer_alloc_tree. */
! lto_orig_address_remove (result);
! #endif

! return result;
}


--- 1100,1169 ----
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result, hash);

! lto_read_tree_1 (ib, data_in, result);

! /* end_marker = */ streamer_read_uchar (ib);

! return result;
! }


! /* Populate the reader cache with trees materialized from the SCC
! following in the IB, DATA_IN stream. */

! hashval_t
! lto_input_scc (struct lto_input_block *ib, struct data_in *data_in,
! unsigned *len, unsigned *entry_len)
! {
! /* A blob of unnamed tree nodes, fill the cache from it and
! recurse. */
! unsigned size = streamer_read_uhwi (ib);
! hashval_t scc_hash = streamer_read_uhwi (ib);
! unsigned scc_entry_len = 1;

! if (size == 1)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! lto_input_tree_1 (ib, data_in, tag, scc_hash);
! }
! else
! {
! unsigned int first = data_in->reader_cache->nodes.length ();
! tree result;
!
! scc_entry_len = streamer_read_uhwi (ib);
!
! /* Materialize size trees by reading their headers. */
! for (unsigned i = 0; i < size; ++i)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! if (tag == LTO_null
! || (tag >= LTO_field_decl_ref && tag <= LTO_global_decl_ref)
! || tag == LTO_tree_pickle_reference
! || tag == LTO_builtin_decl
! || tag == LTO_integer_cst
! || tag == LTO_tree_scc)
! gcc_unreachable ();
!
! result = streamer_alloc_tree (ib, data_in, tag);
! streamer_tree_cache_append (data_in->reader_cache, result, 0);
! }
!
! /* Read the tree bitpacks and references. */
! for (unsigned i = 0; i < size; ++i)
! {
! result = streamer_tree_cache_get_tree (data_in->reader_cache,
! first + i);
! lto_read_tree_1 (ib, data_in, result);
! /* end_marker = */ streamer_read_uchar (ib);
! }
! }
!
! *len = size;
! *entry_len = scc_entry_len;
! return scc_hash;
}


*************** lto_read_tree (struct lto_input_block *i
*** 1106,1117 ****
to previously read nodes. */

tree
! lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
{
- enum LTO_tags tag;
tree result;

- tag = streamer_read_record_start (ib);
gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
--- 1172,1182 ----
to previously read nodes. */

tree
! lto_input_tree_1 (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
tree result;

gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
*************** lto_input_tree (struct lto_input_block *
*** 1137,1155 ****
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants we only need the type and its hi/low
! words. */
! result = streamer_read_integer_cst (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag);
}

return result;
}


/* Input toplevel asms. */

--- 1202,1240 ----
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants in singletons we can use the existing
! tree integer constant merging code. */
! tree type = stream_read_tree (ib, data_in);
! unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
! HOST_WIDE_INT high = streamer_read_hwi (ib);
! result = build_int_cst_wide (type, low, high);
! streamer_tree_cache_append (data_in->reader_cache, result, hash);
! }
! else if (tag == LTO_tree_scc)
! {
! unsigned len, entry_len;
!
! /* Input and skip the SCC. */
! lto_input_scc (ib, data_in, &len, &entry_len);
!
! /* Recurse. */
! return lto_input_tree (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag, hash);
}

return result;
}

+ tree
+ lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
+ {
+ return lto_input_tree_1 (ib, data_in, streamer_read_record_start (ib), 0);
+ }
+

/* Input toplevel asms. */

*************** lto_data_in_create (struct lto_file_decl
*** 1220,1226 ****
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create ();

return data_in;
}
--- 1305,1311 ----
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create (false);

return data_in;
}
Index: trunk/gcc/lto-streamer-out.c
===================================================================
*** trunk.orig/gcc/lto-streamer-out.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-streamer-out.c 2013-06-13 15:35:29.920956587 +0200
*************** create_output_block (enum lto_section_ty
*** 71,77 ****
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create ();

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
--- 71,77 ----
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create (!flag_wpa);

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
*************** lto_is_streamable (tree expr)
*** 293,319 ****
}


/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
{
- struct bitpack_d bp;
-
- if (!lto_is_streamable (expr))
- internal_error ("tree code %qs is not supported in LTO streams",
- tree_code_name[TREE_CODE (expr)]);
-
- /* Write the header, containing everything needed to materialize
- EXPR on the reading side. */
- streamer_write_tree_header (ob, expr);
-
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

--- 293,339 ----
}


+ /* For EXPR, look up and return what we want to stream to OB as DECL_INITIAL. */
+
+ static tree
+ get_symbol_initial_value (struct output_block *ob, tree expr)
+ {
+ gcc_checking_assert (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL);
+
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = DECL_INITIAL (expr);
+ if (TREE_CODE (expr) == VAR_DECL
+ && (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
+ && initial)
+ {
+ lto_symtab_encoder_t encoder;
+ struct varpool_node *vnode;
+
+ encoder = ob->decl_state->symtab_node_encoder;
+ vnode = varpool_get_node (expr);
+ if (!vnode
+ || !lto_symtab_encoder_encode_initializer_p (encoder,
+ vnode))
+ initial = error_mark_node;
+ }
+
+ return initial;
+ }
+
+
/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree_1 (struct output_block *ob, tree expr, bool ref_p)
{
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bitpack_d bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

*************** lto_write_tree (struct output_block *ob,
*** 345,366 ****

stream_write_tree (ob, initial, ref_p);
}

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

-
/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */

! void
! lto_output_tree (struct output_block *ob, tree expr,
! bool ref_p, bool this_ref_p)
{
unsigned ix;
- bool existed_p;

if (expr == NULL_TREE)
{
--- 365,403 ----

stream_write_tree (ob, initial, ref_p);
}
+ }
+
+ /* Write a physical representation of tree node EXPR to output block
+ OB. If REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. IX is the index into the streamer cache
+ where EXPR is stored. */
+
+ static void
+ lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
+ {
+ if (!lto_is_streamable (expr))
+ internal_error ("tree code %qs is not supported in LTO streams",
+ tree_code_name[TREE_CODE (expr)]);
+
+ /* Write the header, containing everything needed to materialize
+ EXPR on the reading side. */
+ streamer_write_tree_header (ob, expr);
+
+ lto_write_tree_1 (ob, expr, ref_p);

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */

! static void
! lto_output_tree_1 (struct output_block *ob, tree expr, hashval_t hash,
! bool ref_p, bool this_ref_p)
{
unsigned ix;

if (expr == NULL_TREE)
{
*************** lto_output_tree (struct output_block *ob
*** 374,391 ****
return;
}

! /* Shared INTEGER_CST nodes are special because they need their original type
! to be materialized by the reader (to implement TYPE_CACHED_VALUES). */
! if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
{
streamer_write_integer_cst (ob, expr, ref_p);
return;
}

! existed_p = streamer_tree_cache_insert (ob->writer_cache, expr, &ix);
if (existed_p)
{
/* If a node has already been streamed out, make sure that
we don't write it more than once. Otherwise, the reader
will instantiate two different nodes for the same object. */
--- 411,1328 ----
return;
}

! gcc_checking_assert (!streamer_tree_cache_lookup (ob->writer_cache, expr, &ix));
!
! streamer_tree_cache_insert (ob->writer_cache, expr, hash, &ix);
! if (streamer_handle_as_builtin_p (expr))
! {
! /* MD and NORMAL builtins do not need to be written out
! completely as they are always instantiated by the
! compiler on startup. The only builtins that need to
! be written out are BUILT_IN_FRONTEND. For all other
! builtins, we simply write the class and code. */
! streamer_write_builtin (ob, expr);
! }
! else if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
{
+ /* Shared INTEGER_CST nodes are special because they need their
+ original type to be materialized by the reader (to implement
+ TYPE_CACHED_VALUES). */
streamer_write_integer_cst (ob, expr, ref_p);
+ }
+ else
+ {
+ /* This is the first time we see EXPR, write its fields
+ to OB. */
+ lto_write_tree (ob, expr, ref_p);
+ }
+ }
+
+ struct sccs
+ {
+ unsigned int dfsnum;
+ unsigned int low;
+ };
+
+ struct scc_entry
+ {
+ tree t;
+ hashval_t hash;
+ };
+
+ static unsigned int next_dfs_num;
+ static vec<scc_entry> sccstack;
+ static struct pointer_map_t *sccstate;
+ static struct obstack sccstate_obstack;
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p);
+
+ /* Handle the tree EXPR in the DFS walk with SCC state EXPR_STATE and
+ DFS recurse for all tree edges originating from it. */
+
+ static void
+ DFS_write_tree_body (struct output_block *ob,
+ tree expr, sccs *expr_state, bool ref_p)
+ {
+ #define DFS_follow_tree_edge(DEST) \
+ DFS_write_tree (ob, expr_state, DEST, ref_p, ref_p)
+
+ enum tree_code code;
+
+ code = TREE_CODE (expr);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (TREE_CODE (expr) != IDENTIFIER_NODE)
+ DFS_follow_tree_edge (TREE_TYPE (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ {
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (expr); ++i)
+ DFS_follow_tree_edge (VECTOR_CST_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ DFS_follow_tree_edge (TREE_REALPART (expr));
+ DFS_follow_tree_edge (TREE_IMAGPART (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ DFS_follow_tree_edge (DECL_NAME (expr));
+ DFS_follow_tree_edge (DECL_CONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ DFS_follow_tree_edge (DECL_SIZE (expr));
+ DFS_follow_tree_edge (DECL_SIZE_UNIT (expr));
+
+ /* Note, DECL_INITIAL is not handled here. Since DECL_INITIAL needs
+ special handling in LTO, it must be handled by streamer hooks. */
+
+ DFS_follow_tree_edge (DECL_ATTRIBUTES (expr));
+
+ /* Do not follow DECL_ABSTRACT_ORIGIN. We cannot handle debug information
+ for early inlining so drop it on the floor instead of ICEing in
+ dwarf2out.c. */
+
+ if ((TREE_CODE (expr) == VAR_DECL
+ || TREE_CODE (expr) == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (expr))
+ DFS_follow_tree_edge (DECL_VALUE_EXPR (expr));
+ if (TREE_CODE (expr) == VAR_DECL)
+ DFS_follow_tree_edge (DECL_DEBUG_EXPR (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ {
+ for (tree t = DECL_ARGUMENTS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (DECL_RESULT (expr));
+ }
+ else if (TREE_CODE (expr) == TYPE_DECL)
+ DFS_follow_tree_edge (DECL_ORIGINAL_TYPE (expr));
+ DFS_follow_tree_edge (DECL_VINDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ /* Make sure we don't inadvertently set the assembler name. */
+ if (DECL_ASSEMBLER_NAME_SET_P (expr))
+ DFS_follow_tree_edge (DECL_ASSEMBLER_NAME (expr));
+ DFS_follow_tree_edge (DECL_SECTION_NAME (expr));
+ DFS_follow_tree_edge (DECL_COMDAT_GROUP (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FIELD_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_TYPE (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_REPRESENTATIVE (expr));
+ DFS_follow_tree_edge (DECL_FIELD_BIT_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_FCONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FUNCTION_PERSONALITY (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_TARGET (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ DFS_follow_tree_edge (TYPE_SIZE (expr));
+ DFS_follow_tree_edge (TYPE_SIZE_UNIT (expr));
+ DFS_follow_tree_edge (TYPE_ATTRIBUTES (expr));
+ DFS_follow_tree_edge (TYPE_NAME (expr));
+ /* Do not follow TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
+ reconstructed during fixup. */
+ /* Do not follow TYPE_NEXT_VARIANT, we reconstruct the variant lists
+ during fixup. */
+ DFS_follow_tree_edge (TYPE_MAIN_VARIANT (expr));
+ DFS_follow_tree_edge (TYPE_CONTEXT (expr));
+ /* TYPE_CANONICAL is re-computed during type merging, so no need
+ to follow it here. */
+ DFS_follow_tree_edge (TYPE_STUB_DECL (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == ENUMERAL_TYPE)
+ DFS_follow_tree_edge (TYPE_VALUES (expr));
+ else if (TREE_CODE (expr) == ARRAY_TYPE)
+ DFS_follow_tree_edge (TYPE_DOMAIN (expr));
+ else if (RECORD_OR_UNION_TYPE_P (expr))
+ for (tree t = TYPE_FIELDS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ else if (TREE_CODE (expr) == FUNCTION_TYPE
+ || TREE_CODE (expr) == METHOD_TYPE)
+ DFS_follow_tree_edge (TYPE_ARG_TYPES (expr));
+
+ if (!POINTER_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_MINVAL (expr));
+ DFS_follow_tree_edge (TYPE_MAXVAL (expr));
+ if (RECORD_OR_UNION_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_BINFO (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ DFS_follow_tree_edge (TREE_PURPOSE (expr));
+ DFS_follow_tree_edge (TREE_VALUE (expr));
+ DFS_follow_tree_edge (TREE_CHAIN (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ {
+ for (int i = 0; i < TREE_VEC_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_VEC_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ for (int i = 0; i < TREE_OPERAND_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_OPERAND (expr, i));
+ DFS_follow_tree_edge (TREE_BLOCK (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
+ {
+ for (tree t = BLOCK_VARS (expr); t; t = TREE_CHAIN (t))
+ /* ??? FIXME. See also streamer_write_chain. */
+ if (!(VAR_OR_FUNCTION_DECL_P (t)
+ && DECL_EXTERNAL (t)))
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BLOCK_SUPERCONTEXT (expr));
+
+ /* Follow BLOCK_ABSTRACT_ORIGIN for the limited cases we can
+ handle - those that represent inlined function scopes.
+ For the rest, drop them on the floor instead of ICEing
+ in dwarf2out.c. */
+ if (inlined_function_outer_scope_p (expr))
+ {
+ tree ultimate_origin = block_ultimate_origin (expr);
+ DFS_follow_tree_edge (ultimate_origin);
+ }
+ /* Do not follow BLOCK_NONLOCALIZED_VARS. We cannot handle debug
+ information for early inlined BLOCKs so drop it on the floor instead
+ of ICEing in dwarf2out.c. */
+
+ /* BLOCK_FRAGMENT_ORIGIN and BLOCK_FRAGMENT_CHAIN is not live at LTO
+ streaming time. */
+
+ /* Do not output BLOCK_SUBBLOCKS. Instead on streaming-in this
+ list is re-constructed from BLOCK_SUPERCONTEXT. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree t;
+
+ /* Note that the number of BINFO slots has already been emitted in
+ EXPR's header (see streamer_write_tree_header) because this length
+ is needed to build the empty BINFO node on the reader side. */
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (expr), i, t)
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (BINFO_OFFSET (expr));
+ DFS_follow_tree_edge (BINFO_VTABLE (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_FIELD (expr));
+
+ /* The number of BINFO_BASE_ACCESSES has already been emitted in
+ EXPR's bitfield section. */
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (expr), i, t)
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BINFO_INHERITANCE_CHAIN (expr));
+ DFS_follow_tree_edge (BINFO_SUBVTT_INDEX (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_INDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (expr), i, index, value)
+ {
+ DFS_follow_tree_edge (index);
+ DFS_follow_tree_edge (value);
+ }
+ }
+
+ #undef DFS_follow_tree_edge
+ }
+
+ /* Return a hash value for the tree T. */
+
+ static hashval_t
+ hash_tree (struct streamer_tree_cache_d *cache, tree t)
+ {
+ #define visit(SIBLING) \
+ do { \
+ unsigned ix; \
+ if (SIBLING && streamer_tree_cache_lookup (cache, SIBLING, &ix)) \
+ v = iterative_hash_hashval_t (streamer_tree_cache_get_hash (cache, ix), v); \
+ } while (0)
+
+ /* Hash TS_BASE. */
+ enum tree_code code = TREE_CODE (t);
+ hashval_t v = iterative_hash_host_wide_int (code, 0);
+ if (!TYPE_P (t))
+ {
+ v = iterative_hash_host_wide_int (TREE_SIDE_EFFECTS (t)
+ | (TREE_CONSTANT (t) << 1)
+ | (TREE_READONLY (t) << 2)
+ | (TREE_PUBLIC (t) << 3), v);
+ }
+ v = iterative_hash_host_wide_int (TREE_ADDRESSABLE (t)
+ | (TREE_THIS_VOLATILE (t) << 1), v);
+ if (DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_UNSIGNED (t), v);
+ else if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_UNSIGNED (t), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_ARTIFICIAL (t), v);
+ else
+ v = iterative_hash_host_wide_int (TREE_NO_WARNING (t), v);
+ v = iterative_hash_host_wide_int (TREE_NOTHROW (t)
+ | (TREE_STATIC (t) << 1)
+ | (TREE_PRIVATE (t) << 2)
+ | (TREE_PROTECTED (t) << 3)
+ | (TREE_DEPRECATED (t) << 4), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_SATURATING (t)
+ | (TYPE_ADDR_SPACE (t) << 1), v);
+ else if (code == SSA_NAME)
+ v = iterative_hash_host_wide_int (SSA_NAME_IS_DEFAULT_DEF (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
+ {
+ v = iterative_hash_host_wide_int (TREE_INT_CST_LOW (t), v);
+ v = iterative_hash_host_wide_int (TREE_INT_CST_HIGH (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
+ {
+ REAL_VALUE_TYPE r = TREE_REAL_CST (t);
+ v = iterative_hash_host_wide_int (r.cl, v);
+ v = iterative_hash_host_wide_int (r.decimal
+ | (r.sign << 1)
+ | (r.signalling << 2)
+ | (r.canonical << 3), v);
+ v = iterative_hash_host_wide_int (r.uexp, v);
+ for (unsigned i = 0; i < SIGSZ; ++i)
+ v = iterative_hash_host_wide_int (r.sig[i], v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
+ {
+ FIXED_VALUE_TYPE f = TREE_FIXED_CST (t);
+ v = iterative_hash_host_wide_int (f.mode, v);
+ v = iterative_hash_host_wide_int (f.data.low, v);
+ v = iterative_hash_host_wide_int (f.data.high, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ v = iterative_hash_host_wide_int (DECL_MODE (t), v);
+ v = iterative_hash_host_wide_int (DECL_NONLOCAL (t)
+ | (DECL_VIRTUAL_P (t) << 1)
+ | (DECL_IGNORED_P (t) << 2)
+ | (DECL_ABSTRACT (t) << 3)
+ | (DECL_ARTIFICIAL (t) << 4)
+ | (DECL_USER_ALIGN (t) << 5)
+ | (DECL_PRESERVE_P (t) << 6)
+ | (DECL_EXTERNAL (t) << 7)
+ | (DECL_GIMPLE_REG_P (t) << 8), v);
+ v = iterative_hash_host_wide_int (DECL_ALIGN (t), v);
+ if (code == LABEL_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_ERROR_ISSUED (t), v);
+ v = iterative_hash_host_wide_int (EH_LANDING_PAD_NR (t), v);
+ v = iterative_hash_host_wide_int (LABEL_DECL_UID (t), v);
+ }
+ else if (code == FIELD_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_PACKED (t)
+ | (DECL_NONADDRESSABLE_P (t) << 1),
+ v);
+ v = iterative_hash_host_wide_int (DECL_OFFSET_ALIGN (t), v);
+ }
+ else if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HAS_DEBUG_EXPR_P (t)
+ | (DECL_NONLOCAL_FRAME (t) << 1),
+ v);
+ }
+ if (code == RESULT_DECL
+ || code == PARM_DECL
+ || code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_BY_REFERENCE (t), v);
+ if (code == VAR_DECL
+ || code == PARM_DECL)
+ v = iterative_hash_host_wide_int (DECL_HAS_VALUE_EXPR_P (t), v);
+ }
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
+ v = iterative_hash_host_wide_int (DECL_REGISTER (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ v = iterative_hash_host_wide_int (DECL_DEFER_OUTPUT (t)
+ | (DECL_COMMON (t) << 1)
+ | (DECL_DLLIMPORT_P (t) << 2)
+ | (DECL_WEAK (t) << 3)
+ | (DECL_SEEN_IN_BIND_EXPR_P (t) << 4)
+ | (DECL_COMDAT (t) << 5)
+ | (DECL_VISIBILITY_SPECIFIED (t) << 6),
+ v);
+ v = iterative_hash_host_wide_int (DECL_VISIBILITY (t), v);
+ if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HARD_REGISTER (t)
+ | (DECL_IN_TEXT_SECTION (t) << 1)
+ | (DECL_IN_CONSTANT_POOL (t) << 2),
+ v);
+ v = iterative_hash_host_wide_int (DECL_TLS_MODEL (t), v);
+ }
+ if (VAR_OR_FUNCTION_DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_INIT_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ v = iterative_hash_host_wide_int (DECL_BUILT_IN_CLASS (t), v);
+ v = iterative_hash_host_wide_int (DECL_STATIC_CONSTRUCTOR (t)
+ | (DECL_STATIC_DESTRUCTOR (t) << 1)
+ | (DECL_UNINLINABLE (t) << 2)
+ | (DECL_POSSIBLY_INLINED (t) << 3)
+ | (DECL_IS_NOVOPS (t) << 4)
+ | (DECL_IS_RETURNS_TWICE (t) << 5)
+ | (DECL_IS_MALLOC (t) << 6)
+ | (DECL_IS_OPERATOR_NEW (t) << 7)
+ | (DECL_DECLARED_INLINE_P (t) << 8)
+ | (DECL_STATIC_CHAIN (t) << 9)
+ | (DECL_NO_INLINE_WARNING_P (t) << 10)
+ | (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT (t) << 11)
+ | (DECL_NO_LIMIT_STACK (t) << 12)
+ | (DECL_DISREGARD_INLINE_LIMITS (t) << 13)
+ | (DECL_PURE_P (t) << 14)
+ | (DECL_LOOPING_CONST_OR_PURE_P (t) << 15), v);
+ if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
+ v = iterative_hash_host_wide_int (DECL_FUNCTION_CODE (t), v);
+ if (DECL_STATIC_DESTRUCTOR (t))
+ v = iterative_hash_host_wide_int (DECL_FINI_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ v = iterative_hash_host_wide_int (TYPE_MODE (t), v);
+ v = iterative_hash_host_wide_int (TYPE_STRING_FLAG (t)
+ | (TYPE_NO_FORCE_BLK (t) << 1)
+ | (TYPE_NEEDS_CONSTRUCTING (t) << 2)
+ | (TYPE_PACKED (t) << 3)
+ | (TYPE_RESTRICT (t) << 4)
+ | (TYPE_CONTAINS_PLACEHOLDER_INTERNAL (t) << 5)
+ | (TYPE_USER_ALIGN (t) << 6)
+ | (TYPE_READONLY (t) << 7), v);
+ if (RECORD_OR_UNION_TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_TRANSPARENT_AGGR (t), v);
+ else if (code == ARRAY_TYPE)
+ v = iterative_hash_host_wide_int (TYPE_NONALIASED_COMPONENT (t), v);
+ v = iterative_hash_host_wide_int (TYPE_PRECISION (t), v);
+ v = iterative_hash_host_wide_int (TYPE_ALIGN (t), v);
+ v = iterative_hash_host_wide_int (TYPE_ALIAS_SET (t) == 0 ? 0 : -1, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
+ v = iterative_hash (TRANSLATION_UNIT_LANGUAGE (t),
+ strlen (TRANSLATION_UNIT_LANGUAGE (t)), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
+ v = iterative_hash (t, sizeof (struct cl_target_option), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
+ v = iterative_hash (t, sizeof (struct cl_optimization), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
+ v = iterative_hash_host_wide_int (IDENTIFIER_HASH_VALUE (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_STRING))
+ v = iterative_hash (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (POINTER_TYPE_P (t))
+ {
+ /* For pointers factor in the pointed-to type recursively as
+ we cannot recurse through only pointers.
+ ??? We can generalize this by keeping track of the
+ in-SCC edges for each tree (or arbitrarily the first
+ such edge) and hashing that in during a second stage
+ (instead of the quadratic mixing of the SCC we do now). */
+ hashval_t x;
+ unsigned ix;
+ if (streamer_tree_cache_lookup (cache, TREE_TYPE (t), &ix))
+ x = streamer_tree_cache_get_hash (cache, ix);
+ else
+ x = hash_tree (cache, TREE_TYPE (t));
+ v = iterative_hash_hashval_t (x, v);
+ }
+ else if (code != IDENTIFIER_NODE)
+ visit (TREE_TYPE (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (t); ++i)
+ visit (VECTOR_CST_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ visit (TREE_REALPART (t));
+ visit (TREE_IMAGPART (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ visit (DECL_NAME (t));
+ if (DECL_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (DECL_CONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ visit (DECL_SIZE (t));
+ visit (DECL_SIZE_UNIT (t));
+ visit (DECL_ATTRIBUTES (t));
+ if ((code == VAR_DECL
+ || code == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (t))
+ visit (DECL_VALUE_EXPR (t));
+ if (code == VAR_DECL
+ && DECL_HAS_DEBUG_EXPR_P (t))
+ visit (DECL_DEBUG_EXPR (t));
+ /* ??? Hash DECL_INITIAL as streamed. Needs the output-block to
+ be able to call get_symbol_initial_value. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (code == FUNCTION_DECL)
+ {
+ for (tree a = DECL_ARGUMENTS (t); a; a = DECL_CHAIN (a))
+ visit (a);
+ visit (DECL_RESULT (t));
+ }
+ else if (code == TYPE_DECL)
+ visit (DECL_ORIGINAL_TYPE (t));
+ visit (DECL_VINDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ if (DECL_ASSEMBLER_NAME_SET_P (t))
+ visit (DECL_ASSEMBLER_NAME (t));
+ visit (DECL_SECTION_NAME (t));
+ visit (DECL_COMDAT_GROUP (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ visit (DECL_FIELD_OFFSET (t));
+ visit (DECL_BIT_FIELD_TYPE (t));
+ visit (DECL_BIT_FIELD_REPRESENTATIVE (t));
+ visit (DECL_FIELD_BIT_OFFSET (t));
+ visit (DECL_FCONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ visit (DECL_FUNCTION_PERSONALITY (t));
+ visit (DECL_FUNCTION_SPECIFIC_TARGET (t));
+ visit (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ visit (TYPE_SIZE (t));
+ visit (TYPE_SIZE_UNIT (t));
+ visit (TYPE_ATTRIBUTES (t));
+ visit (TYPE_NAME (t));
+ visit (TYPE_MAIN_VARIANT (t));
+ if (TYPE_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (TYPE_CONTEXT (t));
+ visit (TYPE_STUB_DECL (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (code == ENUMERAL_TYPE)
+ visit (TYPE_VALUES (t));
+ else if (code == ARRAY_TYPE)
+ visit (TYPE_DOMAIN (t));
+ else if (RECORD_OR_UNION_TYPE_P (t))
+ for (tree f = TYPE_FIELDS (t); f; f = TREE_CHAIN (f))
+ visit (f);
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ visit (TYPE_ARG_TYPES (t));
+ if (!POINTER_TYPE_P (t))
+ visit (TYPE_MINVAL (t));
+ visit (TYPE_MAXVAL (t));
+ if (RECORD_OR_UNION_TYPE_P (t))
+ visit (TYPE_BINFO (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ visit (TREE_PURPOSE (t));
+ visit (TREE_VALUE (t));
+ visit (TREE_CHAIN (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ for (int i = 0; i < TREE_VEC_LENGTH (t); ++i)
+ visit (TREE_VEC_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ v = iterative_hash_host_wide_int (TREE_OPERAND_LENGTH (t), v);
+ for (int i = 0; i < TREE_OPERAND_LENGTH (t); ++i)
+ visit (TREE_OPERAND (t, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree b;
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t), i, b)
+ visit (b);
+ visit (BINFO_OFFSET (t));
+ visit (BINFO_VTABLE (t));
+ visit (BINFO_VPTR_FIELD (t));
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t), i, b)
+ visit (b);
+ visit (BINFO_INHERITANCE_CHAIN (t));
+ visit (BINFO_SUBVTT_INDEX (t));
+ visit (BINFO_VPTR_INDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+ v = iterative_hash_host_wide_int (CONSTRUCTOR_NELTS (t), v);
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t), i, index, value)
+ {
+ visit (index);
+ visit (value);
+ }
+ }
+
+ return v;
+
+ #undef visit
+ }
+
+ /* Compare two SCC entries by their hash value for qsorting them. */
+
+ static int
+ scc_entry_compare (const void *p1_, const void *p2_)
+ {
+ const scc_entry *p1 = (const scc_entry *) p1_;
+ const scc_entry *p2 = (const scc_entry *) p2_;
+ if (p1->hash < p2->hash)
+ return -1;
+ else if (p1->hash > p2->hash)
+ return 1;
+ return 0;
+ }
+
+ /* Return a hash value for the SCC on the SCC stack from FIRST with
+ size SIZE. */
+
+ static hashval_t
+ hash_scc (struct streamer_tree_cache_d *cache, unsigned first, unsigned size)
+ {
+ /* Compute hash values for the SCC members. */
+ for (unsigned i = 0; i < size; ++i)
+ sccstack[first+i].hash = hash_tree (cache, sccstack[first+i].t);
+
+ if (size == 1)
+ return sccstack[first].hash;
+
+ /* Sort the SCC of (tree, hash) pairs so that, when we mix in
+ all members of the SCC, the hash value becomes independent of
+ the order in which we visited the SCC. Disregard hashes equal
+ to the hash of the tree we mix into because we cannot guarantee
+ a stable sort for those across different TUs. */
+ qsort (&sccstack[first], size, sizeof (scc_entry), scc_entry_compare);
+ hashval_t *tem = XALLOCAVEC (hashval_t, size);
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ hashval_t orig_hash = hash;
+ unsigned j;
+ /* Skip same hashes. */
+ for (j = i + 1;
+ j < size && sccstack[first+j].hash == orig_hash; ++j)
+ ;
+ for (; j < size; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ for (j = 0; sccstack[first+j].hash != orig_hash; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ tem[i] = hash;
+ }
+ hashval_t scc_hash = 0;
+ for (unsigned i = 0; i < size; ++i)
+ {
+ sccstack[first+i].hash = tem[i];
+ scc_hash = iterative_hash_hashval_t (tem[i], scc_hash);
+ }
+ return scc_hash;
+ }
+
+ /* DFS walk EXPR and stream SCCs of tree bodies if they are not
+ already in the streamer cache. Main routine called for
+ each visit of EXPR. */
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ sccs **slot;
+
+ /* Handle special cases. */
+ if (expr == NULL_TREE)
+ return;
+
+ /* Do not DFS walk into indexable trees. */
+ if (this_ref_p && tree_is_indexable (expr))
+ return;
+
+ /* Check if we already streamed EXPR. */
+ if (streamer_tree_cache_lookup (ob->writer_cache, expr, &ix))
+ return;
+
+ slot = (sccs **)pointer_map_insert (sccstate, expr);
+ sccs *cstate = *slot;
+ if (!cstate)
+ {
+ scc_entry e = { expr, 0 };
+ /* Not yet visited. DFS recurse and push it onto the stack. */
+ *slot = cstate = XOBNEW (&sccstate_obstack, struct sccs);
+ sccstack.safe_push (e);
+ cstate->dfsnum = next_dfs_num++;
+ cstate->low = cstate->dfsnum;
+
+ if (streamer_handle_as_builtin_p (expr))
+ ;
+ else if (TREE_CODE (expr) == INTEGER_CST
+ && !TREE_OVERFLOW (expr))
+ DFS_write_tree (ob, cstate, TREE_TYPE (expr), ref_p, ref_p);
+ else
+ {
+ DFS_write_tree_body (ob, expr, cstate, ref_p);
+
+ /* Walk any LTO-specific edges. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ {
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = get_symbol_initial_value (ob, expr);
+ DFS_write_tree (ob, cstate, initial, ref_p, ref_p);
+ }
+ }
+
+ /* See if we found an SCC. */
+ if (cstate->low == cstate->dfsnum)
+ {
+ unsigned first, size;
+ tree x;
+
+ /* Pop the SCC and compute its size. */
+ first = sccstack.length ();
+ do
+ {
+ x = sccstack[--first].t;
+ }
+ while (x != expr);
+ size = sccstack.length () - first;
+
+ /* No need to compute hashes for LTRANS units; we don't perform
+ any merging there. */
+ hashval_t scc_hash = 0;
+ unsigned scc_entry_len = 0;
+ if (!flag_wpa)
+ {
+ scc_hash = hash_scc (ob->writer_cache, first, size);
+
+ /* Put the entries with the least number of collisions first. */
+ unsigned entry_start = 0;
+ scc_entry_len = size + 1;
+ for (unsigned i = 0; i < size;)
+ {
+ unsigned from = i;
+ for (i = i + 1; i < size
+ && (sccstack[first + i].hash
+ == sccstack[first + from].hash); ++i)
+ ;
+ if (i - from < scc_entry_len)
+ {
+ scc_entry_len = i - from;
+ entry_start = from;
+ }
+ }
+ for (unsigned i = 0; i < scc_entry_len; ++i)
+ {
+ scc_entry tem = sccstack[first + i];
+ sccstack[first + i] = sccstack[first + entry_start + i];
+ sccstack[first + entry_start + i] = tem;
+ }
+ }
+
+ /* Write LTO_tree_scc. */
+ streamer_write_record_start (ob, LTO_tree_scc);
+ streamer_write_uhwi (ob, size);
+ streamer_write_uhwi (ob, scc_hash);
+
+ /* Write size-1 SCCs without wrapping them inside SCC bundles.
+ All INTEGER_CSTs need to be handled this way as we need
+ their type to materialize them. Also builtins are handled
+ this way.
+ ??? We still wrap these in LTO_tree_scc so at the
+ input side we can properly identify the tree we want
+ to ultimately return. */
+ size_t old_len = ob->writer_cache->nodes.length ();
+ if (size == 1)
+ lto_output_tree_1 (ob, expr, scc_hash, ref_p, this_ref_p);
+ else
+ {
+ /* Write the size of the SCC entry candidates. */
+ streamer_write_uhwi (ob, scc_entry_len);
+
+ /* Write all headers and populate the streamer cache. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ tree t = sccstack[first+i].t;
+ bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
+ t, hash, &ix);
+ gcc_assert (!exists_p);
+
+ if (!lto_is_streamable (t))
+ internal_error ("tree code %qs is not supported "
+ "in LTO streams",
+ tree_code_name[TREE_CODE (t)]);
+
+ gcc_checking_assert (!streamer_handle_as_builtin_p (t));
+
+ /* Write the header, containing everything needed to
+ materialize EXPR on the reading side. */
+ streamer_write_tree_header (ob, t);
+ }
+
+ /* Write the bitpacks and tree references. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ lto_write_tree_1 (ob, sccstack[first+i].t, ref_p);
+
+ /* Mark the end of the tree. */
+ streamer_write_zero (ob);
+ }
+ }
+ gcc_assert (old_len + size == ob->writer_cache->nodes.length ());
+
+ /* Finally truncate the vector. */
+ sccstack.truncate (first);
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ return;
+ }
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ }
+ gcc_checking_assert (from_state);
+ if (cstate->dfsnum < from_state->dfsnum)
+ from_state->low = MIN (cstate->dfsnum, from_state->low);
+ }
+
+ extern unsigned long num_trees_output;
+ extern unsigned long num_grefs_output;
+ extern unsigned long num_lrefs_output;
+
+ /* Emit the physical representation of tree node EXPR to output block
+ OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */
+
+ void
+ lto_output_tree (struct output_block *ob, tree expr,
+ bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ bool existed_p;
+
+ num_trees_output++;
+
+ if (expr == NULL_TREE)
+ {
+ streamer_write_record_start (ob, LTO_null);
+ return;
+ }
+
+ if (this_ref_p && tree_is_indexable (expr))
+ {
+ num_grefs_output++;
+ lto_output_tree_ref (ob, expr);
return;
}

! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
if (existed_p)
{
+ num_lrefs_output++;
/* If a node has already been streamed out, make sure that
we don't write it more than once. Otherwise, the reader
will instantiate two different nodes for the same object. */
*************** lto_output_tree (struct output_block *ob
*** 394,413 ****
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
- else if (streamer_handle_as_builtin_p (expr))
- {
- /* MD and NORMAL builtins do not need to be written out
- completely as they are always instantiated by the
- compiler on startup. The only builtins that need to
- be written out are BUILT_IN_FRONTEND. For all other
- builtins, we simply write the class and code. */
- streamer_write_builtin (ob, expr);
- }
else
{
! /* This is the first time we see EXPR, write its fields
! to OB. */
! lto_write_tree (ob, expr, ref_p);
}
}

--- 1331,1361 ----
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
else
{
! /* This is the first time we see EXPR, write all reachable
! trees to OB. */
!
! /* Start the DFS walk. */
! sccstate = pointer_map_create ();
! gcc_obstack_init (&sccstate_obstack);
! next_dfs_num = 1;
! DFS_write_tree (ob, NULL, expr, ref_p, this_ref_p);
! sccstack.release ();
! pointer_map_destroy (sccstate);
! obstack_free (&sccstate_obstack, NULL);
!
! /* Finally append a reference to the tree we were writing.
! ??? If expr ended up as a singleton we could have
! inlined it here and avoided outputting a reference. */
! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
! gcc_assert (existed_p);
! streamer_write_record_start (ob, LTO_tree_pickle_reference);
! streamer_write_uhwi (ob, ix);
! streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
! lto_tree_code_to_tag (TREE_CODE (expr)));
}
}

Index: trunk/gcc/tree-streamer-in.c
===================================================================
*** trunk.orig/gcc/tree-streamer-in.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer-in.c 2013-06-13 15:35:55.522262466 +0200
*************** unpack_ts_base_value_fields (struct bitp
*** 122,128 ****
TYPE_ARTIFICIAL (expr) = (unsigned) bp_unpack_value (bp, 1);
else
TREE_NO_WARNING (expr) = (unsigned) bp_unpack_value (bp, 1);
- TREE_USED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_NOTHROW (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_STATIC (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
--- 122,127 ----
*************** lto_input_ts_list_tree_pointers (struct
*** 811,817 ****
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = streamer_read_chain (ib, data_in);
}


--- 810,816 ----
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = stream_read_tree (ib, data_in);
}


*************** streamer_read_tree_body (struct lto_inpu
*** 1021,1039 ****
}


- /* Read and INTEGER_CST node from input block IB using the per-file
- context in DATA_IN. */
-
- tree
- streamer_read_integer_cst (struct lto_input_block *ib, struct data_in *data_in)
- {
- tree type = stream_read_tree (ib, data_in);
- unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
- HOST_WIDE_INT high = streamer_read_hwi (ib);
- return build_int_cst_wide (type, low, high);
- }
-
-
/* Read an index IX from input block IB and return the tree node at
DATA_IN->FILE_DATA->GLOBALS_INDEX[IX]. */

--- 1020,1025 ----
*************** streamer_get_pickled_tree (struct lto_in
*** 1047,1053 ****
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

--- 1033,1039 ----
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

*************** streamer_get_builtin_tree (struct lto_in
*** 1091,1097 ****
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result);

return result;
}
--- 1077,1083 ----
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result, 0);

return result;
}
Index: trunk/gcc/lto/lto.c
===================================================================
*** trunk.orig/gcc/lto/lto.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto/lto.c 2013-06-13 15:36:51.976937026 +0200
*************** along with GCC; see the file COPYING3.
*** 45,50 ****
--- 45,51 ----
#include "tree-streamer.h"
#include "splay-tree.h"
#include "lto-partition.h"
+ #include "data-streamer.h"

static GTY(()) tree first_personality_decl;

*************** lto_read_in_decl_state (struct data_in *
*** 254,260 ****
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
--- 255,261 ----
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
*************** lto_read_in_decl_state (struct data_in *
*** 268,274 ****
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
--- 269,275 ----
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get_tree (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
*************** lto_read_in_decl_state (struct data_in *
*** 280,285 ****
--- 281,289 ----



+ /* ??? Old hashing and merging code follows; we keep it for
+ statistics purposes for now. */
+
/* Global type table. FIXME, it should be possible to re-use some
of the type hashing routines in tree.c (type_hash_canon, type_hash_lookup,
etc), but those assume that types were built with the various
*************** struct sccs
*** 366,398 ****
static unsigned int next_dfs_num;
static unsigned int gtc_next_dfs_num;

- /* GIMPLE type merging cache. A direct-mapped cache based on TYPE_UID. */
-
- typedef struct GTY(()) gimple_type_leader_entry_s {
- tree type;
- tree leader;
- } gimple_type_leader_entry;
-
- #define GIMPLE_TYPE_LEADER_SIZE 16381
- static GTY((length("GIMPLE_TYPE_LEADER_SIZE")))
- gimple_type_leader_entry *gimple_type_leader;
-
- /* Lookup an existing leader for T and return it or NULL_TREE, if
- there is none in the cache. */
-
- static inline tree
- gimple_lookup_type_leader (tree t)
- {
- gimple_type_leader_entry *leader;
-
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type != t)
- return NULL_TREE;
-
- return leader->leader;
- }
-
-
/* Return true if T1 and T2 have the same name. If FOR_COMPLETION_P is
true then if any type has no name return false, otherwise return
true if both types have no names. */
--- 370,375 ----
*************** gtc_visit (tree t1, tree t2,
*** 450,456 ****
struct sccs *cstate = NULL;
type_pair_t p;
void **slot;
- tree leader1, leader2;

/* Check first for the obvious case of pointer identity. */
if (t1 == t2)
--- 427,432 ----
*************** gtc_visit (tree t1, tree t2,
*** 507,521 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 483,488 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 879,885 ****
struct obstack sccstate_obstack;
type_pair_t p = NULL;
bool res;
- tree leader1, leader2;

/* Before starting to set up the SCC machinery handle simple cases. */

--- 846,851 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 938,952 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 904,909 ----
*************** gimple_type_eq (const void *p1, const vo
*** 1335,1402 ****
CONST_CAST_TREE (t2));
}

!
! /* Worker for gimple_register_type.
! Register type T in the global type table gimple_types.
! When REGISTERING_MV is false first recurse for the main variant of T. */

static tree
! gimple_register_type_1 (tree t, bool registering_mv)
{
void **slot;
- gimple_type_leader_entry *leader;
-
- /* If we registered this type before return the cached result. */
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type == t)
- return leader->leader;
-
- /* Always register the main variant first. This is important so we
- pick up the non-typedef variants as canonical, otherwise we'll end
- up taking typedef ids for structure tags during comparison.
- It also makes sure that main variants will be merged to main variants.
- As we are operating on a possibly partially fixed up type graph
- do not bother to recurse more than once, otherwise we may end up
- walking in circles.
- If we are registering a main variant it will either remain its
- own main variant or it will be merged to something else in which
- case we do not care for the main variant leader. */
- if (!registering_mv
- && TYPE_MAIN_VARIANT (t) != t)
- gimple_register_type_1 (TYPE_MAIN_VARIANT (t), true);

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! {
! tree new_type = (tree) *((tree *) slot);
! leader->type = t;
! leader->leader = new_type;
! return new_type;
! }

/* If not, insert it to the cache and the hash. */
- leader->type = t;
- leader->leader = t;
*slot = (void *) t;
return t;
}

! /* Register type T in the global type table gimple_types.
! If another type T', compatible with T, already existed in
! gimple_types then return T', otherwise return T. This is used by
! LTO to merge identical types read from different TUs. */
!
! static tree
! gimple_register_type (tree t)
! {
! gcc_assert (TYPE_P (t));
! return gimple_register_type_1 (t, false);
! }
!
! #define GIMPLE_REGISTER_TYPE(tt) \
! (TREE_VISITED (tt) ? gimple_register_type (tt) : tt)



--- 1292,1316 ----
CONST_CAST_TREE (t2));
}

! /* Register type T in the global type table gimple_types; return an
!    existing equivalent type if there is one, otherwise T. */

static tree
! gimple_register_type (tree t)
{
void **slot;

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! return (tree) *((tree *) slot);

/* If not, insert it to the cache and the hash. */
*slot = (void *) t;
return t;
}

! /* End of old merging code. */



*************** remember_with_vars (tree t)
*** 1413,1632 ****
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define LTO_FIXUP_TREE(tt) \
do \
{ \
if (tt) \
{ \
- if (TYPE_P (tt)) \
- (tt) = GIMPLE_REGISTER_TYPE (tt); \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
- if (TREE_CODE (tt) == INTEGER_CST) \
- (tt) = fixup_integer_cst (tt); \
} \
} while (0)

- static void lto_fixup_types (tree);
-
- /* Return integer_cst T with updated type. */
-
- static tree
- fixup_integer_cst (tree t)
- {
- tree type = GIMPLE_REGISTER_TYPE (TREE_TYPE (t));
-
- if (type == TREE_TYPE (t))
- return t;
-
- /* If overflow was set, streamer_read_integer_cst
- produced local copy of T. */
- if (TREE_OVERFLOW (t))
- {
- TREE_TYPE (t) = type;
- return t;
- }
- else
- /* Otherwise produce new shared node for the new type. */
- return build_int_cst_wide (type, TREE_INT_CST_LOW (t),
- TREE_INT_CST_HIGH (t));
- }
-
/* Fix up fields of a tree_typed T. */

static void
! lto_ft_typed (tree t)
{
! LTO_FIXUP_TREE (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! lto_ft_common (tree t)
{
! lto_ft_typed (t);
! LTO_FIXUP_TREE (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! lto_ft_decl_minimal (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (DECL_NAME (t));
! LTO_FIXUP_TREE (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! lto_ft_decl_common (tree t)
{
! lto_ft_decl_minimal (t);
! LTO_FIXUP_TREE (DECL_SIZE (t));
! LTO_FIXUP_TREE (DECL_SIZE_UNIT (t));
! LTO_FIXUP_TREE (DECL_INITIAL (t));
! LTO_FIXUP_TREE (DECL_ATTRIBUTES (t));
! LTO_FIXUP_TREE (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! lto_ft_decl_with_vis (tree t)
{
! lto_ft_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! LTO_FIXUP_TREE (t->decl_with_vis.assembler_name);
! LTO_FIXUP_TREE (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_decl_non_common (tree t)
{
! lto_ft_decl_with_vis (t);
! LTO_FIXUP_TREE (DECL_ARGUMENT_FLD (t));
! LTO_FIXUP_TREE (DECL_RESULT_FLD (t));
! LTO_FIXUP_TREE (DECL_VINDEX (t));
! /* The C frontends may create exact duplicates for DECL_ORIGINAL_TYPE
! like for 'typedef enum foo foo'. We have no way of avoiding to
! merge them and dwarf2out.c cannot deal with this,
! so fix this up by clearing DECL_ORIGINAL_TYPE in this case. */
! if (TREE_CODE (t) == TYPE_DECL
! && DECL_ORIGINAL_TYPE (t) == TREE_TYPE (t))
! DECL_ORIGINAL_TYPE (t) = NULL_TREE;
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_function (tree t)
{
! lto_ft_decl_non_common (t);
! LTO_FIXUP_TREE (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! lto_ft_field_decl (tree t)
{
! lto_ft_decl_common (t);
! LTO_FIXUP_TREE (DECL_FIELD_OFFSET (t));
! LTO_FIXUP_TREE (DECL_BIT_FIELD_TYPE (t));
! LTO_FIXUP_TREE (DECL_QUALIFIER (t));
! LTO_FIXUP_TREE (DECL_FIELD_BIT_OFFSET (t));
! LTO_FIXUP_TREE (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! lto_ft_type (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (TYPE_CACHED_VALUES (t));
! LTO_FIXUP_TREE (TYPE_SIZE (t));
! LTO_FIXUP_TREE (TYPE_SIZE_UNIT (t));
! LTO_FIXUP_TREE (TYPE_ATTRIBUTES (t));
! LTO_FIXUP_TREE (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! LTO_FIXUP_TREE (TYPE_MINVAL (t));
! LTO_FIXUP_TREE (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! LTO_FIXUP_TREE (t->type_non_common.binfo);

! LTO_FIXUP_TREE (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! lto_ft_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;
- tree base, saved_base;

! lto_ft_common (t);
! LTO_FIXUP_TREE (BINFO_VTABLE (t));
! LTO_FIXUP_TREE (BINFO_OFFSET (t));
! LTO_FIXUP_TREE (BINFO_VIRTUALS (t));
! LTO_FIXUP_TREE (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_ACCESS (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_ACCESSES (t))[i] = base;
! }
! LTO_FIXUP_TREE (BINFO_INHERITANCE_CHAIN (t));
! LTO_FIXUP_TREE (BINFO_SUBVTT_INDEX (t));
! LTO_FIXUP_TREE (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_BINFO (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_BINFOS (t))[i] = base;
! }
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! lto_ft_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! lto_ft_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! LTO_FIXUP_TREE (ce->index);
! LTO_FIXUP_TREE (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! lto_ft_expr (tree t)
{
int i;
! lto_ft_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! LTO_FIXUP_TREE (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
--- 1327,1499 ----
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define MAYBE_REMEMBER_WITH_VARS(tt) \
do \
{ \
if (tt) \
{ \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
} \
} while (0)

/* Fix up fields of a tree_typed T. */

static void
! maybe_remember_with_vars_typed (tree t)
{
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! maybe_remember_with_vars_common (tree t)
{
! maybe_remember_with_vars_typed (t);
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! maybe_remember_with_vars_decl_minimal (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_NAME (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! maybe_remember_with_vars_decl_common (tree t)
{
! maybe_remember_with_vars_decl_minimal (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_INITIAL (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! maybe_remember_with_vars_decl_with_vis (tree t)
{
! maybe_remember_with_vars_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! MAYBE_REMEMBER_WITH_VARS (t->decl_with_vis.assembler_name);
! MAYBE_REMEMBER_WITH_VARS (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! maybe_remember_with_vars_decl_non_common (tree t)
{
! maybe_remember_with_vars_decl_with_vis (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_ARGUMENT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_RESULT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_VINDEX (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! maybe_remember_with_vars_function (tree t)
{
! maybe_remember_with_vars_decl_non_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! maybe_remember_with_vars_field_decl (tree t)
{
! maybe_remember_with_vars_decl_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_BIT_FIELD_TYPE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_QUALIFIER (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_BIT_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! maybe_remember_with_vars_type (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (TYPE_CACHED_VALUES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! MAYBE_REMEMBER_WITH_VARS (TYPE_MINVAL (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! MAYBE_REMEMBER_WITH_VARS (t->type_non_common.binfo);

! MAYBE_REMEMBER_WITH_VARS (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! maybe_remember_with_vars_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;

! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (BINFO_VTABLE (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VIRTUALS (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_ACCESS (t, i));
! MAYBE_REMEMBER_WITH_VARS (BINFO_INHERITANCE_CHAIN (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_SUBVTT_INDEX (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_BINFO (t, i));
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! maybe_remember_with_vars_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! maybe_remember_with_vars_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! MAYBE_REMEMBER_WITH_VARS (ce->index);
! MAYBE_REMEMBER_WITH_VARS (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! maybe_remember_with_vars_expr (tree t)
{
int i;
! maybe_remember_with_vars_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! MAYBE_REMEMBER_WITH_VARS (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
*************** lto_ft_expr (tree t)
*** 1635,1641 ****
for instance integer or string constants. */

static void
! lto_fixup_types (tree t)
{
switch (TREE_CODE (t))
{
--- 1502,1508 ----
for instance integer or string constants. */

static void
! maybe_remember_with_vars (tree t)
{
switch (TREE_CODE (t))
{
*************** lto_fixup_types (tree t)
*** 1643,1655 ****
break;

case TREE_LIST:
! LTO_FIXUP_TREE (TREE_VALUE (t));
! LTO_FIXUP_TREE (TREE_PURPOSE (t));
! LTO_FIXUP_TREE (TREE_CHAIN (t));
break;

case FIELD_DECL:
! lto_ft_field_decl (t);
break;

case LABEL_DECL:
--- 1510,1522 ----
break;

case TREE_LIST:
! MAYBE_REMEMBER_WITH_VARS (TREE_VALUE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_PURPOSE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
break;

case FIELD_DECL:
! maybe_remember_with_vars_field_decl (t);
break;

case LABEL_DECL:
*************** lto_fixup_types (tree t)
*** 1657,1683 ****
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! lto_ft_decl_common (t);
break;

case VAR_DECL:
! lto_ft_decl_with_vis (t);
break;

case TYPE_DECL:
! lto_ft_decl_non_common (t);
break;

case FUNCTION_DECL:
! lto_ft_function (t);
break;

case TREE_BINFO:
! lto_ft_binfo (t);
break;

case PLACEHOLDER_EXPR:
! lto_ft_common (t);
break;

case BLOCK:
--- 1524,1550 ----
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! maybe_remember_with_vars_decl_common (t);
break;

case VAR_DECL:
! maybe_remember_with_vars_decl_with_vis (t);
break;

case TYPE_DECL:
! maybe_remember_with_vars_decl_non_common (t);
break;

case FUNCTION_DECL:
! maybe_remember_with_vars_function (t);
break;

case TREE_BINFO:
! maybe_remember_with_vars_binfo (t);
break;

case PLACEHOLDER_EXPR:
! maybe_remember_with_vars_common (t);
break;

case BLOCK:
*************** lto_fixup_types (tree t)
*** 1686,1706 ****
case TARGET_OPTION_NODE:
break;

default:
if (TYPE_P (t))
! lto_ft_type (t);
! else if (TREE_CODE (t) == CONSTRUCTOR)
! lto_ft_constructor (t);
else if (CONSTANT_CLASS_P (t))
! LTO_FIXUP_TREE (TREE_TYPE (t));
else if (EXPR_P (t))
! {
! lto_ft_expr (t);
! }
else
! {
! remember_with_vars (t);
! }
}
}

--- 1553,1571 ----
case TARGET_OPTION_NODE:
break;

+ case CONSTRUCTOR:
+ maybe_remember_with_vars_constructor (t);
+ break;
+
default:
if (TYPE_P (t))
! maybe_remember_with_vars_type (t);
else if (CONSTANT_CLASS_P (t))
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
else if (EXPR_P (t))
! maybe_remember_with_vars_expr (t);
else
! remember_with_vars (t);
}
}

*************** lto_register_function_decl_in_symtab (st
*** 1786,2006 ****
}
}

- static unsigned long num_merged_types = 0;

! /* Given a streamer cache structure DATA_IN (holding a sequence of trees
! for one compilation unit) go over all trees starting at index FROM until the
! end of the sequence and replace fields of those trees, and the trees
! themself with their canonical variants as per gimple_register_type. */

static void
! uniquify_nodes (struct data_in *data_in, unsigned from)
{
! struct streamer_tree_cache_d *cache = data_in->reader_cache;
! unsigned len = cache->nodes.length ();
! unsigned i;

! /* Go backwards because children streamed for the first time come
! as part of their parents, and hence are created after them. */

! /* First register all the types in the cache. This makes sure to
! have the original structure in the type cycles when registering
! them and computing hashes. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t && TYPE_P (t))
! {
! tree newt = gimple_register_type (t);
! /* Mark non-prevailing types so we fix them up. No need
! to reset that flag afterwards - nothing that refers
! to those types is left and they are collected. */
! if (newt != t)
! {
! num_merged_types++;
! TREE_VISITED (t) = 1;
! }
}
}

! /* Second fixup all trees in the new cache entries. */
! for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
! tree oldt = t;
! if (!t)
! continue;
!
! /* First fixup the fields of T. */
! lto_fixup_types (t);
!
! if (!TYPE_P (t))
! continue;
!
! /* Now try to find a canonical variant of T itself. */
! t = GIMPLE_REGISTER_TYPE (t);
!
! if (t == oldt)
! {
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */
! tree tem, mv;
!
! #ifdef ENABLE_CHECKING
! /* Remove us from our main variant list if we are not the
! variant leader. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tem = TYPE_MAIN_VARIANT (t);
! while (tem && TYPE_NEXT_VARIANT (tem) != t)
! tem = TYPE_NEXT_VARIANT (tem);
! gcc_assert (!tem && !TYPE_NEXT_VARIANT (t));
! }
! #endif

! /* Query our new main variant. */
! mv = GIMPLE_REGISTER_TYPE (TYPE_MAIN_VARIANT (t));

! /* If we were the variant leader and we get replaced ourselves drop
! all variants from our list. */
! if (TYPE_MAIN_VARIANT (t) == t
! && mv != t)
! {
! tem = t;
! while (tem)
! {
! tree tem2 = TYPE_NEXT_VARIANT (tem);
! TYPE_NEXT_VARIANT (tem) = NULL_TREE;
! tem = tem2;
! }
! }

! /* If we are not our own variant leader link us into our new leaders
! variant list. */
! if (mv != t)
! {
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! if (RECORD_OR_UNION_TYPE_P (t))
! TYPE_BINFO (t) = TYPE_BINFO (mv);
! /* Preserve the invariant that type variants share their
! TYPE_FIELDS. */
! if (RECORD_OR_UNION_TYPE_P (t)
! && TYPE_FIELDS (mv) != TYPE_FIELDS (t))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (mv), f2 = TYPE_FIELDS (t);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2
! && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! /* ??? Not sure the above is all relevant in this
! path canonicalizing TYPE_FIELDS to that of the
! main variant. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! TYPE_FIELDS (t) = TYPE_FIELDS (mv);
! }
! }

! /* Finally adjust our main variant and fix it up. */
! TYPE_MAIN_VARIANT (t) = mv;

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
}

! else
{
! if (RECORD_OR_UNION_TYPE_P (t))
! {
! tree f1, f2;
! if (TYPE_FIELDS (t) != TYPE_FIELDS (oldt))
! for (f1 = TYPE_FIELDS (t), f2 = TYPE_FIELDS (oldt);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2 && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! }

! /* If we found a tree that is equal to oldt replace it in the
! cache, so that further users (in the various LTO sections)
! make use of it. */
! streamer_tree_cache_insert_at (cache, t, i);
}
}

! /* Finally compute the canonical type of all TREE_TYPEs and register
! VAR_DECL and FUNCTION_DECL nodes in the symbol table.
! From this point there are no longer any types with
! TYPE_STRUCTURAL_EQUALITY_P and its type-based alias problems.
! This step requires the TYPE_POINTER_TO lists being present, so
! make sure it is done last. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t == NULL_TREE)
! continue;
!
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! else if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! else if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
}
}


--- 1651,2378 ----
}
}


! /* For the type T, re-materialize it in the type variant list and
! the pointer/reference-to chains. */

static void
! lto_fixup_prevailing_type (tree t)
{
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */

! /* If we are not our own variant leader, link us into our new
! leader's variant list. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tree mv = TYPE_MAIN_VARIANT (t);
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! }

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
! }
!
!
! /* We keep prevailing tree SCCs in a hashtable with manual collision
! handling (in case whole-SCC hashes compare equal), chaining the
! colliding entries on tree_scc->next. */
!
! struct tree_scc
! {
! tree_scc *next;
! /* Hash of the whole SCC. */
! hashval_t hash;
! /* Number of trees in the SCC. */
! unsigned len;
! /* Number of possible entries into the SCC (tree nodes [0..entry_len-1]
! which share the same individual tree hash). */
! unsigned entry_len;
! /* The members of the SCC.
! We only need to remember the first entry node candidate for prevailing
! SCCs (but of course have access to all entries for SCCs we are
! processing).
! ??? For prevailing SCCs we really only need hash and the first
! entry candidate, but that's too awkward to implement. */
! tree entries[1];
! };
!
! struct tree_scc_hasher : typed_noop_remove <tree_scc>
! {
! typedef tree_scc value_type;
! typedef tree_scc compare_type;
! static inline hashval_t hash (const value_type *);
! static inline bool equal (const value_type *, const compare_type *);
! };
!
! hashval_t
! tree_scc_hasher::hash (const value_type *scc)
! {
! return scc->hash;
! }
!
! bool
! tree_scc_hasher::equal (const value_type *scc1, const compare_type *scc2)
! {
! if (scc1->hash != scc2->hash
! || scc1->len != scc2->len
! || scc1->entry_len != scc2->entry_len)
! return false;
! return true;
! }
!
! static hash_table <tree_scc_hasher> tree_scc_hash;
! static struct obstack tree_scc_hash_obstack;
!
! static unsigned long num_merged_types;
! static unsigned long num_prevailing_types;
! static unsigned long num_not_merged_types;
! static unsigned long num_not_merged_types_in_same_scc;
! static unsigned long num_not_merged_types_trees;
! static unsigned long num_not_merged_types_in_same_scc_trees;
! static unsigned long num_type_scc_trees;
! static unsigned long total_scc_size;
! static unsigned long num_sccs_read;
! static unsigned long total_scc_size_merged;
! static unsigned long num_sccs_merged;
! static unsigned long num_scc_compares;
! static unsigned long num_scc_compare_collisions;
!
!
! /* Compare the two entries T1 and T2 of two SCCs that are possibly equal,
! recursing through in-SCC tree edges. Returns true if the SCCs entered
! through T1 and T2 are equal and fills in *MAP with the pairs of
! SCC entries we visited, starting with (*MAP)[0] = T1 and (*MAP)[1] = T2. */
!
! static bool
! compare_tree_sccs_1 (tree t1, tree t2, tree **map)
! {
! enum tree_code code;
!
! /* Handle combinations where at least one of t1 or t2 is NULL_TREE. */
! if (!t1 && !t2)
! return true;
! if (!t1 || !t2)
! return false;
!
! /* Outside of the SCC trees have to compare pointer equal. */
! if (!TREE_VISITED (t2))
! return t1 == t2;
!
! /* Inside of the SCC tree they cannot be pointer equal. */
! gcc_assert (t1 != t2);
!
! /* If we have already visited t1, we reached a cycle. Optimistically
! return equal. */
! if (TREE_ASM_WRITTEN (t2))
! return true;
!
! /* Mark already visited nodes. */
! TREE_ASM_WRITTEN (t2) = 1;
!
! /* Push the pair onto map. */
! (*map)[0] = t1;
! (*map)[1] = t2;
! *map = *map + 2;
!
! /* Compare value-fields. */
! #define compare_values(X) \
! do { \
! if (X(t1) != X(t2)) \
! return false; \
! } while (0)
!
! compare_values (TREE_CODE);
! code = TREE_CODE (t1);
!
! if (!TYPE_P (t1))
! {
! compare_values (TREE_SIDE_EFFECTS);
! compare_values (TREE_CONSTANT);
! compare_values (TREE_READONLY);
! compare_values (TREE_PUBLIC);
! }
! compare_values (TREE_ADDRESSABLE);
! compare_values (TREE_THIS_VOLATILE);
! if (DECL_P (t1))
! compare_values (DECL_UNSIGNED);
! else if (TYPE_P (t1))
! compare_values (TYPE_UNSIGNED);
! if (TYPE_P (t1))
! compare_values (TYPE_ARTIFICIAL);
! else
! compare_values (TREE_NO_WARNING);
! compare_values (TREE_NOTHROW);
! compare_values (TREE_STATIC);
! compare_values (TREE_PRIVATE);
! compare_values (TREE_PROTECTED);
! compare_values (TREE_DEPRECATED);
! if (TYPE_P (t1))
! {
! compare_values (TYPE_SATURATING);
! compare_values (TYPE_ADDR_SPACE);
! }
! else if (code == SSA_NAME)
! compare_values (SSA_NAME_IS_DEFAULT_DEF);
!
! if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
! {
! compare_values (TREE_INT_CST_LOW);
! compare_values (TREE_INT_CST_HIGH);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
! {
! /* ??? No suitable compare routine available. */
! REAL_VALUE_TYPE r1 = TREE_REAL_CST (t1);
! REAL_VALUE_TYPE r2 = TREE_REAL_CST (t2);
! if (r1.cl != r2.cl
! || r1.decimal != r2.decimal
! || r1.sign != r2.sign
! || r1.signalling != r2.signalling
! || r1.canonical != r2.canonical
! || r1.uexp != r2.uexp)
! return false;
! for (unsigned i = 0; i < SIGSZ; ++i)
! if (r1.sig[i] != r2.sig[i])
! return false;
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
! if (!fixed_compare (EQ_EXPR,
! TREE_FIXED_CST_PTR (t1), TREE_FIXED_CST_PTR (t2)))
! return false;
!
!
! /* We don't want to compare locations, so there is nothing to compare
! for TS_DECL_MINIMAL. */
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_values (DECL_MODE);
! compare_values (DECL_NONLOCAL);
! compare_values (DECL_VIRTUAL_P);
! compare_values (DECL_IGNORED_P);
! compare_values (DECL_ABSTRACT);
! compare_values (DECL_ARTIFICIAL);
! compare_values (DECL_USER_ALIGN);
! compare_values (DECL_PRESERVE_P);
! compare_values (DECL_EXTERNAL);
! compare_values (DECL_GIMPLE_REG_P);
! compare_values (DECL_ALIGN);
! if (code == LABEL_DECL)
! {
! compare_values (DECL_ERROR_ISSUED);
! compare_values (EH_LANDING_PAD_NR);
! compare_values (LABEL_DECL_UID);
! }
! else if (code == FIELD_DECL)
! {
! compare_values (DECL_PACKED);
! compare_values (DECL_NONADDRESSABLE_P);
! compare_values (DECL_OFFSET_ALIGN);
! }
! else if (code == VAR_DECL)
! {
! compare_values (DECL_HAS_DEBUG_EXPR_P);
! compare_values (DECL_NONLOCAL_FRAME);
! }
! if (code == RESULT_DECL
! || code == PARM_DECL
! || code == VAR_DECL)
! {
! compare_values (DECL_BY_REFERENCE);
! if (code == VAR_DECL
! || code == PARM_DECL)
! compare_values (DECL_HAS_VALUE_EXPR_P);
! }
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
! compare_values (DECL_REGISTER);
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! compare_values (DECL_DEFER_OUTPUT);
! compare_values (DECL_COMMON);
! compare_values (DECL_DLLIMPORT_P);
! compare_values (DECL_WEAK);
! compare_values (DECL_SEEN_IN_BIND_EXPR_P);
! compare_values (DECL_COMDAT);
! compare_values (DECL_VISIBILITY);
! compare_values (DECL_VISIBILITY_SPECIFIED);
! if (code == VAR_DECL)
! {
! compare_values (DECL_HARD_REGISTER);
! compare_values (DECL_IN_TEXT_SECTION);
! compare_values (DECL_IN_CONSTANT_POOL);
! compare_values (DECL_TLS_MODEL);
! }
! if (VAR_OR_FUNCTION_DECL_P (t1))
! compare_values (DECL_INIT_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_values (DECL_BUILT_IN_CLASS);
! compare_values (DECL_STATIC_CONSTRUCTOR);
! compare_values (DECL_STATIC_DESTRUCTOR);
! compare_values (DECL_UNINLINABLE);
! compare_values (DECL_POSSIBLY_INLINED);
! compare_values (DECL_IS_NOVOPS);
! compare_values (DECL_IS_RETURNS_TWICE);
! compare_values (DECL_IS_MALLOC);
! compare_values (DECL_IS_OPERATOR_NEW);
! compare_values (DECL_DECLARED_INLINE_P);
! compare_values (DECL_STATIC_CHAIN);
! compare_values (DECL_NO_INLINE_WARNING_P);
! compare_values (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT);
! compare_values (DECL_NO_LIMIT_STACK);
! compare_values (DECL_DISREGARD_INLINE_LIMITS);
! compare_values (DECL_PURE_P);
! compare_values (DECL_LOOPING_CONST_OR_PURE_P);
! if (DECL_BUILT_IN_CLASS (t1) != NOT_BUILT_IN)
! compare_values (DECL_FUNCTION_CODE);
! if (DECL_STATIC_DESTRUCTOR (t1))
! compare_values (DECL_FINI_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_values (TYPE_MODE);
! compare_values (TYPE_STRING_FLAG);
! compare_values (TYPE_NO_FORCE_BLK);
! compare_values (TYPE_NEEDS_CONSTRUCTING);
! if (RECORD_OR_UNION_TYPE_P (t1))
! compare_values (TYPE_TRANSPARENT_AGGR);
! else if (code == ARRAY_TYPE)
! compare_values (TYPE_NONALIASED_COMPONENT);
! compare_values (TYPE_PACKED);
! compare_values (TYPE_RESTRICT);
! compare_values (TYPE_CONTAINS_PLACEHOLDER_INTERNAL);
! compare_values (TYPE_USER_ALIGN);
! compare_values (TYPE_READONLY);
! compare_values (TYPE_PRECISION);
! compare_values (TYPE_ALIGN);
! compare_values (TYPE_ALIAS_SET);
! }
!
! /* We don't want to compare locations, so there is nothing to compare
! for TS_EXP. */
!
! /* BLOCKs are function local and we don't merge anything there, so
! simply refuse to merge. */
! if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
! if (strcmp (TRANSLATION_UNIT_LANGUAGE (t1),
! TRANSLATION_UNIT_LANGUAGE (t2)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
! if (memcmp (TREE_TARGET_OPTION (t1), TREE_TARGET_OPTION (t2),
! sizeof (struct cl_target_option)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
! if (memcmp (TREE_OPTIMIZATION (t1), TREE_OPTIMIZATION (t2),
! sizeof (struct cl_optimization)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! if (vec_safe_length (BINFO_BASE_ACCESSES (t1))
! != vec_safe_length (BINFO_BASE_ACCESSES (t2)))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! compare_values (CONSTRUCTOR_NELTS);
!
! if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
! if (IDENTIFIER_LENGTH (t1) != IDENTIFIER_LENGTH (t2)
! || memcmp (IDENTIFIER_POINTER (t1), IDENTIFIER_POINTER (t2),
! IDENTIFIER_LENGTH (t1)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_STRING))
! if (TREE_STRING_LENGTH (t1) != TREE_STRING_LENGTH (t2)
! || memcmp (TREE_STRING_POINTER (t1), TREE_STRING_POINTER (t2),
! TREE_STRING_LENGTH (t1)) != 0)
! return false;
!
! #undef compare_values
!
!
! /* Compare pointer fields. */
!
! /* Recurse. Adapted by search & replace from DFS_write_tree_body. */
! #define compare_tree_edges(E1, E2) \
! do { \
! if (!compare_tree_sccs_1 ((E1), (E2), map)) \
! return false; \
! } while (0)
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
! {
! if (code != IDENTIFIER_NODE)
! compare_tree_edges (TREE_TYPE (t1), TREE_TYPE (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
! {
! unsigned i;
! /* Note that the number of elements for EXPR has already been emitted
! in EXPR's header (see streamer_write_tree_header). */
! for (i = 0; i < VECTOR_CST_NELTS (t1); ++i)
! compare_tree_edges (VECTOR_CST_ELT (t1, i), VECTOR_CST_ELT (t2, i));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
! {
! compare_tree_edges (TREE_REALPART (t1), TREE_REALPART (t2));
! compare_tree_edges (TREE_IMAGPART (t1), TREE_IMAGPART (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
! {
! compare_tree_edges (DECL_NAME (t1), DECL_NAME (t2));
! /* ??? Global decls from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Only consider a small set of
! decls equivalent, we should not end up merging others. */
! if ((code == TYPE_DECL
! || code == NAMESPACE_DECL
! || code == IMPORTED_DECL
! || code == CONST_DECL
! || (VAR_OR_FUNCTION_DECL_P (t1)
! && (TREE_PUBLIC (t1) || DECL_EXTERNAL (t1))))
! && DECL_FILE_SCOPE_P (t1) && DECL_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (DECL_CONTEXT (t1), DECL_CONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_tree_edges (DECL_SIZE (t1), DECL_SIZE (t2));
! compare_tree_edges (DECL_SIZE_UNIT (t1), DECL_SIZE_UNIT (t2));
! compare_tree_edges (DECL_ATTRIBUTES (t1), DECL_ATTRIBUTES (t2));
! if ((code == VAR_DECL
! || code == PARM_DECL)
! && DECL_HAS_VALUE_EXPR_P (t1))
! compare_tree_edges (DECL_VALUE_EXPR (t1), DECL_VALUE_EXPR (t2));
! if (code == VAR_DECL
! && DECL_HAS_DEBUG_EXPR_P (t1))
! compare_tree_edges (DECL_DEBUG_EXPR (t1), DECL_DEBUG_EXPR (t2));
! /* LTO specific edges. */
! if (code != FUNCTION_DECL
! && code != TRANSLATION_UNIT_DECL)
! compare_tree_edges (DECL_INITIAL (t1), DECL_INITIAL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
! {
! if (code == FUNCTION_DECL)
! {
! tree a1, a2;
! for (a1 = DECL_ARGUMENTS (t1), a2 = DECL_ARGUMENTS (t2);
! a1 || a2;
! a1 = TREE_CHAIN (a1), a2 = TREE_CHAIN (a2))
! compare_tree_edges (a1, a2);
! compare_tree_edges (DECL_RESULT (t1), DECL_RESULT (t2));
! }
! else if (code == TYPE_DECL)
! compare_tree_edges (DECL_ORIGINAL_TYPE (t1), DECL_ORIGINAL_TYPE (t2));
! compare_tree_edges (DECL_VINDEX (t1), DECL_VINDEX (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! /* Make sure we don't inadvertently set the assembler name. */
! if (DECL_ASSEMBLER_NAME_SET_P (t1))
! compare_tree_edges (DECL_ASSEMBLER_NAME (t1),
! DECL_ASSEMBLER_NAME (t2));
! compare_tree_edges (DECL_SECTION_NAME (t1), DECL_SECTION_NAME (t2));
! compare_tree_edges (DECL_COMDAT_GROUP (t1), DECL_COMDAT_GROUP (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
! {
! compare_tree_edges (DECL_FIELD_OFFSET (t1), DECL_FIELD_OFFSET (t2));
! compare_tree_edges (DECL_BIT_FIELD_TYPE (t1), DECL_BIT_FIELD_TYPE (t2));
! compare_tree_edges (DECL_BIT_FIELD_REPRESENTATIVE (t1),
! DECL_BIT_FIELD_REPRESENTATIVE (t2));
! compare_tree_edges (DECL_FIELD_BIT_OFFSET (t1),
! DECL_FIELD_BIT_OFFSET (t2));
! compare_tree_edges (DECL_FCONTEXT (t1), DECL_FCONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_tree_edges (DECL_FUNCTION_PERSONALITY (t1),
! DECL_FUNCTION_PERSONALITY (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_TARGET (t1),
! DECL_FUNCTION_SPECIFIC_TARGET (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t1),
! DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_tree_edges (TYPE_SIZE (t1), TYPE_SIZE (t2));
! compare_tree_edges (TYPE_SIZE_UNIT (t1), TYPE_SIZE_UNIT (t2));
! compare_tree_edges (TYPE_ATTRIBUTES (t1), TYPE_ATTRIBUTES (t2));
! compare_tree_edges (TYPE_NAME (t1), TYPE_NAME (t2));
! /* Do not compare TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
! reconstructed during fixup. */
! /* Do not compare TYPE_NEXT_VARIANT, we reconstruct the variant lists
! during fixup. */
! compare_tree_edges (TYPE_MAIN_VARIANT (t1), TYPE_MAIN_VARIANT (t2));
! /* ??? Global types from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Still merge them if they are otherwise
! equal. */
! if (TYPE_FILE_SCOPE_P (t1) && TYPE_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (TYPE_CONTEXT (t1), TYPE_CONTEXT (t2));
! /* TYPE_CANONICAL is re-computed during type merging, so do not
! compare it here. */
! compare_tree_edges (TYPE_STUB_DECL (t1), TYPE_STUB_DECL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
! {
! if (code == ENUMERAL_TYPE)
! compare_tree_edges (TYPE_VALUES (t1), TYPE_VALUES (t2));
! else if (code == ARRAY_TYPE)
! compare_tree_edges (TYPE_DOMAIN (t1), TYPE_DOMAIN (t2));
! else if (RECORD_OR_UNION_TYPE_P (t1))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (t1), f2 = TYPE_FIELDS (t2);
! f1 || f2;
! f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! compare_tree_edges (f1, f2);
! compare_tree_edges (TYPE_BINFO (t1), TYPE_BINFO (t2));
}
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ compare_tree_edges (TYPE_ARG_TYPES (t1), TYPE_ARG_TYPES (t2));
+ if (!POINTER_TYPE_P (t1))
+ compare_tree_edges (TYPE_MINVAL (t1), TYPE_MINVAL (t2));
+ compare_tree_edges (TYPE_MAXVAL (t1), TYPE_MAXVAL (t2));
}

! if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
! compare_tree_edges (TREE_PURPOSE (t1), TREE_PURPOSE (t2));
! compare_tree_edges (TREE_VALUE (t1), TREE_VALUE (t2));
! compare_tree_edges (TREE_CHAIN (t1), TREE_CHAIN (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_VEC))
! for (int i = 0; i < TREE_VEC_LENGTH (t1); i++)
! compare_tree_edges (TREE_VEC_ELT (t1, i), TREE_VEC_ELT (t2, i));

! if (CODE_CONTAINS_STRUCT (code, TS_EXP))
! {
! for (int i = 0; i < TREE_OPERAND_LENGTH (t1); i++)
! compare_tree_edges (TREE_OPERAND (t1, i),
! TREE_OPERAND (t2, i));

! /* BLOCKs are function local and we don't merge anything there. */
! if (TREE_BLOCK (t1) || TREE_BLOCK (t2))
! return false;
! }

! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! {
! unsigned i;
! tree t;
! /* Lengths have already been compared above. */
! FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_BINFO (t2, i));
! FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_ACCESS (t2, i));
! compare_tree_edges (BINFO_OFFSET (t1), BINFO_OFFSET (t2));
! compare_tree_edges (BINFO_VTABLE (t1), BINFO_VTABLE (t2));
! compare_tree_edges (BINFO_VPTR_FIELD (t1), BINFO_VPTR_FIELD (t2));
! compare_tree_edges (BINFO_INHERITANCE_CHAIN (t1),
! BINFO_INHERITANCE_CHAIN (t2));
! compare_tree_edges (BINFO_SUBVTT_INDEX (t1),
! BINFO_SUBVTT_INDEX (t2));
! compare_tree_edges (BINFO_VPTR_INDEX (t1),
! BINFO_VPTR_INDEX (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! {
! unsigned i;
! tree index, value;
! /* Lengths have already been compared above. */
! FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t1), i, index, value)
! {
! compare_tree_edges (index, CONSTRUCTOR_ELT (t2, i)->index);
! compare_tree_edges (value, CONSTRUCTOR_ELT (t2, i)->value);
}
+ }

! #undef compare_tree_edges
!
! return true;
! }
!
! /* Compare the tree scc SCC to the prevailing candidate PSCC, filling
! out MAP if they are equal. */
!
! static bool
! compare_tree_sccs (tree_scc *pscc, tree_scc *scc,
! tree *map)
! {
! /* Assume SCC entry hashes are sorted after their cardinality. Which
! means we can simply take the first n-tuple of equal hashes
! (which is recorded as entry_len) and do n SCC entry candidate
! comparisons. */
! for (unsigned i = 0; i < pscc->entry_len; ++i)
! {
! tree *mapp = map;
! num_scc_compare_collisions++;
! if (compare_tree_sccs_1 (pscc->entries[0], scc->entries[i], &mapp))
{
! /* Equal - no need to reset TREE_VISITED or TREE_ASM_WRITTEN
! on the scc as all trees will be freed. */
! return true;
! }
! /* Reset TREE_ASM_WRITTEN on scc for the next compare or in case
! the SCC prevails. */
! for (unsigned j = 0; j < scc->len; ++j)
! TREE_ASM_WRITTEN (scc->entries[j]) = 0;
! }
!
! return false;
! }
!
! /* Try to unify the SCC with nodes FROM to FROM + LEN in CACHE and
! hash value SCC_HASH with an already recorded SCC. Return true if
! that was successful, otherwise return false. */

! static bool
! unify_scc (struct streamer_tree_cache_d *cache, unsigned from,
! unsigned len, unsigned scc_entry_len, hashval_t scc_hash)
! {
! bool unified_p = false;
! tree_scc *scc
! = (tree_scc *) alloca (sizeof (tree_scc) + (len - 1) * sizeof (tree));
! scc->next = NULL;
! scc->hash = scc_hash;
! scc->len = len;
! scc->entry_len = scc_entry_len;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (cache, from + i);
! scc->entries[i] = t;
! /* Do not merge SCCs with local entities inside them. Also do
! not merge TRANSLATION_UNIT_DECLs. */
! if (TREE_CODE (t) == TRANSLATION_UNIT_DECL
! || (VAR_OR_FUNCTION_DECL_P (t)
! && !(TREE_PUBLIC (t) || DECL_EXTERNAL (t)))
! || TREE_CODE (t) == LABEL_DECL)
! {
! /* Avoid doing any work for these cases and do not worry to
! record the SCCs for further merging. */
! return false;
}
}

! /* Look for the list of candidate SCCs to compare against. */
! tree_scc **slot;
! slot = tree_scc_hash.find_slot_with_hash (scc, scc_hash, INSERT);
! if (*slot)
! {
! /* Try unifying against each candidate. */
! num_scc_compares++;
!
! /* Set TREE_VISITED on the scc so we can easily identify tree nodes
! outside of the scc when following tree edges. Make sure
! that TREE_ASM_WRITTEN is unset so we can use it as 2nd bit
! to track whether we visited the SCC member during the compare.
! We cannot use TREE_VISITED on the pscc members as the extended
! scc and pscc can overlap. */
! for (unsigned i = 0; i < scc->len; ++i)
! {
! TREE_VISITED (scc->entries[i]) = 1;
! gcc_assert (!TREE_ASM_WRITTEN (scc->entries[i]));
! }
!
! tree *map = XALLOCAVEC (tree, 2 * len);
! for (tree_scc *pscc = *slot; pscc; pscc = pscc->next)
! {
! if (!compare_tree_sccs (pscc, scc, map))
! continue;
!
! /* Found an equal SCC. */
! unified_p = true;
! num_scc_compare_collisions--;
! num_sccs_merged++;
! total_scc_size_merged += len;
!
! /* Fixup the streamer cache with the prevailing nodes according
! to the tree node mapping computed by compare_tree_sccs. */
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = map[2*i+1];
! enum tree_code code = TREE_CODE (t);
! unsigned ix;
! bool r;
! /* IDENTIFIER_NODEs should be singletons and are merged by the
! streamer. The others should be singletons, too, and we
! should not merge them in any way. */
! gcc_assert (code != TRANSLATION_UNIT_DECL
! && code != IDENTIFIER_NODE
! && !streamer_handle_as_builtin_p (t));
! r = streamer_tree_cache_lookup (cache, t, &ix);
! gcc_assert (r && ix >= from);
! streamer_tree_cache_replace_tree (cache, map[2 * i], ix);
! if (TYPE_P (t))
! num_merged_types++;
! }
! /* Free the tree nodes from the read SCC. */
! for (unsigned i = 0; i < len; ++i)
! ggc_free (scc->entries[i]);
! break;
! }
!
! /* Reset TREE_VISITED if we didn't unify the SCC with another. */
! if (!unified_p)
! for (unsigned i = 0; i < scc->len; ++i)
! TREE_VISITED (scc->entries[i]) = 0;
! }
!
! /* If we didn't unify it to any candidate duplicate the relevant
! pieces to permanent storage and link it into the chain. */
! if (!unified_p)
! {
! tree_scc *pscc
! = XOBNEWVAR (&tree_scc_hash_obstack, tree_scc, sizeof (tree_scc));
! memcpy (pscc, scc, sizeof (tree_scc));
! pscc->next = (*slot);
! *slot = pscc;
}
+ return unified_p;
}


*************** lto_read_decls (struct lto_file_decl_dat
*** 2036,2044 ****
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! t = stream_read_tree (&ib_main, data_in);
! gcc_assert (t && ib_main.p <= ib_main.len);
! uniquify_nodes (data_in, from);
}

/* Read in lto_in_decl_state objects. */
--- 2408,2530 ----
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! /* Read and uniquify SCCs as in the input stream. */
! enum LTO_tags tag = streamer_read_record_start (&ib_main);
! if (tag == LTO_tree_scc)
! {
! unsigned len_;
! unsigned scc_entry_len;
! hashval_t scc_hash = lto_input_scc (&ib_main, data_in, &len_,
! &scc_entry_len);
! unsigned len = data_in->reader_cache->nodes.length () - from;
! gcc_assert (len == len_);
!
! total_scc_size += len;
! num_sccs_read++;
!
! /* We have the special case of size-1 SCCs that are pre-merged
! by means of identifier and string sharing for example.
! ??? Maybe we should avoid streaming those as SCCs. */
! tree first = streamer_tree_cache_get_tree (data_in->reader_cache,
! from);
! if (len == 1
! && (TREE_CODE (first) == IDENTIFIER_NODE
! || TREE_CODE (first) == INTEGER_CST
! || TREE_CODE (first) == TRANSLATION_UNIT_DECL
! || streamer_handle_as_builtin_p (first)))
! continue;
!
! /* Try to unify the SCC with already existing ones. */
! if (!flag_ltrans
! && unify_scc (data_in->reader_cache, from,
! len, scc_entry_len, scc_hash))
! continue;
!
! /* Do remaining fixup tasks for prevailing nodes. */
! bool seen_type = false;
! bool not_merged_type_same_scc = false;
! bool not_merged_type_not_same_scc = false;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (data_in->reader_cache,
! from + i);
! /* For statistics, see if the old code would have merged
! the type. */
! if (TYPE_P (t)
! && (flag_lto_report || (flag_wpa && flag_lto_report_wpa)))
! {
! tree newt = gimple_register_type (t);
! if (newt != t)
! {
! num_not_merged_types++;
! unsigned j;
! /* Check if we can never merge the types because
! they are in the same SCC and thus the old
! code was broken. */
! for (j = 0; j < len; ++j)
! if (i != j
! && streamer_tree_cache_get_tree
! (data_in->reader_cache, from + j) == newt)
! {
! num_not_merged_types_in_same_scc++;
! not_merged_type_same_scc = true;
! break;
! }
! if (j == len)
! not_merged_type_not_same_scc = true;
! }
! }
! /* Reconstruct the type variant and pointer-to/reference-to
! chains. */
! if (TYPE_P (t))
! {
! seen_type = true;
! num_prevailing_types++;
! lto_fixup_prevailing_type (t);
! }
! /* Compute the canonical type of all types.
! ??? Should be able to assert that !TYPE_CANONICAL. */
! if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
! /* Link shared INTEGER_CSTs into TYPE_CACHED_VALUEs of its
! type which is also member of this SCC. */
! if (TREE_CODE (t) == INTEGER_CST
! && !TREE_OVERFLOW (t))
! cache_integer_cst (t);
! /* Register TYPE_DECLs with the debuginfo machinery. */
! if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! if (!flag_ltrans)
! {
! /* Register variables and functions with the
! symbol table. */
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL
! && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! /* Scan the tree for references to global functions or
! variables and record those for later fixup. */
! maybe_remember_with_vars (t);
! }
! }
! if (not_merged_type_same_scc)
! {
! num_not_merged_types_in_same_scc_trees += len;
! num_not_merged_types_trees += len;
! }
! else if (not_merged_type_not_same_scc)
! num_not_merged_types_trees += len;
! if (seen_type)
! num_type_scc_trees += len;
! }
! else
! {
! /* Pickle stray references. */
! t = lto_input_tree_1 (&ib_main, data_in, tag, 0);
! gcc_assert (t && data_in->reader_cache->nodes.length () == from);
! }
}

/* Read in lto_in_decl_state objects. */
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2898,2906 ****
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
- gimple_type_leader = ggc_alloc_cleared_vec_gimple_type_leader_entry_s
- (GIMPLE_TYPE_LEADER_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
--- 3384,3392 ----
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);
+ gcc_obstack_init (&tree_scc_hash_obstack);
+ tree_scc_hash.create (4096);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2933,2939 ****
lto_obj_file_close (current_lto_file);
free (current_lto_file);
current_lto_file = NULL;
- ggc_collect ();
}

lto_flatten_files (decl_data, count, last_file_ix);
--- 3419,3424 ----
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2955,2961 ****
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! gimple_type_leader = NULL;
free_gimple_type_tables ();
ggc_collect ();

--- 3440,3447 ----
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! tree_scc_hash.dispose ();
! obstack_free (&tree_scc_hash_obstack, NULL);
free_gimple_type_tables ();
ggc_collect ();

*************** print_lto_report_1 (void)
*** 3121,3147 ****
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! if (gimple_types)
! fprintf (stderr, "[%s] GIMPLE type table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (gimple_types),
! (long) htab_elements (gimple_types),
! (long) gimple_types->searches,
! (long) gimple_types->collisions,
! htab_collisions (gimple_types));
! else
! fprintf (stderr, "[%s] GIMPLE type table is empty\n", pfx);
! if (type_hash_cache)
! fprintf (stderr, "[%s] GIMPLE type hash cache table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (type_hash_cache),
! (long) htab_elements (type_hash_cache),
! (long) type_hash_cache->searches,
! (long) type_hash_cache->collisions,
! htab_collisions (type_hash_cache));
! else
! fprintf (stderr, "[%s] GIMPLE type hash cache table is empty\n", pfx);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);

print_gimple_types_stats (pfx);
print_lto_report (pfx);
--- 3607,3655 ----
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! fprintf (stderr, "[%s] read %lu SCCs of average size %f\n",
! pfx, num_sccs_read, total_scc_size / (double)num_sccs_read);
! fprintf (stderr, "[%s] %lu tree bodies read in total\n", pfx, total_scc_size);
! if (flag_wpa && tree_scc_hash.is_created ())
! {
! fprintf (stderr, "[%s] tree SCC table: size %ld, %ld elements, "
! "collision ratio: %f\n", pfx,
! (long) tree_scc_hash.size (),
! (long) tree_scc_hash.elements (),
! tree_scc_hash.collisions ());
! hash_table<tree_scc_hasher>::iterator hiter;
! tree_scc *scc, *max_scc = NULL;
! unsigned max_length = 0;
! FOR_EACH_HASH_TABLE_ELEMENT (tree_scc_hash, scc, tree_scc *, hiter)
! {
! unsigned length = 0;
! tree_scc *s = scc;
! for (; s; s = s->next)
! length++;
! if (length > max_length)
! {
! max_length = length;
! max_scc = scc;
! }
! }
! fprintf (stderr, "[%s] tree SCC max chain length %u (size %u)\n",
! pfx, max_length, max_scc->len);
! fprintf (stderr, "[%s] Compared %lu SCCs, %lu collisions (%f)\n", pfx,
! num_scc_compares, num_scc_compare_collisions,
! num_scc_compare_collisions / (double) num_scc_compares);
! fprintf (stderr, "[%s] Merged %lu SCCs\n", pfx, num_sccs_merged);
! fprintf (stderr, "[%s] Merged %lu tree bodies\n", pfx,
! total_scc_size_merged);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);
! fprintf (stderr, "[%s] %lu types prevailed (%lu associated trees)\n",
! pfx, num_prevailing_types, num_type_scc_trees);
! fprintf (stderr, "[%s] Old merging code merges an additional %lu types"
! " of which %lu are in the same SCC with their "
! "prevailing variant (%lu and %lu associated trees)\n",
! pfx, num_not_merged_types, num_not_merged_types_in_same_scc,
! num_not_merged_types_trees,
! num_not_merged_types_in_same_scc_trees);
! }

print_gimple_types_stats (pfx);
print_lto_report (pfx);
Index: trunk/gcc/lto/Make-lang.in
===================================================================
*** trunk.orig/gcc/lto/Make-lang.in 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto/Make-lang.in 2013-06-13 14:35:53.257108072 +0200
*************** lto/lto.o: lto/lto.c $(CONFIG_H) $(SYSTE
*** 85,91 ****
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
--- 85,91 ----
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) $(DATA_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
Index: trunk/gcc/tree-streamer-out.c
===================================================================
*** trunk.orig/gcc/tree-streamer-out.c 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/tree-streamer-out.c 2013-06-13 15:35:37.231043943 +0200
*************** pack_ts_base_value_fields (struct bitpac
*** 95,101 ****
bp_pack_value (bp, TYPE_ARTIFICIAL (expr), 1);
else
bp_pack_value (bp, TREE_NO_WARNING (expr), 1);
- bp_pack_value (bp, TREE_USED (expr), 1);
bp_pack_value (bp, TREE_NOTHROW (expr), 1);
bp_pack_value (bp, TREE_STATIC (expr), 1);
bp_pack_value (bp, TREE_PRIVATE (expr), 1);
--- 95,100 ----
*************** streamer_write_chain (struct output_bloc
*** 491,499 ****
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! stream_write_tree_shallow_non_ref (ob, t, ref_p);
else
stream_write_tree (ob, t, ref_p);

--- 490,499 ----
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
+ /* ??? FIXME wrt SCC streaming. Drop these for now. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! ; /* stream_write_tree_shallow_non_ref (ob, t, ref_p); */
else
stream_write_tree (ob, t, ref_p);

*************** write_ts_list_tree_pointers (struct outp
*** 716,722 ****
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! streamer_write_chain (ob, TREE_CHAIN (expr), ref_p);
}


--- 716,722 ----
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! stream_write_tree (ob, TREE_CHAIN (expr), ref_p);
}


*************** write_ts_constructor_tree_pointers (stru
*** 834,839 ****
--- 834,841 ----
}
}

+ extern unsigned long num_tree_bodies_written;
+
/* Write all pointer fields in EXPR to output block OB. If REF_P is true,
the leaves of EXPR are emitted as references. */

*************** streamer_write_tree_body (struct output_
*** 842,847 ****
--- 844,851 ----
{
enum tree_code code;

+ num_tree_bodies_written++;
+
code = TREE_CODE (expr);

if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
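The compile-step side described above (DFS-walk the tree graph, emit SCC blocks children-first, attach a per-SCC hash that mixes in the hashes of outgoing edges) can be sketched with Tarjan's algorithm. This is a simplified illustration, not the GCC streamer: names are hypothetical, and the per-SCC hash here canonicalizes by sorting members by name rather than GCC's entry-candidate sorting.

```python
import hashlib

def stream_sccs(roots, edges, content):
    """Recursive Tarjan: emit SCCs in children-first order, each with a
    hash over member contents plus the (already computed) hashes of the
    SCCs that the outgoing cross-SCC edges point to."""
    index, low, node_hash = {}, {}, {}
    stack, on_stack, out = [], set(), []
    counter = [0]

    def dfs(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in edges.get(v, []):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                 # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w); scc.append(w)
                if w == v:
                    break
            # Simplified canonicalization: sort members by name.
            h = hashlib.sha1()
            for w in sorted(scc):
                h.update(str(content[w]).encode())
                for x in edges.get(w, []):
                    if x not in scc:           # cross-SCC edge: mix in its hash
                        h.update(node_hash[x].encode())
            digest = h.hexdigest()
            for w in scc:
                node_hash[w] = digest
            out.append((sorted(scc), digest))

    for r in roots:
        if r not in index:
            dfs(r)
    return out
```

Because SCCs come out children-first, every cross-SCC edge already has its target hash available, which is what lets the WPA side compare whole SCCs by hash before falling back to structural comparison.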
Index: trunk/gcc/tree-streamer.c
===================================================================
*** trunk.orig/gcc/tree-streamer.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer.c 2013-06-13 14:35:53.258108084 +0200
*************** streamer_check_handled_ts_structures (vo
*** 92,107 ****

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! cache->nodes.safe_push (t);
else
! cache->nodes[ix] = t;
}


--- 92,115 ----

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t, hashval_t hash)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! {
! cache->nodes.safe_push (t);
! if (cache->hashes.exists ())
! cache->hashes.safe_push (hash);
! }
else
! {
! cache->nodes[ix] = t;
! if (cache->hashes.exists ())
! cache->hashes[ix] = hash;
! }
}


*************** streamer_tree_cache_add_to_node_array (s
*** 117,123 ****

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
--- 125,131 ----

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
*************** streamer_tree_cache_insert_1 (struct str
*** 136,142 ****
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t);

/* Indicate that the item was not present in the cache. */
existed_p = false;
--- 144,150 ----
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);

/* Indicate that the item was not present in the cache. */
existed_p = false;
*************** streamer_tree_cache_insert_1 (struct str
*** 151,157 ****
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t);
*slot = (void *)(size_t) (ix + 1);
}

--- 159,165 ----
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);
*slot = (void *)(size_t) (ix + 1);
}

*************** streamer_tree_cache_insert_1 (struct str
*** 174,203 ****

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, ix_p, true);
}


! /* Insert tree node T in CACHE at slot IX. If T already
! existed in the cache return true. Otherwise, return false. */

! bool
! streamer_tree_cache_insert_at (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! return streamer_tree_cache_insert_1 (cache, t, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache, tree t)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
--- 182,214 ----

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! hashval_t hash, unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, hash, ix_p, true);
}


! /* Replace the tree node with T in CACHE at slot IX. */

! void
! streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! hashval_t hash = 0;
! if (cache->hashes.exists ())
! hash = streamer_tree_cache_get_hash (cache, ix);
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
*************** record_common_node (struct streamer_tree
*** 257,263 ****
if (!node)
node = error_mark_node;

! streamer_tree_cache_append (cache, node);

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
--- 268,277 ----
if (!node)
node = error_mark_node;

! /* ??? FIXME, devise a better hash value. But the hash needs to be equal
! for all frontend and lto1 invocations. So just use the position
! in the cache as hash value. */
! streamer_tree_cache_append (cache, node, cache->nodes.length ());

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
*************** preload_common_nodes (struct streamer_tr
*** 305,317 ****
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (void)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
--- 319,334 ----
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (bool with_hashes)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();
+ cache->nodes.create (165);
+ if (with_hashes)
+ cache->hashes.create (165);

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
*************** streamer_tree_cache_delete (struct strea
*** 332,336 ****
--- 349,354 ----

pointer_map_destroy (c->node_map);
c->nodes.release ();
+ c->hashes.release ();
free (c);
}
Index: trunk/gcc/tree-streamer.h
===================================================================
*** trunk.orig/gcc/tree-streamer.h 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer.h 2013-06-13 14:35:53.258108084 +0200
*************** along with GCC; see the file COPYING3.
*** 43,48 ****
--- 43,49 ----
T. The reconstructed T is inserted in some array so that when
the reference index for T is found in the input stream, it can be
used to look up into the array to get the reconstructed T. */
+
struct streamer_tree_cache_d
{
/* The mapping between tree nodes and slots into the nodes array. */
*************** struct streamer_tree_cache_d
*** 50,55 ****
--- 51,58 ----

/* The nodes pickled so far. */
vec<tree> nodes;
+ /* The node hashes (if available). */
+ vec<hashval_t> hashes;
};

/* Return true if tree node EXPR should be streamed as a builtin. For
*************** tree streamer_alloc_tree (struct lto_inp
*** 71,77 ****
void streamer_read_tree_body (struct lto_input_block *, struct data_in *, tree);
tree streamer_get_pickled_tree (struct lto_input_block *, struct data_in *);
tree streamer_get_builtin_tree (struct lto_input_block *, struct data_in *);
- tree streamer_read_integer_cst (struct lto_input_block *, struct data_in *);
struct bitpack_d streamer_read_tree_bitfields (struct lto_input_block *,
struct data_in *, tree);

--- 74,79 ----
*************** void streamer_write_builtin (struct outp
*** 89,110 ****
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! unsigned *);
! bool streamer_tree_cache_insert_at (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (void);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}


#endif /* GCC_TREE_STREAMER_H */
--- 91,121 ----
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! hashval_t, unsigned *);
! void streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree,
! hashval_t);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (bool);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get_tree (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}

+ /* Return the tree hash value at slot IX in CACHE. */
+
+ static inline hashval_t
+ streamer_tree_cache_get_hash (struct streamer_tree_cache_d *cache, unsigned ix)
+ {
+ return cache->hashes[ix];
+ }
+

#endif /* GCC_TREE_STREAMER_H */
Index: trunk/gcc/lto-streamer.c
===================================================================
*** trunk.orig/gcc/lto-streamer.c 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/lto-streamer.c 2013-06-13 14:35:53.258108084 +0200
*************** lto_get_section_name (int section_type,
*** 176,181 ****
--- 176,185 ----
return concat (LTO_SECTION_NAME_PREFIX, sep, add, post, NULL);
}

+ unsigned long num_tree_bodies_written = 0;
+ unsigned long num_trees_output = 0;
+ unsigned long num_grefs_output = 0;
+ unsigned long num_lrefs_output = 0;

/* Show various memory usage statistics related to LTO. */

*************** print_lto_report (const char *s)
*** 227,232 ****
--- 231,245 ----
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_output_symtab_nodes);

+ fprintf (stderr, "[%s] # of output trees: %lu\n",
+ s, num_trees_output);
+ fprintf (stderr, "[%s] # of output references to globals: %lu\n",
+ s, num_grefs_output);
+ fprintf (stderr, "[%s] # of output local references: %lu\n",
+ s, num_lrefs_output);
+ fprintf (stderr, "[%s] # of output tree bodies: %lu\n",
+ s, num_tree_bodies_written);
+
fprintf (stderr, "[%s] # callgraph partitions: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_cgraph_partitions);
Index: trunk/gcc/tree.c
===================================================================
*** trunk.orig/gcc/tree.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree.c 2013-06-13 14:35:53.261108120 +0200
*************** build_int_cst_wide (tree type, unsigned
*** 1266,1271 ****
--- 1266,1364 ----
return t;
}

+ void
+ cache_integer_cst (tree t)
+ {
+ tree type = TREE_TYPE (t);
+ HOST_WIDE_INT hi = TREE_INT_CST_HIGH (t);
+ unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (t);
+ int ix = -1;
+ int limit = 0;
+
+ gcc_assert (!TREE_OVERFLOW (t));
+
+ switch (TREE_CODE (type))
+ {
+ case NULLPTR_TYPE:
+ gcc_assert (hi == 0 && low == 0);
+ /* Fallthru. */
+
+ case POINTER_TYPE:
+ case REFERENCE_TYPE:
+ /* Cache NULL pointer. */
+ if (!hi && !low)
+ {
+ limit = 1;
+ ix = 0;
+ }
+ break;
+
+ case BOOLEAN_TYPE:
+ /* Cache false or true. */
+ limit = 2;
+ if (!hi && low < 2)
+ ix = low;
+ break;
+
+ case INTEGER_TYPE:
+ case OFFSET_TYPE:
+ if (TYPE_UNSIGNED (type))
+ {
+ /* Cache 0..N */
+ limit = INTEGER_SHARE_LIMIT;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low;
+ }
+ else
+ {
+ /* Cache -1..N */
+ limit = INTEGER_SHARE_LIMIT + 1;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low + 1;
+ else if (hi == -1 && low == -(unsigned HOST_WIDE_INT)1)
+ ix = 0;
+ }
+ break;
+
+ case ENUMERAL_TYPE:
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ if (ix >= 0)
+ {
+ /* Look for it in the type's vector of small shared ints. */
+ if (!TYPE_CACHED_VALUES_P (type))
+ {
+ TYPE_CACHED_VALUES_P (type) = 1;
+ TYPE_CACHED_VALUES (type) = make_tree_vec (limit);
+ }
+
+ gcc_assert (TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) == NULL_TREE);
+ TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) = t;
+ }
+ else
+ {
+ /* Use the cache of larger shared ints. */
+ void **slot;
+
+ slot = htab_find_slot (int_cst_hash_table, t, INSERT);
+ /* If there is already an entry for the number verify it's the
+ same. */
+ if (*slot)
+ {
+ gcc_assert (TREE_INT_CST_LOW ((tree)*slot) == low
+ && TREE_INT_CST_HIGH ((tree)*slot) == hi);
+ return;
+ }
+ /* Otherwise insert this one into the hash table. */
+ *slot = t;
+ }
+ }
+
+
/* Builds an integer constant in TYPE such that lowest BITS bits are ones
and the rest are zeros. */

Index: trunk/gcc/tree.h
===================================================================
*** trunk.orig/gcc/tree.h 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree.h 2013-06-13 14:35:53.262108132 +0200
*************** extern const_tree strip_invariant_refs (
*** 5637,5642 ****
--- 5637,5643 ----
extern tree lhd_gcc_personality (void);
extern void assign_assembler_name_if_neeeded (tree);
extern void warn_deprecated_use (tree, tree);
+ extern void cache_integer_cst (tree);


/* In cgraph.c */
Index: trunk/gcc/lto-symtab.c
===================================================================
*** trunk.orig/gcc/lto-symtab.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-symtab.c 2013-06-13 14:35:53.262108132 +0200
*************** lto_varpool_replace_node (struct varpool
*** 96,104 ****

ipa_clone_referring ((symtab_node)prevailing_node, &vnode->symbol.ref_list);

- /* Be sure we can garbage collect the initializer. */
- if (DECL_INITIAL (vnode->symbol.decl))
- DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
}
--- 96,101 ----
Index: trunk/gcc/varpool.c
===================================================================
*** trunk.orig/gcc/varpool.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/varpool.c 2013-06-13 14:35:53.262108132 +0200
*************** varpool_remove_node (struct varpool_node
*** 77,91 ****

/* Renove node initializer when it is no longer needed. */
void
! varpool_remove_initializer (struct varpool_node *node)
{
- if (DECL_INITIAL (node->symbol.decl)
- && !DECL_IN_CONSTANT_POOL (node->symbol.decl)
- /* Keep vtables for BINFO folding. */
- && !DECL_VIRTUAL_P (node->symbol.decl)
- /* FIXME: http://gcc.gnu.org/PR55395 */
- && debug_info_level == DINFO_LEVEL_NONE)
- DECL_INITIAL (node->symbol.decl) = error_mark_node;
}

/* Dump given cgraph node. */
--- 77,84 ----

/* Renove node initializer when it is no longer needed. */
void
! varpool_remove_initializer (struct varpool_node *)
{
}

/* Dump given cgraph node. */
Jan Hubicka
2013-06-13 21:37:05 UTC
Permalink
Post by Richard Biener
Ok, not streaming and comparing TREE_USED gets it improved to
I will try to gather better data tomorrow. My mozilla build ran out of disk space,
but according to stats we are now at about 7GB of GGC memory after merging.
I was playing with the following patch, which implements a check for whether types
are the same under my (probably naive and wrong) understanding of the ODR rule in C++.

It prints type pairs that seem the same and then verifies that they have the
same names and live in the same namespaces and records. On the JavaScript engine
there are 5000 types found identical this way by the devirtualization code that
nevertheless do not share the same MAIN VARIANT.

I guess those trees may be a good starting point for you to look at why they are not merged.

I suppose that once we have a maintainable code base we can get into more
aggressive merging in special cases. Requiring trees to be exactly the same is
a good default behaviour. We may, however, take advantage of extra knowledge:
the FE may tag types/decls that are subject to the ODR rule, and for those we
can reduce the hash to be based only on name+context and even output sane
diagnostics on mismatches.

Similarly, I think it would help a lot if we proactively merged !can_prevail_p
decls with matching types into those that can prevail, by hashing PUBLIC decls
only by their assembler name. Merging those should subsequently allow
collapsing the types that are otherwise kept separate just because the
associated vtables differ in the EXTERNAL and PUBLIC flags on their methods
and such.

Index: tree.c
===================================================================
--- tree.c (revision 200064)
+++ tree.c (working copy)
@@ -11618,6 +11711,91 @@ lhd_gcc_personality (void)
return gcc_eh_personality_decl;
}

+/* For languages with One Definition Rule, work out if
+ decls are actually the same even if the tree representation
+ differs. This handles only decls appearing in TYPE_NAME
+ and TYPE_CONTEXT. That is NAMESPACE_DECL, TYPE_DECL,
+ RECORD_TYPE and IDENTIFIER_NODE. */
+
+static bool
+decls_same_for_odr (tree decl1, tree decl2)
+{
+ if (decl1 == decl2)
+ return true;
+ if (!decl1 || !decl2)
+ {
+ fprintf (stderr, "Nesting mismatch\n");
+ debug_tree (decl1);
+ debug_tree (decl2);
+ return false;
+ }
+ if (TREE_CODE (decl1) != TREE_CODE (decl2))
+ {
+ fprintf (stderr, "Code mismatch\n");
+ debug_tree (decl1);
+ debug_tree (decl2);
+ return false;
+ }
+ if (TREE_CODE (decl1) == TRANSLATION_UNIT_DECL)
+ return true;
+ if (TREE_CODE (decl1) != NAMESPACE_DECL
+ && TREE_CODE (decl1) != RECORD_TYPE
+ && TREE_CODE (decl1) != TYPE_DECL)
+ {
+ fprintf (stderr, "Decl type mismatch\n");
+ debug_tree (decl1);
+ return false;
+ }
+ if (!DECL_NAME (decl1))
+ {
+ fprintf (stderr, "Anonymous; name mysmatch\n");
+ debug_tree (decl1);
+ return false;
+ }
+ if (!decls_same_for_odr (DECL_NAME (decl1), DECL_NAME (decl2)))
+ return false;
+ return decls_same_for_odr (DECL_CONTEXT (decl1),
+ DECL_CONTEXT (decl2));
+}
+
+/* For languages with One Definition Rule, work out if
+ types are same even if the tree representation differs.
+ This is non-trivial for LTO where minnor differences in
+ the type representation may have prevented type merging
+ to merge two copies of otherwise equivalent type. */
+
+static bool
+types_same_for_odr (tree type1, tree type2)
+{
+ type1 = TYPE_MAIN_VARIANT (type1);
+ type2 = TYPE_MAIN_VARIANT (type2);
+ if (type1 == type2)
+ return true;
+ if (!type1 || !type2)
+ return false;
+
+ /* If types are not structuraly same, do not bother to contnue.
+ Match in the remainder of code would mean ODR violation. */
+ if (!types_compatible_p (type1, type2))
+ return false;
+
+ debug_tree (type1);
+ debug_tree (type2);
+ if (!TYPE_NAME (type1))
+ {
+ fprintf (stderr, "Anonymous; name mysmatch\n");
+ return false;
+ }
+ if (!decls_same_for_odr (TYPE_NAME (type1), TYPE_NAME (type2)))
+ return false;
+ if (!decls_same_for_odr (TYPE_CONTEXT (type1), TYPE_CONTEXT (type2)))
+ return false;
+ fprintf (stderr, "type match!\n");
+ gcc_assert (in_lto_p);
+
+ return true;
+}
+
/* Try to find a base info of BINFO that would have its field decl at offset
OFFSET within the BINFO type and which is of EXPECTED_TYPE. If it can be
found, return, otherwise return NULL_TREE. */
@@ -11633,8 +11811,8 @@ get_binfo_at_offset (tree binfo, HOST_WI
tree fld;
int i;

- if (TYPE_MAIN_VARIANT (type) == TYPE_MAIN_VARIANT (expected_type))
- return binfo;
+ if (types_same_for_odr (type, expected_type))
+ return binfo;
if (offset < 0)
return NULL_TREE;

@@ -11663,7 +11841,7 @@ get_binfo_at_offset (tree binfo, HOST_WI
{
tree base_binfo, found_binfo = NULL_TREE;
for (i = 0; BINFO_BASE_ITERATE (binfo, i, base_binfo); i++)
- if (TREE_TYPE (base_binfo) == TREE_TYPE (fld))
+ if (types_same_for_odr (base_binfo, fld))
{
found_binfo = base_binfo;
break;
Jan Hubicka
2013-06-13 22:16:35 UTC
Permalink
Post by Jan Hubicka
Post by Richard Biener
Ok, not streaming and comparing TREE_USED gets it improved to
I will try to gather better data tomorrow. My mozilla build died on disk space,
but according to stats we are now at about 7GB of GGC memory after merging.
I was playing with the following patch that implements testing whether types
are same in my (probably naive and wrong) understanding of ODR rule in C++
So I can confirm that we now need 3GB of TMP space instead of 8GB with the earlier
version of the patch. I will compare to mainline tomorrow, but I think it is
about the same.
phase opt and generate : 96.39 ( 9%) usr 40.45 (45%) sys 136.91 (12%) wall 271042 kB ( 7%) ggc
phase stream in : 457.87 (43%) usr 8.38 ( 9%) sys 466.44 (40%) wall 3798844 kB (93%) ggc
phase stream out : 509.39 (48%) usr 40.82 (46%) sys 550.88 (48%) wall 7149 kB ( 0%) ggc
ipa cp : 13.62 ( 1%) usr 5.00 ( 6%) sys 18.61 ( 2%) wall 425204 kB (10%) ggc
ipa inlining heuristics : 60.52 ( 6%) usr 36.15 (40%) sys 96.71 ( 8%) wall 1353370 kB (33%) ggc
ipa lto decl in : 346.94 (33%) usr 5.49 ( 6%) sys 352.60 (31%) wall 7042 kB ( 0%) ggc
ipa lto decl out : 481.19 (45%) usr 23.28 (26%) sys 504.68 (44%) wall 0 kB ( 0%) ggc
TOTAL :1063.67 89.65 1154.26 4078436 kB

So we are still bound by streaming. I am running -flto-report overnight.

My ODR patch finds 36377 matches and also weird-looking mismatches like this one:
<record_type 0x7fbd30d46dc8 sockaddr_storage BLK
size <integer_cst 0x7fbd416bc1e0 type <integer_type 0x7fbd415660a8 bitsizetype> constant 1024>
unit size <integer_cst 0x7fbd416bc700 type <integer_type 0x7fbd41566000 sizetype> constant 128>
align 64 symtab 0 alias set -1 canonical type 0x7fbd30f0bc78
fields <field_decl 0x7fbd30e99ed8 ss_family
type <integer_type 0x7fbd3b98c000 sa_family_t public unsigned HI
size <integer_cst 0x7fbd41555fe0 constant 16>
unit size <integer_cst 0x7fbd4156a000 constant 2>
align 16 symtab 0 alias set -1 canonical type 0x7fbd41566540 precision 16 min <integer_cst 0x7fbd4156a020 0> max <integer_cst 0x7fbd41555fc0 65535>>
unsigned nonlocal HI file /usr/include/bits/socket.h line 189 col 0 size <integer_cst 0x7fbd41555fe0 16> unit size <integer_cst 0x7fbd4156a000 2>
align 16 offset_align 128
offset <integer_cst 0x7fbd41555d60 constant 0>
bit offset <integer_cst 0x7fbd41555de0 constant 0> context <record_type 0x7fbd30d46dc8 sockaddr_storage>
chain <field_decl 0x7fbd30e99000 __ss_align type <integer_type 0x7fbd415667e0 long unsigned int>
unsigned nonlocal DI file /usr/include/bits/socket.h line 190 col 0
size <integer_cst 0x7fbd41555d20 constant 64>
unit size <integer_cst 0x7fbd41555d40 constant 8>
align 64 offset_align 128 offset <integer_cst 0x7fbd41555d60 0> bit offset <integer_cst 0x7fbd41555d20 64> context <record_type 0x7fbd30d46dc8 sockaddr_storage> chain <field_decl 0x7fbd30e99e40 __ss_padding>>> context <translation_unit_decl 0x7fbd30cbc2e0 D.967968>
chain <type_decl 0x7fbd30d47da8 sockaddr_storage>>
<record_type 0x7fbd30f0bc78 sockaddr_storage BLK
size <integer_cst 0x7fbd416bc1e0 type <integer_type 0x7fbd415660a8 bitsizetype> constant 1024>
unit size <integer_cst 0x7fbd416bc700 type <integer_type 0x7fbd41566000 sizetype> constant 128>
align 64 symtab 0 alias set -1 canonical type 0x7fbd30f0bc78
fields <field_decl 0x7fbd30ef9558 ss_family
type <integer_type 0x7fbd3b98c000 sa_family_t public unsigned HI
size <integer_cst 0x7fbd41555fe0 constant 16>
unit size <integer_cst 0x7fbd4156a000 constant 2>
align 16 symtab 0 alias set -1 canonical type 0x7fbd41566540 precision 16 min <integer_cst 0x7fbd4156a020 0> max <integer_cst 0x7fbd41555fc0 65535>>
unsigned HI file /usr/include/bits/socket.h line 189 col 0 size <integer_cst 0x7fbd41555fe0 16> unit size <integer_cst 0x7fbd4156a000 2>
align 16 offset_align 128
offset <integer_cst 0x7fbd41555d60 constant 0>
bit offset <integer_cst 0x7fbd41555de0 constant 0> context <record_type 0x7fbd30f0bc78 sockaddr_storage>
chain <field_decl 0x7fbd30ef94c0 __ss_align type <integer_type 0x7fbd415667e0 long unsigned int>
unsigned DI file /usr/include/bits/socket.h line 190 col 0
size <integer_cst 0x7fbd41555d20 constant 64>
unit size <integer_cst 0x7fbd41555d40 constant 8>
align 64 offset_align 128 offset <integer_cst 0x7fbd41555d60 0> bit offset <integer_cst 0x7fbd41555d20 64> context <record_type 0x7fbd30f0bc78 sockaddr_storage> chain <field_decl 0x7fbd30ef9428 __ss_padding>>> context <translation_unit_decl 0x7fbd30ea9f18 D.936417>
pointer_to_this <pointer_type 0x7fbd30f0bd20> chain <type_decl 0x7fbd30ea9398 D.938243>>

These mismatch because we run into the following difference:
<type_decl 0x7fbd30d47da8 sockaddr_storage
type <record_type 0x7fbd30d46dc8 sockaddr_storage BLK
size <integer_cst 0x7fbd416bc1e0 constant 1024>
unit size <integer_cst 0x7fbd416bc700 constant 128>
align 64 symtab 0 alias set -1 canonical type 0x7fbd30f0bc78
fields <field_decl 0x7fbd30e99ed8 ss_family type <integer_type 0x7fbd3b98c000 sa_family_t>
unsigned nonlocal HI file /usr/include/bits/socket.h line 189 col 0
size <integer_cst 0x7fbd41555fe0 constant 16>
unit size <integer_cst 0x7fbd4156a000 constant 2>
align 16 offset_align 128
offset <integer_cst 0x7fbd41555d60 constant 0>
bit offset <integer_cst 0x7fbd41555de0 constant 0> context <record_type 0x7fbd30d46dc8 sockaddr_storage> chain <field_decl 0x7fbd30e99000 __ss_align>> context <translation_unit_decl 0x7fbd30cbc2e0 D.967968>
chain <type_decl 0x7fbd30d47da8 sockaddr_storage>>
public VOID file /usr/include/bits/socket.h line 187 col 0
align 8 context <translation_unit_decl 0x7fbd30cbc2e0 D.967968>>
<identifier_node 0x7fbd30f06d70 sockaddr_storage>

I am not sure what it means that one type has more TYPE_DECLs stacked than the other.

Honza
Jan Hubicka
2013-06-14 05:45:54 UTC
Permalink
Post by Jan Hubicka
Post by Jan Hubicka
Post by Richard Biener
Ok, not streaming and comparing TREE_USED gets it improved to
I will try to gather better data tomorrow. My mozilla build died on disk space,
but according to stats we are now at about 7GB of GGC memory after merging.
I was playing with the following patch that implements testing whether types
are same in my (probably naive and wrong) understanding of ODR rule in C++
So i can confirm that we now need 3GB of TMP space instead of 8GB with earlier
version of patch. I will compare to mainline tomorrow, but I think it is
about the same.
phase opt and generate : 96.39 ( 9%) usr 40.45 (45%) sys 136.91 (12%) wall 271042 kB ( 7%) ggc
phase stream in : 457.87 (43%) usr 8.38 ( 9%) sys 466.44 (40%) wall 3798844 kB (93%) ggc
phase stream out : 509.39 (48%) usr 40.82 (46%) sys 550.88 (48%) wall 7149 kB ( 0%) ggc
ipa cp : 13.62 ( 1%) usr 5.00 ( 6%) sys 18.61 ( 2%) wall 425204 kB (10%) ggc
ipa inlining heuristics : 60.52 ( 6%) usr 36.15 (40%) sys 96.71 ( 8%) wall 1353370 kB (33%) ggc
ipa lto decl in : 346.94 (33%) usr 5.49 ( 6%) sys 352.60 (31%) wall 7042 kB ( 0%) ggc
ipa lto decl out : 481.19 (45%) usr 23.28 (26%) sys 504.68 (44%) wall 0 kB ( 0%) ggc
TOTAL :1063.67 89.65 1154.26 4078436 kB
So we are still bound by streaming. I am running -flto-report overnight.
[WPA] read 43363300 SCCs of average size 2.264113
[WPA] 98179403 tree bodies read in total
[WPA] tree SCC table: size 16777213, 6422251 elements, collision ratio: 0.811639
[WPA] tree SCC max chain length 88 (size 1)
[WPA] Compared 16544560 SCCs, 275298 collisions (0.016640)
[WPA] Merged 16458553 SCCs
[WPA] Merged 46453870 tree bodies
[WPA] Merged 9535385 types
[WPA] 6771259 types prevailed (21348860 associated trees)
[WPA] Old merging code merges an additional 1759918 types of which 379059 are in the same SCC with their prevailing variant (19696849 and 15301625 associated trees)
[WPA] GIMPLE canonical type table: size 131071, 77875 elements, 6771394 searches, 1528380 collisions (ratio: 0.225711)
[WPA] GIMPLE canonical type hash table: size 16777213, 6771339 elements, 23174504 searches, 21075518 collisions (ratio: 0.909427)
....
[LTRANS] read 228296 SCCs of average size 11.882460
[LTRANS] 2712718 tree bodies read in total
[LTRANS] GIMPLE canonical type table: size 16381, 7025 elements, 704670 searches, 24040 collisions (ratio: 0.034115)
[LTRANS] GIMPLE canonical type hash table: size 1048573, 704613 elements, 2269381 searches, 2021919 collisions (ratio: 0.890956)

We manage to get stuck in one of the ltrans units in LRA:
LRA hard reg assignment : 476.07 (44%) usr 0.03 ( 0%) sys 476.08 (44%) wall 0 kB ( 0%) ggc

28607 12.1151 lto1 alloc_page(unsigned int)
3564 1.5094 lto1 record_reg_classes(int, int, rtx_def**, machine_mode*, char const**, rtx_def*, reg_class*)
3235 1.3700 libc-2.11.1.so _int_malloc
3056 1.2942 lto1 ggc_set_mark(void const*)
2646 1.1206 lto1 gt_ggc_mx_lang_tree_node(void*)
2539 1.0753 lto1 bitmap_set_bit(bitmap_head_def*, int)
2333 0.9880 opreport /usr/bin/opreport
2210 0.9359 lto1 for_each_rtx_1(rtx_def*, int, int (*)(rtx_def**, void*), void*)
2133 0.9033 lto1 constrain_operands(int)
2128 0.9012 lto1 lookup_page_table_entry(void const*)
1586 0.6717 lto1 preprocess_constraints()

While GGC memory is now under 7GB after type streaming and we collect garbage just once in WPA, the usage reported by top still goes to about 12GB.

With the ODR patch there are 424 devirtualizations happening during WPA and some extra (I do not have stats for those)
during ltrans.

Honza
Richard Biener
2013-06-14 08:31:13 UTC
Permalink
Post by Jan Hubicka
Post by Jan Hubicka
Post by Richard Biener
Ok, not streaming and comparing TREE_USED gets it improved to
I will try to gather better data tomorrow. My mozilla build died on disk space,
but according to stats we are now at about 7GB of GGC memory after merging.
I was playing with the following patch that implements testing whether types
are same in my (probably naive and wrong) understanding of ODR rule in C++
So i can confirm that we now need 3GB of TMP space instead of 8GB with earlier
version of patch. I will compare to mainline tomorrow, but I think it is
about the same.
So we are still bound by streaming. I am running -flto-report overnight.
<identifier_node 0x7fbd30f06d70 sockaddr_storage>
I am not sure what it means that one type has more TYPE_DECLs stacked than the other.
I think that's the usual case of two nearly identical types in the
_same_ SCC, linked through one of the types' TYPE_DECL DECL_ORIGINAL_TYPE.
The C++ frontend likes to produce those ...

They are accounted for under "are in the same SCC" in the stats (for the
latest updated patch):

[WPA] Old merging code merges an additional 41493 types of which 21633 are
in the same SCC with their prevailing variant (415188 and 324371
associated trees)

I'm looking into other things now and, for example, see TYPE_ALIAS_SET
issues (we only stream alias sets -1 and 0, but whether -1 got
transformed to 0 at compile time depends on whether get_alias_set
was called on the type). That's one case where we can either forcefully
call get_alias_set at streaming time, introduce a predicate
type_would_have_alias_set_zero that doesn't run the whole get_alias_set
machinery, or ignore this for merging and make sure to merge -1 and 0
as 0 (the whole 0 vs. -1 special case exists to support TUs compiled
with -fno-strict-aliasing being linked together with -fstrict-aliasing TUs).

Richard.
Jan Hubicka
2013-06-14 08:45:06 UTC
Permalink
Post by Richard Biener
Post by Jan Hubicka
Post by Jan Hubicka
Post by Richard Biener
Ok, not streaming and comparing TREE_USED gets it improved to
I will try to gather better data tomorrow. My mozilla build died from lack of
disk space, but according to the stats we are now at about 7GB of GGC memory
after merging.
I was playing with the following patch that implements testing whether types
are the same under my (probably naive and wrong) understanding of the ODR rule in C++.
So I can confirm that we now need 3GB of TMP space instead of 8GB with the
earlier version of the patch. I will compare to mainline tomorrow, but I think
it is about the same.
phase opt and generate : 96.39 ( 9%) usr 40.45 (45%) sys 136.91 (12%) wall 271042 kB ( 7%) ggc
phase stream in : 457.87 (43%) usr 8.38 ( 9%) sys 466.44 (40%) wall 3798844 kB (93%) ggc
phase stream out : 509.39 (48%) usr 40.82 (46%) sys 550.88 (48%) wall 7149 kB ( 0%) ggc
ipa cp : 13.62 ( 1%) usr 5.00 ( 6%) sys 18.61 ( 2%) wall 425204 kB (10%) ggc
ipa inlining heuristics : 60.52 ( 6%) usr 36.15 (40%) sys 96.71 ( 8%) wall 1353370 kB (33%) ggc
ipa lto decl in : 346.94 (33%) usr 5.49 ( 6%) sys 352.60 (31%) wall 7042 kB ( 0%) ggc
ipa lto decl out : 481.19 (45%) usr 23.28 (26%) sys 504.68 (44%) wall 0 kB ( 0%) ggc
TOTAL :1063.67 89.65 1154.26 4078436 kB
So we are still bound by streaming. I am running -flto-report overnight.
I am not sure what it means that one type has more TYPE_DECLs stacked than the other.
I think that's the usual case of two nearly identical types in the
_same_ SCC, linked through one of the types' TYPE_DECL DECL_ORIGINAL_TYPE.
The C++ frontend likes to produce those ...
Hmm, so I guess I should walk into DECL_ORIGINAL_TYPE when seeing a TYPE_DECL?

Honza
Richard Biener
2013-06-14 10:52:19 UTC
Permalink
I've managed to fix nearly all reported missed merged types for cc1.
Remaining are those we'll never be able to merge (merging would
change the SCC shape) and those that eventually end up referring
to a TYPE_STUB_DECL with a make_anon_name () IDENTIFIER_NODE.
For the latter we should find a middle-end solution as a followup,
in case it really matters.

WPA statistics for stage2 cc1 are

[WPA] read 2495082 SCCs of average size 2.380088
[WPA] 5938514 tree bodies read in total
[WPA] tree SCC table: size 524287, 260253 elements, collision ratio: 0.804380
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 429412 SCCs, 7039 collisions (0.016392)
[WPA] Merged 426111 SCCs
[WPA] Merged 3313709 tree bodies
[WPA] Merged 225079 types
[WPA] 162844 types prevailed (488124 associated trees)
[WPA] Old merging code merges an additional 22412 types of which 21492 are
in the same SCC with their prevailing variant (345831 and 323276
associated trees)

which shows there are 920 such TYPE_STUB_DECL issues, and that 21492 of
the merges the old code performed destroyed SCCs.

Compared to the old code, which only unified types and some selected
trees (INTEGER_CSTs), the new code can immediately ggc_free the
unified SCCs after they have been read. This results in 55% of
all tree bodies input into the WPA stage being freed (rather than
hoping for secondary GC walk effects, as the old code did); 58% of
all types are recycled.

Compile time is at least on par with the old code now, and disk usage
grows by a moderate 10% due to the streaming format change.

A final LTO bootstrap & regtest is running on x86_64-unknown-linux-gnu
now, and I'm building SPEC2k6 with LTO again.

I plan to commit this or a variant with the old merging code ripped
out on Monday (not decided on that yet) unless anyone objects.

Thus, more testing appreciated (also patch review, of course).

Thanks,
Richard.

* lto-streamer.h (enum LTO_tags): Add LTO_tree_scc.
(lto_input_scc): Declare.
(lto_input_tree_1): Likewise.
* lto-streamer-in.c (lto_read_body): Use streamer_tree_cache_get_tree.
(lto_read_tree_1): Split out from ...
(lto_read_tree): ... this.
(lto_input_scc): New function.
(lto_input_tree_1): Split out from ...
(lto_input_tree): ... this. Handle LTO_tree_scc.
(lto_data_in_create): Create the streamer cache without hashes.
* lto-streamer-out.c (create_output_block): Create the streamer
cache with hashes when not doing WPA.
(lto_write_tree_1): Split out from ...
(lto_write_tree): ... this.
(get_symbol_initial_value): New function.
(lto_output_tree_1): Split out from ...
(lto_output_tree): ... this. Write trees as series of SCCs
using a DFS walk via DFS_write_tree.
(struct sccs, struct scc_entry): New types.
(next_dfs_num, sccstack, sccstate, sccstate_obstack): New globals.
(DFS_write_tree_body): New function.
(DFS_write_tree): Likewise.
(hash_tree): Likewise.
(scc_entry_compare): Likewise.
(hash_scc): Likewise.
* tree-streamer-in.c (lto_input_ts_list_tree_pointers): Stream
TREE_CHAIN as regular reference.
(streamer_read_integer_cst): Remove.
(streamer_get_pickled_tree): Adjust.
* tree-streamer-out.c (streamer_write_chain): Disable streaming
of DECL_EXTERNALs in BLOCK_VARS for now.
(write_ts_list_tree_pointers): Stream TREE_CHAIN as regular
reference.
* tree-streamer.c (streamer_tree_cache_add_to_node_array):
Add hash value argument and record that if hashes are recorded
in the cache.
(streamer_tree_cache_insert_1): Adjust.
(streamer_tree_cache_insert): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_append): Adjust.
(record_common_node): Likewise.
(streamer_tree_cache_create): Add argument whether to
record hash values together with trees.
(streamer_tree_cache_delete): Adjust.
* tree-streamer.h (struct streamer_tree_cache_d): Add
vector of hashes.
(streamer_read_integer_cst): Remove.
(streamer_tree_cache_insert): Adjust.
(streamer_tree_cache_append): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_create): Add argument whether to record hashes.
(streamer_tree_cache_get): Rename to ...
(streamer_tree_cache_get_tree): ... this.
(streamer_tree_cache_get_hash): New function.
* tree.c (cache_integer_cst): New function.
* tree.h (cache_integer_cst): Declare.
* lto-symtab.c (lto_varpool_replace_node): Only release
DECL_INITIAL of non-prevailing decls.
* varpool.c (varpool_remove_initializer): Do not release
DECL_INITIAL when we are still in CGRAPH_LTO_STREAMING.

lto/
* Make-lang.in (lto.o): Add $(DATA_STREAMER_H) dependency.
* lto.c: Include data-streamer.h.
(lto_read_in_decl_state): Use streamer_tree_cache_get_tree.
(gimple_type_leader_entry_s, gimple_type_leader,
gimple_lookup_type_leader): Remove.
(gtc_visit): Simplify.
(gimple_types_compatible_p): Likewise.
(gimple_register_type_1): Likewise. Merge into ...
(gimple_register_type): ... this. Keep it as legacy for
statistics purposes for now.
(fixup_integer_cst): Remove.
(LTO_FIXUP_TREE, lto_fixup_types, lto_ft_*): Simplify and
rename to ...
(MAYBE_REMEMBER_WITH_VARS, maybe_remember_with_vars,
maybe_remember_with_vars_*): ... these.
(uniquify_nodes): Remove.
(lto_fixup_prevailing_type): New function.
(struct tree_scc, struct tree_scc_hasher): New type and hasher.
(tree_scc_hash, tree_scc_hash_obstack): New globals.
(num_merged_types, num_prevailing_types, num_not_merged_types,
num_not_merged_types_in_same_scc, total_scc_size, num_sccs_read,
total_scc_size_merged, num_sccs_merged, num_scc_compares,
num_scc_compare_collisions): New global counters.
(compare_tree_sccs_1): New function.
(compare_tree_sccs): Likewise.
(unify_scc): Likewise.
(lto_read_decls): Stream in tree SCCs and unify them on the
way in. Finalize prevailing SCC tree members.
(read_cgraph_and_symbols): Do not initialize or free gimple_type_leader.
Allocate and free tree_scc_hash_obstack and tree_scc_hash, do not bother
to ggc-collect during merging.
(print_lto_report_1): Adjust for new merging code.

Index: trunk/gcc/lto-streamer.h
===================================================================
*** trunk.orig/gcc/lto-streamer.h 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/lto-streamer.h 2013-06-13 14:35:53.249107976 +0200
*************** enum LTO_tags
*** 199,204 ****
--- 199,207 ----
/* EH try/catch node. */
LTO_eh_catch,

+ /* Special for global streamer. A blob of unnamed tree nodes. */
+ LTO_tree_scc,
+
/* References to indexable tree nodes. These objects are stored in
tables that are written separately from the function bodies that
reference them. This way they can be instantiated even when the
*************** tree lto_input_tree_ref (struct lto_inpu
*** 854,859 ****
--- 857,866 ----
struct function *, enum LTO_tags);
void lto_tag_check_set (enum LTO_tags, int, ...);
void lto_init_eh (void);
+ hashval_t lto_input_scc (struct lto_input_block *, struct data_in *,
+ unsigned *, unsigned *);
+ tree lto_input_tree_1 (struct lto_input_block *, struct data_in *,
+ enum LTO_tags, hashval_t hash);
tree lto_input_tree (struct lto_input_block *, struct data_in *);


Index: trunk/gcc/lto-streamer-in.c
===================================================================
*** trunk.orig/gcc/lto-streamer-in.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-streamer-in.c 2013-06-13 14:35:53.250107988 +0200
*************** lto_read_body (struct lto_file_decl_data
*** 1016,1022 ****
unsigned i;
for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
if (t == NULL_TREE)
continue;

--- 1016,1022 ----
unsigned i;
for (i = len; i-- > from;)
{
! tree t = streamer_tree_cache_get_tree (cache, i);
if (t == NULL_TREE)
continue;

*************** lto_input_function_body (struct lto_file
*** 1056,1067 ****
}


/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
--- 1056,1098 ----
}


+ /* Read the physical representation of a tree node EXPR from
+ input block IB using the per-file context in DATA_IN. */
+
+ static void
+ lto_read_tree_1 (struct lto_input_block *ib, struct data_in *data_in, tree expr)
+ {
+ /* Read all the bitfield values in EXPR. Note that for LTO, we
+ only write language-independent bitfields, so no more unpacking is
+ needed. */
+ streamer_read_tree_bitfields (ib, data_in, expr);
+
+ /* Read all the pointer fields in EXPR. */
+ streamer_read_tree_body (ib, data_in, expr);
+
+ /* Read any LTO-specific data not read by the tree streamer. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ DECL_INITIAL (expr) = stream_read_tree (ib, data_in);
+
+ /* We should never try to instantiate an MD or NORMAL builtin here. */
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ gcc_assert (!streamer_handle_as_builtin_p (expr));
+
+ #ifdef LTO_STREAMER_DEBUG
+ /* Remove the mapping to RESULT's original address set by
+ streamer_alloc_tree. */
+ lto_orig_address_remove (expr);
+ #endif
+ }
+
/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
*************** lto_read_tree (struct lto_input_block *i
*** 1069,1103 ****
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result);

! /* Read all the bitfield values in RESULT. Note that for LTO, we
! only write language-independent bitfields, so no more unpacking is
! needed. */
! streamer_read_tree_bitfields (ib, data_in, result);

! /* Read all the pointer fields in RESULT. */
! streamer_read_tree_body (ib, data_in, result);

! /* Read any LTO-specific data not read by the tree streamer. */
! if (DECL_P (result)
! && TREE_CODE (result) != FUNCTION_DECL
! && TREE_CODE (result) != TRANSLATION_UNIT_DECL)
! DECL_INITIAL (result) = stream_read_tree (ib, data_in);

- /* We should never try to instantiate an MD or NORMAL builtin here. */
- if (TREE_CODE (result) == FUNCTION_DECL)
- gcc_assert (!streamer_handle_as_builtin_p (result));

! /* end_marker = */ streamer_read_uchar (ib);

! #ifdef LTO_STREAMER_DEBUG
! /* Remove the mapping to RESULT's original address set by
! streamer_alloc_tree. */
! lto_orig_address_remove (result);
! #endif

! return result;
}


--- 1100,1169 ----
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result, hash);

! lto_read_tree_1 (ib, data_in, result);

! /* end_marker = */ streamer_read_uchar (ib);

! return result;
! }


! /* Populate the reader cache with trees materialized from the SCC
! following in the IB, DATA_IN stream. */

! hashval_t
! lto_input_scc (struct lto_input_block *ib, struct data_in *data_in,
! unsigned *len, unsigned *entry_len)
! {
! /* A blob of unnamed tree nodes, fill the cache from it and
! recurse. */
! unsigned size = streamer_read_uhwi (ib);
! hashval_t scc_hash = streamer_read_uhwi (ib);
! unsigned scc_entry_len = 1;

! if (size == 1)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! lto_input_tree_1 (ib, data_in, tag, scc_hash);
! }
! else
! {
! unsigned int first = data_in->reader_cache->nodes.length ();
! tree result;
!
! scc_entry_len = streamer_read_uhwi (ib);
!
! /* Materialize size trees by reading their headers. */
! for (unsigned i = 0; i < size; ++i)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! if (tag == LTO_null
! || (tag >= LTO_field_decl_ref && tag <= LTO_global_decl_ref)
! || tag == LTO_tree_pickle_reference
! || tag == LTO_builtin_decl
! || tag == LTO_integer_cst
! || tag == LTO_tree_scc)
! gcc_unreachable ();
!
! result = streamer_alloc_tree (ib, data_in, tag);
! streamer_tree_cache_append (data_in->reader_cache, result, 0);
! }
!
! /* Read the tree bitpacks and references. */
! for (unsigned i = 0; i < size; ++i)
! {
! result = streamer_tree_cache_get_tree (data_in->reader_cache,
! first + i);
! lto_read_tree_1 (ib, data_in, result);
! /* end_marker = */ streamer_read_uchar (ib);
! }
! }
!
! *len = size;
! *entry_len = scc_entry_len;
! return scc_hash;
}


*************** lto_read_tree (struct lto_input_block *i
*** 1106,1117 ****
to previously read nodes. */

tree
! lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
{
- enum LTO_tags tag;
tree result;

- tag = streamer_read_record_start (ib);
gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
--- 1172,1182 ----
to previously read nodes. */

tree
! lto_input_tree_1 (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
tree result;

gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
*************** lto_input_tree (struct lto_input_block *
*** 1137,1155 ****
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants we only need the type and its hi/low
! words. */
! result = streamer_read_integer_cst (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag);
}

return result;
}


/* Input toplevel asms. */

--- 1202,1240 ----
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants in singletons we can use the existing
! tree integer constant merging code. */
! tree type = stream_read_tree (ib, data_in);
! unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
! HOST_WIDE_INT high = streamer_read_hwi (ib);
! result = build_int_cst_wide (type, low, high);
! streamer_tree_cache_append (data_in->reader_cache, result, hash);
! }
! else if (tag == LTO_tree_scc)
! {
! unsigned len, entry_len;
!
! /* Input and skip the SCC. */
! lto_input_scc (ib, data_in, &len, &entry_len);
!
! /* Recurse. */
! return lto_input_tree (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag, hash);
}

return result;
}

+ tree
+ lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
+ {
+ return lto_input_tree_1 (ib, data_in, streamer_read_record_start (ib), 0);
+ }
+

/* Input toplevel asms. */

*************** lto_data_in_create (struct lto_file_decl
*** 1220,1226 ****
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create ();

return data_in;
}
--- 1305,1311 ----
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create (false);

return data_in;
}
Index: trunk/gcc/lto-streamer-out.c
===================================================================
*** trunk.orig/gcc/lto-streamer-out.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-streamer-out.c 2013-06-14 11:29:23.965591771 +0200
*************** create_output_block (enum lto_section_ty
*** 71,77 ****
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create ();

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
--- 71,77 ----
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create (!flag_wpa);

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
*************** lto_is_streamable (tree expr)
*** 293,319 ****
}


/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
{
- struct bitpack_d bp;
-
- if (!lto_is_streamable (expr))
- internal_error ("tree code %qs is not supported in LTO streams",
- tree_code_name[TREE_CODE (expr)]);
-
- /* Write the header, containing everything needed to materialize
- EXPR on the reading side. */
- streamer_write_tree_header (ob, expr);
-
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

--- 293,339 ----
}


+ /* For EXPR lookup and return what we want to stream to OB as DECL_INITIAL. */
+
+ static tree
+ get_symbol_initial_value (struct output_block *ob, tree expr)
+ {
+ gcc_checking_assert (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL);
+
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = DECL_INITIAL (expr);
+ if (TREE_CODE (expr) == VAR_DECL
+ && (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
+ && initial)
+ {
+ lto_symtab_encoder_t encoder;
+ struct varpool_node *vnode;
+
+ encoder = ob->decl_state->symtab_node_encoder;
+ vnode = varpool_get_node (expr);
+ if (!vnode
+ || !lto_symtab_encoder_encode_initializer_p (encoder,
+ vnode))
+ initial = error_mark_node;
+ }
+
+ return initial;
+ }
+
+
/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree_1 (struct output_block *ob, tree expr, bool ref_p)
{
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bitpack_d bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

*************** lto_write_tree (struct output_block *ob,
*** 345,366 ****

stream_write_tree (ob, initial, ref_p);
}

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

-
/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */

! void
! lto_output_tree (struct output_block *ob, tree expr,
! bool ref_p, bool this_ref_p)
{
unsigned ix;
- bool existed_p;

if (expr == NULL_TREE)
{
--- 365,403 ----

stream_write_tree (ob, initial, ref_p);
}
+ }
+
+ /* Write a physical representation of tree node EXPR to output block
+ OB. If REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. IX is the index into the streamer cache
+ where EXPR is stored. */
+
+ static void
+ lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
+ {
+ if (!lto_is_streamable (expr))
+ internal_error ("tree code %qs is not supported in LTO streams",
+ tree_code_name[TREE_CODE (expr)]);
+
+ /* Write the header, containing everything needed to materialize
+ EXPR on the reading side. */
+ streamer_write_tree_header (ob, expr);
+
+ lto_write_tree_1 (ob, expr, ref_p);

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */

! static void
! lto_output_tree_1 (struct output_block *ob, tree expr, hashval_t hash,
! bool ref_p, bool this_ref_p)
{
unsigned ix;

if (expr == NULL_TREE)
{
*************** lto_output_tree (struct output_block *ob
*** 374,391 ****
return;
}

! /* Shared INTEGER_CST nodes are special because they need their original type
! to be materialized by the reader (to implement TYPE_CACHED_VALUES). */
! if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
{
streamer_write_integer_cst (ob, expr, ref_p);
return;
}

! existed_p = streamer_tree_cache_insert (ob->writer_cache, expr, &ix);
if (existed_p)
{
/* If a node has already been streamed out, make sure that
we don't write it more than once. Otherwise, the reader
will instantiate two different nodes for the same object. */
--- 411,1332 ----
return;
}

! gcc_checking_assert (!streamer_tree_cache_lookup (ob->writer_cache, expr, &ix));
!
! streamer_tree_cache_insert (ob->writer_cache, expr, hash, &ix);
! if (streamer_handle_as_builtin_p (expr))
! {
! /* MD and NORMAL builtins do not need to be written out
! completely as they are always instantiated by the
! compiler on startup. The only builtins that need to
! be written out are BUILT_IN_FRONTEND. For all other
! builtins, we simply write the class and code. */
! streamer_write_builtin (ob, expr);
! }
! else if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
{
+ /* Shared INTEGER_CST nodes are special because they need their
+ original type to be materialized by the reader (to implement
+ TYPE_CACHED_VALUES). */
streamer_write_integer_cst (ob, expr, ref_p);
+ }
+ else
+ {
+ /* This is the first time we see EXPR, write its fields
+ to OB. */
+ lto_write_tree (ob, expr, ref_p);
+ }
+ }
+
+ struct sccs
+ {
+ unsigned int dfsnum;
+ unsigned int low;
+ };
+
+ struct scc_entry
+ {
+ tree t;
+ hashval_t hash;
+ };
+
+ static unsigned int next_dfs_num;
+ static vec<scc_entry> sccstack;
+ static struct pointer_map_t *sccstate;
+ static struct obstack sccstate_obstack;
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p);
+
+ /* Handle the tree EXPR in the DFS walk with SCC state EXPR_STATE and
+ DFS recurse for all tree edges originating from it. */
+
+ static void
+ DFS_write_tree_body (struct output_block *ob,
+ tree expr, sccs *expr_state, bool ref_p)
+ {
+ #define DFS_follow_tree_edge(DEST) \
+ DFS_write_tree (ob, expr_state, DEST, ref_p, ref_p)
+
+ enum tree_code code;
+
+ code = TREE_CODE (expr);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (TREE_CODE (expr) != IDENTIFIER_NODE)
+ DFS_follow_tree_edge (TREE_TYPE (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ {
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (expr); ++i)
+ DFS_follow_tree_edge (VECTOR_CST_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ DFS_follow_tree_edge (TREE_REALPART (expr));
+ DFS_follow_tree_edge (TREE_IMAGPART (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ DFS_follow_tree_edge (DECL_NAME (expr));
+ DFS_follow_tree_edge (DECL_CONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ DFS_follow_tree_edge (DECL_SIZE (expr));
+ DFS_follow_tree_edge (DECL_SIZE_UNIT (expr));
+
+ /* Note, DECL_INITIAL is not handled here. Since DECL_INITIAL needs
+ special handling in LTO, it must be handled by streamer hooks. */
+
+ DFS_follow_tree_edge (DECL_ATTRIBUTES (expr));
+
+ /* Do not follow DECL_ABSTRACT_ORIGIN. We cannot handle debug information
+ for early inlining so drop it on the floor instead of ICEing in
+ dwarf2out.c. */
+
+ if ((TREE_CODE (expr) == VAR_DECL
+ || TREE_CODE (expr) == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (expr))
+ DFS_follow_tree_edge (DECL_VALUE_EXPR (expr));
+ if (TREE_CODE (expr) == VAR_DECL)
+ DFS_follow_tree_edge (DECL_DEBUG_EXPR (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ {
+ for (tree t = DECL_ARGUMENTS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (DECL_RESULT (expr));
+ }
+ else if (TREE_CODE (expr) == TYPE_DECL)
+ DFS_follow_tree_edge (DECL_ORIGINAL_TYPE (expr));
+ DFS_follow_tree_edge (DECL_VINDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ /* Make sure we don't inadvertently set the assembler name. */
+ if (DECL_ASSEMBLER_NAME_SET_P (expr))
+ DFS_follow_tree_edge (DECL_ASSEMBLER_NAME (expr));
+ DFS_follow_tree_edge (DECL_SECTION_NAME (expr));
+ DFS_follow_tree_edge (DECL_COMDAT_GROUP (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FIELD_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_TYPE (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_REPRESENTATIVE (expr));
+ DFS_follow_tree_edge (DECL_FIELD_BIT_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_FCONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FUNCTION_PERSONALITY (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_TARGET (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ DFS_follow_tree_edge (TYPE_SIZE (expr));
+ DFS_follow_tree_edge (TYPE_SIZE_UNIT (expr));
+ DFS_follow_tree_edge (TYPE_ATTRIBUTES (expr));
+ DFS_follow_tree_edge (TYPE_NAME (expr));
+ /* Do not follow TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
+ reconstructed during fixup. */
+ /* Do not follow TYPE_NEXT_VARIANT, we reconstruct the variant lists
+ during fixup. */
+ DFS_follow_tree_edge (TYPE_MAIN_VARIANT (expr));
+ DFS_follow_tree_edge (TYPE_CONTEXT (expr));
+ /* TYPE_CANONICAL is re-computed during type merging, so no need
+ to follow it here. */
+ DFS_follow_tree_edge (TYPE_STUB_DECL (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == ENUMERAL_TYPE)
+ DFS_follow_tree_edge (TYPE_VALUES (expr));
+ else if (TREE_CODE (expr) == ARRAY_TYPE)
+ DFS_follow_tree_edge (TYPE_DOMAIN (expr));
+ else if (RECORD_OR_UNION_TYPE_P (expr))
+ for (tree t = TYPE_FIELDS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ else if (TREE_CODE (expr) == FUNCTION_TYPE
+ || TREE_CODE (expr) == METHOD_TYPE)
+ DFS_follow_tree_edge (TYPE_ARG_TYPES (expr));
+
+ if (!POINTER_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_MINVAL (expr));
+ DFS_follow_tree_edge (TYPE_MAXVAL (expr));
+ if (RECORD_OR_UNION_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_BINFO (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ DFS_follow_tree_edge (TREE_PURPOSE (expr));
+ DFS_follow_tree_edge (TREE_VALUE (expr));
+ DFS_follow_tree_edge (TREE_CHAIN (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ {
+ for (int i = 0; i < TREE_VEC_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_VEC_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ for (int i = 0; i < TREE_OPERAND_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_OPERAND (expr, i));
+ DFS_follow_tree_edge (TREE_BLOCK (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
+ {
+ for (tree t = BLOCK_VARS (expr); t; t = TREE_CHAIN (t))
+ /* ??? FIXME. See also streamer_write_chain. */
+ if (!(VAR_OR_FUNCTION_DECL_P (t)
+ && DECL_EXTERNAL (t)))
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BLOCK_SUPERCONTEXT (expr));
+
+ /* Follow BLOCK_ABSTRACT_ORIGIN for the limited cases we can
+ handle - those that represent inlined function scopes.
+ For the rest drop them on the floor instead of ICEing
+ in dwarf2out.c. */
+ if (inlined_function_outer_scope_p (expr))
+ {
+ tree ultimate_origin = block_ultimate_origin (expr);
+ DFS_follow_tree_edge (ultimate_origin);
+ }
+ /* Do not follow BLOCK_NONLOCALIZED_VARS. We cannot handle debug
+ information for early inlined BLOCKs so drop it on the floor instead
+ of ICEing in dwarf2out.c. */
+
+ /* BLOCK_FRAGMENT_ORIGIN and BLOCK_FRAGMENT_CHAIN are not live at LTO
+ streaming time. */
+
+ /* Do not output BLOCK_SUBBLOCKS. Instead, on stream-in, this
+ list is re-constructed from BLOCK_SUPERCONTEXT. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree t;
+
+ /* Note that the number of BINFO slots has already been emitted in
+ EXPR's header (see streamer_write_tree_header) because this length
+ is needed to build the empty BINFO node on the reader side. */
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (expr), i, t)
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (BINFO_OFFSET (expr));
+ DFS_follow_tree_edge (BINFO_VTABLE (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_FIELD (expr));
+
+ /* The number of BINFO_BASE_ACCESSES has already been emitted in
+ EXPR's bitfield section. */
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (expr), i, t)
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BINFO_INHERITANCE_CHAIN (expr));
+ DFS_follow_tree_edge (BINFO_SUBVTT_INDEX (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_INDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (expr), i, index, value)
+ {
+ DFS_follow_tree_edge (index);
+ DFS_follow_tree_edge (value);
+ }
+ }
+
+ #undef DFS_follow_tree_edge
+ }
+
+ /* Return a hash value for the tree T. */
+
+ static hashval_t
+ hash_tree (struct streamer_tree_cache_d *cache, tree t)
+ {
+ #define visit(SIBLING) \
+ do { \
+ unsigned ix; \
+ if (SIBLING && streamer_tree_cache_lookup (cache, SIBLING, &ix)) \
+ v = iterative_hash_hashval_t (streamer_tree_cache_get_hash (cache, ix), v); \
+ } while (0)
+
+ /* Hash TS_BASE. */
+ enum tree_code code = TREE_CODE (t);
+ hashval_t v = iterative_hash_host_wide_int (code, 0);
+ if (!TYPE_P (t))
+ {
+ v = iterative_hash_host_wide_int (TREE_SIDE_EFFECTS (t)
+ | (TREE_CONSTANT (t) << 1)
+ | (TREE_READONLY (t) << 2)
+ | (TREE_PUBLIC (t) << 3), v);
+ }
+ v = iterative_hash_host_wide_int (TREE_ADDRESSABLE (t)
+ | (TREE_THIS_VOLATILE (t) << 1), v);
+ if (DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_UNSIGNED (t), v);
+ else if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_UNSIGNED (t), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_ARTIFICIAL (t), v);
+ else
+ v = iterative_hash_host_wide_int (TREE_NO_WARNING (t), v);
+ v = iterative_hash_host_wide_int (TREE_NOTHROW (t)
+ | (TREE_STATIC (t) << 1)
+ | (TREE_PROTECTED (t) << 2)
+ | (TREE_DEPRECATED (t) << 3), v);
+ if (code != TREE_BINFO)
+ v = iterative_hash_host_wide_int (TREE_PRIVATE (t), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_SATURATING (t)
+ | (TYPE_ADDR_SPACE (t) << 1), v);
+ else if (code == SSA_NAME)
+ v = iterative_hash_host_wide_int (SSA_NAME_IS_DEFAULT_DEF (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
+ {
+ v = iterative_hash_host_wide_int (TREE_INT_CST_LOW (t), v);
+ v = iterative_hash_host_wide_int (TREE_INT_CST_HIGH (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
+ {
+ REAL_VALUE_TYPE r = TREE_REAL_CST (t);
+ v = iterative_hash_host_wide_int (r.cl, v);
+ v = iterative_hash_host_wide_int (r.decimal
+ | (r.sign << 1)
+ | (r.signalling << 2)
+ | (r.canonical << 3), v);
+ v = iterative_hash_host_wide_int (r.uexp, v);
+ for (unsigned i = 0; i < SIGSZ; ++i)
+ v = iterative_hash_host_wide_int (r.sig[i], v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
+ {
+ FIXED_VALUE_TYPE f = TREE_FIXED_CST (t);
+ v = iterative_hash_host_wide_int (f.mode, v);
+ v = iterative_hash_host_wide_int (f.data.low, v);
+ v = iterative_hash_host_wide_int (f.data.high, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ v = iterative_hash_host_wide_int (DECL_MODE (t), v);
+ v = iterative_hash_host_wide_int (DECL_NONLOCAL (t)
+ | (DECL_VIRTUAL_P (t) << 1)
+ | (DECL_IGNORED_P (t) << 2)
+ | (DECL_ABSTRACT (t) << 3)
+ | (DECL_ARTIFICIAL (t) << 4)
+ | (DECL_USER_ALIGN (t) << 5)
+ | (DECL_PRESERVE_P (t) << 6)
+ | (DECL_EXTERNAL (t) << 7)
+ | (DECL_GIMPLE_REG_P (t) << 8), v);
+ v = iterative_hash_host_wide_int (DECL_ALIGN (t), v);
+ if (code == LABEL_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_ERROR_ISSUED (t), v);
+ v = iterative_hash_host_wide_int (EH_LANDING_PAD_NR (t), v);
+ v = iterative_hash_host_wide_int (LABEL_DECL_UID (t), v);
+ }
+ else if (code == FIELD_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_PACKED (t)
+ | (DECL_NONADDRESSABLE_P (t) << 1),
+ v);
+ v = iterative_hash_host_wide_int (DECL_OFFSET_ALIGN (t), v);
+ }
+ else if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HAS_DEBUG_EXPR_P (t)
+ | (DECL_NONLOCAL_FRAME (t) << 1),
+ v);
+ }
+ if (code == RESULT_DECL
+ || code == PARM_DECL
+ || code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_BY_REFERENCE (t), v);
+ if (code == VAR_DECL
+ || code == PARM_DECL)
+ v = iterative_hash_host_wide_int (DECL_HAS_VALUE_EXPR_P (t), v);
+ }
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
+ v = iterative_hash_host_wide_int (DECL_REGISTER (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ v = iterative_hash_host_wide_int (DECL_DEFER_OUTPUT (t)
+ | (DECL_COMMON (t) << 1)
+ | (DECL_DLLIMPORT_P (t) << 2)
+ | (DECL_WEAK (t) << 3)
+ | (DECL_SEEN_IN_BIND_EXPR_P (t) << 4)
+ | (DECL_COMDAT (t) << 5)
+ | (DECL_VISIBILITY_SPECIFIED (t) << 6),
+ v);
+ v = iterative_hash_host_wide_int (DECL_VISIBILITY (t), v);
+ if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HARD_REGISTER (t)
+ | (DECL_IN_TEXT_SECTION (t) << 1)
+ | (DECL_IN_CONSTANT_POOL (t) << 2),
+ v);
+ v = iterative_hash_host_wide_int (DECL_TLS_MODEL (t), v);
+ }
+ if (VAR_OR_FUNCTION_DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_INIT_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ v = iterative_hash_host_wide_int (DECL_BUILT_IN_CLASS (t), v);
+ v = iterative_hash_host_wide_int (DECL_STATIC_CONSTRUCTOR (t)
+ | (DECL_STATIC_DESTRUCTOR (t) << 1)
+ | (DECL_UNINLINABLE (t) << 2)
+ | (DECL_POSSIBLY_INLINED (t) << 3)
+ | (DECL_IS_NOVOPS (t) << 4)
+ | (DECL_IS_RETURNS_TWICE (t) << 5)
+ | (DECL_IS_MALLOC (t) << 6)
+ | (DECL_IS_OPERATOR_NEW (t) << 7)
+ | (DECL_DECLARED_INLINE_P (t) << 8)
+ | (DECL_STATIC_CHAIN (t) << 9)
+ | (DECL_NO_INLINE_WARNING_P (t) << 10)
+ | (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT (t) << 11)
+ | (DECL_NO_LIMIT_STACK (t) << 12)
+ | (DECL_DISREGARD_INLINE_LIMITS (t) << 13)
+ | (DECL_PURE_P (t) << 14)
+ | (DECL_LOOPING_CONST_OR_PURE_P (t) << 15), v);
+ if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
+ v = iterative_hash_host_wide_int (DECL_FUNCTION_CODE (t), v);
+ if (DECL_STATIC_DESTRUCTOR (t))
+ v = iterative_hash_host_wide_int (DECL_FINI_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ v = iterative_hash_host_wide_int (TYPE_MODE (t), v);
+ v = iterative_hash_host_wide_int (TYPE_STRING_FLAG (t)
+ | (TYPE_NO_FORCE_BLK (t) << 1)
+ | (TYPE_NEEDS_CONSTRUCTING (t) << 2)
+ | (TYPE_PACKED (t) << 3)
+ | (TYPE_RESTRICT (t) << 4)
+ | (TYPE_CONTAINS_PLACEHOLDER_INTERNAL (t) << 5)
+ | (TYPE_USER_ALIGN (t) << 6)
+ | (TYPE_READONLY (t) << 7), v);
+ if (RECORD_OR_UNION_TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_TRANSPARENT_AGGR (t), v);
+ else if (code == ARRAY_TYPE)
+ v = iterative_hash_host_wide_int (TYPE_NONALIASED_COMPONENT (t), v);
+ v = iterative_hash_host_wide_int (TYPE_PRECISION (t), v);
+ v = iterative_hash_host_wide_int (TYPE_ALIGN (t), v);
+ v = iterative_hash_host_wide_int ((TYPE_ALIAS_SET (t) == 0
+ || (!in_lto_p
+ && get_alias_set (t) == 0))
+ ? 0 : -1, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
+ v = iterative_hash (TRANSLATION_UNIT_LANGUAGE (t),
+ strlen (TRANSLATION_UNIT_LANGUAGE (t)), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
+ v = iterative_hash (t, sizeof (struct cl_target_option), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
+ v = iterative_hash (t, sizeof (struct cl_optimization), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
+ v = iterative_hash_host_wide_int (IDENTIFIER_HASH_VALUE (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_STRING))
+ v = iterative_hash (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (POINTER_TYPE_P (t))
+ {
+ /* For pointers factor in the pointed-to type recursively as
+ we cannot recurse through only pointers.
+ ??? We can generalize this by keeping track of the
+ in-SCC edges for each tree (or arbitrarily the first
+ such edge) and hashing that in during a second stage
+ (instead of the quadratic mixing of the SCC we do now). */
+ hashval_t x;
+ unsigned ix;
+ if (streamer_tree_cache_lookup (cache, TREE_TYPE (t), &ix))
+ x = streamer_tree_cache_get_hash (cache, ix);
+ else
+ x = hash_tree (cache, TREE_TYPE (t));
+ v = iterative_hash_hashval_t (x, v);
+ }
+ else if (code != IDENTIFIER_NODE)
+ visit (TREE_TYPE (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (t); ++i)
+ visit (VECTOR_CST_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ visit (TREE_REALPART (t));
+ visit (TREE_IMAGPART (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ visit (DECL_NAME (t));
+ if (DECL_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (DECL_CONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ visit (DECL_SIZE (t));
+ visit (DECL_SIZE_UNIT (t));
+ visit (DECL_ATTRIBUTES (t));
+ if ((code == VAR_DECL
+ || code == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (t))
+ visit (DECL_VALUE_EXPR (t));
+ if (code == VAR_DECL
+ && DECL_HAS_DEBUG_EXPR_P (t))
+ visit (DECL_DEBUG_EXPR (t));
+ /* ??? Hash DECL_INITIAL as streamed. Needs the output-block to
+ be able to call get_symbol_initial_value. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (code == FUNCTION_DECL)
+ {
+ for (tree a = DECL_ARGUMENTS (t); a; a = DECL_CHAIN (a))
+ visit (a);
+ visit (DECL_RESULT (t));
+ }
+ else if (code == TYPE_DECL)
+ visit (DECL_ORIGINAL_TYPE (t));
+ visit (DECL_VINDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ if (DECL_ASSEMBLER_NAME_SET_P (t))
+ visit (DECL_ASSEMBLER_NAME (t));
+ visit (DECL_SECTION_NAME (t));
+ visit (DECL_COMDAT_GROUP (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ visit (DECL_FIELD_OFFSET (t));
+ visit (DECL_BIT_FIELD_TYPE (t));
+ visit (DECL_BIT_FIELD_REPRESENTATIVE (t));
+ visit (DECL_FIELD_BIT_OFFSET (t));
+ visit (DECL_FCONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ visit (DECL_FUNCTION_PERSONALITY (t));
+ visit (DECL_FUNCTION_SPECIFIC_TARGET (t));
+ visit (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ visit (TYPE_SIZE (t));
+ visit (TYPE_SIZE_UNIT (t));
+ visit (TYPE_ATTRIBUTES (t));
+ visit (TYPE_NAME (t));
+ visit (TYPE_MAIN_VARIANT (t));
+ if (TYPE_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (TYPE_CONTEXT (t));
+ visit (TYPE_STUB_DECL (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (code == ENUMERAL_TYPE)
+ visit (TYPE_VALUES (t));
+ else if (code == ARRAY_TYPE)
+ visit (TYPE_DOMAIN (t));
+ else if (RECORD_OR_UNION_TYPE_P (t))
+ for (tree f = TYPE_FIELDS (t); f; f = TREE_CHAIN (f))
+ visit (f);
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ visit (TYPE_ARG_TYPES (t));
+ if (!POINTER_TYPE_P (t))
+ visit (TYPE_MINVAL (t));
+ visit (TYPE_MAXVAL (t));
+ if (RECORD_OR_UNION_TYPE_P (t))
+ visit (TYPE_BINFO (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ visit (TREE_PURPOSE (t));
+ visit (TREE_VALUE (t));
+ visit (TREE_CHAIN (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ for (int i = 0; i < TREE_VEC_LENGTH (t); ++i)
+ visit (TREE_VEC_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ v = iterative_hash_host_wide_int (TREE_OPERAND_LENGTH (t), v);
+ for (int i = 0; i < TREE_OPERAND_LENGTH (t); ++i)
+ visit (TREE_OPERAND (t, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree b;
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t), i, b)
+ visit (b);
+ visit (BINFO_OFFSET (t));
+ visit (BINFO_VTABLE (t));
+ visit (BINFO_VPTR_FIELD (t));
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t), i, b)
+ visit (b);
+ visit (BINFO_INHERITANCE_CHAIN (t));
+ visit (BINFO_SUBVTT_INDEX (t));
+ visit (BINFO_VPTR_INDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+ v = iterative_hash_host_wide_int (CONSTRUCTOR_NELTS (t), v);
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t), i, index, value)
+ {
+ visit (index);
+ visit (value);
+ }
+ }
+
+ return v;
+
+ #undef visit
+ }
+
+ /* Compare two SCC entries by their hash value for qsorting them. */
+
+ static int
+ scc_entry_compare (const void *p1_, const void *p2_)
+ {
+ const scc_entry *p1 = (const scc_entry *) p1_;
+ const scc_entry *p2 = (const scc_entry *) p2_;
+ if (p1->hash < p2->hash)
+ return -1;
+ else if (p1->hash > p2->hash)
+ return 1;
+ return 0;
+ }
+
+ /* Return a hash value for the SCC on the SCC stack from FIRST with
+ size SIZE. */
+
+ static hashval_t
+ hash_scc (struct streamer_tree_cache_d *cache, unsigned first, unsigned size)
+ {
+ /* Compute hash values for the SCC members. */
+ for (unsigned i = 0; i < size; ++i)
+ sccstack[first+i].hash = hash_tree (cache, sccstack[first+i].t);
+
+ if (size == 1)
+ return sccstack[first].hash;
+
+ /* Sort the (tree, hash) pairs of the SCC so that when we mix in
+ all members of the SCC the hash value becomes independent of
+ the order in which we visited the SCC. Disregard hashes equal to
+ the hash of the tree we mix into because we cannot guarantee
+ a stable sort for those across different TUs. */
+ qsort (&sccstack[first], size, sizeof (scc_entry), scc_entry_compare);
+ hashval_t *tem = XALLOCAVEC (hashval_t, size);
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ hashval_t orig_hash = hash;
+ unsigned j;
+ /* Skip same hashes. */
+ for (j = i + 1;
+ j < size && sccstack[first+j].hash == orig_hash; ++j)
+ ;
+ for (; j < size; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ for (j = 0; sccstack[first+j].hash != orig_hash; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ tem[i] = hash;
+ }
+ hashval_t scc_hash = 0;
+ for (unsigned i = 0; i < size; ++i)
+ {
+ sccstack[first+i].hash = tem[i];
+ scc_hash = iterative_hash_hashval_t (tem[i], scc_hash);
+ }
+ return scc_hash;
+ }
+
+ /* DFS walk EXPR and stream SCCs of tree bodies if they are not
+ already in the streamer cache. Main routine called for
+ each visit of EXPR. */
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ sccs **slot;
+
+ /* Handle special cases. */
+ if (expr == NULL_TREE)
+ return;
+
+ /* Do not DFS walk into indexable trees. */
+ if (this_ref_p && tree_is_indexable (expr))
+ return;
+
+ /* Check if we already streamed EXPR. */
+ if (streamer_tree_cache_lookup (ob->writer_cache, expr, &ix))
+ return;
+
+ slot = (sccs **)pointer_map_insert (sccstate, expr);
+ sccs *cstate = *slot;
+ if (!cstate)
+ {
+ scc_entry e = { expr, 0 };
+ /* Not yet visited. DFS recurse and push it onto the stack. */
+ *slot = cstate = XOBNEW (&sccstate_obstack, struct sccs);
+ sccstack.safe_push (e);
+ cstate->dfsnum = next_dfs_num++;
+ cstate->low = cstate->dfsnum;
+
+ if (streamer_handle_as_builtin_p (expr))
+ ;
+ else if (TREE_CODE (expr) == INTEGER_CST
+ && !TREE_OVERFLOW (expr))
+ DFS_write_tree (ob, cstate, TREE_TYPE (expr), ref_p, ref_p);
+ else
+ {
+ DFS_write_tree_body (ob, expr, cstate, ref_p);
+
+ /* Walk any LTO-specific edges. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ {
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = get_symbol_initial_value (ob, expr);
+ DFS_write_tree (ob, cstate, initial, ref_p, ref_p);
+ }
+ }
+
+ /* See if we found an SCC. */
+ if (cstate->low == cstate->dfsnum)
+ {
+ unsigned first, size;
+ tree x;
+
+ /* Pop the SCC and compute its size. */
+ first = sccstack.length ();
+ do
+ {
+ x = sccstack[--first].t;
+ }
+ while (x != expr);
+ size = sccstack.length () - first;
+
+ /* No need to compute hashes for LTRANS units; we don't perform
+ any merging there. */
+ hashval_t scc_hash = 0;
+ unsigned scc_entry_len = 0;
+ if (!flag_wpa)
+ {
+ scc_hash = hash_scc (ob->writer_cache, first, size);
+
+ /* Put the entries with the fewest hash collisions first. */
+ unsigned entry_start = 0;
+ scc_entry_len = size + 1;
+ for (unsigned i = 0; i < size;)
+ {
+ unsigned from = i;
+ for (i = i + 1; i < size
+ && (sccstack[first + i].hash
+ == sccstack[first + from].hash); ++i)
+ ;
+ if (i - from < scc_entry_len)
+ {
+ scc_entry_len = i - from;
+ entry_start = from;
+ }
+ }
+ for (unsigned i = 0; i < scc_entry_len; ++i)
+ {
+ scc_entry tem = sccstack[first + i];
+ sccstack[first + i] = sccstack[first + entry_start + i];
+ sccstack[first + entry_start + i] = tem;
+ }
+ }
+
+ /* Write LTO_tree_scc. */
+ streamer_write_record_start (ob, LTO_tree_scc);
+ streamer_write_uhwi (ob, size);
+ streamer_write_uhwi (ob, scc_hash);
+
+ /* Write size-1 SCCs without wrapping them inside SCC bundles.
+ All INTEGER_CSTs need to be handled this way as we need
+ their type to materialize them. Also builtins are handled
+ this way.
+ ??? We still wrap these in LTO_tree_scc so at the
+ input side we can properly identify the tree we want
+ to ultimately return. */
+ size_t old_len = ob->writer_cache->nodes.length ();
+ if (size == 1)
+ lto_output_tree_1 (ob, expr, scc_hash, ref_p, this_ref_p);
+ else
+ {
+ /* Write the number of SCC entry candidates. */
+ streamer_write_uhwi (ob, scc_entry_len);
+
+ /* Write all headers and populate the streamer cache. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ tree t = sccstack[first+i].t;
+ bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
+ t, hash, &ix);
+ gcc_assert (!exists_p);
+
+ if (!lto_is_streamable (t))
+ internal_error ("tree code %qs is not supported "
+ "in LTO streams",
+ tree_code_name[TREE_CODE (t)]);
+
+ gcc_checking_assert (!streamer_handle_as_builtin_p (t));
+
+ /* Write the header, containing everything needed to
+ materialize EXPR on the reading side. */
+ streamer_write_tree_header (ob, t);
+ }
+
+ /* Write the bitpacks and tree references. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ lto_write_tree_1 (ob, sccstack[first+i].t, ref_p);
+
+ /* Mark the end of the tree. */
+ streamer_write_zero (ob);
+ }
+ }
+ gcc_assert (old_len + size == ob->writer_cache->nodes.length ());
+
+ /* Finally truncate the vector. */
+ sccstack.truncate (first);
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ return;
+ }
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ }
+ gcc_checking_assert (from_state);
+ if (cstate->dfsnum < from_state->dfsnum)
+ from_state->low = MIN (cstate->dfsnum, from_state->low);
+ }
+
+ extern unsigned long num_trees_output;
+ extern unsigned long num_grefs_output;
+ extern unsigned long num_lrefs_output;
+
+ /* Emit the physical representation of tree node EXPR to output block
+ OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */
+
+ void
+ lto_output_tree (struct output_block *ob, tree expr,
+ bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ bool existed_p;
+
+ num_trees_output++;
+
+ if (expr == NULL_TREE)
+ {
+ streamer_write_record_start (ob, LTO_null);
+ return;
+ }
+
+ if (this_ref_p && tree_is_indexable (expr))
+ {
+ num_grefs_output++;
+ lto_output_tree_ref (ob, expr);
return;
}

! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
if (existed_p)
{
+ num_lrefs_output++;
/* If a node has already been streamed out, make sure that
we don't write it more than once. Otherwise, the reader
will instantiate two different nodes for the same object. */
*************** lto_output_tree (struct output_block *ob
*** 394,413 ****
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
- else if (streamer_handle_as_builtin_p (expr))
- {
- /* MD and NORMAL builtins do not need to be written out
- completely as they are always instantiated by the
- compiler on startup. The only builtins that need to
- be written out are BUILT_IN_FRONTEND. For all other
- builtins, we simply write the class and code. */
- streamer_write_builtin (ob, expr);
- }
else
{
! /* This is the first time we see EXPR, write its fields
! to OB. */
! lto_write_tree (ob, expr, ref_p);
}
}

--- 1335,1365 ----
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
}
else
{
! /* This is the first time we see EXPR, write all reachable
! trees to OB. */
!
! /* Start the DFS walk. */
! /* Save ob state ... */
! /* let's see ... */
! sccstate = pointer_map_create ();
! gcc_obstack_init (&sccstate_obstack);
! next_dfs_num = 1;
! DFS_write_tree (ob, NULL, expr, ref_p, this_ref_p);
! sccstack.release ();
! pointer_map_destroy (sccstate);
! obstack_free (&sccstate_obstack, NULL);
!
! /* Finally append a reference to the tree we were writing.
! ??? If expr ended up as a singleton we could have
! inlined it here and avoided outputting a reference. */
! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
! gcc_assert (existed_p);
! streamer_write_record_start (ob, LTO_tree_pickle_reference);
! streamer_write_uhwi (ob, ix);
! streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
! lto_tree_code_to_tag (TREE_CODE (expr)));
}
}

Index: trunk/gcc/tree-streamer-in.c
===================================================================
*** trunk.orig/gcc/tree-streamer-in.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer-in.c 2013-06-14 10:11:29.290448306 +0200
*************** unpack_ts_base_value_fields (struct bitp
*** 122,131 ****
TYPE_ARTIFICIAL (expr) = (unsigned) bp_unpack_value (bp, 1);
else
TREE_NO_WARNING (expr) = (unsigned) bp_unpack_value (bp, 1);
- TREE_USED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_NOTHROW (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_STATIC (expr) = (unsigned) bp_unpack_value (bp, 1);
! TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PROTECTED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_DEPRECATED (expr) = (unsigned) bp_unpack_value (bp, 1);
if (TYPE_P (expr))
--- 122,131 ----
TYPE_ARTIFICIAL (expr) = (unsigned) bp_unpack_value (bp, 1);
else
TREE_NO_WARNING (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_NOTHROW (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_STATIC (expr) = (unsigned) bp_unpack_value (bp, 1);
! if (TREE_CODE (expr) != TREE_BINFO)
! TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PROTECTED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_DEPRECATED (expr) = (unsigned) bp_unpack_value (bp, 1);
if (TYPE_P (expr))
*************** lto_input_ts_list_tree_pointers (struct
*** 811,817 ****
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = streamer_read_chain (ib, data_in);
}


--- 811,817 ----
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = stream_read_tree (ib, data_in);
}


*************** streamer_read_tree_body (struct lto_inpu
*** 1021,1039 ****
}


- /* Read and INTEGER_CST node from input block IB using the per-file
- context in DATA_IN. */
-
- tree
- streamer_read_integer_cst (struct lto_input_block *ib, struct data_in *data_in)
- {
- tree type = stream_read_tree (ib, data_in);
- unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
- HOST_WIDE_INT high = streamer_read_hwi (ib);
- return build_int_cst_wide (type, low, high);
- }
-
-
/* Read an index IX from input block IB and return the tree node at
DATA_IN->FILE_DATA->GLOBALS_INDEX[IX]. */

--- 1021,1026 ----
*************** streamer_get_pickled_tree (struct lto_in
*** 1047,1053 ****
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

--- 1034,1040 ----
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

*************** streamer_get_builtin_tree (struct lto_in
*** 1091,1097 ****
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result);

return result;
}
--- 1078,1084 ----
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result, 0);

return result;
}
Index: trunk/gcc/lto/lto.c
===================================================================
*** trunk.orig/gcc/lto/lto.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto/lto.c 2013-06-14 12:35:04.067989867 +0200
*************** along with GCC; see the file COPYING3.
*** 45,50 ****
--- 45,51 ----
#include "tree-streamer.h"
#include "splay-tree.h"
#include "lto-partition.h"
+ #include "data-streamer.h"

static GTY(()) tree first_personality_decl;

*************** lto_read_in_decl_state (struct data_in *
*** 254,260 ****
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
--- 255,261 ----
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
*************** lto_read_in_decl_state (struct data_in *
*** 268,274 ****
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
--- 269,275 ----
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get_tree (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
*************** lto_read_in_decl_state (struct data_in *
*** 280,285 ****
--- 281,289 ----



+ /* ??? Old hashing and merging code follows, we keep it for statistics
+ purposes for now. */
+
/* Global type table. FIXME, it should be possible to re-use some
of the type hashing routines in tree.c (type_hash_canon, type_hash_lookup,
etc), but those assume that types were built with the various
*************** struct sccs
*** 366,398 ****
static unsigned int next_dfs_num;
static unsigned int gtc_next_dfs_num;

- /* GIMPLE type merging cache. A direct-mapped cache based on TYPE_UID. */
-
- typedef struct GTY(()) gimple_type_leader_entry_s {
- tree type;
- tree leader;
- } gimple_type_leader_entry;
-
- #define GIMPLE_TYPE_LEADER_SIZE 16381
- static GTY((length("GIMPLE_TYPE_LEADER_SIZE")))
- gimple_type_leader_entry *gimple_type_leader;
-
- /* Lookup an existing leader for T and return it or NULL_TREE, if
- there is none in the cache. */
-
- static inline tree
- gimple_lookup_type_leader (tree t)
- {
- gimple_type_leader_entry *leader;
-
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type != t)
- return NULL_TREE;
-
- return leader->leader;
- }
-
-
/* Return true if T1 and T2 have the same name. If FOR_COMPLETION_P is
true then if any type has no name return false, otherwise return
true if both types have no names. */
--- 370,375 ----
*************** gtc_visit (tree t1, tree t2,
*** 450,456 ****
struct sccs *cstate = NULL;
type_pair_t p;
void **slot;
- tree leader1, leader2;

/* Check first for the obvious case of pointer identity. */
if (t1 == t2)
--- 427,432 ----
*************** gtc_visit (tree t1, tree t2,
*** 507,521 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 483,488 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 879,885 ****
struct obstack sccstate_obstack;
type_pair_t p = NULL;
bool res;
- tree leader1, leader2;

/* Before starting to set up the SCC machinery handle simple cases. */

--- 846,851 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 938,952 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 904,909 ----
*************** gimple_type_eq (const void *p1, const vo
*** 1335,1402 ****
CONST_CAST_TREE (t2));
}

!
! /* Worker for gimple_register_type.
! Register type T in the global type table gimple_types.
! When REGISTERING_MV is false first recurse for the main variant of T. */

static tree
! gimple_register_type_1 (tree t, bool registering_mv)
{
void **slot;
- gimple_type_leader_entry *leader;
-
- /* If we registered this type before return the cached result. */
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type == t)
- return leader->leader;
-
- /* Always register the main variant first. This is important so we
- pick up the non-typedef variants as canonical, otherwise we'll end
- up taking typedef ids for structure tags during comparison.
- It also makes sure that main variants will be merged to main variants.
- As we are operating on a possibly partially fixed up type graph
- do not bother to recurse more than once, otherwise we may end up
- walking in circles.
- If we are registering a main variant it will either remain its
- own main variant or it will be merged to something else in which
- case we do not care for the main variant leader. */
- if (!registering_mv
- && TYPE_MAIN_VARIANT (t) != t)
- gimple_register_type_1 (TYPE_MAIN_VARIANT (t), true);

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! {
! tree new_type = (tree) *((tree *) slot);
! leader->type = t;
! leader->leader = new_type;
! return new_type;
! }

/* If not, insert it to the cache and the hash. */
- leader->type = t;
- leader->leader = t;
*slot = (void *) t;
return t;
}

! /* Register type T in the global type table gimple_types.
! If another type T', compatible with T, already existed in
! gimple_types then return T', otherwise return T. This is used by
! LTO to merge identical types read from different TUs. */
!
! static tree
! gimple_register_type (tree t)
! {
! gcc_assert (TYPE_P (t));
! return gimple_register_type_1 (t, false);
! }
!
! #define GIMPLE_REGISTER_TYPE(tt) \
! (TREE_VISITED (tt) ? gimple_register_type (tt) : tt)



--- 1292,1316 ----
CONST_CAST_TREE (t2));
}

! /* Register type T in the global type table gimple_types. */

static tree
! gimple_register_type (tree t)
{
void **slot;

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! return (tree) *((tree *) slot);

/* If not, insert it to the cache and the hash. */
*slot = (void *) t;
return t;
}
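
[ Aside for reviewers: the reduced gimple_register_type above is the classic
hash-consing pattern — look the node up, return the previously registered
equivalent if one exists, otherwise insert and return the node itself.  A
minimal standalone sketch of that pattern (all names hypothetical, linear
probing instead of htab_find_slot, strings instead of trees):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define TOY_TABLE_SIZE 64

static const char *toy_table[TOY_TABLE_SIZE];

static unsigned
toy_hash (const char *s)
{
  unsigned h = 5381;
  while (*s)
    h = h * 33 + (unsigned char) *s++;
  return h % TOY_TABLE_SIZE;
}

/* Register S: if an equal string was registered before, that copy
   prevails and is returned; otherwise S is inserted and returned.
   Linear probing keeps the sketch short; the real code uses
   htab_find_slot on gimple_types.  */
static const char *
toy_register (const char *s)
{
  unsigned i = toy_hash (s);
  while (toy_table[i] && strcmp (toy_table[i], s) != 0)
    i = (i + 1) % TOY_TABLE_SIZE;
  if (toy_table[i])
    return toy_table[i];   /* equivalent entry already registered */
  toy_table[i] = s;        /* first of its kind: S itself prevails */
  return s;
}
```

The point of keeping this path around is only the statistics comparison
with the new SCC merging; the prevail-or-insert shape is the same. ]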

! /* End of old merging code. */



*************** remember_with_vars (tree t)
*** 1413,1632 ****
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define LTO_FIXUP_TREE(tt) \
do \
{ \
if (tt) \
{ \
- if (TYPE_P (tt)) \
- (tt) = GIMPLE_REGISTER_TYPE (tt); \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
- if (TREE_CODE (tt) == INTEGER_CST) \
- (tt) = fixup_integer_cst (tt); \
} \
} while (0)

- static void lto_fixup_types (tree);
-
- /* Return integer_cst T with updated type. */
-
- static tree
- fixup_integer_cst (tree t)
- {
- tree type = GIMPLE_REGISTER_TYPE (TREE_TYPE (t));
-
- if (type == TREE_TYPE (t))
- return t;
-
- /* If overflow was set, streamer_read_integer_cst
- produced local copy of T. */
- if (TREE_OVERFLOW (t))
- {
- TREE_TYPE (t) = type;
- return t;
- }
- else
- /* Otherwise produce new shared node for the new type. */
- return build_int_cst_wide (type, TREE_INT_CST_LOW (t),
- TREE_INT_CST_HIGH (t));
- }
-
/* Fix up fields of a tree_typed T. */

static void
! lto_ft_typed (tree t)
{
! LTO_FIXUP_TREE (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! lto_ft_common (tree t)
{
! lto_ft_typed (t);
! LTO_FIXUP_TREE (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! lto_ft_decl_minimal (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (DECL_NAME (t));
! LTO_FIXUP_TREE (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! lto_ft_decl_common (tree t)
{
! lto_ft_decl_minimal (t);
! LTO_FIXUP_TREE (DECL_SIZE (t));
! LTO_FIXUP_TREE (DECL_SIZE_UNIT (t));
! LTO_FIXUP_TREE (DECL_INITIAL (t));
! LTO_FIXUP_TREE (DECL_ATTRIBUTES (t));
! LTO_FIXUP_TREE (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! lto_ft_decl_with_vis (tree t)
{
! lto_ft_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! LTO_FIXUP_TREE (t->decl_with_vis.assembler_name);
! LTO_FIXUP_TREE (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_decl_non_common (tree t)
{
! lto_ft_decl_with_vis (t);
! LTO_FIXUP_TREE (DECL_ARGUMENT_FLD (t));
! LTO_FIXUP_TREE (DECL_RESULT_FLD (t));
! LTO_FIXUP_TREE (DECL_VINDEX (t));
! /* The C frontends may create exact duplicates for DECL_ORIGINAL_TYPE
! like for 'typedef enum foo foo'. We have no way of avoiding to
! merge them and dwarf2out.c cannot deal with this,
! so fix this up by clearing DECL_ORIGINAL_TYPE in this case. */
! if (TREE_CODE (t) == TYPE_DECL
! && DECL_ORIGINAL_TYPE (t) == TREE_TYPE (t))
! DECL_ORIGINAL_TYPE (t) = NULL_TREE;
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_function (tree t)
{
! lto_ft_decl_non_common (t);
! LTO_FIXUP_TREE (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! lto_ft_field_decl (tree t)
{
! lto_ft_decl_common (t);
! LTO_FIXUP_TREE (DECL_FIELD_OFFSET (t));
! LTO_FIXUP_TREE (DECL_BIT_FIELD_TYPE (t));
! LTO_FIXUP_TREE (DECL_QUALIFIER (t));
! LTO_FIXUP_TREE (DECL_FIELD_BIT_OFFSET (t));
! LTO_FIXUP_TREE (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! lto_ft_type (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (TYPE_CACHED_VALUES (t));
! LTO_FIXUP_TREE (TYPE_SIZE (t));
! LTO_FIXUP_TREE (TYPE_SIZE_UNIT (t));
! LTO_FIXUP_TREE (TYPE_ATTRIBUTES (t));
! LTO_FIXUP_TREE (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! LTO_FIXUP_TREE (TYPE_MINVAL (t));
! LTO_FIXUP_TREE (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! LTO_FIXUP_TREE (t->type_non_common.binfo);

! LTO_FIXUP_TREE (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! lto_ft_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;
- tree base, saved_base;

! lto_ft_common (t);
! LTO_FIXUP_TREE (BINFO_VTABLE (t));
! LTO_FIXUP_TREE (BINFO_OFFSET (t));
! LTO_FIXUP_TREE (BINFO_VIRTUALS (t));
! LTO_FIXUP_TREE (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_ACCESS (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_ACCESSES (t))[i] = base;
! }
! LTO_FIXUP_TREE (BINFO_INHERITANCE_CHAIN (t));
! LTO_FIXUP_TREE (BINFO_SUBVTT_INDEX (t));
! LTO_FIXUP_TREE (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_BINFO (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_BINFOS (t))[i] = base;
! }
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! lto_ft_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! lto_ft_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! LTO_FIXUP_TREE (ce->index);
! LTO_FIXUP_TREE (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! lto_ft_expr (tree t)
{
int i;
! lto_ft_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! LTO_FIXUP_TREE (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
--- 1327,1499 ----
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define MAYBE_REMEMBER_WITH_VARS(tt) \
do \
{ \
if (tt) \
{ \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
} \
} while (0)

/* Fix up fields of a tree_typed T. */

static void
! maybe_remember_with_vars_typed (tree t)
{
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! maybe_remember_with_vars_common (tree t)
{
! maybe_remember_with_vars_typed (t);
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! maybe_remember_with_vars_decl_minimal (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_NAME (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! maybe_remember_with_vars_decl_common (tree t)
{
! maybe_remember_with_vars_decl_minimal (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_INITIAL (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! maybe_remember_with_vars_decl_with_vis (tree t)
{
! maybe_remember_with_vars_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! MAYBE_REMEMBER_WITH_VARS (t->decl_with_vis.assembler_name);
! MAYBE_REMEMBER_WITH_VARS (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! maybe_remember_with_vars_decl_non_common (tree t)
{
! maybe_remember_with_vars_decl_with_vis (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_ARGUMENT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_RESULT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_VINDEX (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! maybe_remember_with_vars_function (tree t)
{
! maybe_remember_with_vars_decl_non_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! maybe_remember_with_vars_field_decl (tree t)
{
! maybe_remember_with_vars_decl_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_BIT_FIELD_TYPE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_QUALIFIER (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_BIT_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! maybe_remember_with_vars_type (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (TYPE_CACHED_VALUES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! MAYBE_REMEMBER_WITH_VARS (TYPE_MINVAL (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! MAYBE_REMEMBER_WITH_VARS (t->type_non_common.binfo);

! MAYBE_REMEMBER_WITH_VARS (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! maybe_remember_with_vars_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;

! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (BINFO_VTABLE (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VIRTUALS (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_ACCESS (t, i));
! MAYBE_REMEMBER_WITH_VARS (BINFO_INHERITANCE_CHAIN (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_SUBVTT_INDEX (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_BINFO (t, i));
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! maybe_remember_with_vars_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! maybe_remember_with_vars_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! MAYBE_REMEMBER_WITH_VARS (ce->index);
! MAYBE_REMEMBER_WITH_VARS (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! maybe_remember_with_vars_expr (tree t)
{
int i;
! maybe_remember_with_vars_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! MAYBE_REMEMBER_WITH_VARS (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
*************** lto_ft_expr (tree t)
*** 1635,1641 ****
for instance integer or string constants. */

static void
! lto_fixup_types (tree t)
{
switch (TREE_CODE (t))
{
--- 1502,1508 ----
for instance integer or string constants. */

static void
! maybe_remember_with_vars (tree t)
{
switch (TREE_CODE (t))
{
*************** lto_fixup_types (tree t)
*** 1643,1655 ****
break;

case TREE_LIST:
! LTO_FIXUP_TREE (TREE_VALUE (t));
! LTO_FIXUP_TREE (TREE_PURPOSE (t));
! LTO_FIXUP_TREE (TREE_CHAIN (t));
break;

case FIELD_DECL:
! lto_ft_field_decl (t);
break;

case LABEL_DECL:
--- 1510,1522 ----
break;

case TREE_LIST:
! MAYBE_REMEMBER_WITH_VARS (TREE_VALUE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_PURPOSE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
break;

case FIELD_DECL:
! maybe_remember_with_vars_field_decl (t);
break;

case LABEL_DECL:
*************** lto_fixup_types (tree t)
*** 1657,1683 ****
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! lto_ft_decl_common (t);
break;

case VAR_DECL:
! lto_ft_decl_with_vis (t);
break;

case TYPE_DECL:
! lto_ft_decl_non_common (t);
break;

case FUNCTION_DECL:
! lto_ft_function (t);
break;

case TREE_BINFO:
! lto_ft_binfo (t);
break;

case PLACEHOLDER_EXPR:
! lto_ft_common (t);
break;

case BLOCK:
--- 1524,1550 ----
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! maybe_remember_with_vars_decl_common (t);
break;

case VAR_DECL:
! maybe_remember_with_vars_decl_with_vis (t);
break;

case TYPE_DECL:
! maybe_remember_with_vars_decl_non_common (t);
break;

case FUNCTION_DECL:
! maybe_remember_with_vars_function (t);
break;

case TREE_BINFO:
! maybe_remember_with_vars_binfo (t);
break;

case PLACEHOLDER_EXPR:
! maybe_remember_with_vars_common (t);
break;

case BLOCK:
*************** lto_fixup_types (tree t)
*** 1686,1706 ****
case TARGET_OPTION_NODE:
break;

default:
if (TYPE_P (t))
! lto_ft_type (t);
! else if (TREE_CODE (t) == CONSTRUCTOR)
! lto_ft_constructor (t);
else if (CONSTANT_CLASS_P (t))
! LTO_FIXUP_TREE (TREE_TYPE (t));
else if (EXPR_P (t))
! {
! lto_ft_expr (t);
! }
else
! {
! remember_with_vars (t);
! }
}
}

--- 1553,1571 ----
case TARGET_OPTION_NODE:
break;

+ case CONSTRUCTOR:
+ maybe_remember_with_vars_constructor (t);
+ break;
+
default:
if (TYPE_P (t))
! maybe_remember_with_vars_type (t);
else if (CONSTANT_CLASS_P (t))
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
else if (EXPR_P (t))
! maybe_remember_with_vars_expr (t);
else
! remember_with_vars (t);
}
}

*************** lto_register_function_decl_in_symtab (st
*** 1786,2006 ****
}
}

- static unsigned long num_merged_types = 0;

! /* Given a streamer cache structure DATA_IN (holding a sequence of trees
! for one compilation unit) go over all trees starting at index FROM until the
! end of the sequence and replace fields of those trees, and the trees
! themself with their canonical variants as per gimple_register_type. */

static void
! uniquify_nodes (struct data_in *data_in, unsigned from)
{
! struct streamer_tree_cache_d *cache = data_in->reader_cache;
! unsigned len = cache->nodes.length ();
! unsigned i;

! /* Go backwards because children streamed for the first time come
! as part of their parents, and hence are created after them. */

! /* First register all the types in the cache. This makes sure to
! have the original structure in the type cycles when registering
! them and computing hashes. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t && TYPE_P (t))
! {
! tree newt = gimple_register_type (t);
! /* Mark non-prevailing types so we fix them up. No need
! to reset that flag afterwards - nothing that refers
! to those types is left and they are collected. */
! if (newt != t)
! {
! num_merged_types++;
! TREE_VISITED (t) = 1;
! }
}
}

! /* Second fixup all trees in the new cache entries. */
! for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
! tree oldt = t;
! if (!t)
! continue;
!
! /* First fixup the fields of T. */
! lto_fixup_types (t);
!
! if (!TYPE_P (t))
! continue;
!
! /* Now try to find a canonical variant of T itself. */
! t = GIMPLE_REGISTER_TYPE (t);
!
! if (t == oldt)
! {
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */
! tree tem, mv;
!
! #ifdef ENABLE_CHECKING
! /* Remove us from our main variant list if we are not the
! variant leader. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tem = TYPE_MAIN_VARIANT (t);
! while (tem && TYPE_NEXT_VARIANT (tem) != t)
! tem = TYPE_NEXT_VARIANT (tem);
! gcc_assert (!tem && !TYPE_NEXT_VARIANT (t));
! }
! #endif

! /* Query our new main variant. */
! mv = GIMPLE_REGISTER_TYPE (TYPE_MAIN_VARIANT (t));

! /* If we were the variant leader and we get replaced ourselves drop
! all variants from our list. */
! if (TYPE_MAIN_VARIANT (t) == t
! && mv != t)
! {
! tem = t;
! while (tem)
! {
! tree tem2 = TYPE_NEXT_VARIANT (tem);
! TYPE_NEXT_VARIANT (tem) = NULL_TREE;
! tem = tem2;
! }
! }

! /* If we are not our own variant leader link us into our new leaders
! variant list. */
! if (mv != t)
! {
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! if (RECORD_OR_UNION_TYPE_P (t))
! TYPE_BINFO (t) = TYPE_BINFO (mv);
! /* Preserve the invariant that type variants share their
! TYPE_FIELDS. */
! if (RECORD_OR_UNION_TYPE_P (t)
! && TYPE_FIELDS (mv) != TYPE_FIELDS (t))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (mv), f2 = TYPE_FIELDS (t);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2
! && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! /* ??? Not sure the above is all relevant in this
! path canonicalizing TYPE_FIELDS to that of the
! main variant. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! TYPE_FIELDS (t) = TYPE_FIELDS (mv);
! }
! }

! /* Finally adjust our main variant and fix it up. */
! TYPE_MAIN_VARIANT (t) = mv;

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
}

! else
{
! if (RECORD_OR_UNION_TYPE_P (t))
! {
! tree f1, f2;
! if (TYPE_FIELDS (t) != TYPE_FIELDS (oldt))
! for (f1 = TYPE_FIELDS (t), f2 = TYPE_FIELDS (oldt);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2 && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! }

! /* If we found a tree that is equal to oldt replace it in the
! cache, so that further users (in the various LTO sections)
! make use of it. */
! streamer_tree_cache_insert_at (cache, t, i);
}
}

! /* Finally compute the canonical type of all TREE_TYPEs and register
! VAR_DECL and FUNCTION_DECL nodes in the symbol table.
! From this point there are no longer any types with
! TYPE_STRUCTURAL_EQUALITY_P and its type-based alias problems.
! This step requires the TYPE_POINTER_TO lists being present, so
! make sure it is done last. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t == NULL_TREE)
! continue;
!
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! else if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! else if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
}
}


--- 1651,2372 ----
}
}


! /* For the type T re-materialize it in the type variant list and
! the pointer/reference-to chains. */

static void
! lto_fixup_prevailing_type (tree t)
{
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */

!   /* If we are not our own variant leader, link us into our new leader's
! variant list. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tree mv = TYPE_MAIN_VARIANT (t);
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! }

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
! }
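
[ Aside for reviewers: since TYPE_NEXT_VARIANT is not streamed, the fixup
above rebuilds the variant list by head-inserting each non-leader type
after its main variant.  A minimal standalone sketch of that splice
(hypothetical struct, not the real tree node):

```c
#include <assert.h>
#include <stddef.h>

/* Toy variant list: each type points at its main variant; the main
   variant heads a singly linked list of all its variants, mirroring
   TYPE_MAIN_VARIANT / TYPE_NEXT_VARIANT.  */
struct toy_type
{
  struct toy_type *main_variant;
  struct toy_type *next_variant;
};

/* Re-link T into its leader's variant list; a no-op when T is its
   own main variant, exactly like the fixup in the patch.  */
static void
toy_fixup_variant (struct toy_type *t)
{
  if (t->main_variant != t)
    {
      struct toy_type *mv = t->main_variant;
      t->next_variant = mv->next_variant;   /* splice in after the leader */
      mv->next_variant = t;
    }
}
```

Head insertion means variant order is not preserved across streaming,
which is fine since nothing relies on the order of the variant list. ]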
!
!
! /* We keep prevailing tree SCCs in a hashtable with manual collision
! handling (in case all hashes compare the same) and keep the colliding
! entries in the tree_scc->next chain. */
!
! struct tree_scc
! {
! tree_scc *next;
! /* Hash of the whole SCC. */
! hashval_t hash;
! /* Number of trees in the SCC. */
! unsigned len;
! /* Number of possible entries into the SCC (tree nodes [0..entry_len-1]
! which share the same individual tree hash). */
! unsigned entry_len;
! /* The members of the SCC.
! We only need to remember the first entry node candidate for prevailing
! SCCs (but of course have access to all entries for SCCs we are
! processing).
! ??? For prevailing SCCs we really only need hash and the first
! entry candidate, but that's too awkward to implement. */
! tree entries[1];
! };
!
! struct tree_scc_hasher : typed_noop_remove <tree_scc>
! {
! typedef tree_scc value_type;
! typedef tree_scc compare_type;
! static inline hashval_t hash (const value_type *);
! static inline bool equal (const value_type *, const compare_type *);
! };
!
! hashval_t
! tree_scc_hasher::hash (const value_type *scc)
! {
! return scc->hash;
! }
!
! bool
! tree_scc_hasher::equal (const value_type *scc1, const compare_type *scc2)
! {
! if (scc1->hash != scc2->hash
! || scc1->len != scc2->len
! || scc1->entry_len != scc2->entry_len)
! return false;
! return true;
! }
!
! static hash_table <tree_scc_hasher> tree_scc_hash;
! static struct obstack tree_scc_hash_obstack;
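
[ Aside for reviewers: the tree_scc table keys SCCs on their precomputed
hash and chains same-hash candidates via ->next, falling back to a deep
comparison to decide which SCC prevails.  A minimal standalone sketch of
that bucket-with-collision-chain scheme (all names hypothetical; the 'id'
field stands in for the deep SCC comparison):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the patch's tree_scc: SCCs hashing identically are
   chained via 'next' and told apart by a deeper comparison.  */
struct toy_scc
{
  struct toy_scc *next;   /* collision chain for equal hashes */
  unsigned hash;          /* hash of the whole SCC */
  unsigned len;           /* number of trees in the SCC */
  int id;                 /* stand-in for the SCC contents */
};

/* Return the prevailing SCC equal to CAND, or insert CAND and make it
   prevail.  The real code walks both SCCs comparing trees; here two
   SCCs are "equal" when hash, len and id all match.  */
static struct toy_scc *
toy_scc_register (struct toy_scc **bucket, struct toy_scc *cand)
{
  for (struct toy_scc *scc = *bucket; scc; scc = scc->next)
    if (scc->hash == cand->hash
	&& scc->len == cand->len
	&& scc->id == cand->id)
      return scc;               /* prevailing SCC found; CAND is thrown away */
  cand->next = *bucket;         /* no match: CAND itself prevails */
  *bucket = cand;
  return cand;
}
```

This is why the whole SCC is taken or dropped as a unit: equality is
decided once per SCC entry, never per individual tree. ]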
!
! static unsigned long num_merged_types;
! static unsigned long num_prevailing_types;
! static unsigned long num_not_merged_types;
! static unsigned long num_not_merged_types_in_same_scc;
! static unsigned long num_not_merged_types_trees;
! static unsigned long num_not_merged_types_in_same_scc_trees;
! static unsigned long num_type_scc_trees;
! static unsigned long total_scc_size;
! static unsigned long num_sccs_read;
! static unsigned long total_scc_size_merged;
! static unsigned long num_sccs_merged;
! static unsigned long num_scc_compares;
! static unsigned long num_scc_compare_collisions;
!
!
! /* Compare the two entries T1 and T2 of two SCCs that are possibly equal,
! recursing through in-SCC tree edges. Returns true if the SCCs entered
! through T1 and T2 are equal and fills in *MAP with the pairs of
! SCC entries we visited, starting with (*MAP)[0] = T1 and (*MAP)[1] = T2. */
!
! static bool
! compare_tree_sccs_1 (tree t1, tree t2, tree **map)
! {
! enum tree_code code;
!
! /* Mark already visited nodes. */
! TREE_ASM_WRITTEN (t2) = 1;
!
! /* Push the pair onto map. */
! (*map)[0] = t1;
! (*map)[1] = t2;
! *map = *map + 2;
!
! /* Compare value-fields. */
! #define compare_values(X) \
! do { \
! if (X(t1) != X(t2)) \
! return false; \
! } while (0)
!
! compare_values (TREE_CODE);
! code = TREE_CODE (t1);
!
! if (!TYPE_P (t1))
! {
! compare_values (TREE_SIDE_EFFECTS);
! compare_values (TREE_CONSTANT);
! compare_values (TREE_READONLY);
! compare_values (TREE_PUBLIC);
! }
! compare_values (TREE_ADDRESSABLE);
! compare_values (TREE_THIS_VOLATILE);
! if (DECL_P (t1))
! compare_values (DECL_UNSIGNED);
! else if (TYPE_P (t1))
! compare_values (TYPE_UNSIGNED);
! if (TYPE_P (t1))
! compare_values (TYPE_ARTIFICIAL);
! else
! compare_values (TREE_NO_WARNING);
! compare_values (TREE_NOTHROW);
! compare_values (TREE_STATIC);
! if (code != TREE_BINFO)
! compare_values (TREE_PRIVATE);
! compare_values (TREE_PROTECTED);
! compare_values (TREE_DEPRECATED);
! if (TYPE_P (t1))
! {
! compare_values (TYPE_SATURATING);
! compare_values (TYPE_ADDR_SPACE);
! }
! else if (code == SSA_NAME)
! compare_values (SSA_NAME_IS_DEFAULT_DEF);
!
! if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
! {
! compare_values (TREE_INT_CST_LOW);
! compare_values (TREE_INT_CST_HIGH);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
! {
! /* ??? No suitable compare routine available. */
! REAL_VALUE_TYPE r1 = TREE_REAL_CST (t1);
! REAL_VALUE_TYPE r2 = TREE_REAL_CST (t2);
! if (r1.cl != r2.cl
! || r1.decimal != r2.decimal
! || r1.sign != r2.sign
! || r1.signalling != r2.signalling
! || r1.canonical != r2.canonical
! || r1.uexp != r2.uexp)
! return false;
! for (unsigned i = 0; i < SIGSZ; ++i)
! if (r1.sig[i] != r2.sig[i])
! return false;
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
! if (!fixed_compare (EQ_EXPR,
! TREE_FIXED_CST_PTR (t1), TREE_FIXED_CST_PTR (t2)))
! return false;
!
!
!   /* We don't want to compare locations, so there is nothing to compare
! for TS_DECL_MINIMAL. */
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_values (DECL_MODE);
! compare_values (DECL_NONLOCAL);
! compare_values (DECL_VIRTUAL_P);
! compare_values (DECL_IGNORED_P);
! compare_values (DECL_ABSTRACT);
! compare_values (DECL_ARTIFICIAL);
! compare_values (DECL_USER_ALIGN);
! compare_values (DECL_PRESERVE_P);
! compare_values (DECL_EXTERNAL);
! compare_values (DECL_GIMPLE_REG_P);
! compare_values (DECL_ALIGN);
! if (code == LABEL_DECL)
! {
! compare_values (DECL_ERROR_ISSUED);
! compare_values (EH_LANDING_PAD_NR);
! compare_values (LABEL_DECL_UID);
! }
! else if (code == FIELD_DECL)
! {
! compare_values (DECL_PACKED);
! compare_values (DECL_NONADDRESSABLE_P);
! compare_values (DECL_OFFSET_ALIGN);
! }
! else if (code == VAR_DECL)
! {
! compare_values (DECL_HAS_DEBUG_EXPR_P);
! compare_values (DECL_NONLOCAL_FRAME);
! }
! if (code == RESULT_DECL
! || code == PARM_DECL
! || code == VAR_DECL)
! {
! compare_values (DECL_BY_REFERENCE);
! if (code == VAR_DECL
! || code == PARM_DECL)
! compare_values (DECL_HAS_VALUE_EXPR_P);
! }
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
! compare_values (DECL_REGISTER);
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! compare_values (DECL_DEFER_OUTPUT);
! compare_values (DECL_COMMON);
! compare_values (DECL_DLLIMPORT_P);
! compare_values (DECL_WEAK);
! compare_values (DECL_SEEN_IN_BIND_EXPR_P);
! compare_values (DECL_COMDAT);
! compare_values (DECL_VISIBILITY);
! compare_values (DECL_VISIBILITY_SPECIFIED);
! if (code == VAR_DECL)
! {
! compare_values (DECL_HARD_REGISTER);
! compare_values (DECL_IN_TEXT_SECTION);
! compare_values (DECL_IN_CONSTANT_POOL);
! compare_values (DECL_TLS_MODEL);
! }
! if (VAR_OR_FUNCTION_DECL_P (t1))
! compare_values (DECL_INIT_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_values (DECL_BUILT_IN_CLASS);
! compare_values (DECL_STATIC_CONSTRUCTOR);
! compare_values (DECL_STATIC_DESTRUCTOR);
! compare_values (DECL_UNINLINABLE);
! compare_values (DECL_POSSIBLY_INLINED);
! compare_values (DECL_IS_NOVOPS);
! compare_values (DECL_IS_RETURNS_TWICE);
! compare_values (DECL_IS_MALLOC);
! compare_values (DECL_IS_OPERATOR_NEW);
! compare_values (DECL_DECLARED_INLINE_P);
! compare_values (DECL_STATIC_CHAIN);
! compare_values (DECL_NO_INLINE_WARNING_P);
! compare_values (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT);
! compare_values (DECL_NO_LIMIT_STACK);
! compare_values (DECL_DISREGARD_INLINE_LIMITS);
! compare_values (DECL_PURE_P);
! compare_values (DECL_LOOPING_CONST_OR_PURE_P);
! if (DECL_BUILT_IN_CLASS (t1) != NOT_BUILT_IN)
! compare_values (DECL_FUNCTION_CODE);
! if (DECL_STATIC_DESTRUCTOR (t1))
! compare_values (DECL_FINI_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_values (TYPE_MODE);
! compare_values (TYPE_STRING_FLAG);
! compare_values (TYPE_NO_FORCE_BLK);
! compare_values (TYPE_NEEDS_CONSTRUCTING);
! if (RECORD_OR_UNION_TYPE_P (t1))
! compare_values (TYPE_TRANSPARENT_AGGR);
! else if (code == ARRAY_TYPE)
! compare_values (TYPE_NONALIASED_COMPONENT);
! compare_values (TYPE_PACKED);
! compare_values (TYPE_RESTRICT);
! compare_values (TYPE_CONTAINS_PLACEHOLDER_INTERNAL);
! compare_values (TYPE_USER_ALIGN);
! compare_values (TYPE_READONLY);
! compare_values (TYPE_PRECISION);
! compare_values (TYPE_ALIGN);
! compare_values (TYPE_ALIAS_SET);
! }
!
! /* We don't want to compare locations, so there is nothing to compare
! for TS_EXP. */
!
! /* BLOCKs are function local and we don't merge anything there, so
! simply refuse to merge. */
! if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
! if (strcmp (TRANSLATION_UNIT_LANGUAGE (t1),
! TRANSLATION_UNIT_LANGUAGE (t2)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
! if (memcmp (TREE_TARGET_OPTION (t1), TREE_TARGET_OPTION (t2),
! sizeof (struct cl_target_option)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
! if (memcmp (TREE_OPTIMIZATION (t1), TREE_OPTIMIZATION (t2),
! sizeof (struct cl_optimization)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! if (vec_safe_length (BINFO_BASE_ACCESSES (t1))
! != vec_safe_length (BINFO_BASE_ACCESSES (t2)))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! compare_values (CONSTRUCTOR_NELTS);
!
! if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
! if (IDENTIFIER_LENGTH (t1) != IDENTIFIER_LENGTH (t2)
! || memcmp (IDENTIFIER_POINTER (t1), IDENTIFIER_POINTER (t2),
! IDENTIFIER_LENGTH (t1)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_STRING))
! if (TREE_STRING_LENGTH (t1) != TREE_STRING_LENGTH (t2)
! || memcmp (TREE_STRING_POINTER (t1), TREE_STRING_POINTER (t2),
! TREE_STRING_LENGTH (t1)) != 0)
! return false;
!
! #undef compare_values
!
!
! /* Compare pointer fields. */
!
! /* Recurse. Search-and-replaced from DFS_write_tree_body.
! Folding the early checks into the compare_tree_edges recursion
! macro makes debugging much quicker as you are able to break on
! compare_tree_sccs_1 and simply finish until a call returns false
! to spot the SCC members with the difference. */
! #define compare_tree_edges(E1, E2) \
! do { \
! tree t1_ = (E1), t2_ = (E2); \
! if (t1_ != t2_ \
! && (!t1_ || !t2_ \
! || !TREE_VISITED (t2_) \
! || (!TREE_ASM_WRITTEN (t2_) \
! && !compare_tree_sccs_1 (t1_, t2_, map)))) \
! return false; \
! /* Only non-NULL trees outside of the SCC may compare equal. */ \
! gcc_checking_assert (t1_ != t2_ || (!t2_ || !TREE_VISITED (t2_))); \
! } while (0)
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
! {
! if (code != IDENTIFIER_NODE)
! compare_tree_edges (TREE_TYPE (t1), TREE_TYPE (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
! {
! unsigned i;
! /* Note that the number of elements for EXPR has already been emitted
! in EXPR's header (see streamer_write_tree_header). */
! for (i = 0; i < VECTOR_CST_NELTS (t1); ++i)
! compare_tree_edges (VECTOR_CST_ELT (t1, i), VECTOR_CST_ELT (t2, i));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
! {
! compare_tree_edges (TREE_REALPART (t1), TREE_REALPART (t2));
! compare_tree_edges (TREE_IMAGPART (t1), TREE_IMAGPART (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
! {
! compare_tree_edges (DECL_NAME (t1), DECL_NAME (t2));
! /* ??? Global decls from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Only consider a small set of
! decls equivalent, we should not end up merging others. */
! if ((code == TYPE_DECL
! || code == NAMESPACE_DECL
! || code == IMPORTED_DECL
! || code == CONST_DECL
! || (VAR_OR_FUNCTION_DECL_P (t1)
! && (TREE_PUBLIC (t1) || DECL_EXTERNAL (t1))))
! && DECL_FILE_SCOPE_P (t1) && DECL_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (DECL_CONTEXT (t1), DECL_CONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_tree_edges (DECL_SIZE (t1), DECL_SIZE (t2));
! compare_tree_edges (DECL_SIZE_UNIT (t1), DECL_SIZE_UNIT (t2));
! compare_tree_edges (DECL_ATTRIBUTES (t1), DECL_ATTRIBUTES (t2));
! if ((code == VAR_DECL
! || code == PARM_DECL)
! && DECL_HAS_VALUE_EXPR_P (t1))
! compare_tree_edges (DECL_VALUE_EXPR (t1), DECL_VALUE_EXPR (t2));
! if (code == VAR_DECL
! && DECL_HAS_DEBUG_EXPR_P (t1))
! compare_tree_edges (DECL_DEBUG_EXPR (t1), DECL_DEBUG_EXPR (t2));
! /* LTO specific edges. */
! if (code != FUNCTION_DECL
! && code != TRANSLATION_UNIT_DECL)
! compare_tree_edges (DECL_INITIAL (t1), DECL_INITIAL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
! {
! if (code == FUNCTION_DECL)
! {
! tree a1, a2;
! for (a1 = DECL_ARGUMENTS (t1), a2 = DECL_ARGUMENTS (t2);
! a1 || a2;
! a1 = TREE_CHAIN (a1), a2 = TREE_CHAIN (a2))
! compare_tree_edges (a1, a2);
! compare_tree_edges (DECL_RESULT (t1), DECL_RESULT (t2));
! }
! else if (code == TYPE_DECL)
! compare_tree_edges (DECL_ORIGINAL_TYPE (t1), DECL_ORIGINAL_TYPE (t2));
! compare_tree_edges (DECL_VINDEX (t1), DECL_VINDEX (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! /* Make sure we don't inadvertently set the assembler name. */
! if (DECL_ASSEMBLER_NAME_SET_P (t1))
! compare_tree_edges (DECL_ASSEMBLER_NAME (t1),
! DECL_ASSEMBLER_NAME (t2));
! compare_tree_edges (DECL_SECTION_NAME (t1), DECL_SECTION_NAME (t2));
! compare_tree_edges (DECL_COMDAT_GROUP (t1), DECL_COMDAT_GROUP (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
! {
! compare_tree_edges (DECL_FIELD_OFFSET (t1), DECL_FIELD_OFFSET (t2));
! compare_tree_edges (DECL_BIT_FIELD_TYPE (t1), DECL_BIT_FIELD_TYPE (t2));
! compare_tree_edges (DECL_BIT_FIELD_REPRESENTATIVE (t1),
! DECL_BIT_FIELD_REPRESENTATIVE (t2));
! compare_tree_edges (DECL_FIELD_BIT_OFFSET (t1),
! DECL_FIELD_BIT_OFFSET (t2));
! compare_tree_edges (DECL_FCONTEXT (t1), DECL_FCONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_tree_edges (DECL_FUNCTION_PERSONALITY (t1),
! DECL_FUNCTION_PERSONALITY (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_TARGET (t1),
! DECL_FUNCTION_SPECIFIC_TARGET (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t1),
! DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_tree_edges (TYPE_SIZE (t1), TYPE_SIZE (t2));
! compare_tree_edges (TYPE_SIZE_UNIT (t1), TYPE_SIZE_UNIT (t2));
! compare_tree_edges (TYPE_ATTRIBUTES (t1), TYPE_ATTRIBUTES (t2));
! compare_tree_edges (TYPE_NAME (t1), TYPE_NAME (t2));
! /* Do not compare TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
! reconstructed during fixup. */
! /* Do not compare TYPE_NEXT_VARIANT, we reconstruct the variant lists
! during fixup. */
! compare_tree_edges (TYPE_MAIN_VARIANT (t1), TYPE_MAIN_VARIANT (t2));
! /* ??? Global types from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Still merge them if they are otherwise
! equal. */
! if (TYPE_FILE_SCOPE_P (t1) && TYPE_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (TYPE_CONTEXT (t1), TYPE_CONTEXT (t2));
! /* TYPE_CANONICAL is re-computed during type merging, so do not
! compare it here. */
! compare_tree_edges (TYPE_STUB_DECL (t1), TYPE_STUB_DECL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
! {
! if (code == ENUMERAL_TYPE)
! compare_tree_edges (TYPE_VALUES (t1), TYPE_VALUES (t2));
! else if (code == ARRAY_TYPE)
! compare_tree_edges (TYPE_DOMAIN (t1), TYPE_DOMAIN (t2));
! else if (RECORD_OR_UNION_TYPE_P (t1))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (t1), f2 = TYPE_FIELDS (t2);
! f1 || f2;
! f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! compare_tree_edges (f1, f2);
! compare_tree_edges (TYPE_BINFO (t1), TYPE_BINFO (t2));
}
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ compare_tree_edges (TYPE_ARG_TYPES (t1), TYPE_ARG_TYPES (t2));
+ if (!POINTER_TYPE_P (t1))
+ compare_tree_edges (TYPE_MINVAL (t1), TYPE_MINVAL (t2));
+ compare_tree_edges (TYPE_MAXVAL (t1), TYPE_MAXVAL (t2));
}

! if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
! compare_tree_edges (TREE_PURPOSE (t1), TREE_PURPOSE (t2));
! compare_tree_edges (TREE_VALUE (t1), TREE_VALUE (t2));
! compare_tree_edges (TREE_CHAIN (t1), TREE_CHAIN (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_VEC))
! for (int i = 0; i < TREE_VEC_LENGTH (t1); i++)
! compare_tree_edges (TREE_VEC_ELT (t1, i), TREE_VEC_ELT (t2, i));

! if (CODE_CONTAINS_STRUCT (code, TS_EXP))
! {
! for (int i = 0; i < TREE_OPERAND_LENGTH (t1); i++)
! compare_tree_edges (TREE_OPERAND (t1, i),
! TREE_OPERAND (t2, i));

! /* BLOCKs are function local and we don't merge anything there. */
! if (TREE_BLOCK (t1) || TREE_BLOCK (t2))
! return false;
! }

! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! {
! unsigned i;
! tree t;
! /* Lengths have already been compared above. */
! FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_BINFO (t2, i));
! FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_ACCESS (t2, i));
! compare_tree_edges (BINFO_OFFSET (t1), BINFO_OFFSET (t2));
! compare_tree_edges (BINFO_VTABLE (t1), BINFO_VTABLE (t2));
! compare_tree_edges (BINFO_VPTR_FIELD (t1), BINFO_VPTR_FIELD (t2));
! compare_tree_edges (BINFO_INHERITANCE_CHAIN (t1),
! BINFO_INHERITANCE_CHAIN (t2));
! compare_tree_edges (BINFO_SUBVTT_INDEX (t1),
! BINFO_SUBVTT_INDEX (t2));
! compare_tree_edges (BINFO_VPTR_INDEX (t1),
! BINFO_VPTR_INDEX (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! {
! unsigned i;
! tree index, value;
! /* Lengths have already been compared above. */
! FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t1), i, index, value)
! {
! compare_tree_edges (index, CONSTRUCTOR_ELT (t2, i)->index);
! compare_tree_edges (value, CONSTRUCTOR_ELT (t2, i)->value);
}
+ }

! #undef compare_tree_edges
!
! return true;
! }
!
! /* Compare the tree scc SCC to the prevailing candidate PSCC, filling
! out MAP if they are equal. */
!
! static bool
! compare_tree_sccs (tree_scc *pscc, tree_scc *scc,
! tree *map)
! {
! /* Assume SCC entry hashes are sorted by their cardinality, which
! means we can simply take the first n-tuple of equal hashes
! (which is recorded as entry_len) and do n SCC entry candidate
! comparisons. */
! for (unsigned i = 0; i < pscc->entry_len; ++i)
! {
! tree *mapp = map;
! num_scc_compare_collisions++;
! if (compare_tree_sccs_1 (pscc->entries[0], scc->entries[i], &mapp))
{
! /* Equal - no need to reset TREE_VISITED or TREE_ASM_WRITTEN
! on the scc as all trees will be freed. */
! return true;
! }
! /* Reset TREE_ASM_WRITTEN on scc for the next compare or in case
! the SCC prevails. */
! for (unsigned j = 0; j < scc->len; ++j)
! TREE_ASM_WRITTEN (scc->entries[j]) = 0;
! }

! return false;
! }
!
! /* Try to unify the SCC with nodes FROM to FROM + LEN in CACHE and
! hash value SCC_HASH with an already recorded SCC. Return true if
! that was successful, otherwise return false. */
!
! static bool
! unify_scc (struct streamer_tree_cache_d *cache, unsigned from,
! unsigned len, unsigned scc_entry_len, hashval_t scc_hash)
! {
! bool unified_p = false;
! tree_scc *scc
! = (tree_scc *) alloca (sizeof (tree_scc) + (len - 1) * sizeof (tree));
! scc->next = NULL;
! scc->hash = scc_hash;
! scc->len = len;
! scc->entry_len = scc_entry_len;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (cache, from + i);
! scc->entries[i] = t;
! /* Do not merge SCCs with local entities inside them. Also do
! not merge TRANSLATION_UNIT_DECLs. */
! if (TREE_CODE (t) == TRANSLATION_UNIT_DECL
! || (VAR_OR_FUNCTION_DECL_P (t)
! && !(TREE_PUBLIC (t) || DECL_EXTERNAL (t)))
! || TREE_CODE (t) == LABEL_DECL)
! {
! /* Avoid doing any work for these cases and do not bother to
! record the SCCs for further merging. */
! return false;
}
}

! /* Look for the list of candidate SCCs to compare against. */
! tree_scc **slot;
! slot = tree_scc_hash.find_slot_with_hash (scc, scc_hash, INSERT);
! if (*slot)
! {
! /* Try unifying against each candidate. */
! num_scc_compares++;
!
! /* Set TREE_VISITED on the scc so we can easily identify tree nodes
! outside of the scc when following tree edges. Make sure
! that TREE_ASM_WRITTEN is unset so we can use it as 2nd bit
! to track whether we visited the SCC member during the compare.
! We cannot use TREE_VISITED on the pscc members as the extended
! scc and pscc can overlap. */
! for (unsigned i = 0; i < scc->len; ++i)
! {
! TREE_VISITED (scc->entries[i]) = 1;
! gcc_assert (!TREE_ASM_WRITTEN (scc->entries[i]));
! }
!
! tree *map = XALLOCAVEC (tree, 2 * len);
! for (tree_scc *pscc = *slot; pscc; pscc = pscc->next)
! {
! if (!compare_tree_sccs (pscc, scc, map))
! continue;
!
! /* Found an equal SCC. */
! unified_p = true;
! num_scc_compare_collisions--;
! num_sccs_merged++;
! total_scc_size_merged += len;
!
! /* Fixup the streamer cache with the prevailing nodes according
! to the tree node mapping computed by compare_tree_sccs. */
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = map[2*i+1];
! enum tree_code code = TREE_CODE (t);
! unsigned ix;
! bool r;
! /* IDENTIFIER_NODEs should be singletons and are merged by the
! streamer. The others should be singletons, too, and we
! should not merge them in any way. */
! gcc_assert (code != TRANSLATION_UNIT_DECL
! && code != IDENTIFIER_NODE
! && !streamer_handle_as_builtin_p (t));
! r = streamer_tree_cache_lookup (cache, t, &ix);
! gcc_assert (r && ix >= from);
! streamer_tree_cache_replace_tree (cache, map[2 * i], ix);
! if (TYPE_P (t))
! num_merged_types++;
! }
! /* Free the tree nodes from the read SCC. */
! for (unsigned i = 0; i < len; ++i)
! ggc_free (scc->entries[i]);
! break;
! }
!
! /* Reset TREE_VISITED if we didn't unify the SCC with another. */
! if (!unified_p)
! for (unsigned i = 0; i < scc->len; ++i)
! TREE_VISITED (scc->entries[i]) = 0;
! }
!
! /* If we didn't unify it to any candidate duplicate the relevant
! pieces to permanent storage and link it into the chain. */
! if (!unified_p)
! {
! tree_scc *pscc
! = XOBNEWVAR (&tree_scc_hash_obstack, tree_scc, sizeof (tree_scc));
! memcpy (pscc, scc, sizeof (tree_scc));
! pscc->next = (*slot);
! *slot = pscc;
}
+ return unified_p;
}


*************** lto_read_decls (struct lto_file_decl_dat
*** 2036,2044 ****
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! t = stream_read_tree (&ib_main, data_in);
! gcc_assert (t && ib_main.p <= ib_main.len);
! uniquify_nodes (data_in, from);
}

/* Read in lto_in_decl_state objects. */
--- 2402,2524 ----
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! /* Read and uniquify SCCs as they appear in the input stream. */
! enum LTO_tags tag = streamer_read_record_start (&ib_main);
! if (tag == LTO_tree_scc)
! {
! unsigned len_;
! unsigned scc_entry_len;
! hashval_t scc_hash = lto_input_scc (&ib_main, data_in, &len_,
! &scc_entry_len);
! unsigned len = data_in->reader_cache->nodes.length () - from;
! gcc_assert (len == len_);
!
! total_scc_size += len;
! num_sccs_read++;
!
! /* We have the special case of size-1 SCCs that are pre-merged
! by means of identifier and string sharing for example.
! ??? Maybe we should avoid streaming those as SCCs. */
! tree first = streamer_tree_cache_get_tree (data_in->reader_cache,
! from);
! if (len == 1
! && (TREE_CODE (first) == IDENTIFIER_NODE
! || TREE_CODE (first) == INTEGER_CST
! || TREE_CODE (first) == TRANSLATION_UNIT_DECL
! || streamer_handle_as_builtin_p (first)))
! continue;
!
! /* Try to unify the SCC with already existing ones. */
! if (!flag_ltrans
! && unify_scc (data_in->reader_cache, from,
! len, scc_entry_len, scc_hash))
! continue;
!
! /* Do remaining fixup tasks for prevailing nodes. */
! bool seen_type = false;
! bool not_merged_type_same_scc = false;
! bool not_merged_type_not_same_scc = false;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (data_in->reader_cache,
! from + i);
! /* For statistics, see if the old code would have merged
! the type. */
! if (TYPE_P (t)
! && (flag_lto_report || (flag_wpa && flag_lto_report_wpa)))
! {
! tree newt = gimple_register_type (t);
! if (newt != t)
! {
! num_not_merged_types++;
! unsigned j;
! /* Check if we can never merge the types because
! they are in the same SCC and thus the old
! code was broken. */
! for (j = 0; j < len; ++j)
! if (i != j
! && streamer_tree_cache_get_tree
! (data_in->reader_cache, from + j) == newt)
! {
! num_not_merged_types_in_same_scc++;
! not_merged_type_same_scc = true;
! break;
! }
! if (j == len)
! not_merged_type_not_same_scc = true;
! }
! }
! /* Reconstruct the type variant and pointer-to/reference-to
! chains. */
! if (TYPE_P (t))
! {
! seen_type = true;
! num_prevailing_types++;
! lto_fixup_prevailing_type (t);
! }
! /* Compute the canonical type of all types.
! ??? Should be able to assert that !TYPE_CANONICAL. */
! if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
! /* Link shared INTEGER_CSTs into TYPE_CACHED_VALUEs of its
! type which is also member of this SCC. */
! if (TREE_CODE (t) == INTEGER_CST
! && !TREE_OVERFLOW (t))
! cache_integer_cst (t);
! /* Register TYPE_DECLs with the debuginfo machinery. */
! if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! if (!flag_ltrans)
! {
! /* Register variables and functions with the
! symbol table. */
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL
! && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! /* Scan the tree for references to global functions or
! variables and record those for later fixup. */
! maybe_remember_with_vars (t);
! }
! }
! if (not_merged_type_same_scc)
! {
! num_not_merged_types_in_same_scc_trees += len;
! num_not_merged_types_trees += len;
! }
! else if (not_merged_type_not_same_scc)
! num_not_merged_types_trees += len;
! if (seen_type)
! num_type_scc_trees += len;
! }
! else
! {
! /* Pickle stray references. */
! t = lto_input_tree_1 (&ib_main, data_in, tag, 0);
! gcc_assert (t && data_in->reader_cache->nodes.length () == from);
! }
}

/* Read in lto_in_decl_state objects. */
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2898,2906 ****
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
- gimple_type_leader = ggc_alloc_cleared_vec_gimple_type_leader_entry_s
- (GIMPLE_TYPE_LEADER_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
--- 3378,3386 ----
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);
+ gcc_obstack_init (&tree_scc_hash_obstack);
+ tree_scc_hash.create (4096);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2933,2939 ****
lto_obj_file_close (current_lto_file);
free (current_lto_file);
current_lto_file = NULL;
- ggc_collect ();
}

lto_flatten_files (decl_data, count, last_file_ix);
--- 3413,3418 ----
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2955,2961 ****
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! gimple_type_leader = NULL;
free_gimple_type_tables ();
ggc_collect ();

--- 3434,3441 ----
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! tree_scc_hash.dispose ();
! obstack_free (&tree_scc_hash_obstack, NULL);
free_gimple_type_tables ();
ggc_collect ();

*************** print_lto_report_1 (void)
*** 3121,3147 ****
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! if (gimple_types)
! fprintf (stderr, "[%s] GIMPLE type table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (gimple_types),
! (long) htab_elements (gimple_types),
! (long) gimple_types->searches,
! (long) gimple_types->collisions,
! htab_collisions (gimple_types));
! else
! fprintf (stderr, "[%s] GIMPLE type table is empty\n", pfx);
! if (type_hash_cache)
! fprintf (stderr, "[%s] GIMPLE type hash cache table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (type_hash_cache),
! (long) htab_elements (type_hash_cache),
! (long) type_hash_cache->searches,
! (long) type_hash_cache->collisions,
! htab_collisions (type_hash_cache));
! else
! fprintf (stderr, "[%s] GIMPLE type hash cache table is empty\n", pfx);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);

print_gimple_types_stats (pfx);
print_lto_report (pfx);
--- 3601,3649 ----
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! fprintf (stderr, "[%s] read %lu SCCs of average size %f\n",
! pfx, num_sccs_read, total_scc_size / (double)num_sccs_read);
! fprintf (stderr, "[%s] %lu tree bodies read in total\n", pfx, total_scc_size);
! if (flag_wpa && tree_scc_hash.is_created ())
! {
! fprintf (stderr, "[%s] tree SCC table: size %ld, %ld elements, "
! "collision ratio: %f\n", pfx,
! (long) tree_scc_hash.size (),
! (long) tree_scc_hash.elements (),
! tree_scc_hash.collisions ());
! hash_table<tree_scc_hasher>::iterator hiter;
! tree_scc *scc, *max_scc = NULL;
! unsigned max_length = 0;
! FOR_EACH_HASH_TABLE_ELEMENT (tree_scc_hash, scc, tree_scc *, hiter)
! {
! unsigned length = 0;
! tree_scc *s = scc;
! for (; s; s = s->next)
! length++;
! if (length > max_length)
! {
! max_length = length;
! max_scc = scc;
! }
! }
! fprintf (stderr, "[%s] tree SCC max chain length %u (size %u)\n",
! pfx, max_length, max_scc->len);
! fprintf (stderr, "[%s] Compared %lu SCCs, %lu collisions (%f)\n", pfx,
! num_scc_compares, num_scc_compare_collisions,
! num_scc_compare_collisions / (double) num_scc_compares);
! fprintf (stderr, "[%s] Merged %lu SCCs\n", pfx, num_sccs_merged);
! fprintf (stderr, "[%s] Merged %lu tree bodies\n", pfx,
! total_scc_size_merged);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);
! fprintf (stderr, "[%s] %lu types prevailed (%lu associated trees)\n",
! pfx, num_prevailing_types, num_type_scc_trees);
! fprintf (stderr, "[%s] Old merging code merges an additional %lu types"
! " of which %lu are in the same SCC with their "
! "prevailing variant (%lu and %lu associated trees)\n",
! pfx, num_not_merged_types, num_not_merged_types_in_same_scc,
! num_not_merged_types_trees,
! num_not_merged_types_in_same_scc_trees);
! }

print_gimple_types_stats (pfx);
print_lto_report (pfx);
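As an aside (not part of the patch): the unify_scc scheme above boils down to bucketing candidate SCCs by a hash computed from each member's bits plus its outgoing edges, and letting a candidate prevail only if every member is bit-identical and its edges point at corresponding members (or at the very same node outside the SCC). A minimal, self-contained sketch of that idea follows; all names and the hash mixing are illustrative, not GCC's actual functions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy tree node: a payload plus one outgoing edge (possibly back
   into the same SCC, forming a cycle).  */
struct node {
  uint32_t payload;
  struct node *edge;
};

/* SCC hash: mix each member's payload with the payload of its edge
   target, then combine commutatively so entry order does not matter.
   This mirrors "hash of bits plus outgoing references" in spirit.  */
static uint32_t
scc_hash (struct node **members, size_t len)
{
  uint32_t h = 0;
  for (size_t i = 0; i < len; ++i)
    {
      uint32_t m = members[i]->payload * 2654435761u;
      if (members[i]->edge)
        m ^= members[i]->edge->payload * 40503u;
      h += m;  /* commutative combine */
    }
  return h;
}

/* Two SCCs unify only if members are pairwise bit-identical and
   their edges point at corresponding members (same index) or at the
   identical node outside both SCCs.  */
static int
scc_equal (struct node **a, struct node **b, size_t len)
{
  for (size_t i = 0; i < len; ++i)
    {
      if (a[i]->payload != b[i]->payload)
        return 0;
      struct node *ea = a[i]->edge, *eb = b[i]->edge;
      if (ea == eb)
        continue;                 /* same node outside both SCCs */
      size_t j;
      for (j = 0; j < len; ++j)
        if (ea == a[j] && eb == b[j])
          break;                  /* corresponding members inside */
      if (j == len)
        return 0;
    }
  return 1;
}
```

Two structurally identical two-node cycles hash and compare equal; flipping one payload breaks both, which is exactly the conservative behavior the patch wants.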
Index: trunk/gcc/lto/Make-lang.in
===================================================================
*** trunk.orig/gcc/lto/Make-lang.in 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto/Make-lang.in 2013-06-13 14:35:53.257108072 +0200
*************** lto/lto.o: lto/lto.c $(CONFIG_H) $(SYSTE
*** 85,91 ****
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
--- 85,91 ----
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) $(DATA_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
Index: trunk/gcc/tree-streamer-out.c
===================================================================
*** trunk.orig/gcc/tree-streamer-out.c 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/tree-streamer-out.c 2013-06-14 11:00:01.363393021 +0200
*************** pack_ts_base_value_fields (struct bitpac
*** 95,104 ****
bp_pack_value (bp, TYPE_ARTIFICIAL (expr), 1);
else
bp_pack_value (bp, TREE_NO_WARNING (expr), 1);
- bp_pack_value (bp, TREE_USED (expr), 1);
bp_pack_value (bp, TREE_NOTHROW (expr), 1);
bp_pack_value (bp, TREE_STATIC (expr), 1);
! bp_pack_value (bp, TREE_PRIVATE (expr), 1);
bp_pack_value (bp, TREE_PROTECTED (expr), 1);
bp_pack_value (bp, TREE_DEPRECATED (expr), 1);
if (TYPE_P (expr))
--- 95,104 ----
bp_pack_value (bp, TYPE_ARTIFICIAL (expr), 1);
else
bp_pack_value (bp, TREE_NO_WARNING (expr), 1);
bp_pack_value (bp, TREE_NOTHROW (expr), 1);
bp_pack_value (bp, TREE_STATIC (expr), 1);
! if (TREE_CODE (expr) != TREE_BINFO)
! bp_pack_value (bp, TREE_PRIVATE (expr), 1);
bp_pack_value (bp, TREE_PROTECTED (expr), 1);
bp_pack_value (bp, TREE_DEPRECATED (expr), 1);
if (TYPE_P (expr))
*************** pack_ts_type_common_value_fields (struct
*** 303,309 ****
bp_pack_value (bp, TYPE_READONLY (expr), 1);
bp_pack_var_len_unsigned (bp, TYPE_PRECISION (expr));
bp_pack_var_len_unsigned (bp, TYPE_ALIGN (expr));
! bp_pack_var_len_int (bp, TYPE_ALIAS_SET (expr) == 0 ? 0 : -1);
}


--- 303,313 ----
bp_pack_value (bp, TYPE_READONLY (expr), 1);
bp_pack_var_len_unsigned (bp, TYPE_PRECISION (expr));
bp_pack_var_len_unsigned (bp, TYPE_ALIGN (expr));
! /* Make sure to preserve whether the frontend would assign
! alias-set zero to this type. */
! bp_pack_var_len_int (bp, (TYPE_ALIAS_SET (expr) == 0
! || (!in_lto_p
! && get_alias_set (expr) == 0)) ? 0 : -1);
}


*************** streamer_write_chain (struct output_bloc
*** 491,499 ****
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! stream_write_tree_shallow_non_ref (ob, t, ref_p);
else
stream_write_tree (ob, t, ref_p);

--- 495,504 ----
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
+ /* ??? FIXME wrt SCC streaming. Drop these for now. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! ; /* stream_write_tree_shallow_non_ref (ob, t, ref_p); */
else
stream_write_tree (ob, t, ref_p);

*************** write_ts_list_tree_pointers (struct outp
*** 716,722 ****
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! streamer_write_chain (ob, TREE_CHAIN (expr), ref_p);
}


--- 721,727 ----
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! stream_write_tree (ob, TREE_CHAIN (expr), ref_p);
}


*************** write_ts_constructor_tree_pointers (stru
*** 834,839 ****
--- 839,846 ----
}
}

+ extern unsigned long num_tree_bodies_written;
+
/* Write all pointer fields in EXPR to output block OB. If REF_P is true,
the leaves of EXPR are emitted as references. */

*************** streamer_write_tree_body (struct output_
*** 842,847 ****
--- 849,856 ----
{
enum tree_code code;

+ num_tree_bodies_written++;
+
code = TREE_CODE (expr);

if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
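For readers following the stream layout described at the top ({ LTO_tree_scc, N, hash, entry_len, headers..., bodies... }): a toy encoder/decoder makes the record shape concrete. This sketch is editorial, not part of the patch; the tag value and the fixed-width word encoding are invented for illustration (GCC uses variable-length encodings and real tree headers/bodies):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal word-oriented stream illustrating the record layout
   { LTO_tree_scc, N, hash, entry_len, headers..., bodies... }.  */
enum { LTO_tree_scc = 42 };  /* illustrative tag value */

struct stream { uint32_t buf[64]; size_t p; };

static void put (struct stream *s, uint32_t v) { s->buf[s->p++] = v; }
static uint32_t get (struct stream *s) { return s->buf[s->p++]; }

/* Emit one SCC record: all N headers first, then one body per tree.  */
static void
write_scc (struct stream *s, uint32_t hash, uint32_t entry_len,
           const uint32_t *headers, const uint32_t *bodies, uint32_t n)
{
  put (s, LTO_tree_scc);
  put (s, n);
  put (s, hash);
  put (s, entry_len);
  for (uint32_t i = 0; i < n; ++i) put (s, headers[i]);
  for (uint32_t i = 0; i < n; ++i) put (s, bodies[i]);
}

/* Read one record back; returns the member count and fills HASH.  */
static uint32_t
read_scc (struct stream *s, uint32_t *hash,
          uint32_t *headers, uint32_t *bodies)
{
  assert (get (s) == LTO_tree_scc);
  uint32_t n = get (s);
  *hash = get (s);
  (void) get (s);  /* entry_len: unused in this sketch */
  for (uint32_t i = 0; i < n; ++i) headers[i] = get (s);
  for (uint32_t i = 0; i < n; ++i) bodies[i] = get (s);
  return n;
}
```

Emitting all headers before all bodies is what lets the reader allocate every node of the SCC up front, so back-edges inside the SCC can be resolved while the bodies are filled in.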
Index: trunk/gcc/tree-streamer.c
===================================================================
*** trunk.orig/gcc/tree-streamer.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer.c 2013-06-13 14:35:53.258108084 +0200
*************** streamer_check_handled_ts_structures (vo
*** 92,107 ****

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! cache->nodes.safe_push (t);
else
! cache->nodes[ix] = t;
}


--- 92,115 ----

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t, hashval_t hash)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! {
! cache->nodes.safe_push (t);
! if (cache->hashes.exists ())
! cache->hashes.safe_push (hash);
! }
else
! {
! cache->nodes[ix] = t;
! if (cache->hashes.exists ())
! cache->hashes[ix] = hash;
! }
}


*************** streamer_tree_cache_add_to_node_array (s
*** 117,123 ****

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
--- 125,131 ----

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
*************** streamer_tree_cache_insert_1 (struct str
*** 136,142 ****
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t);

/* Indicate that the item was not present in the cache. */
existed_p = false;
--- 144,150 ----
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);

/* Indicate that the item was not present in the cache. */
existed_p = false;
*************** streamer_tree_cache_insert_1 (struct str
*** 151,157 ****
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t);
*slot = (void *)(size_t) (ix + 1);
}

--- 159,165 ----
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);
*slot = (void *)(size_t) (ix + 1);
}

*************** streamer_tree_cache_insert_1 (struct str
*** 174,203 ****

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, ix_p, true);
}


! /* Insert tree node T in CACHE at slot IX. If T already
! existed in the cache return true. Otherwise, return false. */

! bool
! streamer_tree_cache_insert_at (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! return streamer_tree_cache_insert_1 (cache, t, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache, tree t)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
--- 182,214 ----

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! hashval_t hash, unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, hash, ix_p, true);
}


! /* Replace the tree node with T in CACHE at slot IX. */

! void
! streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! hashval_t hash = 0;
! if (cache->hashes.exists ())
! hash = streamer_tree_cache_get_hash (cache, ix);
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
*************** record_common_node (struct streamer_tree
*** 257,263 ****
if (!node)
node = error_mark_node;

! streamer_tree_cache_append (cache, node);

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
--- 268,277 ----
if (!node)
node = error_mark_node;

! /* ??? FIXME, devise a better hash value. But the hash needs to be equal
! for all frontend and lto1 invocations. So just use the position
! in the cache as hash value. */
! streamer_tree_cache_append (cache, node, cache->nodes.length ());

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
*************** preload_common_nodes (struct streamer_tr
*** 305,317 ****
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (void)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
--- 319,334 ----
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (bool with_hashes)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();
+ cache->nodes.create (165);
+ if (with_hashes)
+ cache->hashes.create (165);

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
*************** streamer_tree_cache_delete (struct strea
*** 332,336 ****
--- 349,354 ----

pointer_map_destroy (c->node_map);
c->nodes.release ();
+ c->hashes.release ();
free (c);
}
Index: trunk/gcc/tree-streamer.h
===================================================================
*** trunk.orig/gcc/tree-streamer.h 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer.h 2013-06-13 14:35:53.258108084 +0200
*************** along with GCC; see the file COPYING3.
*** 43,48 ****
--- 43,49 ----
T. The reconstructed T is inserted in some array so that when
the reference index for T is found in the input stream, it can be
used to look up into the array to get the reconstructed T. */
+
struct streamer_tree_cache_d
{
/* The mapping between tree nodes and slots into the nodes array. */
*************** struct streamer_tree_cache_d
*** 50,55 ****
--- 51,58 ----

/* The nodes pickled so far. */
vec<tree> nodes;
+ /* The node hashes (if available). */
+ vec<hashval_t> hashes;
};

/* Return true if tree node EXPR should be streamed as a builtin. For
*************** tree streamer_alloc_tree (struct lto_inp
*** 71,77 ****
void streamer_read_tree_body (struct lto_input_block *, struct data_in *, tree);
tree streamer_get_pickled_tree (struct lto_input_block *, struct data_in *);
tree streamer_get_builtin_tree (struct lto_input_block *, struct data_in *);
- tree streamer_read_integer_cst (struct lto_input_block *, struct data_in *);
struct bitpack_d streamer_read_tree_bitfields (struct lto_input_block *,
struct data_in *, tree);

--- 74,79 ----
*************** void streamer_write_builtin (struct outp
*** 89,110 ****
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! unsigned *);
! bool streamer_tree_cache_insert_at (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (void);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}


#endif /* GCC_TREE_STREAMER_H */
--- 91,121 ----
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! hashval_t, unsigned *);
! void streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree,
! hashval_t);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (bool);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get_tree (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}

+ /* Return the tree hash value at slot IX in CACHE. */
+
+ static inline hashval_t
+ streamer_tree_cache_get_hash (struct streamer_tree_cache_d *cache, unsigned ix)
+ {
+ return cache->hashes[ix];
+ }
+

#endif /* GCC_TREE_STREAMER_H */
Index: trunk/gcc/lto-streamer.c
===================================================================
*** trunk.orig/gcc/lto-streamer.c 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/lto-streamer.c 2013-06-13 14:35:53.258108084 +0200
*************** lto_get_section_name (int section_type,
*** 176,181 ****
--- 176,185 ----
return concat (LTO_SECTION_NAME_PREFIX, sep, add, post, NULL);
}

+ unsigned long num_tree_bodies_written = 0;
+ unsigned long num_trees_output = 0;
+ unsigned long num_grefs_output = 0;
+ unsigned long num_lrefs_output = 0;

/* Show various memory usage statistics related to LTO. */

*************** print_lto_report (const char *s)
*** 227,232 ****
--- 231,245 ----
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_output_symtab_nodes);

+ fprintf (stderr, "[%s] # of output trees: %lu\n",
+ s, num_trees_output);
+ fprintf (stderr, "[%s] # of output references to globals: %lu\n",
+ s, num_grefs_output);
+ fprintf (stderr, "[%s] # of output local references: %lu\n",
+ s, num_lrefs_output);
+ fprintf (stderr, "[%s] # of output tree bodies: %lu\n",
+ s, num_tree_bodies_written);
+
fprintf (stderr, "[%s] # callgraph partitions: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_cgraph_partitions);
Index: trunk/gcc/tree.c
===================================================================
*** trunk.orig/gcc/tree.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree.c 2013-06-13 14:35:53.261108120 +0200
*************** build_int_cst_wide (tree type, unsigned
*** 1266,1271 ****
--- 1266,1364 ----
return t;
}

+ void
+ cache_integer_cst (tree t)
+ {
+ tree type = TREE_TYPE (t);
+ HOST_WIDE_INT hi = TREE_INT_CST_HIGH (t);
+ unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (t);
+ int ix = -1;
+ int limit = 0;
+
+ gcc_assert (!TREE_OVERFLOW (t));
+
+ switch (TREE_CODE (type))
+ {
+ case NULLPTR_TYPE:
+ gcc_assert (hi == 0 && low == 0);
+ /* Fallthru. */
+
+ case POINTER_TYPE:
+ case REFERENCE_TYPE:
+ /* Cache NULL pointer. */
+ if (!hi && !low)
+ {
+ limit = 1;
+ ix = 0;
+ }
+ break;
+
+ case BOOLEAN_TYPE:
+ /* Cache false or true. */
+ limit = 2;
+ if (!hi && low < 2)
+ ix = low;
+ break;
+
+ case INTEGER_TYPE:
+ case OFFSET_TYPE:
+ if (TYPE_UNSIGNED (type))
+ {
+ /* Cache 0..N */
+ limit = INTEGER_SHARE_LIMIT;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low;
+ }
+ else
+ {
+ /* Cache -1..N */
+ limit = INTEGER_SHARE_LIMIT + 1;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low + 1;
+ else if (hi == -1 && low == -(unsigned HOST_WIDE_INT)1)
+ ix = 0;
+ }
+ break;
+
+ case ENUMERAL_TYPE:
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ if (ix >= 0)
+ {
+ /* Look for it in the type's vector of small shared ints. */
+ if (!TYPE_CACHED_VALUES_P (type))
+ {
+ TYPE_CACHED_VALUES_P (type) = 1;
+ TYPE_CACHED_VALUES (type) = make_tree_vec (limit);
+ }
+
+ gcc_assert (TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) == NULL_TREE);
+ TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) = t;
+ }
+ else
+ {
+ /* Use the cache of larger shared ints. */
+ void **slot;
+
+ slot = htab_find_slot (int_cst_hash_table, t, INSERT);
+ /* If there is already an entry for the number verify it's the
+ same. */
+ if (*slot)
+ {
+ gcc_assert (TREE_INT_CST_LOW ((tree)*slot) == low
+ && TREE_INT_CST_HIGH ((tree)*slot) == hi);
+ return;
+ }
+ /* Otherwise insert this one into the hash table. */
+ *slot = t;
+ }
+ }
+
+
/* Builds an integer constant in TYPE such that lowest BITS bits are ones
and the rest are zeros. */

Index: trunk/gcc/tree.h
===================================================================
*** trunk.orig/gcc/tree.h 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree.h 2013-06-13 14:35:53.262108132 +0200
*************** extern const_tree strip_invariant_refs (
*** 5637,5642 ****
--- 5637,5643 ----
extern tree lhd_gcc_personality (void);
extern void assign_assembler_name_if_neeeded (tree);
extern void warn_deprecated_use (tree, tree);
+ extern void cache_integer_cst (tree);


/* In cgraph.c */
Index: trunk/gcc/lto-symtab.c
===================================================================
*** trunk.orig/gcc/lto-symtab.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-symtab.c 2013-06-14 11:11:41.056800028 +0200
*************** lto_varpool_replace_node (struct varpool
*** 97,103 ****
ipa_clone_referring ((symtab_node)prevailing_node, &vnode->symbol.ref_list);

/* Be sure we can garbage collect the initializer. */
! if (DECL_INITIAL (vnode->symbol.decl))
DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
--- 97,104 ----
ipa_clone_referring ((symtab_node)prevailing_node, &vnode->symbol.ref_list);

/* Be sure we can garbage collect the initializer. */
! if (DECL_INITIAL (vnode->symbol.decl)
! && vnode->symbol.decl != prevailing_node->symbol.decl)
DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
Index: trunk/gcc/varpool.c
===================================================================
*** trunk.orig/gcc/varpool.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/varpool.c 2013-06-14 11:12:32.873423669 +0200
*************** varpool_remove_initializer (struct varpo
*** 84,90 ****
/* Keep vtables for BINFO folding. */
&& !DECL_VIRTUAL_P (node->symbol.decl)
/* FIXME: http://gcc.gnu.org/PR55395 */
! && debug_info_level == DINFO_LEVEL_NONE)
DECL_INITIAL (node->symbol.decl) = error_mark_node;
}

--- 84,95 ----
/* Keep vtables for BINFO folding. */
&& !DECL_VIRTUAL_P (node->symbol.decl)
/* FIXME: http://gcc.gnu.org/PR55395 */
! && debug_info_level == DINFO_LEVEL_NONE
! /* When doing declaration merging we have duplicate
! entries for a given decl. Do not attempt to remove
! the bodies, or we will end up removing the
! wrong one. */
! && cgraph_state != CGRAPH_LTO_STREAMING)
DECL_INITIAL (node->symbol.decl) = error_mark_node;
}
Andi Kleen
2013-06-14 22:39:17 UTC
Permalink
Patch didn't survive a kernel LTO build. LTO test cases are tricky as usual, but I can give you an objdir
or core file if you want.

-Andi

/backup/lsrc/git/linux-lto-2.6/lib/decompress_unlzo.c:79:8: internal compiler error: in expand_expr_real_1, at expr.c:9361
parse += 7;
^
0x5e87d5 expand_expr_real_1(tree_node*, rtx_def*, machine_mode, expand_modifier, rtx_def**)
../../gcc/gcc/expr.c:9356
0x53e749 expand_expr
../../gcc/gcc/expr.h:444
0x53e749 expand_gimple_stmt_1
../../gcc/gcc/cfgexpand.c:2283
0x53e749 expand_gimple_stmt
../../gcc/gcc/cfgexpand.c:2370
0x53f1a8 expand_gimple_basic_block
../../gcc/gcc/cfgexpand.c:4204
0x541296 gimple_expand_cfg
../../gcc/gcc/cfgexpand.c:4723
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <http://gcc.gnu.org/bugs.html> for instructions.
make[3]: *** [/home/ak/lsrc/git/obj-lto/ccc9CYe6.ltrans18.ltrans.o] Error 1
make[3]: *** Waiting for unfinished jobs....
lto-wrapper: make returned 2 exit status
--
***@linux.intel.com -- Speaking for myself only.
Jan Hubicka
2013-06-15 10:29:51 UTC
Permalink
Post by Andi Kleen
Patch didn't survive a kernel LTO build. LTO test cases are tricky as usual, but I can give you an objdir
or core file if you want.
I was looking into this ICE and it is DECL_CONTEXT being wrong. There is
another PR about the same, so I will try to debug it and figure out why.
Otherwise WPA with an earlier version of the patch was 1.1GB and 30 seconds for Martin
Liska and me ;)

Honza
Jan Hubicka
2013-06-15 10:28:23 UTC
Permalink
Post by Richard Biener
I've managed to fix nearly all reported missed merged types for cc1.
Remaining are those we'll never be able to merge (merging would
change the SCC shape) and those that eventually end up referring
to a TYPE_STUB_DECL with a make_anon_name () IDENTIFIER_NODE.
For the latter we should find a middle-end solution as a followup
in case it really matters.
WPA statistics for stage2 cc1 are
[WPA] read 2495082 SCCs of average size 2.380088
[WPA] 5938514 tree bodies read in total
0.804380
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 429412 SCCs, 7039 collisions (0.016392)
[WPA] Merged 426111 SCCs
[WPA] Merged 3313709 tree bodies
[WPA] Merged 225079 types
[WPA] 162844 types prevailed (488124 associated trees)
[WPA] Old merging code merges an additional 22412 types of which 21492 are
in the same SCC with their prevailing variant (345831 and 323276
associated trees)
which shows there are 920 such TYPE_STUB_DECL issues and 21492
merges the old code did that destroyed SCCs.
Compared to the old code which only unified types and some selected
trees (INTEGER_CSTs), the new code can immediately ggc_free the
unified SCCs after they have been read which results in 55% of
all tree bodies input into WPA stage to be freed (rather than hoping
on secondary GC walk effects as the old code relied on), 58% of
all types are recycled.
Compile-time is at least on-par with the old code now and disk-usage
grows by a moderate 10% due to the streaming format change.
On Firefox we now get
[WPA] read 43144472 SCCs of average size 2.270524
[WPA] 97960575 tree bodies read in total
[WPA] tree SCC table: size 8388593, 3936571 elements, collision ratio: 0.727773
[WPA] tree SCC max chain length 88 (size 1)
[WPA] Compared 19030240 SCCs, 337719 collisions (0.017746)
[WPA] Merged 18957101 SCCs
[WPA] Merged 58202930 tree bodies
[WPA] Merged 11800337 types
[WPA] 4506307 types prevailed (13699881 associated trees)
[WPA] Old merging code merges an additional 2174796 types of which 141104 are in the same SCC with their prevailing variant (12811826 and 6367853 associated trees)
[WPA] GIMPLE canonical type table: size 131071, 77871 elements, 4506442 searches, 1130903 collisions (ratio: 0.250953)
[WPA] GIMPLE canonical type hash table: size 8388593, 4506386 elements, 15712947 searches, 12879021 collisions (ratio: 0.819644)

and about 5GB of GGC memory after merging, overall the footprint is still around 10GB.
It is a notable improvement over the old code, however, which needed 16GB.

[LTRANS] read 319710 SCCs of average size 6.184039
[LTRANS] 1977099 tree bodies read in total
[LTRANS] GIMPLE canonical type table: size 16381, 9569 elements, 473131 searches, 24899 collisions (ratio: 0.052626)
[LTRANS] GIMPLE canonical type hash table: size 1048573, 473076 elements, 1611909 searches, 1340396 collisions (ratio: 0.831558)

CPU: AMD64 family10, speed 2100 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 750000
samples % app name symbol name
45047 11.7420 lto1 inflate_fast
34224 8.9209 lto1 streamer_read_uhwi(lto_input_block*)
24630 6.4201 lto1 compare_tree_sccs_1(tree_node*, tree_node*, tree_node***)
23205 6.0487 lto1 pointer_map_insert(pointer_map_t*, void const*)
20829 5.4293 lto1 unpack_value_fields(data_in*, bitpack_d*, tree_node*)
13545 3.5307 lto1 ht_lookup_with_hash(ht*, unsigned char const*, unsigned long, unsigned int, ht_lookup_option)
12841 3.3472 libc-2.11.1.so memset
11840 3.0862 lto1 htab_find_slot_with_hash
11397 2.9708 lto1 streamer_tree_cache_insert_1(streamer_tree_cache_d*, tree_node*, unsigned int, unsigned int*, bool)
11086 2.8897 lto1 lto_input_tree(lto_input_block*, data_in*)
10522 2.7427 lto1 lto_input_tree_1(lto_input_block*, data_in*, LTO_tags, unsigned int)
8853 2.3076 lto1 unify_scc(streamer_tree_cache_d*, unsigned int, unsigned int, unsigned int, unsigned int)
8539 2.2258 lto1 hash_table<tree_scc_hasher, xcallocator>::find_slot_with_hash(tree_scc const*, unsigned int, insert_option)
7987 2.0819 lto1 adler32
7743 2.0183 lto1 streamer_read_tree_body(lto_input_block*, data_in*, tree_node*)

Can't we free the pointer map in streamer after every SCC?

phase stream in : 244.05 (47%) usr 7.14 (25%) sys 252.74 (46%) wall 3478752 kB (93%) ggc
phase stream out : 222.50 (43%) usr 21.22 (73%) sys 243.97 (44%) wall 7160 kB ( 0%) ggc
garbage collection : 12.88 ( 2%) usr 0.00 ( 0%) sys 12.88 ( 2%) wall 0 kB ( 0%) ggc
ipa lto decl in : 177.88 (34%) usr 4.87 (17%) sys 184.28 (34%) wall 3887482 kB (103%) ggc
ipa lto decl out : 199.66 (39%) usr 11.30 (39%) sys 211.05 (38%) wall 0 kB ( 0%) ggc
ipa inlining heuristics : 29.49 ( 6%) usr 0.70 ( 2%) sys 30.21 ( 6%) wall 1353230 kB (36%) ggc
ipa lto decl merge : 26.57 ( 5%) usr 0.00 ( 0%) sys 26.58 ( 5%) wall 8269 kB ( 0%) ggc
ipa lto cgraph merge : 12.78 ( 2%) usr 0.04 ( 0%) sys 12.82 ( 2%) wall 142112 kB ( 4%) ggc
TOTAL : 517.75 29.13 548.71 3758407 kB

The longest running ltrans add another 400 seconds.
combiner : 16.16 ( 4%) usr 0.08 ( 1%) sys 16.53 ( 4%) wall 205251 kB ( 6%) ggc
integrated RA : 47.97 (12%) usr 0.21 ( 3%) sys 48.39 (12%) wall 391655 kB (12%) ggc
LRA hard reg assignment : 158.64 (39%) usr 0.02 ( 0%) sys 158.74 (38%) wall 0 kB ( 0%) ggc
TOTAL : 404.51 8.39 414.01 3215235 kB


Honza
Richard Biener
2013-06-17 08:09:18 UTC
Permalink
Post by Jan Hubicka
Post by Richard Biener
I've managed to fix nearly all reported missed merged types for cc1.
Remaining are those we'll never be able to merge (merging would
change the SCC shape) and those that eventually end up referring
to a TYPE_STUB_DECL with a make_anon_name () IDENTIFIER_NODE.
For the latter we should find a middle-end solution as a followup
in case it really matters.
WPA statistics for stage2 cc1 are
[WPA] read 2495082 SCCs of average size 2.380088
[WPA] 5938514 tree bodies read in total
0.804380
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 429412 SCCs, 7039 collisions (0.016392)
[WPA] Merged 426111 SCCs
[WPA] Merged 3313709 tree bodies
[WPA] Merged 225079 types
[WPA] 162844 types prevailed (488124 associated trees)
[WPA] Old merging code merges an additional 22412 types of which 21492 are
in the same SCC with their prevailing variant (345831 and 323276
associated trees)
which shows there are 920 such TYPE_STUB_DECL issues and 21492
merges the old code did that destroyed SCCs.
Compared to the old code which only unified types and some selected
trees (INTEGER_CSTs), the new code can immediately ggc_free the
unified SCCs after they have been read which results in 55% of
all tree bodies input into WPA stage to be freed (rather than hoping
on secondary GC walk effects as the old code relied on), 58% of
all types are recycled.
Compile-time is at least on-par with the old code now and disk-usage
grows by a moderate 10% due to the streaming format change.
On Firefox we now get
[WPA] read 43144472 SCCs of average size 2.270524
[WPA] 97960575 tree bodies read in total
[WPA] tree SCC table: size 8388593, 3936571 elements, collision ratio: 0.727773
[WPA] tree SCC max chain length 88 (size 1)
[WPA] Compared 19030240 SCCs, 337719 collisions (0.017746)
[WPA] Merged 18957101 SCCs
[WPA] Merged 58202930 tree bodies
[WPA] Merged 11800337 types
[WPA] 4506307 types prevailed (13699881 associated trees)
[WPA] Old merging code merges an additional 2174796 types of which 141104 are in the same SCC with their prevailing variant (12811826 and 6367853 associated trees)
[WPA] GIMPLE canonical type table: size 131071, 77871 elements, 4506442 searches, 1130903 collisions (ratio: 0.250953)
[WPA] GIMPLE canonical type hash table: size 8388593, 4506386 elements, 15712947 searches, 12879021 collisions (ratio: 0.819644)
and about 5GB of GGC memory after merging, overall the footprint is still around 10GB.
It is a notable improvement over the old code, however, which needed 16GB.
[LTRANS] read 319710 SCCs of average size 6.184039
[LTRANS] 1977099 tree bodies read in total
[LTRANS] GIMPLE canonical type table: size 16381, 9569 elements, 473131 searches, 24899 collisions (ratio: 0.052626)
[LTRANS] GIMPLE canonical type hash table: size 1048573, 473076 elements, 1611909 searches, 1340396 collisions (ratio: 0.831558)
CPU: AMD64 family10, speed 2100 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 750000
samples % app name symbol name
45047 11.7420 lto1 inflate_fast
It might be worth changing LTO section layout to include a header
that specifies whether a section is compressed or not so we can
allow mixed compressed/uncompressed sections in the LTRANS files
and avoid decompressing the function sections.
Post by Jan Hubicka
34224 8.9209 lto1 streamer_read_uhwi(lto_input_block*)
24630 6.4201 lto1 compare_tree_sccs_1(tree_node*, tree_node*, tree_node***)
23205 6.0487 lto1 pointer_map_insert(pointer_map_t*, void const*)
20829 5.4293 lto1 unpack_value_fields(data_in*, bitpack_d*, tree_node*)
13545 3.5307 lto1 ht_lookup_with_hash(ht*, unsigned char const*, unsigned long, unsigned int, ht_lookup_option)
12841 3.3472 libc-2.11.1.so memset
11840 3.0862 lto1 htab_find_slot_with_hash
11397 2.9708 lto1 streamer_tree_cache_insert_1(streamer_tree_cache_d*, tree_node*, unsigned int, unsigned int*, bool)
11086 2.8897 lto1 lto_input_tree(lto_input_block*, data_in*)
10522 2.7427 lto1 lto_input_tree_1(lto_input_block*, data_in*, LTO_tags, unsigned int)
8853 2.3076 lto1 unify_scc(streamer_tree_cache_d*, unsigned int, unsigned int, unsigned int, unsigned int)
8539 2.2258 lto1 hash_table<tree_scc_hasher, xcallocator>::find_slot_with_hash(tree_scc const*, unsigned int, insert_option)
7987 2.0819 lto1 adler32
7743 2.0183 lto1 streamer_read_tree_body(lto_input_block*, data_in*, tree_node*)
Can't we free the pointer map in streamer after every SCC?
You mean on read-in? We can even do without the pointer-map there at all.

We can experiment with that as a followup.
Post by Jan Hubicka
phase stream in : 244.05 (47%) usr 7.14 (25%) sys 252.74 (46%) wall 3478752 kB (93%) ggc
phase stream out : 222.50 (43%) usr 21.22 (73%) sys 243.97 (44%) wall 7160 kB ( 0%) ggc
garbage collection : 12.88 ( 2%) usr 0.00 ( 0%) sys 12.88 ( 2%) wall 0 kB ( 0%) ggc
ipa lto decl in : 177.88 (34%) usr 4.87 (17%) sys 184.28 (34%) wall 3887482 kB (103%) ggc
ipa lto decl out : 199.66 (39%) usr 11.30 (39%) sys 211.05 (38%) wall 0 kB ( 0%) ggc
ipa inlining heuristics : 29.49 ( 6%) usr 0.70 ( 2%) sys 30.21 ( 6%) wall 1353230 kB (36%) ggc
ipa lto decl merge : 26.57 ( 5%) usr 0.00 ( 0%) sys 26.58 ( 5%) wall 8269 kB ( 0%) ggc
ipa lto cgraph merge : 12.78 ( 2%) usr 0.04 ( 0%) sys 12.82 ( 2%) wall 142112 kB ( 4%) ggc
TOTAL : 517.75 29.13 548.71 3758407 kB
The longest running ltrans add another 400 seconds.
combiner : 16.16 ( 4%) usr 0.08 ( 1%) sys 16.53 ( 4%) wall 205251 kB ( 6%) ggc
integrated RA : 47.97 (12%) usr 0.21 ( 3%) sys 48.39 (12%) wall 391655 kB (12%) ggc
LRA hard reg assignment : 158.64 (39%) usr 0.02 ( 0%) sys 158.74 (38%) wall 0 kB ( 0%) ggc
TOTAL : 404.51 8.39 414.01 3215235 kB
Otherwise it looks pretty good.

Richard.
Jan Hubicka
2013-06-17 08:12:41 UTC
Permalink
Post by Richard Biener
Post by Jan Hubicka
CPU: AMD64 family10, speed 2100 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 750000
samples % app name symbol name
45047 11.7420 lto1 inflate_fast
It might be worth changing LTO section layout to include a header
that specifies whether a section is compressed or not so we can
allow mixed compressed/uncompressed sections in the LTRANS files
and avoid decompressing the function sections.
Yes, but this profile shows only decl streaming. Functions do not really show
up in the profile. I guess the only way to cut this down is to either use LZO,
which is faster on the decompression side, and/or reduce the amount of data we
stream to .o files.
Post by Richard Biener
Post by Jan Hubicka
34224 8.9209 lto1 streamer_read_uhwi(lto_input_block*)
24630 6.4201 lto1 compare_tree_sccs_1(tree_node*, tree_node*, tree_node***)
23205 6.0487 lto1 pointer_map_insert(pointer_map_t*, void const*)
20829 5.4293 lto1 unpack_value_fields(data_in*, bitpack_d*, tree_node*)
13545 3.5307 lto1 ht_lookup_with_hash(ht*, unsigned char const*, unsigned long, unsigned int, ht_lookup_option)
12841 3.3472 libc-2.11.1.so memset
11840 3.0862 lto1 htab_find_slot_with_hash
11397 2.9708 lto1 streamer_tree_cache_insert_1(streamer_tree_cache_d*, tree_node*, unsigned int, unsigned int*, bool)
11086 2.8897 lto1 lto_input_tree(lto_input_block*, data_in*)
10522 2.7427 lto1 lto_input_tree_1(lto_input_block*, data_in*, LTO_tags, unsigned int)
8853 2.3076 lto1 unify_scc(streamer_tree_cache_d*, unsigned int, unsigned int, unsigned int, unsigned int)
8539 2.2258 lto1 hash_table<tree_scc_hasher, xcallocator>::find_slot_with_hash(tree_scc const*, unsigned int, insert_option)
7987 2.0819 lto1 adler32
7743 2.0183 lto1 streamer_read_tree_body(lto_input_block*, data_in*, tree_node*)
Can't we free the pointer map in streamer after every SCC?
You mean on read-in? We can even do without the pointer-map there at all.
We can experiment with that as a followup.
I believe it was needed for one of the cleanups (to update the map), but I guess
one can easily just run the fixup on the segment of the array corresponding to the new SCC.
Post by Richard Biener
Post by Jan Hubicka
The longest running ltrans add another 400 seconds.
combiner : 16.16 ( 4%) usr 0.08 ( 1%) sys 16.53 ( 4%) wall 205251 kB ( 6%) ggc
integrated RA : 47.97 (12%) usr 0.21 ( 3%) sys 48.39 (12%) wall 391655 kB (12%) ggc
LRA hard reg assignment : 158.64 (39%) usr 0.02 ( 0%) sys 158.74 (38%) wall 0 kB ( 0%) ggc
TOTAL : 404.51 8.39 414.01 3215235 kB
Otherwise it looks pretty good.
Indeed. We are getting closer to the numbers I measured on the same machine in 2010, when Firefox
was half its current size.

Thanks for all the hard work!
Honza
Andi Kleen
2013-06-17 17:27:06 UTC
Permalink
Post by Jan Hubicka
Post by Richard Biener
Post by Jan Hubicka
CPU: AMD64 family10, speed 2100 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit mask of 0x00 (No unit mask) count 750000
samples % app name symbol name
45047 11.7420 lto1 inflate_fast
It might be worth changing LTO section layout to include a header
that specifies whether a section is compressed or not so we can
allow mixed compressed/uncompressed sections in the LTRANS files
and avoid decompressing the function sections.
Yes, but this profile shows only decl streaming. Functions do not really show
up in the profile. I guess the only way to cut this down is to either use LZO,
which is faster on the decompression side, and/or reduce the amount of data we
stream to .o files.
Or better snappy / snappy-c. It's faster at compression than LZO and
comparable at decompression.

-Andi
Richard Biener
2013-06-17 13:45:01 UTC
Permalink
Post by Richard Biener
I've managed to fix nearly all reported missed merged types for cc1.
Remaining are those we'll never be able to merge (merging would
change the SCC shape) and those that eventually end up referring
to a TYPE_STUB_DECL with a make_anon_name () IDENTIFIER_NODE.
For the latter we should find a middle-end solution as a followup
in case it really matters.
WPA statistics for stage2 cc1 are
[WPA] read 2495082 SCCs of average size 2.380088
[WPA] 5938514 tree bodies read in total
0.804380
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 429412 SCCs, 7039 collisions (0.016392)
[WPA] Merged 426111 SCCs
[WPA] Merged 3313709 tree bodies
[WPA] Merged 225079 types
[WPA] 162844 types prevailed (488124 associated trees)
[WPA] Old merging code merges an additional 22412 types of which 21492 are
in the same SCC with their prevailing variant (345831 and 323276
associated trees)
which shows there are 920 such TYPE_STUB_DECL issues and 21492
merges the old code did that destroyed SCCs.
Compared to the old code which only unified types and some selected
trees (INTEGER_CSTs), the new code can immediately ggc_free the
unified SCCs after they have been read which results in 55% of
all tree bodies input into WPA stage to be freed (rather than hoping
on secondary GC walk effects as the old code relied on), 58% of
all types are recycled.
Compile-time is at least on-par with the old code now and disk-usage
grows by a moderate 10% due to the streaming format change.
A final LTO bootstrap & regtest is running on x86_64-unknown-linux-gnu
now, and I'm building SPEC2k6 with LTO again.
I plan to commit this or a variant with the old merging code ripped
out on Monday (not decided on that yet) unless anyone objects.
Thus, more testing appreciated (also patch review, of course).
So meanwhile I've LTO bootstrapped and tested with Ada enabled
and fixed a bug uncovered by that. I've also incorporated exploiting
anonymous identifiers used by the C++ FE via make_anon_name and moved
the core piece from cp/cp-tree.h to tree.h (considered obvious).
I also stopped streaming and comparing another cache bit (fixed
Fortran FE missed mergings), TYPE_CONTAINS_PLACEHOLDER_INTERNAL.
I've fixed guality fails exposed by us streaming DEBUG_EXPR_DECLs
into the global type/decl section (and merging compatible ones ...)
by streaming them into the function sections instead.
Alongside with minor cleanups the following is what is currently
LTO bootstrapping / regtesting on x86_64-unknown-linux-gnu and
what will be committed to trunk after that completed.

Any further things (including ripping out of the old merging
code that's still there for statistics purposes) as followup.

Thanks,
Richard.

Updated stage2 cc1 WPA stats:

[WPA] read 2460328 SCCs of average size 2.399646
[WPA] 5903917 tree bodies read in total
[WPA] tree SCC table: size 524287, 218327 elements, collision ratio: 0.823034
[WPA] tree SCC max chain length 11 (size 1)
[WPA] Compared 439287 SCCs, 8372 collisions (0.019058)
[WPA] Merged 436838 SCCs
[WPA] Merged 3488703 tree bodies
[WPA] Merged 272649 types
[WPA] 115305 types prevailed (311659 associated trees)
[WPA] Old merging code merges an additional 12642 types of which 12127 are in the same SCC with their prevailing variant (193369 and 192443 associated trees)
[WPA] GIMPLE canonical type table: size 16381, 6244 elements, 115440 searches, 4503 collisions (ratio: 0.039007)
[WPA] GIMPLE canonical type hash table: size 262139, 115381 elements, 535878 searches, 446148 collisions (ratio: 0.832555)


2013-06-17 Richard Biener <***@suse.de>

* lto-streamer.h (enum LTO_tags): Add LTO_tree_scc.
(lto_input_scc): Declare.
(lto_input_tree_1): Likewise.
(struct lto_stats_d): Add num_tree_bodies_output and
num_pickle_refs_output.
* lto-streamer-in.c (lto_read_body): Use streamer_tree_cache_get_tree.
(lto_read_tree_1): Split out from ...
(lto_read_tree): ... this.
(lto_input_scc): New function.
(lto_input_tree_1): Split out from ...
(lto_input_tree): ... this. Handle LTO_tree_scc.
(lto_data_in_create): Create the streamer cache without hashes.
* lto-streamer-out.c (create_output_block): Create the streamer
cache with hashes when not doing WPA.
(lto_write_tree_1): Split out from ...
(lto_write_tree): ... this.
(get_symbol_initial_value): New function.
(lto_output_tree_1): Split out from ...
(lto_output_tree): ... this. Write trees as series of SCCs
using a DFS walk via DFS_write_tree.
(struct sccs, struct scc_entry): New types.
(next_dfs_num, sccstack, sccstate, sccstate_obstack): New globals.
(DFS_write_tree_body): New function.
(DFS_write_tree): Likewise.
(hash_tree): Likewise.
(scc_entry_compare): Likewise.
(hash_scc): Likewise.
(tree_is_indexable): DEBUG_EXPR_DECLs are local entities.
* tree-streamer-in.c (lto_input_ts_list_tree_pointers): Stream
TREE_CHAIN as regular reference.
(streamer_read_integer_cst): Remove.
(streamer_get_pickled_tree): Adjust.
* tree-streamer-out.c (streamer_write_chain): Disable streaming
of DECL_EXTERNALs in BLOCK_VARS for now.
(write_ts_list_tree_pointers): Stream TREE_CHAIN as regular
reference.
* tree-streamer.c (streamer_tree_cache_add_to_node_array):
Add hash value argument and record that if hashes are recorded
in the cache.
(streamer_tree_cache_insert_1): Adjust.
(streamer_tree_cache_insert): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_append): Adjust.
(record_common_node): Likewise.
(streamer_tree_cache_create): Add argument whether to
record hash values together with trees.
(streamer_tree_cache_delete): Adjust.
* tree-streamer.h (struct streamer_tree_cache_d): Add
vector of hashes.
(streamer_read_integer_cst): Remove.
(streamer_tree_cache_insert): Adjust.
(streamer_tree_cache_append): Likewise.
(streamer_tree_cache_insert_at): Rename to ...
(streamer_tree_cache_replace_tree): ... this and adjust.
(streamer_tree_cache_create): Add argument whether to record hashes.
(streamer_tree_cache_get): Rename to ...
(streamer_tree_cache_get_tree): ... this.
(streamer_tree_cache_get_hash): New function.
* tree.c (cache_integer_cst): New function.
* tree.h (cache_integer_cst): Declare.
(ANON_AGGRNAME_FORMAT, ANON_AGGRNAME_P): Move here from cp/cp-tree.h.
* lto-symtab.c (lto_varpool_replace_node): Only release
DECL_INITIAL of non-prevailing decls.
* varpool.c (varpool_remove_initializer): Do not release
DECL_INITIAL when we are still in CGRAPH_LTO_STREAMING.

cp/
* cp-tree.h (ANON_AGGRNAME_FORMAT, ANON_AGGRNAME_P): Move to tree.h.

lto/
* Make-lang.in (lto.o): Add $(DATA_STREAMER_H) dependency.
* lto.c: Include data-streamer.h.
(lto_read_in_decl_state): Use streamer_tree_cache_get_tree.
(gimple_type_leader_entry_s, gimple_type_leader,
gimple_lookup_type_leader): Remove.
(gtc_visit): Simplify.
(gimple_types_compatible_p): Likewise.
(gimple_register_type_1): Likewise. Merge into ...
(gimple_register_type): ... this. Keep it as legacy for
statistics purposes for now.
(fixup_integer_cst): Remove.
(LTO_FIXUP_TREE, lto_fixup_types, lto_ft_*): Simplify and
rename to ...
(MAYBE_REMEMBER_WITH_VARS, maybe_remember_with_vars,
maybe_remember_with_vars_*): ... these.
(uniquify_nodes): Remove.
(lto_fixup_prevailing_type): New function.
(struct tree_scc, struct tree_scc_hasher): New type and hasher.
(tree_scc_hash, tree_scc_hash_obstack): New globals.
(num_merged_types, num_prevailing_types, num_not_merged_types,
num_not_merged_types_in_same_scc, total_scc_size, num_sccs_read,
total_scc_size_merged, num_sccs_merged, num_scc_compares,
num_scc_compare_collisions): New global counters.
(compare_tree_sccs_1): New function.
(compare_tree_sccs): Likewise.
(unify_scc): Likewise.
(lto_read_decls): Stream in tree SCCs and unify them on the
way in. Finalize prevailing SCC tree members.
(read_cgraph_and_symbols): Do not initialize or free gimple_type_leader.
Allocate and free tree_scc_hash_obstack and tree_scc_hash, do not bother
to ggc-collect during merging.
(print_lto_report_1): Adjust for new merging code.
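The DFS_write_tree machinery this ChangeLog describes is essentially Tarjan's SCC algorithm driving the streamer: each tree gets a DFS number, its outgoing tree pointers are walked, and each completed SCC is emitted as one LTO_tree_scc block before anything that references it. A minimal model of that emission order (editorial sketch in Python; the graph dict and node names are invented for illustration, not taken from the patch):

```python
def write_sccs(graph, start):
    """Collect SCCs reachable from START in emission order: every SCC
    is 'streamed' only after all SCCs it points to (Tarjan's algorithm)."""
    index = {}                 # node -> DFS number (sccs->dfsnum in the patch)
    low = {}                   # lowest DFS number reachable (sccs->low)
    stack, on_stack = [], set()
    emitted = []               # SCCs in the order they would be streamed
    counter = [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:          # unvisited edge target: recurse
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:         # back-edge into the current SCC
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a completed SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            emitted.append(sorted(scc)) # sorted only for deterministic output

    dfs(start)
    return emitted
```

With a mutually recursive pair T and U that both reference a shared 'int' node, the 'int' singleton SCC comes out first and the {T, U} SCC second — matching the invariant that refs inside an SCC's body only point at already-streamed trees (or at members of the same SCC).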

Index: trunk/gcc/lto-streamer.h
===================================================================
*** trunk.orig/gcc/lto-streamer.h 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/lto-streamer.h 2013-06-17 10:16:33.721600739 +0200
*************** enum LTO_tags
*** 199,204 ****
--- 199,207 ----
/* EH try/catch node. */
LTO_eh_catch,

+ /* Special for global streamer. A blob of unnamed tree nodes. */
+ LTO_tree_scc,
+
/* References to indexable tree nodes. These objects are stored in
tables that are written separately from the function bodies that
reference them. This way they can be instantiated even when the
*************** struct lto_stats_d
*** 421,426 ****
--- 424,431 ----
unsigned HOST_WIDE_INT num_compressed_il_bytes;
unsigned HOST_WIDE_INT num_input_il_bytes;
unsigned HOST_WIDE_INT num_uncompressed_il_bytes;
+ unsigned HOST_WIDE_INT num_tree_bodies_output;
+ unsigned HOST_WIDE_INT num_pickle_refs_output;
};

/* Entry of LTO symtab encoder. */
*************** tree lto_input_tree_ref (struct lto_inpu
*** 854,859 ****
--- 859,868 ----
struct function *, enum LTO_tags);
void lto_tag_check_set (enum LTO_tags, int, ...);
void lto_init_eh (void);
+ hashval_t lto_input_scc (struct lto_input_block *, struct data_in *,
+ unsigned *, unsigned *);
+ tree lto_input_tree_1 (struct lto_input_block *, struct data_in *,
+ enum LTO_tags, hashval_t hash);
tree lto_input_tree (struct lto_input_block *, struct data_in *);


Index: trunk/gcc/lto-streamer-in.c
===================================================================
*** trunk.orig/gcc/lto-streamer-in.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-streamer-in.c 2013-06-13 14:35:53.250107988 +0200
*************** lto_read_body (struct lto_file_decl_data
*** 1016,1022 ****
unsigned i;
for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
if (t == NULL_TREE)
continue;

--- 1016,1022 ----
unsigned i;
for (i = len; i-- > from;)
{
! tree t = streamer_tree_cache_get_tree (cache, i);
if (t == NULL_TREE)
continue;

*************** lto_input_function_body (struct lto_file
*** 1056,1067 ****
}


/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
--- 1056,1098 ----
}


+ /* Read the physical representation of a tree node EXPR from
+ input block IB using the per-file context in DATA_IN. */
+
+ static void
+ lto_read_tree_1 (struct lto_input_block *ib, struct data_in *data_in, tree expr)
+ {
+ /* Read all the bitfield values in EXPR. Note that for LTO, we
+ only write language-independent bitfields, so no more unpacking is
+ needed. */
+ streamer_read_tree_bitfields (ib, data_in, expr);
+
+ /* Read all the pointer fields in EXPR. */
+ streamer_read_tree_body (ib, data_in, expr);
+
+ /* Read any LTO-specific data not read by the tree streamer. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ DECL_INITIAL (expr) = stream_read_tree (ib, data_in);
+
+ /* We should never try to instantiate an MD or NORMAL builtin here. */
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ gcc_assert (!streamer_handle_as_builtin_p (expr));
+
+ #ifdef LTO_STREAMER_DEBUG
+ /* Remove the mapping to EXPR's original address set by
+ streamer_alloc_tree. */
+ lto_orig_address_remove (expr);
+ #endif
+ }
+
/* Read the physical representation of a tree node with tag TAG from
input block IB using the per-file context in DATA_IN. */

static tree
lto_read_tree (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
/* Instantiate a new tree node. */
tree result = streamer_alloc_tree (ib, data_in, tag);
*************** lto_read_tree (struct lto_input_block *i
*** 1069,1103 ****
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result);

! /* Read all the bitfield values in RESULT. Note that for LTO, we
! only write language-independent bitfields, so no more unpacking is
! needed. */
! streamer_read_tree_bitfields (ib, data_in, result);

! /* Read all the pointer fields in RESULT. */
! streamer_read_tree_body (ib, data_in, result);

! /* Read any LTO-specific data not read by the tree streamer. */
! if (DECL_P (result)
! && TREE_CODE (result) != FUNCTION_DECL
! && TREE_CODE (result) != TRANSLATION_UNIT_DECL)
! DECL_INITIAL (result) = stream_read_tree (ib, data_in);

- /* We should never try to instantiate an MD or NORMAL builtin here. */
- if (TREE_CODE (result) == FUNCTION_DECL)
- gcc_assert (!streamer_handle_as_builtin_p (result));

! /* end_marker = */ streamer_read_uchar (ib);

! #ifdef LTO_STREAMER_DEBUG
! /* Remove the mapping to RESULT's original address set by
! streamer_alloc_tree. */
! lto_orig_address_remove (result);
! #endif

! return result;
}


--- 1100,1169 ----
/* Enter RESULT in the reader cache. This will make RESULT
available so that circular references in the rest of the tree
structure can be resolved in subsequent calls to stream_read_tree. */
! streamer_tree_cache_append (data_in->reader_cache, result, hash);

! lto_read_tree_1 (ib, data_in, result);

! /* end_marker = */ streamer_read_uchar (ib);

! return result;
! }


! /* Populate the reader cache with trees materialized from the SCC
! following in the IB, DATA_IN stream. */

! hashval_t
! lto_input_scc (struct lto_input_block *ib, struct data_in *data_in,
! unsigned *len, unsigned *entry_len)
! {
! /* A blob of unnamed tree nodes, fill the cache from it and
! recurse. */
! unsigned size = streamer_read_uhwi (ib);
! hashval_t scc_hash = streamer_read_uhwi (ib);
! unsigned scc_entry_len = 1;

! if (size == 1)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! lto_input_tree_1 (ib, data_in, tag, scc_hash);
! }
! else
! {
! unsigned int first = data_in->reader_cache->nodes.length ();
! tree result;
!
! scc_entry_len = streamer_read_uhwi (ib);
!
! /* Materialize SIZE trees by reading their headers. */
! for (unsigned i = 0; i < size; ++i)
! {
! enum LTO_tags tag = streamer_read_record_start (ib);
! if (tag == LTO_null
! || (tag >= LTO_field_decl_ref && tag <= LTO_global_decl_ref)
! || tag == LTO_tree_pickle_reference
! || tag == LTO_builtin_decl
! || tag == LTO_integer_cst
! || tag == LTO_tree_scc)
! gcc_unreachable ();
!
! result = streamer_alloc_tree (ib, data_in, tag);
! streamer_tree_cache_append (data_in->reader_cache, result, 0);
! }
!
! /* Read the tree bitpacks and references. */
! for (unsigned i = 0; i < size; ++i)
! {
! result = streamer_tree_cache_get_tree (data_in->reader_cache,
! first + i);
! lto_read_tree_1 (ib, data_in, result);
! /* end_marker = */ streamer_read_uchar (ib);
! }
! }
!
! *len = size;
! *entry_len = scc_entry_len;
! return scc_hash;
}


*************** lto_read_tree (struct lto_input_block *i
*** 1106,1117 ****
to previously read nodes. */

tree
! lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
{
- enum LTO_tags tag;
tree result;

- tag = streamer_read_record_start (ib);
gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
--- 1172,1182 ----
to previously read nodes. */

tree
! lto_input_tree_1 (struct lto_input_block *ib, struct data_in *data_in,
! enum LTO_tags tag, hashval_t hash)
{
tree result;

gcc_assert ((unsigned) tag < (unsigned) LTO_NUM_TAGS);

if (tag == LTO_null)
*************** lto_input_tree (struct lto_input_block *
*** 1137,1155 ****
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants we only need the type and its hi/low
! words. */
! result = streamer_read_integer_cst (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag);
}

return result;
}


/* Input toplevel asms. */

--- 1202,1240 ----
}
else if (tag == LTO_integer_cst)
{
! /* For shared integer constants in singletons we can use the existing
! tree integer constant merging code. */
! tree type = stream_read_tree (ib, data_in);
! unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
! HOST_WIDE_INT high = streamer_read_hwi (ib);
! result = build_int_cst_wide (type, low, high);
! streamer_tree_cache_append (data_in->reader_cache, result, hash);
! }
! else if (tag == LTO_tree_scc)
! {
! unsigned len, entry_len;
!
! /* Input and skip the SCC. */
! lto_input_scc (ib, data_in, &len, &entry_len);
!
! /* Recurse. */
! return lto_input_tree (ib, data_in);
}
else
{
/* Otherwise, materialize a new node from IB. */
! result = lto_read_tree (ib, data_in, tag, hash);
}

return result;
}

+ tree
+ lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
+ {
+ return lto_input_tree_1 (ib, data_in, streamer_read_record_start (ib), 0);
+ }
+

/* Input toplevel asms. */

*************** lto_data_in_create (struct lto_file_decl
*** 1220,1226 ****
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create ();

return data_in;
}
--- 1305,1311 ----
data_in->strings = strings;
data_in->strings_len = len;
data_in->globals_resolution = resolutions;
! data_in->reader_cache = streamer_tree_cache_create (false);

return data_in;
}
Index: trunk/gcc/lto-streamer-out.c
===================================================================
*** trunk.orig/gcc/lto-streamer-out.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-streamer-out.c 2013-06-17 14:12:42.363187576 +0200
*************** create_output_block (enum lto_section_ty
*** 71,77 ****
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create ();

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
--- 71,77 ----
ob->decl_state = lto_get_out_decl_state ();
ob->main_stream = XCNEW (struct lto_output_stream);
ob->string_stream = XCNEW (struct lto_output_stream);
! ob->writer_cache = streamer_tree_cache_create (!flag_wpa);

if (section_type == LTO_section_function_body)
ob->cfg_stream = XCNEW (struct lto_output_stream);
*************** tree_is_indexable (tree t)
*** 129,134 ****
--- 129,136 ----
else if (TREE_CODE (t) == VAR_DECL && decl_function_context (t)
&& !TREE_STATIC (t))
return false;
+ else if (TREE_CODE (t) == DEBUG_EXPR_DECL)
+ return false;
/* Variably modified types need to be streamed alongside function
bodies because they can refer to local entities. Together with
them we have to localize their members as well.
*************** lto_is_streamable (tree expr)
*** 293,319 ****
}


/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
{
- struct bitpack_d bp;
-
- if (!lto_is_streamable (expr))
- internal_error ("tree code %qs is not supported in LTO streams",
- tree_code_name[TREE_CODE (expr)]);
-
- /* Write the header, containing everything needed to materialize
- EXPR on the reading side. */
- streamer_write_tree_header (ob, expr);
-
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

--- 295,342 ----
}


+ /* For EXPR lookup and return what we want to stream to OB as DECL_INITIAL. */
+
+ static tree
+ get_symbol_initial_value (struct output_block *ob, tree expr)
+ {
+ gcc_checking_assert (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL);
+
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = DECL_INITIAL (expr);
+ if (TREE_CODE (expr) == VAR_DECL
+ && (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
+ && !DECL_IN_CONSTANT_POOL (expr)
+ && initial)
+ {
+ lto_symtab_encoder_t encoder;
+ struct varpool_node *vnode;
+
+ encoder = ob->decl_state->symtab_node_encoder;
+ vnode = varpool_get_node (expr);
+ if (!vnode
+ || !lto_symtab_encoder_encode_initializer_p (encoder,
+ vnode))
+ initial = error_mark_node;
+ }
+
+ return initial;
+ }
+
+
/* Write a physical representation of tree node EXPR to output block
OB. If REF_P is true, the leaves of EXPR are emitted as references
via lto_output_tree_ref. IX is the index into the streamer cache
where EXPR is stored. */

static void
! lto_write_tree_1 (struct output_block *ob, tree expr, bool ref_p)
{
/* Pack all the non-pointer fields in EXPR into a bitpack and write
the resulting bitpack. */
! bitpack_d bp = bitpack_create (ob->main_stream);
streamer_pack_tree_bitfields (ob, &bp, expr);
streamer_write_bitpack (&bp);

*************** lto_write_tree (struct output_block *ob,
*** 326,355 ****
&& TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
{
/* Handle DECL_INITIAL for symbols. */
! tree initial = DECL_INITIAL (expr);
! if (TREE_CODE (expr) == VAR_DECL
! && (TREE_STATIC (expr) || DECL_EXTERNAL (expr))
! && !DECL_IN_CONSTANT_POOL (expr)
! && initial)
! {
! lto_symtab_encoder_t encoder;
! struct varpool_node *vnode;
!
! encoder = ob->decl_state->symtab_node_encoder;
! vnode = varpool_get_node (expr);
! if (!vnode
! || !lto_symtab_encoder_encode_initializer_p (encoder,
! vnode))
! initial = error_mark_node;
! }
!
stream_write_tree (ob, initial, ref_p);
}

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}


/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
--- 349,1287 ----
&& TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
{
/* Handle DECL_INITIAL for symbols. */
! tree initial = get_symbol_initial_value (ob, expr);
stream_write_tree (ob, initial, ref_p);
}
+ }
+
+ /* Write a physical representation of tree node EXPR to output block
+ OB. If REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. IX is the index into the streamer cache
+ where EXPR is stored. */
+
+ static void
+ lto_write_tree (struct output_block *ob, tree expr, bool ref_p)
+ {
+ if (!lto_is_streamable (expr))
+ internal_error ("tree code %qs is not supported in LTO streams",
+ tree_code_name[TREE_CODE (expr)]);
+
+ /* Write the header, containing everything needed to materialize
+ EXPR on the reading side. */
+ streamer_write_tree_header (ob, expr);
+
+ lto_write_tree_1 (ob, expr, ref_p);

/* Mark the end of EXPR. */
streamer_write_zero (ob);
}

+ /* Emit the physical representation of tree node EXPR to output block
+ OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
+ via lto_output_tree_ref. REF_P is used for streaming siblings of EXPR. */
+
+ static void
+ lto_output_tree_1 (struct output_block *ob, tree expr, hashval_t hash,
+ bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+
+ gcc_checking_assert (expr != NULL_TREE
+ && !(this_ref_p && tree_is_indexable (expr)));
+
+ bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
+ expr, hash, &ix);
+ gcc_assert (!exists_p);
+ if (streamer_handle_as_builtin_p (expr))
+ {
+ /* MD and NORMAL builtins do not need to be written out
+ completely as they are always instantiated by the
+ compiler on startup. The only builtins that need to
+ be written out are BUILT_IN_FRONTEND. For all other
+ builtins, we simply write the class and code. */
+ streamer_write_builtin (ob, expr);
+ }
+ else if (TREE_CODE (expr) == INTEGER_CST
+ && !TREE_OVERFLOW (expr))
+ {
+ /* Shared INTEGER_CST nodes are special because they need their
+ original type to be materialized by the reader (to implement
+ TYPE_CACHED_VALUES). */
+ streamer_write_integer_cst (ob, expr, ref_p);
+ }
+ else
+ {
+ /* This is the first time we see EXPR, write its fields
+ to OB. */
+ lto_write_tree (ob, expr, ref_p);
+ }
+ }
+
+ struct sccs
+ {
+ unsigned int dfsnum;
+ unsigned int low;
+ };
+
+ struct scc_entry
+ {
+ tree t;
+ hashval_t hash;
+ };
+
+ static unsigned int next_dfs_num;
+ static vec<scc_entry> sccstack;
+ static struct pointer_map_t *sccstate;
+ static struct obstack sccstate_obstack;
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p);
+
+ /* Handle the tree EXPR in the DFS walk with SCC state EXPR_STATE and
+ DFS recurse for all tree edges originating from it. */
+
+ static void
+ DFS_write_tree_body (struct output_block *ob,
+ tree expr, sccs *expr_state, bool ref_p)
+ {
+ #define DFS_follow_tree_edge(DEST) \
+ DFS_write_tree (ob, expr_state, DEST, ref_p, ref_p)
+
+ enum tree_code code;
+
+ code = TREE_CODE (expr);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (TREE_CODE (expr) != IDENTIFIER_NODE)
+ DFS_follow_tree_edge (TREE_TYPE (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ {
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (expr); ++i)
+ DFS_follow_tree_edge (VECTOR_CST_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ DFS_follow_tree_edge (TREE_REALPART (expr));
+ DFS_follow_tree_edge (TREE_IMAGPART (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ /* Drop names that were created for anonymous entities. */
+ if (DECL_NAME (expr)
+ && TREE_CODE (DECL_NAME (expr)) == IDENTIFIER_NODE
+ && ANON_AGGRNAME_P (DECL_NAME (expr)))
+ ;
+ else
+ DFS_follow_tree_edge (DECL_NAME (expr));
+ DFS_follow_tree_edge (DECL_CONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ DFS_follow_tree_edge (DECL_SIZE (expr));
+ DFS_follow_tree_edge (DECL_SIZE_UNIT (expr));
+
+ /* Note, DECL_INITIAL is not handled here. Since DECL_INITIAL needs
+ special handling in LTO, it must be handled by streamer hooks. */
+
+ DFS_follow_tree_edge (DECL_ATTRIBUTES (expr));
+
+ /* Do not follow DECL_ABSTRACT_ORIGIN. We cannot handle debug information
+ for early inlining so drop it on the floor instead of ICEing in
+ dwarf2out.c. */
+
+ if ((TREE_CODE (expr) == VAR_DECL
+ || TREE_CODE (expr) == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (expr))
+ DFS_follow_tree_edge (DECL_VALUE_EXPR (expr));
+ if (TREE_CODE (expr) == VAR_DECL)
+ DFS_follow_tree_edge (DECL_DEBUG_EXPR (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == FUNCTION_DECL)
+ {
+ for (tree t = DECL_ARGUMENTS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (DECL_RESULT (expr));
+ }
+ else if (TREE_CODE (expr) == TYPE_DECL)
+ DFS_follow_tree_edge (DECL_ORIGINAL_TYPE (expr));
+ DFS_follow_tree_edge (DECL_VINDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ /* Make sure we don't inadvertently set the assembler name. */
+ if (DECL_ASSEMBLER_NAME_SET_P (expr))
+ DFS_follow_tree_edge (DECL_ASSEMBLER_NAME (expr));
+ DFS_follow_tree_edge (DECL_SECTION_NAME (expr));
+ DFS_follow_tree_edge (DECL_COMDAT_GROUP (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FIELD_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_TYPE (expr));
+ DFS_follow_tree_edge (DECL_BIT_FIELD_REPRESENTATIVE (expr));
+ DFS_follow_tree_edge (DECL_FIELD_BIT_OFFSET (expr));
+ DFS_follow_tree_edge (DECL_FCONTEXT (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ DFS_follow_tree_edge (DECL_FUNCTION_PERSONALITY (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_TARGET (expr));
+ DFS_follow_tree_edge (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ DFS_follow_tree_edge (TYPE_SIZE (expr));
+ DFS_follow_tree_edge (TYPE_SIZE_UNIT (expr));
+ DFS_follow_tree_edge (TYPE_ATTRIBUTES (expr));
+ DFS_follow_tree_edge (TYPE_NAME (expr));
+ /* Do not follow TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
+ reconstructed during fixup. */
+ /* Do not follow TYPE_NEXT_VARIANT, we reconstruct the variant lists
+ during fixup. */
+ DFS_follow_tree_edge (TYPE_MAIN_VARIANT (expr));
+ DFS_follow_tree_edge (TYPE_CONTEXT (expr));
+ /* TYPE_CANONICAL is re-computed during type merging, so no need
+ to follow it here. */
+ DFS_follow_tree_edge (TYPE_STUB_DECL (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (TREE_CODE (expr) == ENUMERAL_TYPE)
+ DFS_follow_tree_edge (TYPE_VALUES (expr));
+ else if (TREE_CODE (expr) == ARRAY_TYPE)
+ DFS_follow_tree_edge (TYPE_DOMAIN (expr));
+ else if (RECORD_OR_UNION_TYPE_P (expr))
+ for (tree t = TYPE_FIELDS (expr); t; t = TREE_CHAIN (t))
+ DFS_follow_tree_edge (t);
+ else if (TREE_CODE (expr) == FUNCTION_TYPE
+ || TREE_CODE (expr) == METHOD_TYPE)
+ DFS_follow_tree_edge (TYPE_ARG_TYPES (expr));
+
+ if (!POINTER_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_MINVAL (expr));
+ DFS_follow_tree_edge (TYPE_MAXVAL (expr));
+ if (RECORD_OR_UNION_TYPE_P (expr))
+ DFS_follow_tree_edge (TYPE_BINFO (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ DFS_follow_tree_edge (TREE_PURPOSE (expr));
+ DFS_follow_tree_edge (TREE_VALUE (expr));
+ DFS_follow_tree_edge (TREE_CHAIN (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ {
+ for (int i = 0; i < TREE_VEC_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_VEC_ELT (expr, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ for (int i = 0; i < TREE_OPERAND_LENGTH (expr); i++)
+ DFS_follow_tree_edge (TREE_OPERAND (expr, i));
+ DFS_follow_tree_edge (TREE_BLOCK (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
+ {
+ for (tree t = BLOCK_VARS (expr); t; t = TREE_CHAIN (t))
+ /* ??? FIXME. See also streamer_write_chain. */
+ if (!(VAR_OR_FUNCTION_DECL_P (t)
+ && DECL_EXTERNAL (t)))
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BLOCK_SUPERCONTEXT (expr));
+
+ /* Follow BLOCK_ABSTRACT_ORIGIN for the limited cases we can
+ handle - those that represent inlined function scopes.
+ For the rest, drop them on the floor instead of ICEing
+ in dwarf2out.c. */
+ if (inlined_function_outer_scope_p (expr))
+ {
+ tree ultimate_origin = block_ultimate_origin (expr);
+ DFS_follow_tree_edge (ultimate_origin);
+ }
+ /* Do not follow BLOCK_NONLOCALIZED_VARS. We cannot handle debug
+ information for early inlined BLOCKs so drop it on the floor instead
+ of ICEing in dwarf2out.c. */
+
+ /* BLOCK_FRAGMENT_ORIGIN and BLOCK_FRAGMENT_CHAIN is not live at LTO
+ streaming time. */
+
+ /* Do not output BLOCK_SUBBLOCKS. Instead, on streaming-in, this
+ list is re-constructed from BLOCK_SUPERCONTEXT. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree t;
+
+ /* Note that the number of BINFO slots has already been emitted in
+ EXPR's header (see streamer_write_tree_header) because this length
+ is needed to build the empty BINFO node on the reader side. */
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (expr), i, t)
+ DFS_follow_tree_edge (t);
+ DFS_follow_tree_edge (BINFO_OFFSET (expr));
+ DFS_follow_tree_edge (BINFO_VTABLE (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_FIELD (expr));
+
+ /* The number of BINFO_BASE_ACCESSES has already been emitted in
+ EXPR's bitfield section. */
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (expr), i, t)
+ DFS_follow_tree_edge (t);
+
+ DFS_follow_tree_edge (BINFO_INHERITANCE_CHAIN (expr));
+ DFS_follow_tree_edge (BINFO_SUBVTT_INDEX (expr));
+ DFS_follow_tree_edge (BINFO_VPTR_INDEX (expr));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (expr), i, index, value)
+ {
+ DFS_follow_tree_edge (index);
+ DFS_follow_tree_edge (value);
+ }
+ }
+
+ #undef DFS_follow_tree_edge
+ }
+
+ /* Return a hash value for the tree T. */
+
+ static hashval_t
+ hash_tree (struct streamer_tree_cache_d *cache, tree t)
+ {
+ #define visit(SIBLING) \
+ do { \
+ unsigned ix; \
+ if (SIBLING && streamer_tree_cache_lookup (cache, SIBLING, &ix)) \
+ v = iterative_hash_hashval_t (streamer_tree_cache_get_hash (cache, ix), v); \
+ } while (0)
+
+ /* Hash TS_BASE. */
+ enum tree_code code = TREE_CODE (t);
+ hashval_t v = iterative_hash_host_wide_int (code, 0);
+ if (!TYPE_P (t))
+ {
+ v = iterative_hash_host_wide_int (TREE_SIDE_EFFECTS (t)
+ | (TREE_CONSTANT (t) << 1)
+ | (TREE_READONLY (t) << 2)
+ | (TREE_PUBLIC (t) << 3), v);
+ }
+ v = iterative_hash_host_wide_int (TREE_ADDRESSABLE (t)
+ | (TREE_THIS_VOLATILE (t) << 1), v);
+ if (DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_UNSIGNED (t), v);
+ else if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_UNSIGNED (t), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_ARTIFICIAL (t), v);
+ else
+ v = iterative_hash_host_wide_int (TREE_NO_WARNING (t), v);
+ v = iterative_hash_host_wide_int (TREE_NOTHROW (t)
+ | (TREE_STATIC (t) << 1)
+ | (TREE_PROTECTED (t) << 2)
+ | (TREE_DEPRECATED (t) << 3), v);
+ if (code != TREE_BINFO)
+ v = iterative_hash_host_wide_int (TREE_PRIVATE (t), v);
+ if (TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_SATURATING (t)
+ | (TYPE_ADDR_SPACE (t) << 1), v);
+ else if (code == SSA_NAME)
+ v = iterative_hash_host_wide_int (SSA_NAME_IS_DEFAULT_DEF (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
+ {
+ v = iterative_hash_host_wide_int (TREE_INT_CST_LOW (t), v);
+ v = iterative_hash_host_wide_int (TREE_INT_CST_HIGH (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
+ {
+ REAL_VALUE_TYPE r = TREE_REAL_CST (t);
+ v = iterative_hash_host_wide_int (r.cl, v);
+ v = iterative_hash_host_wide_int (r.decimal
+ | (r.sign << 1)
+ | (r.signalling << 2)
+ | (r.canonical << 3), v);
+ v = iterative_hash_host_wide_int (r.uexp, v);
+ for (unsigned i = 0; i < SIGSZ; ++i)
+ v = iterative_hash_host_wide_int (r.sig[i], v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
+ {
+ FIXED_VALUE_TYPE f = TREE_FIXED_CST (t);
+ v = iterative_hash_host_wide_int (f.mode, v);
+ v = iterative_hash_host_wide_int (f.data.low, v);
+ v = iterative_hash_host_wide_int (f.data.high, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ v = iterative_hash_host_wide_int (DECL_MODE (t), v);
+ v = iterative_hash_host_wide_int (DECL_NONLOCAL (t)
+ | (DECL_VIRTUAL_P (t) << 1)
+ | (DECL_IGNORED_P (t) << 2)
+ | (DECL_ABSTRACT (t) << 3)
+ | (DECL_ARTIFICIAL (t) << 4)
+ | (DECL_USER_ALIGN (t) << 5)
+ | (DECL_PRESERVE_P (t) << 6)
+ | (DECL_EXTERNAL (t) << 7)
+ | (DECL_GIMPLE_REG_P (t) << 8), v);
+ v = iterative_hash_host_wide_int (DECL_ALIGN (t), v);
+ if (code == LABEL_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_ERROR_ISSUED (t), v);
+ v = iterative_hash_host_wide_int (EH_LANDING_PAD_NR (t), v);
+ v = iterative_hash_host_wide_int (LABEL_DECL_UID (t), v);
+ }
+ else if (code == FIELD_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_PACKED (t)
+ | (DECL_NONADDRESSABLE_P (t) << 1),
+ v);
+ v = iterative_hash_host_wide_int (DECL_OFFSET_ALIGN (t), v);
+ }
+ else if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HAS_DEBUG_EXPR_P (t)
+ | (DECL_NONLOCAL_FRAME (t) << 1),
+ v);
+ }
+ if (code == RESULT_DECL
+ || code == PARM_DECL
+ || code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_BY_REFERENCE (t), v);
+ if (code == VAR_DECL
+ || code == PARM_DECL)
+ v = iterative_hash_host_wide_int (DECL_HAS_VALUE_EXPR_P (t), v);
+ }
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
+ v = iterative_hash_host_wide_int (DECL_REGISTER (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ v = iterative_hash_host_wide_int (DECL_DEFER_OUTPUT (t)
+ | (DECL_COMMON (t) << 1)
+ | (DECL_DLLIMPORT_P (t) << 2)
+ | (DECL_WEAK (t) << 3)
+ | (DECL_SEEN_IN_BIND_EXPR_P (t) << 4)
+ | (DECL_COMDAT (t) << 5)
+ | (DECL_VISIBILITY_SPECIFIED (t) << 6),
+ v);
+ v = iterative_hash_host_wide_int (DECL_VISIBILITY (t), v);
+ if (code == VAR_DECL)
+ {
+ v = iterative_hash_host_wide_int (DECL_HARD_REGISTER (t)
+ | (DECL_IN_TEXT_SECTION (t) << 1)
+ | (DECL_IN_CONSTANT_POOL (t) << 2),
+ v);
+ v = iterative_hash_host_wide_int (DECL_TLS_MODEL (t), v);
+ }
+ if (VAR_OR_FUNCTION_DECL_P (t))
+ v = iterative_hash_host_wide_int (DECL_INIT_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ v = iterative_hash_host_wide_int (DECL_BUILT_IN_CLASS (t), v);
+ v = iterative_hash_host_wide_int (DECL_STATIC_CONSTRUCTOR (t)
+ | (DECL_STATIC_DESTRUCTOR (t) << 1)
+ | (DECL_UNINLINABLE (t) << 2)
+ | (DECL_POSSIBLY_INLINED (t) << 3)
+ | (DECL_IS_NOVOPS (t) << 4)
+ | (DECL_IS_RETURNS_TWICE (t) << 5)
+ | (DECL_IS_MALLOC (t) << 6)
+ | (DECL_IS_OPERATOR_NEW (t) << 7)
+ | (DECL_DECLARED_INLINE_P (t) << 8)
+ | (DECL_STATIC_CHAIN (t) << 9)
+ | (DECL_NO_INLINE_WARNING_P (t) << 10)
+ | (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT (t) << 11)
+ | (DECL_NO_LIMIT_STACK (t) << 12)
+ | (DECL_DISREGARD_INLINE_LIMITS (t) << 13)
+ | (DECL_PURE_P (t) << 14)
+ | (DECL_LOOPING_CONST_OR_PURE_P (t) << 15), v);
+ if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
+ v = iterative_hash_host_wide_int (DECL_FUNCTION_CODE (t), v);
+ if (DECL_STATIC_DESTRUCTOR (t))
+ v = iterative_hash_host_wide_int (DECL_FINI_PRIORITY (t), v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ v = iterative_hash_host_wide_int (TYPE_MODE (t), v);
+ v = iterative_hash_host_wide_int (TYPE_STRING_FLAG (t)
+ | (TYPE_NO_FORCE_BLK (t) << 1)
+ | (TYPE_NEEDS_CONSTRUCTING (t) << 2)
+ | (TYPE_PACKED (t) << 3)
+ | (TYPE_RESTRICT (t) << 4)
+ | (TYPE_USER_ALIGN (t) << 5)
+ | (TYPE_READONLY (t) << 6), v);
+ if (RECORD_OR_UNION_TYPE_P (t))
+ v = iterative_hash_host_wide_int (TYPE_TRANSPARENT_AGGR (t), v);
+ else if (code == ARRAY_TYPE)
+ v = iterative_hash_host_wide_int (TYPE_NONALIASED_COMPONENT (t), v);
+ v = iterative_hash_host_wide_int (TYPE_PRECISION (t), v);
+ v = iterative_hash_host_wide_int (TYPE_ALIGN (t), v);
+ v = iterative_hash_host_wide_int ((TYPE_ALIAS_SET (t) == 0
+ || (!in_lto_p
+ && get_alias_set (t) == 0))
+ ? 0 : -1, v);
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
+ v = iterative_hash (TRANSLATION_UNIT_LANGUAGE (t),
+ strlen (TRANSLATION_UNIT_LANGUAGE (t)), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
+ v = iterative_hash (t, sizeof (struct cl_target_option), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
+ v = iterative_hash (t, sizeof (struct cl_optimization), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
+ v = iterative_hash_host_wide_int (IDENTIFIER_HASH_VALUE (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_STRING))
+ v = iterative_hash (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t), v);
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
+ {
+ if (POINTER_TYPE_P (t))
+ {
+ /* For pointers factor in the pointed-to type recursively as
+ we cannot recurse through only pointers.
+ ??? We can generalize this by keeping track of the
+ in-SCC edges for each tree (or arbitrarily the first
+ such edge) and hashing that in during a second stage
+ (instead of the quadratic mixing of the SCC we do now). */
+ hashval_t x;
+ unsigned ix;
+ if (streamer_tree_cache_lookup (cache, TREE_TYPE (t), &ix))
+ x = streamer_tree_cache_get_hash (cache, ix);
+ else
+ x = hash_tree (cache, TREE_TYPE (t));
+ v = iterative_hash_hashval_t (x, v);
+ }
+ else if (code != IDENTIFIER_NODE)
+ visit (TREE_TYPE (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
+ for (unsigned i = 0; i < VECTOR_CST_NELTS (t); ++i)
+ visit (VECTOR_CST_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
+ {
+ visit (TREE_REALPART (t));
+ visit (TREE_IMAGPART (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
+ {
+ /* Drop names that were created for anonymous entities. */
+ if (DECL_NAME (t)
+ && TREE_CODE (DECL_NAME (t)) == IDENTIFIER_NODE
+ && ANON_AGGRNAME_P (DECL_NAME (t)))
+ ;
+ else
+ visit (DECL_NAME (t));
+ if (DECL_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (DECL_CONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
+ {
+ visit (DECL_SIZE (t));
+ visit (DECL_SIZE_UNIT (t));
+ visit (DECL_ATTRIBUTES (t));
+ if ((code == VAR_DECL
+ || code == PARM_DECL)
+ && DECL_HAS_VALUE_EXPR_P (t))
+ visit (DECL_VALUE_EXPR (t));
+ if (code == VAR_DECL
+ && DECL_HAS_DEBUG_EXPR_P (t))
+ visit (DECL_DEBUG_EXPR (t));
+ /* ??? Hash DECL_INITIAL as streamed. Needs the output-block to
+ be able to call get_symbol_initial_value. */
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
+ {
+ if (code == FUNCTION_DECL)
+ {
+ for (tree a = DECL_ARGUMENTS (t); a; a = DECL_CHAIN (a))
+ visit (a);
+ visit (DECL_RESULT (t));
+ }
+ else if (code == TYPE_DECL)
+ visit (DECL_ORIGINAL_TYPE (t));
+ visit (DECL_VINDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
+ {
+ if (DECL_ASSEMBLER_NAME_SET_P (t))
+ visit (DECL_ASSEMBLER_NAME (t));
+ visit (DECL_SECTION_NAME (t));
+ visit (DECL_COMDAT_GROUP (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
+ {
+ visit (DECL_FIELD_OFFSET (t));
+ visit (DECL_BIT_FIELD_TYPE (t));
+ visit (DECL_BIT_FIELD_REPRESENTATIVE (t));
+ visit (DECL_FIELD_BIT_OFFSET (t));
+ visit (DECL_FCONTEXT (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
+ {
+ visit (DECL_FUNCTION_PERSONALITY (t));
+ visit (DECL_FUNCTION_SPECIFIC_TARGET (t));
+ visit (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
+ {
+ visit (TYPE_SIZE (t));
+ visit (TYPE_SIZE_UNIT (t));
+ visit (TYPE_ATTRIBUTES (t));
+ visit (TYPE_NAME (t));
+ visit (TYPE_MAIN_VARIANT (t));
+ if (TYPE_FILE_SCOPE_P (t))
+ ;
+ else
+ visit (TYPE_CONTEXT (t));
+ visit (TYPE_STUB_DECL (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
+ {
+ if (code == ENUMERAL_TYPE)
+ visit (TYPE_VALUES (t));
+ else if (code == ARRAY_TYPE)
+ visit (TYPE_DOMAIN (t));
+ else if (RECORD_OR_UNION_TYPE_P (t))
+ for (tree f = TYPE_FIELDS (t); f; f = TREE_CHAIN (f))
+ visit (f);
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ visit (TYPE_ARG_TYPES (t));
+ if (!POINTER_TYPE_P (t))
+ visit (TYPE_MINVAL (t));
+ visit (TYPE_MAXVAL (t));
+ if (RECORD_OR_UNION_TYPE_P (t))
+ visit (TYPE_BINFO (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_LIST))
+ {
+ visit (TREE_PURPOSE (t));
+ visit (TREE_VALUE (t));
+ visit (TREE_CHAIN (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_VEC))
+ for (int i = 0; i < TREE_VEC_LENGTH (t); ++i)
+ visit (TREE_VEC_ELT (t, i));
+
+ if (CODE_CONTAINS_STRUCT (code, TS_EXP))
+ {
+ v = iterative_hash_host_wide_int (TREE_OPERAND_LENGTH (t), v);
+ for (int i = 0; i < TREE_OPERAND_LENGTH (t); ++i)
+ visit (TREE_OPERAND (t, i));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
+ {
+ unsigned i;
+ tree b;
+ FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t), i, b)
+ visit (b);
+ visit (BINFO_OFFSET (t));
+ visit (BINFO_VTABLE (t));
+ visit (BINFO_VPTR_FIELD (t));
+ FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t), i, b)
+ visit (b);
+ visit (BINFO_INHERITANCE_CHAIN (t));
+ visit (BINFO_SUBVTT_INDEX (t));
+ visit (BINFO_VPTR_INDEX (t));
+ }
+
+ if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
+ {
+ unsigned i;
+ tree index, value;
+ v = iterative_hash_host_wide_int (CONSTRUCTOR_NELTS (t), v);
+ FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t), i, index, value)
+ {
+ visit (index);
+ visit (value);
+ }
+ }
+
+ return v;
+
+ #undef visit
+ }
+
+ /* Compare two SCC entries by their hash value for qsorting them. */
+
+ static int
+ scc_entry_compare (const void *p1_, const void *p2_)
+ {
+ const scc_entry *p1 = (const scc_entry *) p1_;
+ const scc_entry *p2 = (const scc_entry *) p2_;
+ if (p1->hash < p2->hash)
+ return -1;
+ else if (p1->hash > p2->hash)
+ return 1;
+ return 0;
+ }
+
+ /* Return a hash value for the SCC on the SCC stack from FIRST with
+ size SIZE. */
+
+ static hashval_t
+ hash_scc (struct streamer_tree_cache_d *cache, unsigned first, unsigned size)
+ {
+ /* Compute hash values for the SCC members. */
+ for (unsigned i = 0; i < size; ++i)
+ sccstack[first+i].hash = hash_tree (cache, sccstack[first+i].t);
+
+ if (size == 1)
+ return sccstack[first].hash;
+
+ /* Sort the (tree, hash) pairs of the SCC so that the hash value
+ we compute when mixing in all members becomes independent of
+ the order in which we visited the SCC. Disregard hashes equal
+ to the hash of the tree we mix into, because we cannot
+ guarantee a stable sort for those across different TUs. */
+ qsort (&sccstack[first], size, sizeof (scc_entry), scc_entry_compare);
+ hashval_t *tem = XALLOCAVEC (hashval_t, size);
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ hashval_t orig_hash = hash;
+ unsigned j;
+ /* Skip same hashes. */
+ for (j = i + 1;
+ j < size && sccstack[first+j].hash == orig_hash; ++j)
+ ;
+ for (; j < size; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ for (j = 0; sccstack[first+j].hash != orig_hash; ++j)
+ hash = iterative_hash_hashval_t (sccstack[first+j].hash, hash);
+ tem[i] = hash;
+ }
+ hashval_t scc_hash = 0;
+ for (unsigned i = 0; i < size; ++i)
+ {
+ sccstack[first+i].hash = tem[i];
+ scc_hash = iterative_hash_hashval_t (tem[i], scc_hash);
+ }
+ return scc_hash;
+ }
+
+ /* DFS walk EXPR and stream SCCs of tree bodies if they are not
+ already in the streamer cache. Main routine called for
+ each visit of EXPR. */
+
+ static void
+ DFS_write_tree (struct output_block *ob, sccs *from_state,
+ tree expr, bool ref_p, bool this_ref_p)
+ {
+ unsigned ix;
+ sccs **slot;
+
+ /* Handle special cases. */
+ if (expr == NULL_TREE)
+ return;
+
+ /* Do not DFS walk into indexable trees. */
+ if (this_ref_p && tree_is_indexable (expr))
+ return;
+
+ /* Check if we already streamed EXPR. */
+ if (streamer_tree_cache_lookup (ob->writer_cache, expr, &ix))
+ return;
+
+ slot = (sccs **)pointer_map_insert (sccstate, expr);
+ sccs *cstate = *slot;
+ if (!cstate)
+ {
+ scc_entry e = { expr, 0 };
+ /* Not yet visited. DFS recurse and push it onto the stack. */
+ *slot = cstate = XOBNEW (&sccstate_obstack, struct sccs);
+ sccstack.safe_push (e);
+ cstate->dfsnum = next_dfs_num++;
+ cstate->low = cstate->dfsnum;
+
+ if (streamer_handle_as_builtin_p (expr))
+ ;
+ else if (TREE_CODE (expr) == INTEGER_CST
+ && !TREE_OVERFLOW (expr))
+ DFS_write_tree (ob, cstate, TREE_TYPE (expr), ref_p, ref_p);
+ else
+ {
+ DFS_write_tree_body (ob, expr, cstate, ref_p);
+
+ /* Walk any LTO-specific edges. */
+ if (DECL_P (expr)
+ && TREE_CODE (expr) != FUNCTION_DECL
+ && TREE_CODE (expr) != TRANSLATION_UNIT_DECL)
+ {
+ /* Handle DECL_INITIAL for symbols. */
+ tree initial = get_symbol_initial_value (ob, expr);
+ DFS_write_tree (ob, cstate, initial, ref_p, ref_p);
+ }
+ }
+
+ /* See if we found an SCC. */
+ if (cstate->low == cstate->dfsnum)
+ {
+ unsigned first, size;
+ tree x;
+
+ /* Pop the SCC and compute its size. */
+ first = sccstack.length ();
+ do
+ {
+ x = sccstack[--first].t;
+ }
+ while (x != expr);
+ size = sccstack.length () - first;
+
+ /* No need to compute hashes for LTRANS units; we don't perform
+ any merging there. */
+ hashval_t scc_hash = 0;
+ unsigned scc_entry_len = 0;
+ if (!flag_wpa)
+ {
+ scc_hash = hash_scc (ob->writer_cache, first, size);
+
+ /* Put the entries with the fewest collisions first. */
+ unsigned entry_start = 0;
+ scc_entry_len = size + 1;
+ for (unsigned i = 0; i < size;)
+ {
+ unsigned from = i;
+ for (i = i + 1; i < size
+ && (sccstack[first + i].hash
+ == sccstack[first + from].hash); ++i)
+ ;
+ if (i - from < scc_entry_len)
+ {
+ scc_entry_len = i - from;
+ entry_start = from;
+ }
+ }
+ for (unsigned i = 0; i < scc_entry_len; ++i)
+ {
+ scc_entry tem = sccstack[first + i];
+ sccstack[first + i] = sccstack[first + entry_start + i];
+ sccstack[first + entry_start + i] = tem;
+ }
+ }
+
+ /* Write LTO_tree_scc. */
+ streamer_write_record_start (ob, LTO_tree_scc);
+ streamer_write_uhwi (ob, size);
+ streamer_write_uhwi (ob, scc_hash);
+
+ /* Write size-1 SCCs without wrapping them inside SCC bundles.
+ All INTEGER_CSTs need to be handled this way as we need
+ their type to materialize them. Builtins are handled
+ this way as well.
+ ??? We still wrap these in LTO_tree_scc so that on the
+ input side we can properly identify the tree we want
+ to ultimately return. */
+ size_t old_len = ob->writer_cache->nodes.length ();
+ if (size == 1)
+ lto_output_tree_1 (ob, expr, scc_hash, ref_p, this_ref_p);
+ else
+ {
+ /* Write the size of the SCC entry candidates. */
+ streamer_write_uhwi (ob, scc_entry_len);
+
+ /* Write all headers and populate the streamer cache. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ hashval_t hash = sccstack[first+i].hash;
+ tree t = sccstack[first+i].t;
+ bool exists_p = streamer_tree_cache_insert (ob->writer_cache,
+ t, hash, &ix);
+ gcc_assert (!exists_p);
+
+ if (!lto_is_streamable (t))
+ internal_error ("tree code %qs is not supported "
+ "in LTO streams",
+ tree_code_name[TREE_CODE (t)]);
+
+ gcc_checking_assert (!streamer_handle_as_builtin_p (t));
+
+ /* Write the header, containing everything needed to
+ materialize EXPR on the reading side. */
+ streamer_write_tree_header (ob, t);
+ }
+
+ /* Write the bitpacks and tree references. */
+ for (unsigned i = 0; i < size; ++i)
+ {
+ lto_write_tree_1 (ob, sccstack[first+i].t, ref_p);
+
+ /* Mark the end of the tree. */
+ streamer_write_zero (ob);
+ }
+ }
+ gcc_assert (old_len + size == ob->writer_cache->nodes.length ());
+
+ /* Finally truncate the vector. */
+ sccstack.truncate (first);
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ return;
+ }
+
+ if (from_state)
+ from_state->low = MIN (from_state->low, cstate->low);
+ }
+ gcc_checking_assert (from_state);
+ if (cstate->dfsnum < from_state->dfsnum)
+ from_state->low = MIN (cstate->dfsnum, from_state->low);
+ }
+

/* Emit the physical representation of tree node EXPR to output block
OB. If THIS_REF_P is true, the leaves of EXPR are emitted as references
*************** lto_output_tree (struct output_block *ob
*** 374,389 ****
return;
}

! /* Shared INTEGER_CST nodes are special because they need their original type
! to be materialized by the reader (to implement TYPE_CACHED_VALUES). */
! if (TREE_CODE (expr) == INTEGER_CST
! && !TREE_OVERFLOW (expr))
! {
! streamer_write_integer_cst (ob, expr, ref_p);
! return;
! }
!
! existed_p = streamer_tree_cache_insert (ob->writer_cache, expr, &ix);
if (existed_p)
{
/* If a node has already been streamed out, make sure that
--- 1306,1312 ----
return;
}

! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
if (existed_p)
{
/* If a node has already been streamed out, make sure that
*************** lto_output_tree (struct output_block *ob
*** 393,413 ****
streamer_write_uhwi (ob, ix);
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
! }
! else if (streamer_handle_as_builtin_p (expr))
! {
! /* MD and NORMAL builtins do not need to be written out
! completely as they are always instantiated by the
! compiler on startup. The only builtins that need to
! be written out are BUILT_IN_FRONTEND. For all other
! builtins, we simply write the class and code. */
! streamer_write_builtin (ob, expr);
}
else
{
! /* This is the first time we see EXPR, write its fields
! to OB. */
! lto_write_tree (ob, expr, ref_p);
}
}

--- 1316,1357 ----
streamer_write_uhwi (ob, ix);
streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
lto_tree_code_to_tag (TREE_CODE (expr)));
! lto_stats.num_pickle_refs_output++;
}
else
{
! /* This is the first time we see EXPR, write all reachable
! trees to OB. */
! static bool in_dfs_walk;
!
! /* Protect against recursion, which would mean a disconnect
! between the tree edges we walk in the DFS walk and the
! edges we stream out. */
! gcc_assert (!in_dfs_walk);
!
! /* Start the DFS walk. */
! in_dfs_walk = true;
! sccstate = pointer_map_create ();
! gcc_obstack_init (&sccstate_obstack);
! next_dfs_num = 1;
! DFS_write_tree (ob, NULL, expr, ref_p, this_ref_p);
! sccstack.release ();
! pointer_map_destroy (sccstate);
! obstack_free (&sccstate_obstack, NULL);
! in_dfs_walk = false;
!
! /* Finally append a reference to the tree we were writing.
! ??? If expr ended up as a singleton we could have
! inlined it here and avoided outputting a reference. */
! existed_p = streamer_tree_cache_lookup (ob->writer_cache, expr, &ix);
! gcc_assert (existed_p);
! streamer_write_record_start (ob, LTO_tree_pickle_reference);
! streamer_write_uhwi (ob, ix);
! streamer_write_enum (ob->main_stream, LTO_tags, LTO_NUM_TAGS,
! lto_tree_code_to_tag (TREE_CODE (expr)));
! lto_stats.num_pickle_refs_output++;
}
}

Index: trunk/gcc/tree-streamer-in.c
===================================================================
*** trunk.orig/gcc/tree-streamer-in.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer-in.c 2013-06-17 10:14:46.773311787 +0200
*************** unpack_ts_base_value_fields (struct bitp
*** 122,131 ****
TYPE_ARTIFICIAL (expr) = (unsigned) bp_unpack_value (bp, 1);
else
TREE_NO_WARNING (expr) = (unsigned) bp_unpack_value (bp, 1);
- TREE_USED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_NOTHROW (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_STATIC (expr) = (unsigned) bp_unpack_value (bp, 1);
! TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PROTECTED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_DEPRECATED (expr) = (unsigned) bp_unpack_value (bp, 1);
if (TYPE_P (expr))
--- 122,131 ----
TYPE_ARTIFICIAL (expr) = (unsigned) bp_unpack_value (bp, 1);
else
TREE_NO_WARNING (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_NOTHROW (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_STATIC (expr) = (unsigned) bp_unpack_value (bp, 1);
! if (TREE_CODE (expr) != TREE_BINFO)
! TREE_PRIVATE (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_PROTECTED (expr) = (unsigned) bp_unpack_value (bp, 1);
TREE_DEPRECATED (expr) = (unsigned) bp_unpack_value (bp, 1);
if (TYPE_P (expr))
*************** unpack_ts_type_common_value_fields (stru
*** 351,358 ****
TYPE_NONALIASED_COMPONENT (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_PACKED (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_RESTRICT (expr) = (unsigned) bp_unpack_value (bp, 1);
- TYPE_CONTAINS_PLACEHOLDER_INTERNAL (expr)
- = (unsigned) bp_unpack_value (bp, 2);
TYPE_USER_ALIGN (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_READONLY (expr) = (unsigned) bp_unpack_value (bp, 1);
TYPE_PRECISION (expr) = bp_unpack_var_len_unsigned (bp);
--- 351,356 ----
*************** lto_input_ts_list_tree_pointers (struct
*** 811,817 ****
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = streamer_read_chain (ib, data_in);
}


--- 809,815 ----
{
TREE_PURPOSE (expr) = stream_read_tree (ib, data_in);
TREE_VALUE (expr) = stream_read_tree (ib, data_in);
! TREE_CHAIN (expr) = stream_read_tree (ib, data_in);
}


*************** streamer_read_tree_body (struct lto_inpu
*** 1021,1039 ****
}


- /* Read and INTEGER_CST node from input block IB using the per-file
- context in DATA_IN. */
-
- tree
- streamer_read_integer_cst (struct lto_input_block *ib, struct data_in *data_in)
- {
- tree type = stream_read_tree (ib, data_in);
- unsigned HOST_WIDE_INT low = streamer_read_uhwi (ib);
- HOST_WIDE_INT high = streamer_read_hwi (ib);
- return build_int_cst_wide (type, low, high);
- }
-
-
/* Read an index IX from input block IB and return the tree node at
DATA_IN->FILE_DATA->GLOBALS_INDEX[IX]. */

--- 1019,1024 ----
*************** streamer_get_pickled_tree (struct lto_in
*** 1047,1053 ****
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

--- 1032,1038 ----
ix = streamer_read_uhwi (ib);
expected_tag = streamer_read_enum (ib, LTO_tags, LTO_NUM_TAGS);

! result = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
gcc_assert (result
&& TREE_CODE (result) == lto_tag_to_tree_code (expected_tag));

*************** streamer_get_builtin_tree (struct lto_in
*** 1091,1097 ****
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result);

return result;
}
--- 1076,1082 ----
if (asmname)
set_builtin_user_assembler_name (result, asmname);

! streamer_tree_cache_append (data_in->reader_cache, result, 0);

return result;
}
Index: trunk/gcc/lto/lto.c
===================================================================
*** trunk.orig/gcc/lto/lto.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto/lto.c 2013-06-17 10:15:05.731540281 +0200
*************** along with GCC; see the file COPYING3.
*** 45,50 ****
--- 45,51 ----
#include "tree-streamer.h"
#include "splay-tree.h"
#include "lto-partition.h"
+ #include "data-streamer.h"

static GTY(()) tree first_personality_decl;

*************** lto_read_in_decl_state (struct data_in *
*** 254,260 ****
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
--- 255,261 ----
uint32_t i, j;

ix = *data++;
! decl = streamer_tree_cache_get_tree (data_in->reader_cache, ix);
if (TREE_CODE (decl) != FUNCTION_DECL)
{
gcc_assert (decl == void_type_node);
*************** lto_read_in_decl_state (struct data_in *
*** 268,274 ****
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
--- 269,275 ----
tree *decls = ggc_alloc_vec_tree (size);

for (j = 0; j < size; j++)
! decls[j] = streamer_tree_cache_get_tree (data_in->reader_cache, data[j]);

state->streams[i].size = size;
state->streams[i].trees = decls;
*************** lto_read_in_decl_state (struct data_in *
*** 280,285 ****
--- 281,289 ----



+ /* ??? Old hashing and merging code follows; we keep it for
+ statistics purposes for now. */
+
/* Global type table. FIXME, it should be possible to re-use some
of the type hashing routines in tree.c (type_hash_canon, type_hash_lookup,
etc), but those assume that types were built with the various
*************** struct sccs
*** 366,398 ****
static unsigned int next_dfs_num;
static unsigned int gtc_next_dfs_num;

- /* GIMPLE type merging cache. A direct-mapped cache based on TYPE_UID. */
-
- typedef struct GTY(()) gimple_type_leader_entry_s {
- tree type;
- tree leader;
- } gimple_type_leader_entry;
-
- #define GIMPLE_TYPE_LEADER_SIZE 16381
- static GTY((length("GIMPLE_TYPE_LEADER_SIZE")))
- gimple_type_leader_entry *gimple_type_leader;
-
- /* Lookup an existing leader for T and return it or NULL_TREE, if
- there is none in the cache. */
-
- static inline tree
- gimple_lookup_type_leader (tree t)
- {
- gimple_type_leader_entry *leader;
-
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type != t)
- return NULL_TREE;
-
- return leader->leader;
- }
-
-
/* Return true if T1 and T2 have the same name. If FOR_COMPLETION_P is
true then if any type has no name return false, otherwise return
true if both types have no names. */
--- 370,375 ----
*************** gtc_visit (tree t1, tree t2,
*** 450,456 ****
struct sccs *cstate = NULL;
type_pair_t p;
void **slot;
- tree leader1, leader2;

/* Check first for the obvious case of pointer identity. */
if (t1 == t2)
--- 427,432 ----
*************** gtc_visit (tree t1, tree t2,
*** 507,521 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 483,488 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 879,885 ****
struct obstack sccstate_obstack;
type_pair_t p = NULL;
bool res;
- tree leader1, leader2;

/* Before starting to set up the SCC machinery handle simple cases. */

--- 846,851 ----
*************** gimple_types_compatible_p (tree t1, tree
*** 938,952 ****
/* For other types fall through to more complex checks. */
}

- /* If the types have been previously registered and found equal
- they still are. */
- leader1 = gimple_lookup_type_leader (t1);
- leader2 = gimple_lookup_type_leader (t2);
- if (leader1 == t2
- || t1 == leader2
- || (leader1 && leader1 == leader2))
- return true;
-
/* If the hash values of t1 and t2 are different the types can't
possibly be the same. This helps keeping the type-pair hashtable
small, only tracking comparisons for hash collisions. */
--- 904,909 ----
*************** gimple_type_eq (const void *p1, const vo
*** 1335,1402 ****
CONST_CAST_TREE (t2));
}

!
! /* Worker for gimple_register_type.
! Register type T in the global type table gimple_types.
! When REGISTERING_MV is false first recurse for the main variant of T. */

static tree
! gimple_register_type_1 (tree t, bool registering_mv)
{
void **slot;
- gimple_type_leader_entry *leader;
-
- /* If we registered this type before return the cached result. */
- leader = &gimple_type_leader[TYPE_UID (t) % GIMPLE_TYPE_LEADER_SIZE];
- if (leader->type == t)
- return leader->leader;
-
- /* Always register the main variant first. This is important so we
- pick up the non-typedef variants as canonical, otherwise we'll end
- up taking typedef ids for structure tags during comparison.
- It also makes sure that main variants will be merged to main variants.
- As we are operating on a possibly partially fixed up type graph
- do not bother to recurse more than once, otherwise we may end up
- walking in circles.
- If we are registering a main variant it will either remain its
- own main variant or it will be merged to something else in which
- case we do not care for the main variant leader. */
- if (!registering_mv
- && TYPE_MAIN_VARIANT (t) != t)
- gimple_register_type_1 (TYPE_MAIN_VARIANT (t), true);

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! {
! tree new_type = (tree) *((tree *) slot);
! leader->type = t;
! leader->leader = new_type;
! return new_type;
! }

/* If not, insert it to the cache and the hash. */
- leader->type = t;
- leader->leader = t;
*slot = (void *) t;
return t;
}

! /* Register type T in the global type table gimple_types.
! If another type T', compatible with T, already existed in
! gimple_types then return T', otherwise return T. This is used by
! LTO to merge identical types read from different TUs. */
!
! static tree
! gimple_register_type (tree t)
! {
! gcc_assert (TYPE_P (t));
! return gimple_register_type_1 (t, false);
! }
!
! #define GIMPLE_REGISTER_TYPE(tt) \
! (TREE_VISITED (tt) ? gimple_register_type (tt) : tt)



--- 1292,1316 ----
CONST_CAST_TREE (t2));
}

! /* Register type T in the global type table gimple_types. */

static tree
! gimple_register_type (tree t)
{
void **slot;

/* See if we already have an equivalent type registered. */
slot = htab_find_slot (gimple_types, t, INSERT);
if (*slot
&& *(tree *)slot != t)
! return (tree) *((tree *) slot);

/* If not, insert it to the cache and the hash. */
*slot = (void *) t;
return t;
}

! /* End of old merging code. */



*************** remember_with_vars (tree t)
*** 1413,1632 ****
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define LTO_FIXUP_TREE(tt) \
do \
{ \
if (tt) \
{ \
- if (TYPE_P (tt)) \
- (tt) = GIMPLE_REGISTER_TYPE (tt); \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
- if (TREE_CODE (tt) == INTEGER_CST) \
- (tt) = fixup_integer_cst (tt); \
} \
} while (0)

- static void lto_fixup_types (tree);
-
- /* Return integer_cst T with updated type. */
-
- static tree
- fixup_integer_cst (tree t)
- {
- tree type = GIMPLE_REGISTER_TYPE (TREE_TYPE (t));
-
- if (type == TREE_TYPE (t))
- return t;
-
- /* If overflow was set, streamer_read_integer_cst
- produced local copy of T. */
- if (TREE_OVERFLOW (t))
- {
- TREE_TYPE (t) = type;
- return t;
- }
- else
- /* Otherwise produce new shared node for the new type. */
- return build_int_cst_wide (type, TREE_INT_CST_LOW (t),
- TREE_INT_CST_HIGH (t));
- }
-
/* Fix up fields of a tree_typed T. */

static void
! lto_ft_typed (tree t)
{
! LTO_FIXUP_TREE (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! lto_ft_common (tree t)
{
! lto_ft_typed (t);
! LTO_FIXUP_TREE (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! lto_ft_decl_minimal (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (DECL_NAME (t));
! LTO_FIXUP_TREE (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! lto_ft_decl_common (tree t)
{
! lto_ft_decl_minimal (t);
! LTO_FIXUP_TREE (DECL_SIZE (t));
! LTO_FIXUP_TREE (DECL_SIZE_UNIT (t));
! LTO_FIXUP_TREE (DECL_INITIAL (t));
! LTO_FIXUP_TREE (DECL_ATTRIBUTES (t));
! LTO_FIXUP_TREE (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! lto_ft_decl_with_vis (tree t)
{
! lto_ft_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! LTO_FIXUP_TREE (t->decl_with_vis.assembler_name);
! LTO_FIXUP_TREE (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_decl_non_common (tree t)
{
! lto_ft_decl_with_vis (t);
! LTO_FIXUP_TREE (DECL_ARGUMENT_FLD (t));
! LTO_FIXUP_TREE (DECL_RESULT_FLD (t));
! LTO_FIXUP_TREE (DECL_VINDEX (t));
! /* The C frontends may create exact duplicates for DECL_ORIGINAL_TYPE
! like for 'typedef enum foo foo'. We have no way of avoiding to
! merge them and dwarf2out.c cannot deal with this,
! so fix this up by clearing DECL_ORIGINAL_TYPE in this case. */
! if (TREE_CODE (t) == TYPE_DECL
! && DECL_ORIGINAL_TYPE (t) == TREE_TYPE (t))
! DECL_ORIGINAL_TYPE (t) = NULL_TREE;
}

/* Fix up fields of a decl_non_common T. */

static void
! lto_ft_function (tree t)
{
! lto_ft_decl_non_common (t);
! LTO_FIXUP_TREE (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! lto_ft_field_decl (tree t)
{
! lto_ft_decl_common (t);
! LTO_FIXUP_TREE (DECL_FIELD_OFFSET (t));
! LTO_FIXUP_TREE (DECL_BIT_FIELD_TYPE (t));
! LTO_FIXUP_TREE (DECL_QUALIFIER (t));
! LTO_FIXUP_TREE (DECL_FIELD_BIT_OFFSET (t));
! LTO_FIXUP_TREE (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! lto_ft_type (tree t)
{
! lto_ft_common (t);
! LTO_FIXUP_TREE (TYPE_CACHED_VALUES (t));
! LTO_FIXUP_TREE (TYPE_SIZE (t));
! LTO_FIXUP_TREE (TYPE_SIZE_UNIT (t));
! LTO_FIXUP_TREE (TYPE_ATTRIBUTES (t));
! LTO_FIXUP_TREE (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! LTO_FIXUP_TREE (TYPE_MINVAL (t));
! LTO_FIXUP_TREE (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! LTO_FIXUP_TREE (t->type_non_common.binfo);

! LTO_FIXUP_TREE (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! lto_ft_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;
- tree base, saved_base;

! lto_ft_common (t);
! LTO_FIXUP_TREE (BINFO_VTABLE (t));
! LTO_FIXUP_TREE (BINFO_OFFSET (t));
! LTO_FIXUP_TREE (BINFO_VIRTUALS (t));
! LTO_FIXUP_TREE (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_ACCESS (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_ACCESSES (t))[i] = base;
! }
! LTO_FIXUP_TREE (BINFO_INHERITANCE_CHAIN (t));
! LTO_FIXUP_TREE (BINFO_SUBVTT_INDEX (t));
! LTO_FIXUP_TREE (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! {
! saved_base = base = BINFO_BASE_BINFO (t, i);
! LTO_FIXUP_TREE (base);
! if (base != saved_base)
! (*BINFO_BASE_BINFOS (t))[i] = base;
! }
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! lto_ft_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! lto_ft_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! LTO_FIXUP_TREE (ce->index);
! LTO_FIXUP_TREE (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! lto_ft_expr (tree t)
{
int i;
! lto_ft_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! LTO_FIXUP_TREE (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
--- 1327,1499 ----
*(tree *) htab_find_slot (tree_with_vars, t, INSERT) = t;
}

! #define MAYBE_REMEMBER_WITH_VARS(tt) \
do \
{ \
if (tt) \
{ \
if (VAR_OR_FUNCTION_DECL_P (tt) && TREE_PUBLIC (tt)) \
remember_with_vars (t); \
} \
} while (0)

/* Fix up fields of a tree_typed T. */

static void
! maybe_remember_with_vars_typed (tree t)
{
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
}

/* Fix up fields of a tree_common T. */

static void
! maybe_remember_with_vars_common (tree t)
{
! maybe_remember_with_vars_typed (t);
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
}

/* Fix up fields of a decl_minimal T. */

static void
! maybe_remember_with_vars_decl_minimal (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_NAME (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_CONTEXT (t));
}

/* Fix up fields of a decl_common T. */

static void
! maybe_remember_with_vars_decl_common (tree t)
{
! maybe_remember_with_vars_decl_minimal (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_INITIAL (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_ABSTRACT_ORIGIN (t));
}

/* Fix up fields of a decl_with_vis T. */

static void
! maybe_remember_with_vars_decl_with_vis (tree t)
{
! maybe_remember_with_vars_decl_common (t);

/* Accessor macro has side-effects, use field-name here. */
! MAYBE_REMEMBER_WITH_VARS (t->decl_with_vis.assembler_name);
! MAYBE_REMEMBER_WITH_VARS (DECL_SECTION_NAME (t));
}

/* Fix up fields of a decl_non_common T. */

static void
! maybe_remember_with_vars_decl_non_common (tree t)
{
! maybe_remember_with_vars_decl_with_vis (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_ARGUMENT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_RESULT_FLD (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_VINDEX (t));
}

/* Fix up fields of a function_decl T. */

static void
! maybe_remember_with_vars_function (tree t)
{
! maybe_remember_with_vars_decl_non_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FUNCTION_PERSONALITY (t));
}

/* Fix up fields of a field_decl T. */

static void
! maybe_remember_with_vars_field_decl (tree t)
{
! maybe_remember_with_vars_decl_common (t);
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_BIT_FIELD_TYPE (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_QUALIFIER (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FIELD_BIT_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (DECL_FCONTEXT (t));
}

/* Fix up fields of a type T. */

static void
! maybe_remember_with_vars_type (tree t)
{
! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (TYPE_CACHED_VALUES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_SIZE_UNIT (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_ATTRIBUTES (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_NAME (t));

/* Accessors are for derived node types only. */
if (!POINTER_TYPE_P (t))
! MAYBE_REMEMBER_WITH_VARS (TYPE_MINVAL (t));
! MAYBE_REMEMBER_WITH_VARS (TYPE_MAXVAL (t));

/* Accessor is for derived node types only. */
! MAYBE_REMEMBER_WITH_VARS (t->type_non_common.binfo);

! MAYBE_REMEMBER_WITH_VARS (TYPE_CONTEXT (t));
}

/* Fix up fields of a BINFO T. */

static void
! maybe_remember_with_vars_binfo (tree t)
{
unsigned HOST_WIDE_INT i, n;

! maybe_remember_with_vars_common (t);
! MAYBE_REMEMBER_WITH_VARS (BINFO_VTABLE (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_OFFSET (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VIRTUALS (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_FIELD (t));
n = vec_safe_length (BINFO_BASE_ACCESSES (t));
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_ACCESS (t, i));
! MAYBE_REMEMBER_WITH_VARS (BINFO_INHERITANCE_CHAIN (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_SUBVTT_INDEX (t));
! MAYBE_REMEMBER_WITH_VARS (BINFO_VPTR_INDEX (t));
n = BINFO_N_BASE_BINFOS (t);
for (i = 0; i < n; i++)
! MAYBE_REMEMBER_WITH_VARS (BINFO_BASE_BINFO (t, i));
}

/* Fix up fields of a CONSTRUCTOR T. */

static void
! maybe_remember_with_vars_constructor (tree t)
{
unsigned HOST_WIDE_INT idx;
constructor_elt *ce;

! maybe_remember_with_vars_typed (t);

for (idx = 0; vec_safe_iterate (CONSTRUCTOR_ELTS (t), idx, &ce); idx++)
{
! MAYBE_REMEMBER_WITH_VARS (ce->index);
! MAYBE_REMEMBER_WITH_VARS (ce->value);
}
}

/* Fix up fields of an expression tree T. */

static void
! maybe_remember_with_vars_expr (tree t)
{
int i;
! maybe_remember_with_vars_typed (t);
for (i = TREE_OPERAND_LENGTH (t) - 1; i >= 0; --i)
! MAYBE_REMEMBER_WITH_VARS (TREE_OPERAND (t, i));
}

/* Given a tree T fixup fields of T by replacing types with their merged
*************** lto_ft_expr (tree t)
*** 1635,1641 ****
for instance integer or string constants. */

static void
! lto_fixup_types (tree t)
{
switch (TREE_CODE (t))
{
--- 1502,1508 ----
for instance integer or string constants. */

static void
! maybe_remember_with_vars (tree t)
{
switch (TREE_CODE (t))
{
*************** lto_fixup_types (tree t)
*** 1643,1655 ****
break;

case TREE_LIST:
! LTO_FIXUP_TREE (TREE_VALUE (t));
! LTO_FIXUP_TREE (TREE_PURPOSE (t));
! LTO_FIXUP_TREE (TREE_CHAIN (t));
break;

case FIELD_DECL:
! lto_ft_field_decl (t);
break;

case LABEL_DECL:
--- 1510,1522 ----
break;

case TREE_LIST:
! MAYBE_REMEMBER_WITH_VARS (TREE_VALUE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_PURPOSE (t));
! MAYBE_REMEMBER_WITH_VARS (TREE_CHAIN (t));
break;

case FIELD_DECL:
! maybe_remember_with_vars_field_decl (t);
break;

case LABEL_DECL:
*************** lto_fixup_types (tree t)
*** 1657,1683 ****
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! lto_ft_decl_common (t);
break;

case VAR_DECL:
! lto_ft_decl_with_vis (t);
break;

case TYPE_DECL:
! lto_ft_decl_non_common (t);
break;

case FUNCTION_DECL:
! lto_ft_function (t);
break;

case TREE_BINFO:
! lto_ft_binfo (t);
break;

case PLACEHOLDER_EXPR:
! lto_ft_common (t);
break;

case BLOCK:
--- 1524,1550 ----
case PARM_DECL:
case RESULT_DECL:
case IMPORTED_DECL:
! maybe_remember_with_vars_decl_common (t);
break;

case VAR_DECL:
! maybe_remember_with_vars_decl_with_vis (t);
break;

case TYPE_DECL:
! maybe_remember_with_vars_decl_non_common (t);
break;

case FUNCTION_DECL:
! maybe_remember_with_vars_function (t);
break;

case TREE_BINFO:
! maybe_remember_with_vars_binfo (t);
break;

case PLACEHOLDER_EXPR:
! maybe_remember_with_vars_common (t);
break;

case BLOCK:
*************** lto_fixup_types (tree t)
*** 1686,1706 ****
case TARGET_OPTION_NODE:
break;

default:
if (TYPE_P (t))
! lto_ft_type (t);
! else if (TREE_CODE (t) == CONSTRUCTOR)
! lto_ft_constructor (t);
else if (CONSTANT_CLASS_P (t))
! LTO_FIXUP_TREE (TREE_TYPE (t));
else if (EXPR_P (t))
! {
! lto_ft_expr (t);
! }
else
! {
! remember_with_vars (t);
! }
}
}

--- 1553,1571 ----
case TARGET_OPTION_NODE:
break;

+ case CONSTRUCTOR:
+ maybe_remember_with_vars_constructor (t);
+ break;
+
default:
if (TYPE_P (t))
! maybe_remember_with_vars_type (t);
else if (CONSTANT_CLASS_P (t))
! MAYBE_REMEMBER_WITH_VARS (TREE_TYPE (t));
else if (EXPR_P (t))
! maybe_remember_with_vars_expr (t);
else
! remember_with_vars (t);
}
}

*************** lto_register_function_decl_in_symtab (st
*** 1786,2006 ****
}
}

- static unsigned long num_merged_types = 0;

! /* Given a streamer cache structure DATA_IN (holding a sequence of trees
! for one compilation unit) go over all trees starting at index FROM until the
! end of the sequence and replace fields of those trees, and the trees
! themself with their canonical variants as per gimple_register_type. */

static void
! uniquify_nodes (struct data_in *data_in, unsigned from)
{
! struct streamer_tree_cache_d *cache = data_in->reader_cache;
! unsigned len = cache->nodes.length ();
! unsigned i;

! /* Go backwards because children streamed for the first time come
! as part of their parents, and hence are created after them. */

! /* First register all the types in the cache. This makes sure to
! have the original structure in the type cycles when registering
! them and computing hashes. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t && TYPE_P (t))
! {
! tree newt = gimple_register_type (t);
! /* Mark non-prevailing types so we fix them up. No need
! to reset that flag afterwards - nothing that refers
! to those types is left and they are collected. */
! if (newt != t)
! {
! num_merged_types++;
! TREE_VISITED (t) = 1;
! }
}
}

! /* Second fixup all trees in the new cache entries. */
! for (i = len; i-- > from;)
{
! tree t = cache->nodes[i];
! tree oldt = t;
! if (!t)
! continue;
!
! /* First fixup the fields of T. */
! lto_fixup_types (t);
!
! if (!TYPE_P (t))
! continue;
!
! /* Now try to find a canonical variant of T itself. */
! t = GIMPLE_REGISTER_TYPE (t);
!
! if (t == oldt)
! {
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */
! tree tem, mv;
!
! #ifdef ENABLE_CHECKING
! /* Remove us from our main variant list if we are not the
! variant leader. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tem = TYPE_MAIN_VARIANT (t);
! while (tem && TYPE_NEXT_VARIANT (tem) != t)
! tem = TYPE_NEXT_VARIANT (tem);
! gcc_assert (!tem && !TYPE_NEXT_VARIANT (t));
! }
! #endif

! /* Query our new main variant. */
! mv = GIMPLE_REGISTER_TYPE (TYPE_MAIN_VARIANT (t));

! /* If we were the variant leader and we get replaced ourselves drop
! all variants from our list. */
! if (TYPE_MAIN_VARIANT (t) == t
! && mv != t)
! {
! tem = t;
! while (tem)
! {
! tree tem2 = TYPE_NEXT_VARIANT (tem);
! TYPE_NEXT_VARIANT (tem) = NULL_TREE;
! tem = tem2;
! }
! }

! /* If we are not our own variant leader link us into our new leaders
! variant list. */
! if (mv != t)
! {
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! if (RECORD_OR_UNION_TYPE_P (t))
! TYPE_BINFO (t) = TYPE_BINFO (mv);
! /* Preserve the invariant that type variants share their
! TYPE_FIELDS. */
! if (RECORD_OR_UNION_TYPE_P (t)
! && TYPE_FIELDS (mv) != TYPE_FIELDS (t))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (mv), f2 = TYPE_FIELDS (t);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2
! && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! /* ??? Not sure the above is all relevant in this
! path canonicalizing TYPE_FIELDS to that of the
! main variant. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! TYPE_FIELDS (t) = TYPE_FIELDS (mv);
! }
! }

! /* Finally adjust our main variant and fix it up. */
! TYPE_MAIN_VARIANT (t) = mv;

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
}

! else
{
! if (RECORD_OR_UNION_TYPE_P (t))
! {
! tree f1, f2;
! if (TYPE_FIELDS (t) != TYPE_FIELDS (oldt))
! for (f1 = TYPE_FIELDS (t), f2 = TYPE_FIELDS (oldt);
! f1 && f2; f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! {
! unsigned ix;
! gcc_assert (f1 != f2 && DECL_NAME (f1) == DECL_NAME (f2));
! if (!streamer_tree_cache_lookup (cache, f2, &ix))
! gcc_unreachable ();
! /* If we're going to replace an element which we'd
! still visit in the next iterations, we wouldn't
! handle it, so do it here. We do have to handle it
! even though the field_decl itself will be removed,
! as it could refer to e.g. integer_cst which we
! wouldn't reach via any other way, hence they
! (and their type) would stay uncollected. */
! /* ??? We should rather make sure to replace all
! references to f2 with f1. That means handling
! COMPONENT_REFs and CONSTRUCTOR elements in
! lto_fixup_types and special-case the field-decl
! operand handling. */
! if (ix < i)
! lto_fixup_types (f2);
! streamer_tree_cache_insert_at (cache, f1, ix);
! }
! }

! /* If we found a tree that is equal to oldt replace it in the
! cache, so that further users (in the various LTO sections)
! make use of it. */
! streamer_tree_cache_insert_at (cache, t, i);
}
}

! /* Finally compute the canonical type of all TREE_TYPEs and register
! VAR_DECL and FUNCTION_DECL nodes in the symbol table.
! From this point there are no longer any types with
! TYPE_STRUCTURAL_EQUALITY_P and its type-based alias problems.
! This step requires the TYPE_POINTER_TO lists being present, so
! make sure it is done last. */
! for (i = len; i-- > from;)
! {
! tree t = cache->nodes[i];
! if (t == NULL_TREE)
! continue;
!
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! else if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! else if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
}
}


--- 1651,2371 ----
}
}


! /* For the type T, re-materialize it in the type variant list and
! the pointer/reference-to chains. */

static void
! lto_fixup_prevailing_type (tree t)
{
! /* The following re-creates proper variant lists while fixing up
! the variant leaders. We do not stream TYPE_NEXT_VARIANT so the
! variant list state before fixup is broken. */

! /* If we are not our own variant leader, link us into our new leader's
! variant list. */
! if (TYPE_MAIN_VARIANT (t) != t)
! {
! tree mv = TYPE_MAIN_VARIANT (t);
! TYPE_NEXT_VARIANT (t) = TYPE_NEXT_VARIANT (mv);
! TYPE_NEXT_VARIANT (mv) = t;
! }

! /* The following reconstructs the pointer chains
! of the new pointed-to type if we are a main variant. We do
! not stream those so they are broken before fixup. */
! if (TREE_CODE (t) == POINTER_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_PTR_TO (t) = TYPE_POINTER_TO (TREE_TYPE (t));
! TYPE_POINTER_TO (TREE_TYPE (t)) = t;
! }
! else if (TREE_CODE (t) == REFERENCE_TYPE
! && TYPE_MAIN_VARIANT (t) == t)
! {
! TYPE_NEXT_REF_TO (t) = TYPE_REFERENCE_TO (TREE_TYPE (t));
! TYPE_REFERENCE_TO (TREE_TYPE (t)) = t;
! }
! }
!
!
! /* We keep prevailing tree SCCs in a hashtable with manual collision
! handling (in case all hashes compare the same) and keep the colliding
! entries in the tree_scc->next chain. */
!
! struct tree_scc
! {
! tree_scc *next;
! /* Hash of the whole SCC. */
! hashval_t hash;
! /* Number of trees in the SCC. */
! unsigned len;
! /* Number of possible entries into the SCC (tree nodes [0..entry_len-1]
! which share the same individual tree hash). */
! unsigned entry_len;
! /* The members of the SCC.
! We only need to remember the first entry node candidate for prevailing
! SCCs (but of course have access to all entries for SCCs we are
! processing).
! ??? For prevailing SCCs we really only need hash and the first
! entry candidate, but that's too awkward to implement. */
! tree entries[1];
! };
!
! struct tree_scc_hasher : typed_noop_remove <tree_scc>
! {
! typedef tree_scc value_type;
! typedef tree_scc compare_type;
! static inline hashval_t hash (const value_type *);
! static inline bool equal (const value_type *, const compare_type *);
! };
!
! hashval_t
! tree_scc_hasher::hash (const value_type *scc)
! {
! return scc->hash;
! }
!
! bool
! tree_scc_hasher::equal (const value_type *scc1, const compare_type *scc2)
! {
! if (scc1->hash != scc2->hash
! || scc1->len != scc2->len
! || scc1->entry_len != scc2->entry_len)
! return false;
! return true;
! }
!
! static hash_table <tree_scc_hasher> tree_scc_hash;
! static struct obstack tree_scc_hash_obstack;
!
! static unsigned long num_merged_types;
! static unsigned long num_prevailing_types;
! static unsigned long num_not_merged_types;
! static unsigned long num_not_merged_types_in_same_scc;
! static unsigned long num_not_merged_types_trees;
! static unsigned long num_not_merged_types_in_same_scc_trees;
! static unsigned long num_type_scc_trees;
! static unsigned long total_scc_size;
! static unsigned long num_sccs_read;
! static unsigned long total_scc_size_merged;
! static unsigned long num_sccs_merged;
! static unsigned long num_scc_compares;
! static unsigned long num_scc_compare_collisions;
!
!
! /* Compare the two entries T1 and T2 of two SCCs that are possibly equal,
! recursing through in-SCC tree edges. Returns true if the SCCs entered
! through T1 and T2 are equal and fills in *MAP with the pairs of
! SCC entries we visited, starting with (*MAP)[0] = T1 and (*MAP)[1] = T2. */
!
! static bool
! compare_tree_sccs_1 (tree t1, tree t2, tree **map)
! {
! enum tree_code code;
!
! /* Mark already visited nodes. */
! TREE_ASM_WRITTEN (t2) = 1;
!
! /* Push the pair onto map. */
! (*map)[0] = t1;
! (*map)[1] = t2;
! *map = *map + 2;
!
! /* Compare value-fields. */
! #define compare_values(X) \
! do { \
! if (X(t1) != X(t2)) \
! return false; \
! } while (0)
!
! compare_values (TREE_CODE);
! code = TREE_CODE (t1);
!
! if (!TYPE_P (t1))
! {
! compare_values (TREE_SIDE_EFFECTS);
! compare_values (TREE_CONSTANT);
! compare_values (TREE_READONLY);
! compare_values (TREE_PUBLIC);
! }
! compare_values (TREE_ADDRESSABLE);
! compare_values (TREE_THIS_VOLATILE);
! if (DECL_P (t1))
! compare_values (DECL_UNSIGNED);
! else if (TYPE_P (t1))
! compare_values (TYPE_UNSIGNED);
! if (TYPE_P (t1))
! compare_values (TYPE_ARTIFICIAL);
! else
! compare_values (TREE_NO_WARNING);
! compare_values (TREE_NOTHROW);
! compare_values (TREE_STATIC);
! if (code != TREE_BINFO)
! compare_values (TREE_PRIVATE);
! compare_values (TREE_PROTECTED);
! compare_values (TREE_DEPRECATED);
! if (TYPE_P (t1))
! {
! compare_values (TYPE_SATURATING);
! compare_values (TYPE_ADDR_SPACE);
! }
! else if (code == SSA_NAME)
! compare_values (SSA_NAME_IS_DEFAULT_DEF);
!
! if (CODE_CONTAINS_STRUCT (code, TS_INT_CST))
! {
! compare_values (TREE_INT_CST_LOW);
! compare_values (TREE_INT_CST_HIGH);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_REAL_CST))
! {
! /* ??? No suitable compare routine available. */
! REAL_VALUE_TYPE r1 = TREE_REAL_CST (t1);
! REAL_VALUE_TYPE r2 = TREE_REAL_CST (t2);
! if (r1.cl != r2.cl
! || r1.decimal != r2.decimal
! || r1.sign != r2.sign
! || r1.signalling != r2.signalling
! || r1.canonical != r2.canonical
! || r1.uexp != r2.uexp)
! return false;
! for (unsigned i = 0; i < SIGSZ; ++i)
! if (r1.sig[i] != r2.sig[i])
! return false;
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIXED_CST))
! if (!fixed_compare (EQ_EXPR,
! TREE_FIXED_CST_PTR (t1), TREE_FIXED_CST_PTR (t2)))
! return false;
!
!
! /* We don't want to compare locations, so there is nothing to compare
! for TS_DECL_MINIMAL. */
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_values (DECL_MODE);
! compare_values (DECL_NONLOCAL);
! compare_values (DECL_VIRTUAL_P);
! compare_values (DECL_IGNORED_P);
! compare_values (DECL_ABSTRACT);
! compare_values (DECL_ARTIFICIAL);
! compare_values (DECL_USER_ALIGN);
! compare_values (DECL_PRESERVE_P);
! compare_values (DECL_EXTERNAL);
! compare_values (DECL_GIMPLE_REG_P);
! compare_values (DECL_ALIGN);
! if (code == LABEL_DECL)
! {
! compare_values (DECL_ERROR_ISSUED);
! compare_values (EH_LANDING_PAD_NR);
! compare_values (LABEL_DECL_UID);
! }
! else if (code == FIELD_DECL)
! {
! compare_values (DECL_PACKED);
! compare_values (DECL_NONADDRESSABLE_P);
! compare_values (DECL_OFFSET_ALIGN);
! }
! else if (code == VAR_DECL)
! {
! compare_values (DECL_HAS_DEBUG_EXPR_P);
! compare_values (DECL_NONLOCAL_FRAME);
! }
! if (code == RESULT_DECL
! || code == PARM_DECL
! || code == VAR_DECL)
! {
! compare_values (DECL_BY_REFERENCE);
! if (code == VAR_DECL
! || code == PARM_DECL)
! compare_values (DECL_HAS_VALUE_EXPR_P);
! }
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WRTL))
! compare_values (DECL_REGISTER);
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! compare_values (DECL_DEFER_OUTPUT);
! compare_values (DECL_COMMON);
! compare_values (DECL_DLLIMPORT_P);
! compare_values (DECL_WEAK);
! compare_values (DECL_SEEN_IN_BIND_EXPR_P);
! compare_values (DECL_COMDAT);
! compare_values (DECL_VISIBILITY);
! compare_values (DECL_VISIBILITY_SPECIFIED);
! if (code == VAR_DECL)
! {
! compare_values (DECL_HARD_REGISTER);
! compare_values (DECL_IN_TEXT_SECTION);
! compare_values (DECL_IN_CONSTANT_POOL);
! compare_values (DECL_TLS_MODEL);
! }
! if (VAR_OR_FUNCTION_DECL_P (t1))
! compare_values (DECL_INIT_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_values (DECL_BUILT_IN_CLASS);
! compare_values (DECL_STATIC_CONSTRUCTOR);
! compare_values (DECL_STATIC_DESTRUCTOR);
! compare_values (DECL_UNINLINABLE);
! compare_values (DECL_POSSIBLY_INLINED);
! compare_values (DECL_IS_NOVOPS);
! compare_values (DECL_IS_RETURNS_TWICE);
! compare_values (DECL_IS_MALLOC);
! compare_values (DECL_IS_OPERATOR_NEW);
! compare_values (DECL_DECLARED_INLINE_P);
! compare_values (DECL_STATIC_CHAIN);
! compare_values (DECL_NO_INLINE_WARNING_P);
! compare_values (DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT);
! compare_values (DECL_NO_LIMIT_STACK);
! compare_values (DECL_DISREGARD_INLINE_LIMITS);
! compare_values (DECL_PURE_P);
! compare_values (DECL_LOOPING_CONST_OR_PURE_P);
! if (DECL_BUILT_IN_CLASS (t1) != NOT_BUILT_IN)
! compare_values (DECL_FUNCTION_CODE);
! if (DECL_STATIC_DESTRUCTOR (t1))
! compare_values (DECL_FINI_PRIORITY);
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_values (TYPE_MODE);
! compare_values (TYPE_STRING_FLAG);
! compare_values (TYPE_NO_FORCE_BLK);
! compare_values (TYPE_NEEDS_CONSTRUCTING);
! if (RECORD_OR_UNION_TYPE_P (t1))
! compare_values (TYPE_TRANSPARENT_AGGR);
! else if (code == ARRAY_TYPE)
! compare_values (TYPE_NONALIASED_COMPONENT);
! compare_values (TYPE_PACKED);
! compare_values (TYPE_RESTRICT);
! compare_values (TYPE_USER_ALIGN);
! compare_values (TYPE_READONLY);
! compare_values (TYPE_PRECISION);
! compare_values (TYPE_ALIGN);
! compare_values (TYPE_ALIAS_SET);
! }
!
! /* We don't want to compare locations, so there is nothing to compare
! for TS_EXP. */
!
! /* BLOCKs are function local and we don't merge anything there, so
! simply refuse to merge. */
! if (CODE_CONTAINS_STRUCT (code, TS_BLOCK))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TRANSLATION_UNIT_DECL))
! if (strcmp (TRANSLATION_UNIT_LANGUAGE (t1),
! TRANSLATION_UNIT_LANGUAGE (t2)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_TARGET_OPTION))
! if (memcmp (TREE_TARGET_OPTION (t1), TREE_TARGET_OPTION (t2),
! sizeof (struct cl_target_option)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_OPTIMIZATION))
! if (memcmp (TREE_OPTIMIZATION (t1), TREE_OPTIMIZATION (t2),
! sizeof (struct cl_optimization)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! if (vec_safe_length (BINFO_BASE_ACCESSES (t1))
! != vec_safe_length (BINFO_BASE_ACCESSES (t2)))
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! compare_values (CONSTRUCTOR_NELTS);
!
! if (CODE_CONTAINS_STRUCT (code, TS_IDENTIFIER))
! if (IDENTIFIER_LENGTH (t1) != IDENTIFIER_LENGTH (t2)
! || memcmp (IDENTIFIER_POINTER (t1), IDENTIFIER_POINTER (t2),
! IDENTIFIER_LENGTH (t1)) != 0)
! return false;
!
! if (CODE_CONTAINS_STRUCT (code, TS_STRING))
! if (TREE_STRING_LENGTH (t1) != TREE_STRING_LENGTH (t2)
! || memcmp (TREE_STRING_POINTER (t1), TREE_STRING_POINTER (t2),
! TREE_STRING_LENGTH (t1)) != 0)
! return false;
!
! #undef compare_values
!
!
! /* Compare pointer fields. */
!
! /* Recurse. Search-and-replaced from DFS_write_tree_body.
! Folding the early checks into the compare_tree_edges recursion
! macro makes debugging way quicker as you are able to break on
! compare_tree_sccs_1 and simply finish until a call returns false
! to spot the SCC members with the difference. */
! #define compare_tree_edges(E1, E2) \
! do { \
! tree t1_ = (E1), t2_ = (E2); \
! if (t1_ != t2_ \
! && (!t1_ || !t2_ \
! || !TREE_VISITED (t2_) \
! || (!TREE_ASM_WRITTEN (t2_) \
! && !compare_tree_sccs_1 (t1_, t2_, map)))) \
! return false; \
! /* Only non-NULL trees outside of the SCC may compare equal. */ \
! gcc_checking_assert (t1_ != t2_ || (!t2_ || !TREE_VISITED (t2_))); \
! } while (0)
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
! {
! if (code != IDENTIFIER_NODE)
! compare_tree_edges (TREE_TYPE (t1), TREE_TYPE (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_VECTOR))
! {
! unsigned i;
! /* Note that the number of elements for EXPR has already been emitted
! in EXPR's header (see streamer_write_tree_header). */
! for (i = 0; i < VECTOR_CST_NELTS (t1); ++i)
! compare_tree_edges (VECTOR_CST_ELT (t1, i), VECTOR_CST_ELT (t2, i));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_COMPLEX))
! {
! compare_tree_edges (TREE_REALPART (t1), TREE_REALPART (t2));
! compare_tree_edges (TREE_IMAGPART (t1), TREE_IMAGPART (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
! {
! compare_tree_edges (DECL_NAME (t1), DECL_NAME (t2));
! /* ??? Global decls from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Only consider a small set of
! decls equivalent; we should not end up merging others. */
! if ((code == TYPE_DECL
! || code == NAMESPACE_DECL
! || code == IMPORTED_DECL
! || code == CONST_DECL
! || (VAR_OR_FUNCTION_DECL_P (t1)
! && (TREE_PUBLIC (t1) || DECL_EXTERNAL (t1))))
! && DECL_FILE_SCOPE_P (t1) && DECL_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (DECL_CONTEXT (t1), DECL_CONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
! {
! compare_tree_edges (DECL_SIZE (t1), DECL_SIZE (t2));
! compare_tree_edges (DECL_SIZE_UNIT (t1), DECL_SIZE_UNIT (t2));
! compare_tree_edges (DECL_ATTRIBUTES (t1), DECL_ATTRIBUTES (t2));
! if ((code == VAR_DECL
! || code == PARM_DECL)
! && DECL_HAS_VALUE_EXPR_P (t1))
! compare_tree_edges (DECL_VALUE_EXPR (t1), DECL_VALUE_EXPR (t2));
! if (code == VAR_DECL
! && DECL_HAS_DEBUG_EXPR_P (t1))
! compare_tree_edges (DECL_DEBUG_EXPR (t1), DECL_DEBUG_EXPR (t2));
! /* LTO specific edges. */
! if (code != FUNCTION_DECL
! && code != TRANSLATION_UNIT_DECL)
! compare_tree_edges (DECL_INITIAL (t1), DECL_INITIAL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_NON_COMMON))
! {
! if (code == FUNCTION_DECL)
! {
! tree a1, a2;
! for (a1 = DECL_ARGUMENTS (t1), a2 = DECL_ARGUMENTS (t2);
! a1 || a2;
! a1 = TREE_CHAIN (a1), a2 = TREE_CHAIN (a2))
! compare_tree_edges (a1, a2);
! compare_tree_edges (DECL_RESULT (t1), DECL_RESULT (t2));
! }
! else if (code == TYPE_DECL)
! compare_tree_edges (DECL_ORIGINAL_TYPE (t1), DECL_ORIGINAL_TYPE (t2));
! compare_tree_edges (DECL_VINDEX (t1), DECL_VINDEX (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
! {
! /* Make sure we don't inadvertently set the assembler name. */
! if (DECL_ASSEMBLER_NAME_SET_P (t1))
! compare_tree_edges (DECL_ASSEMBLER_NAME (t1),
! DECL_ASSEMBLER_NAME (t2));
! compare_tree_edges (DECL_SECTION_NAME (t1), DECL_SECTION_NAME (t2));
! compare_tree_edges (DECL_COMDAT_GROUP (t1), DECL_COMDAT_GROUP (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FIELD_DECL))
! {
! compare_tree_edges (DECL_FIELD_OFFSET (t1), DECL_FIELD_OFFSET (t2));
! compare_tree_edges (DECL_BIT_FIELD_TYPE (t1), DECL_BIT_FIELD_TYPE (t2));
! compare_tree_edges (DECL_BIT_FIELD_REPRESENTATIVE (t1),
! DECL_BIT_FIELD_REPRESENTATIVE (t2));
! compare_tree_edges (DECL_FIELD_BIT_OFFSET (t1),
! DECL_FIELD_BIT_OFFSET (t2));
! compare_tree_edges (DECL_FCONTEXT (t1), DECL_FCONTEXT (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
! {
! compare_tree_edges (DECL_FUNCTION_PERSONALITY (t1),
! DECL_FUNCTION_PERSONALITY (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_TARGET (t1),
! DECL_FUNCTION_SPECIFIC_TARGET (t2));
! compare_tree_edges (DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t1),
! DECL_FUNCTION_SPECIFIC_OPTIMIZATION (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
! {
! compare_tree_edges (TYPE_SIZE (t1), TYPE_SIZE (t2));
! compare_tree_edges (TYPE_SIZE_UNIT (t1), TYPE_SIZE_UNIT (t2));
! compare_tree_edges (TYPE_ATTRIBUTES (t1), TYPE_ATTRIBUTES (t2));
! compare_tree_edges (TYPE_NAME (t1), TYPE_NAME (t2));
! /* Do not compare TYPE_POINTER_TO or TYPE_REFERENCE_TO. They will be
! reconstructed during fixup. */
! /* Do not compare TYPE_NEXT_VARIANT, we reconstruct the variant lists
! during fixup. */
! compare_tree_edges (TYPE_MAIN_VARIANT (t1), TYPE_MAIN_VARIANT (t2));
! /* ??? Global types from different TUs have non-matching
! TRANSLATION_UNIT_DECLs. Still merge them if they are otherwise
! equal. */
! if (TYPE_FILE_SCOPE_P (t1) && TYPE_FILE_SCOPE_P (t2))
! ;
! else
! compare_tree_edges (TYPE_CONTEXT (t1), TYPE_CONTEXT (t2));
! /* TYPE_CANONICAL is re-computed during type merging, so do not
! compare it here. */
! compare_tree_edges (TYPE_STUB_DECL (t1), TYPE_STUB_DECL (t2));
! }
!
! if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
! {
! if (code == ENUMERAL_TYPE)
! compare_tree_edges (TYPE_VALUES (t1), TYPE_VALUES (t2));
! else if (code == ARRAY_TYPE)
! compare_tree_edges (TYPE_DOMAIN (t1), TYPE_DOMAIN (t2));
! else if (RECORD_OR_UNION_TYPE_P (t1))
! {
! tree f1, f2;
! for (f1 = TYPE_FIELDS (t1), f2 = TYPE_FIELDS (t2);
! f1 || f2;
! f1 = TREE_CHAIN (f1), f2 = TREE_CHAIN (f2))
! compare_tree_edges (f1, f2);
! compare_tree_edges (TYPE_BINFO (t1), TYPE_BINFO (t2));
}
+ else if (code == FUNCTION_TYPE
+ || code == METHOD_TYPE)
+ compare_tree_edges (TYPE_ARG_TYPES (t1), TYPE_ARG_TYPES (t2));
+ if (!POINTER_TYPE_P (t1))
+ compare_tree_edges (TYPE_MINVAL (t1), TYPE_MINVAL (t2));
+ compare_tree_edges (TYPE_MAXVAL (t1), TYPE_MAXVAL (t2));
}

! if (CODE_CONTAINS_STRUCT (code, TS_LIST))
{
! compare_tree_edges (TREE_PURPOSE (t1), TREE_PURPOSE (t2));
! compare_tree_edges (TREE_VALUE (t1), TREE_VALUE (t2));
! compare_tree_edges (TREE_CHAIN (t1), TREE_CHAIN (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_VEC))
! for (int i = 0; i < TREE_VEC_LENGTH (t1); i++)
! compare_tree_edges (TREE_VEC_ELT (t1, i), TREE_VEC_ELT (t2, i));

! if (CODE_CONTAINS_STRUCT (code, TS_EXP))
! {
! for (int i = 0; i < TREE_OPERAND_LENGTH (t1); i++)
! compare_tree_edges (TREE_OPERAND (t1, i),
! TREE_OPERAND (t2, i));

! /* BLOCKs are function local and we don't merge anything there. */
! if (TREE_BLOCK (t1) || TREE_BLOCK (t2))
! return false;
! }

! if (CODE_CONTAINS_STRUCT (code, TS_BINFO))
! {
! unsigned i;
! tree t;
! /* Lengths have already been compared above. */
! FOR_EACH_VEC_ELT (*BINFO_BASE_BINFOS (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_BINFO (t2, i));
! FOR_EACH_VEC_SAFE_ELT (BINFO_BASE_ACCESSES (t1), i, t)
! compare_tree_edges (t, BINFO_BASE_ACCESS (t2, i));
! compare_tree_edges (BINFO_OFFSET (t1), BINFO_OFFSET (t2));
! compare_tree_edges (BINFO_VTABLE (t1), BINFO_VTABLE (t2));
! compare_tree_edges (BINFO_VPTR_FIELD (t1), BINFO_VPTR_FIELD (t2));
! compare_tree_edges (BINFO_INHERITANCE_CHAIN (t1),
! BINFO_INHERITANCE_CHAIN (t2));
! compare_tree_edges (BINFO_SUBVTT_INDEX (t1),
! BINFO_SUBVTT_INDEX (t2));
! compare_tree_edges (BINFO_VPTR_INDEX (t1),
! BINFO_VPTR_INDEX (t2));
! }

! if (CODE_CONTAINS_STRUCT (code, TS_CONSTRUCTOR))
! {
! unsigned i;
! tree index, value;
! /* Lengths have already been compared above. */
! FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (t1), i, index, value)
! {
! compare_tree_edges (index, CONSTRUCTOR_ELT (t2, i)->index);
! compare_tree_edges (value, CONSTRUCTOR_ELT (t2, i)->value);
}
+ }

! #undef compare_tree_edges
!
! return true;
! }
!
! /* Compare the tree scc SCC to the prevailing candidate PSCC, filling
! out MAP if they are equal. */
!
! static bool
! compare_tree_sccs (tree_scc *pscc, tree_scc *scc,
! tree *map)
! {
! /* Assume SCC entry hashes are sorted by their cardinality, which
! means we can simply take the first n-tuple of equal hashes
! (recorded as entry_len) and do n SCC entry candidate
! comparisons. */
! for (unsigned i = 0; i < pscc->entry_len; ++i)
! {
! tree *mapp = map;
! num_scc_compare_collisions++;
! if (compare_tree_sccs_1 (pscc->entries[0], scc->entries[i], &mapp))
{
! /* Equal - no need to reset TREE_VISITED or TREE_ASM_WRITTEN
! on the scc as all trees will be freed. */
! return true;
! }
! /* Reset TREE_ASM_WRITTEN on scc for the next compare or in case
! the SCC prevails. */
! for (unsigned j = 0; j < scc->len; ++j)
! TREE_ASM_WRITTEN (scc->entries[j]) = 0;
! }

! return false;
! }
!
! /* Try to unify the SCC with nodes FROM to FROM + LEN in CACHE and
! hash value SCC_HASH with an already recorded SCC. Return true if
! that was successful, otherwise return false. */
!
! static bool
! unify_scc (struct streamer_tree_cache_d *cache, unsigned from,
! unsigned len, unsigned scc_entry_len, hashval_t scc_hash)
! {
! bool unified_p = false;
! tree_scc *scc
! = (tree_scc *) alloca (sizeof (tree_scc) + (len - 1) * sizeof (tree));
! scc->next = NULL;
! scc->hash = scc_hash;
! scc->len = len;
! scc->entry_len = scc_entry_len;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (cache, from + i);
! scc->entries[i] = t;
! /* Do not merge SCCs with local entities inside them. Also do
! not merge TRANSLATION_UNIT_DECLs. */
! if (TREE_CODE (t) == TRANSLATION_UNIT_DECL
! || (VAR_OR_FUNCTION_DECL_P (t)
! && !(TREE_PUBLIC (t) || DECL_EXTERNAL (t)))
! || TREE_CODE (t) == LABEL_DECL)
! {
! /* Avoid doing any work for these cases and do not bother
! recording the SCCs for further merging. */
! return false;
}
}

! /* Look for the list of candidate SCCs to compare against. */
! tree_scc **slot;
! slot = tree_scc_hash.find_slot_with_hash (scc, scc_hash, INSERT);
! if (*slot)
! {
! /* Try unifying against each candidate. */
! num_scc_compares++;
!
! /* Set TREE_VISITED on the scc so we can easily identify tree nodes
! outside of the scc when following tree edges. Make sure
! that TREE_ASM_WRITTEN is unset so we can use it as 2nd bit
! to track whether we visited the SCC member during the compare.
! We cannot use TREE_VISITED on the pscc members as the extended
! scc and pscc can overlap. */
! for (unsigned i = 0; i < scc->len; ++i)
! {
! TREE_VISITED (scc->entries[i]) = 1;
! gcc_assert (!TREE_ASM_WRITTEN (scc->entries[i]));
! }
!
! tree *map = XALLOCAVEC (tree, 2 * len);
! for (tree_scc *pscc = *slot; pscc; pscc = pscc->next)
! {
! if (!compare_tree_sccs (pscc, scc, map))
! continue;
!
! /* Found an equal SCC. */
! unified_p = true;
! num_scc_compare_collisions--;
! num_sccs_merged++;
! total_scc_size_merged += len;
!
! /* Fixup the streamer cache with the prevailing nodes according
! to the tree node mapping computed by compare_tree_sccs. */
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = map[2*i+1];
! enum tree_code code = TREE_CODE (t);
! unsigned ix;
! bool r;
! /* IDENTIFIER_NODEs should be singletons and are merged by the
! streamer. The others should be singletons, too, and we
! should not merge them in any way. */
! gcc_assert (code != TRANSLATION_UNIT_DECL
! && code != IDENTIFIER_NODE
! && !streamer_handle_as_builtin_p (t));
! r = streamer_tree_cache_lookup (cache, t, &ix);
! gcc_assert (r && ix >= from);
! streamer_tree_cache_replace_tree (cache, map[2 * i], ix);
! if (TYPE_P (t))
! num_merged_types++;
! }
! /* Free the tree nodes from the read SCC. */
! for (unsigned i = 0; i < len; ++i)
! ggc_free (scc->entries[i]);
! break;
! }
!
! /* Reset TREE_VISITED if we didn't unify the SCC with another. */
! if (!unified_p)
! for (unsigned i = 0; i < scc->len; ++i)
! TREE_VISITED (scc->entries[i]) = 0;
! }
!
! /* If we didn't unify it with any candidate, duplicate the relevant
! pieces to permanent storage and link it into the chain. */
! if (!unified_p)
! {
! tree_scc *pscc
! = XOBNEWVAR (&tree_scc_hash_obstack, tree_scc, sizeof (tree_scc));
! memcpy (pscc, scc, sizeof (tree_scc));
! pscc->next = (*slot);
! *slot = pscc;
}
+ return unified_p;
}


*************** lto_read_decls (struct lto_file_decl_dat
*** 2036,2044 ****
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! t = stream_read_tree (&ib_main, data_in);
! gcc_assert (t && ib_main.p <= ib_main.len);
! uniquify_nodes (data_in, from);
}

/* Read in lto_in_decl_state objects. */
--- 2401,2523 ----
{
tree t;
unsigned from = data_in->reader_cache->nodes.length ();
! /* Read and uniquify SCCs as in the input stream. */
! enum LTO_tags tag = streamer_read_record_start (&ib_main);
! if (tag == LTO_tree_scc)
! {
! unsigned len_;
! unsigned scc_entry_len;
! hashval_t scc_hash = lto_input_scc (&ib_main, data_in, &len_,
! &scc_entry_len);
! unsigned len = data_in->reader_cache->nodes.length () - from;
! gcc_assert (len == len_);
!
! total_scc_size += len;
! num_sccs_read++;
!
! /* We have the special case of size-1 SCCs that are pre-merged
! by means of identifier and string sharing for example.
! ??? Maybe we should avoid streaming those as SCCs. */
! tree first = streamer_tree_cache_get_tree (data_in->reader_cache,
! from);
! if (len == 1
! && (TREE_CODE (first) == IDENTIFIER_NODE
! || TREE_CODE (first) == INTEGER_CST
! || TREE_CODE (first) == TRANSLATION_UNIT_DECL
! || streamer_handle_as_builtin_p (first)))
! continue;
!
! /* Try to unify the SCC with already existing ones. */
! if (!flag_ltrans
! && unify_scc (data_in->reader_cache, from,
! len, scc_entry_len, scc_hash))
! continue;
!
! /* Do remaining fixup tasks for prevailing nodes. */
! bool seen_type = false;
! bool not_merged_type_same_scc = false;
! bool not_merged_type_not_same_scc = false;
! for (unsigned i = 0; i < len; ++i)
! {
! tree t = streamer_tree_cache_get_tree (data_in->reader_cache,
! from + i);
! /* For statistics, see if the old code would have merged
! the type. */
! if (TYPE_P (t)
! && (flag_lto_report || (flag_wpa && flag_lto_report_wpa)))
! {
! tree newt = gimple_register_type (t);
! if (newt != t)
! {
! num_not_merged_types++;
! unsigned j;
! /* Check if we can never merge the types because
! they are in the same SCC and thus the old
! code was broken. */
! for (j = 0; j < len; ++j)
! if (i != j
! && streamer_tree_cache_get_tree
! (data_in->reader_cache, from + j) == newt)
! {
! num_not_merged_types_in_same_scc++;
! not_merged_type_same_scc = true;
! break;
! }
! if (j == len)
! not_merged_type_not_same_scc = true;
! }
! }
! /* Reconstruct the type variant and pointer-to/reference-to
! chains. */
! if (TYPE_P (t))
! {
! seen_type = true;
! num_prevailing_types++;
! lto_fixup_prevailing_type (t);
! }
! /* Compute the canonical type of all types.
! ??? Should be able to assert that !TYPE_CANONICAL. */
! if (TYPE_P (t) && !TYPE_CANONICAL (t))
! TYPE_CANONICAL (t) = gimple_register_canonical_type (t);
! /* Link shared INTEGER_CSTs into the TYPE_CACHED_VALUES of their
! type, which is also a member of this SCC. */
! if (TREE_CODE (t) == INTEGER_CST
! && !TREE_OVERFLOW (t))
! cache_integer_cst (t);
! /* Register TYPE_DECLs with the debuginfo machinery. */
! if (!flag_wpa
! && TREE_CODE (t) == TYPE_DECL)
! debug_hooks->type_decl (t, !DECL_FILE_SCOPE_P (t));
! if (!flag_ltrans)
! {
! /* Register variables and functions with the
! symbol table. */
! if (TREE_CODE (t) == VAR_DECL)
! lto_register_var_decl_in_symtab (data_in, t);
! else if (TREE_CODE (t) == FUNCTION_DECL
! && !DECL_BUILT_IN (t))
! lto_register_function_decl_in_symtab (data_in, t);
! /* Scan the tree for references to global functions or
! variables and record those for later fixup. */
! maybe_remember_with_vars (t);
! }
! }
! if (not_merged_type_same_scc)
! {
! num_not_merged_types_in_same_scc_trees += len;
! num_not_merged_types_trees += len;
! }
! else if (not_merged_type_not_same_scc)
! num_not_merged_types_trees += len;
! if (seen_type)
! num_type_scc_trees += len;
! }
! else
! {
! /* Pickle stray references. */
! t = lto_input_tree_1 (&ib_main, data_in, tag, 0);
! gcc_assert (t && data_in->reader_cache->nodes.length () == from);
! }
}

/* Read in lto_in_decl_state objects. */
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2898,2906 ****
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
- gimple_type_leader = ggc_alloc_cleared_vec_gimple_type_leader_entry_s
- (GIMPLE_TYPE_LEADER_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
--- 3377,3385 ----
type_hash_cache = htab_create_ggc (512, tree_int_map_hash,
tree_int_map_eq, NULL);
type_pair_cache = XCNEWVEC (struct type_pair_d, GIMPLE_TYPE_PAIR_SIZE);
gimple_types = htab_create_ggc (16381, gimple_type_hash, gimple_type_eq, 0);
+ gcc_obstack_init (&tree_scc_hash_obstack);
+ tree_scc_hash.create (4096);

if (!quiet_flag)
fprintf (stderr, "Reading object files:");
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2933,2939 ****
lto_obj_file_close (current_lto_file);
free (current_lto_file);
current_lto_file = NULL;
- ggc_collect ();
}

lto_flatten_files (decl_data, count, last_file_ix);
--- 3412,3417 ----
*************** read_cgraph_and_symbols (unsigned nfiles
*** 2955,2961 ****
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! gimple_type_leader = NULL;
free_gimple_type_tables ();
ggc_collect ();

--- 3433,3440 ----
type_hash_cache = NULL;
free (type_pair_cache);
type_pair_cache = NULL;
! tree_scc_hash.dispose ();
! obstack_free (&tree_scc_hash_obstack, NULL);
free_gimple_type_tables ();
ggc_collect ();

*************** print_lto_report_1 (void)
*** 3121,3147 ****
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! if (gimple_types)
! fprintf (stderr, "[%s] GIMPLE type table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (gimple_types),
! (long) htab_elements (gimple_types),
! (long) gimple_types->searches,
! (long) gimple_types->collisions,
! htab_collisions (gimple_types));
! else
! fprintf (stderr, "[%s] GIMPLE type table is empty\n", pfx);
! if (type_hash_cache)
! fprintf (stderr, "[%s] GIMPLE type hash cache table: size %ld, %ld elements, "
! "%ld searches, %ld collisions (ratio: %f)\n", pfx,
! (long) htab_size (type_hash_cache),
! (long) htab_elements (type_hash_cache),
! (long) type_hash_cache->searches,
! (long) type_hash_cache->collisions,
! htab_collisions (type_hash_cache));
! else
! fprintf (stderr, "[%s] GIMPLE type hash cache table is empty\n", pfx);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);

print_gimple_types_stats (pfx);
print_lto_report (pfx);
--- 3600,3648 ----
const char *pfx = (flag_lto) ? "LTO" : (flag_wpa) ? "WPA" : "LTRANS";
fprintf (stderr, "%s statistics\n", pfx);

! fprintf (stderr, "[%s] read %lu SCCs of average size %f\n",
! pfx, num_sccs_read, total_scc_size / (double)num_sccs_read);
! fprintf (stderr, "[%s] %lu tree bodies read in total\n", pfx, total_scc_size);
! if (flag_wpa && tree_scc_hash.is_created ())
! {
! fprintf (stderr, "[%s] tree SCC table: size %ld, %ld elements, "
! "collision ratio: %f\n", pfx,
! (long) tree_scc_hash.size (),
! (long) tree_scc_hash.elements (),
! tree_scc_hash.collisions ());
! hash_table<tree_scc_hasher>::iterator hiter;
! tree_scc *scc, *max_scc = NULL;
! unsigned max_length = 0;
! FOR_EACH_HASH_TABLE_ELEMENT (tree_scc_hash, scc, x, hiter)
! {
! unsigned length = 0;
! tree_scc *s = scc;
! for (; s; s = s->next)
! length++;
! if (length > max_length)
! {
! max_length = length;
! max_scc = scc;
! }
! }
! fprintf (stderr, "[%s] tree SCC max chain length %u (size %u)\n",
! pfx, max_length, max_scc->len);
! fprintf (stderr, "[%s] Compared %lu SCCs, %lu collisions (%f)\n", pfx,
! num_scc_compares, num_scc_compare_collisions,
! num_scc_compare_collisions / (double) num_scc_compares);
! fprintf (stderr, "[%s] Merged %lu SCCs\n", pfx, num_sccs_merged);
! fprintf (stderr, "[%s] Merged %lu tree bodies\n", pfx,
! total_scc_size_merged);
! fprintf (stderr, "[%s] Merged %lu types\n", pfx, num_merged_types);
! fprintf (stderr, "[%s] %lu types prevailed (%lu associated trees)\n",
! pfx, num_prevailing_types, num_type_scc_trees);
! fprintf (stderr, "[%s] Old merging code merges an additional %lu types"
! " of which %lu are in the same SCC with their "
! "prevailing variant (%lu and %lu associated trees)\n",
! pfx, num_not_merged_types, num_not_merged_types_in_same_scc,
! num_not_merged_types_trees,
! num_not_merged_types_in_same_scc_trees);
! }

print_gimple_types_stats (pfx);
print_lto_report (pfx);
Index: trunk/gcc/lto/Make-lang.in
===================================================================
*** trunk.orig/gcc/lto/Make-lang.in 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto/Make-lang.in 2013-06-13 14:35:53.257108072 +0200
*************** lto/lto.o: lto/lto.c $(CONFIG_H) $(SYSTE
*** 85,91 ****
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
--- 85,91 ----
langhooks.h $(VEC_H) $(BITMAP_H) pointer-set.h $(IPA_PROP_H) \
$(COMMON_H) debug.h $(GIMPLE_H) $(LTO_H) $(LTO_TREE_H) \
$(LTO_TAGS_H) $(LTO_STREAMER_H) $(SPLAY_TREE_H) gt-lto-lto.h \
! $(TREE_STREAMER_H) $(DATA_STREAMER_H) lto/lto-partition.h
lto/lto-partition.o: lto/lto-partition.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
toplev.h $(TREE_H) $(TM_H) \
$(CGRAPH_H) $(TIMEVAR_H) \
Index: trunk/gcc/tree-streamer-out.c
===================================================================
*** trunk.orig/gcc/tree-streamer-out.c 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/tree-streamer-out.c 2013-06-17 10:38:31.909507744 +0200
*************** pack_ts_base_value_fields (struct bitpac
*** 95,104 ****
bp_pack_value (bp, TYPE_ARTIFICIAL (expr), 1);
else
bp_pack_value (bp, TREE_NO_WARNING (expr), 1);
- bp_pack_value (bp, TREE_USED (expr), 1);
bp_pack_value (bp, TREE_NOTHROW (expr), 1);
bp_pack_value (bp, TREE_STATIC (expr), 1);
! bp_pack_value (bp, TREE_PRIVATE (expr), 1);
bp_pack_value (bp, TREE_PROTECTED (expr), 1);
bp_pack_value (bp, TREE_DEPRECATED (expr), 1);
if (TYPE_P (expr))
--- 95,104 ----
bp_pack_value (bp, TYPE_ARTIFICIAL (expr), 1);
else
bp_pack_value (bp, TREE_NO_WARNING (expr), 1);
bp_pack_value (bp, TREE_NOTHROW (expr), 1);
bp_pack_value (bp, TREE_STATIC (expr), 1);
! if (TREE_CODE (expr) != TREE_BINFO)
! bp_pack_value (bp, TREE_PRIVATE (expr), 1);
bp_pack_value (bp, TREE_PROTECTED (expr), 1);
bp_pack_value (bp, TREE_DEPRECATED (expr), 1);
if (TYPE_P (expr))
*************** pack_ts_type_common_value_fields (struct
*** 298,309 ****
bp_pack_value (bp, TYPE_NONALIASED_COMPONENT (expr), 1);
bp_pack_value (bp, TYPE_PACKED (expr), 1);
bp_pack_value (bp, TYPE_RESTRICT (expr), 1);
- bp_pack_value (bp, TYPE_CONTAINS_PLACEHOLDER_INTERNAL (expr), 2);
bp_pack_value (bp, TYPE_USER_ALIGN (expr), 1);
bp_pack_value (bp, TYPE_READONLY (expr), 1);
bp_pack_var_len_unsigned (bp, TYPE_PRECISION (expr));
bp_pack_var_len_unsigned (bp, TYPE_ALIGN (expr));
! bp_pack_var_len_int (bp, TYPE_ALIAS_SET (expr) == 0 ? 0 : -1);
}


--- 298,312 ----
bp_pack_value (bp, TYPE_NONALIASED_COMPONENT (expr), 1);
bp_pack_value (bp, TYPE_PACKED (expr), 1);
bp_pack_value (bp, TYPE_RESTRICT (expr), 1);
bp_pack_value (bp, TYPE_USER_ALIGN (expr), 1);
bp_pack_value (bp, TYPE_READONLY (expr), 1);
bp_pack_var_len_unsigned (bp, TYPE_PRECISION (expr));
bp_pack_var_len_unsigned (bp, TYPE_ALIGN (expr));
! /* Make sure to preserve whether the frontend would assign
! alias-set zero to this type. */
! bp_pack_var_len_int (bp, (TYPE_ALIAS_SET (expr) == 0
! || (!in_lto_p
! && get_alias_set (expr) == 0)) ? 0 : -1);
}


*************** streamer_write_chain (struct output_bloc
*** 491,499 ****
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! stream_write_tree_shallow_non_ref (ob, t, ref_p);
else
stream_write_tree (ob, t, ref_p);

--- 494,503 ----
to the global decls section as we do not want to have them
enter decl merging. This is, of course, only for the call
for streaming BLOCK_VARS, but other callers are safe. */
+ /* ??? FIXME wrt SCC streaming. Drop these for now. */
if (VAR_OR_FUNCTION_DECL_P (t)
&& DECL_EXTERNAL (t))
! ; /* stream_write_tree_shallow_non_ref (ob, t, ref_p); */
else
stream_write_tree (ob, t, ref_p);

*************** static void
*** 553,559 ****
write_ts_decl_minimal_tree_pointers (struct output_block *ob, tree expr,
bool ref_p)
{
! stream_write_tree (ob, DECL_NAME (expr), ref_p);
stream_write_tree (ob, DECL_CONTEXT (expr), ref_p);
}

--- 557,569 ----
write_ts_decl_minimal_tree_pointers (struct output_block *ob, tree expr,
bool ref_p)
{
! /* Drop names that were created for anonymous entities. */
! if (DECL_NAME (expr)
! && TREE_CODE (DECL_NAME (expr)) == IDENTIFIER_NODE
! && ANON_AGGRNAME_P (DECL_NAME (expr)))
! stream_write_tree (ob, NULL_TREE, ref_p);
! else
! stream_write_tree (ob, DECL_NAME (expr), ref_p);
stream_write_tree (ob, DECL_CONTEXT (expr), ref_p);
}

*************** write_ts_list_tree_pointers (struct outp
*** 716,722 ****
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! streamer_write_chain (ob, TREE_CHAIN (expr), ref_p);
}


--- 726,732 ----
{
stream_write_tree (ob, TREE_PURPOSE (expr), ref_p);
stream_write_tree (ob, TREE_VALUE (expr), ref_p);
! stream_write_tree (ob, TREE_CHAIN (expr), ref_p);
}


*************** streamer_write_tree_body (struct output_
*** 842,847 ****
--- 852,859 ----
{
enum tree_code code;

+ lto_stats.num_tree_bodies_output++;
+
code = TREE_CODE (expr);

if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
Index: trunk/gcc/tree-streamer.c
===================================================================
*** trunk.orig/gcc/tree-streamer.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer.c 2013-06-13 14:35:53.258108084 +0200
*************** streamer_check_handled_ts_structures (vo
*** 92,107 ****

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! cache->nodes.safe_push (t);
else
! cache->nodes[ix] = t;
}


--- 92,115 ----

static void
streamer_tree_cache_add_to_node_array (struct streamer_tree_cache_d *cache,
! unsigned ix, tree t, hashval_t hash)
{
/* Make sure we're either replacing an old element or
appending consecutively. */
gcc_assert (ix <= cache->nodes.length ());

if (ix == cache->nodes.length ())
! {
! cache->nodes.safe_push (t);
! if (cache->hashes.exists ())
! cache->hashes.safe_push (hash);
! }
else
! {
! cache->nodes[ix] = t;
! if (cache->hashes.exists ())
! cache->hashes[ix] = hash;
! }
}


*************** streamer_tree_cache_add_to_node_array (s
*** 117,123 ****

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
--- 125,131 ----

static bool
streamer_tree_cache_insert_1 (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash, unsigned *ix_p,
bool insert_at_next_slot_p)
{
void **slot;
*************** streamer_tree_cache_insert_1 (struct str
*** 136,142 ****
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t);

/* Indicate that the item was not present in the cache. */
existed_p = false;
--- 144,150 ----
ix = *ix_p;
*slot = (void *)(size_t) (ix + 1);

! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);

/* Indicate that the item was not present in the cache. */
existed_p = false;
*************** streamer_tree_cache_insert_1 (struct str
*** 151,157 ****
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t);
*slot = (void *)(size_t) (ix + 1);
}

--- 159,165 ----
location, and ENTRY->TO does not match *IX_P, add T to
the requested location slot. */
ix = *ix_p;
! streamer_tree_cache_add_to_node_array (cache, ix, t, hash);
*slot = (void *)(size_t) (ix + 1);
}

*************** streamer_tree_cache_insert_1 (struct str
*** 174,203 ****

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, ix_p, true);
}


! /* Insert tree node T in CACHE at slot IX. If T already
! existed in the cache return true. Otherwise, return false. */

! bool
! streamer_tree_cache_insert_at (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! return streamer_tree_cache_insert_1 (cache, t, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache, tree t)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
--- 182,214 ----

bool
streamer_tree_cache_insert (struct streamer_tree_cache_d *cache, tree t,
! hashval_t hash, unsigned *ix_p)
{
! return streamer_tree_cache_insert_1 (cache, t, hash, ix_p, true);
}


! /* Replace the tree node with T in CACHE at slot IX. */

! void
! streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *cache,
! tree t, unsigned ix)
{
! hashval_t hash = 0;
! if (cache->hashes.exists ())
! hash = streamer_tree_cache_get_hash (cache, ix);
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}


/* Appends tree node T to CACHE, even if T already existed in it. */

void
! streamer_tree_cache_append (struct streamer_tree_cache_d *cache,
! tree t, hashval_t hash)
{
unsigned ix = cache->nodes.length ();
! streamer_tree_cache_insert_1 (cache, t, hash, &ix, false);
}

/* Return true if tree node T exists in CACHE, otherwise false. If IX_P is
*************** record_common_node (struct streamer_tree
*** 257,263 ****
if (!node)
node = error_mark_node;

! streamer_tree_cache_append (cache, node);

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
--- 268,277 ----
if (!node)
node = error_mark_node;

! /* ??? FIXME, devise a better hash value. But the hash needs to be equal
! for all frontend and lto1 invocations. So just use the position
! in the cache as hash value. */
! streamer_tree_cache_append (cache, node, cache->nodes.length ());

if (POINTER_TYPE_P (node)
|| TREE_CODE (node) == COMPLEX_TYPE
*************** preload_common_nodes (struct streamer_tr
*** 305,317 ****
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (void)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
--- 319,334 ----
/* Create a cache of pickled nodes. */

struct streamer_tree_cache_d *
! streamer_tree_cache_create (bool with_hashes)
{
struct streamer_tree_cache_d *cache;

cache = XCNEW (struct streamer_tree_cache_d);

cache->node_map = pointer_map_create ();
+ cache->nodes.create (165);
+ if (with_hashes)
+ cache->hashes.create (165);

/* Load all the well-known tree nodes that are always created by
the compiler on startup. This prevents writing them out
*************** streamer_tree_cache_delete (struct strea
*** 332,336 ****
--- 349,354 ----

pointer_map_destroy (c->node_map);
c->nodes.release ();
+ c->hashes.release ();
free (c);
}
Index: trunk/gcc/tree-streamer.h
===================================================================
*** trunk.orig/gcc/tree-streamer.h 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree-streamer.h 2013-06-13 14:35:53.258108084 +0200
*************** along with GCC; see the file COPYING3.
*** 43,48 ****
--- 43,49 ----
T. The reconstructed T is inserted in some array so that when
the reference index for T is found in the input stream, it can be
used to look up into the array to get the reconstructed T. */
+
struct streamer_tree_cache_d
{
/* The mapping between tree nodes and slots into the nodes array. */
*************** struct streamer_tree_cache_d
*** 50,55 ****
--- 51,58 ----

/* The nodes pickled so far. */
vec<tree> nodes;
+ /* The node hashes (if available). */
+ vec<hashval_t> hashes;
};

/* Return true if tree node EXPR should be streamed as a builtin. For
*************** tree streamer_alloc_tree (struct lto_inp
*** 71,77 ****
void streamer_read_tree_body (struct lto_input_block *, struct data_in *, tree);
tree streamer_get_pickled_tree (struct lto_input_block *, struct data_in *);
tree streamer_get_builtin_tree (struct lto_input_block *, struct data_in *);
- tree streamer_read_integer_cst (struct lto_input_block *, struct data_in *);
struct bitpack_d streamer_read_tree_bitfields (struct lto_input_block *,
struct data_in *, tree);

--- 74,79 ----
*************** void streamer_write_builtin (struct outp
*** 89,110 ****
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! unsigned *);
! bool streamer_tree_cache_insert_at (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (void);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}


#endif /* GCC_TREE_STREAMER_H */
--- 91,121 ----
/* In tree-streamer.c. */
void streamer_check_handled_ts_structures (void);
bool streamer_tree_cache_insert (struct streamer_tree_cache_d *, tree,
! hashval_t, unsigned *);
! void streamer_tree_cache_replace_tree (struct streamer_tree_cache_d *, tree,
! unsigned);
! void streamer_tree_cache_append (struct streamer_tree_cache_d *, tree,
! hashval_t);
bool streamer_tree_cache_lookup (struct streamer_tree_cache_d *, tree,
unsigned *);
! struct streamer_tree_cache_d *streamer_tree_cache_create (bool);
void streamer_tree_cache_delete (struct streamer_tree_cache_d *);

/* Return the tree node at slot IX in CACHE. */

static inline tree
! streamer_tree_cache_get_tree (struct streamer_tree_cache_d *cache, unsigned ix)
{
return cache->nodes[ix];
}

+ /* Return the tree hash value at slot IX in CACHE. */
+
+ static inline hashval_t
+ streamer_tree_cache_get_hash (struct streamer_tree_cache_d *cache, unsigned ix)
+ {
+ return cache->hashes[ix];
+ }
+

#endif /* GCC_TREE_STREAMER_H */
Index: trunk/gcc/lto-streamer.c
===================================================================
*** trunk.orig/gcc/lto-streamer.c 2013-06-13 14:35:48.000000000 +0200
--- trunk/gcc/lto-streamer.c 2013-06-17 10:21:05.752878808 +0200
*************** print_lto_report (const char *s)
*** 227,232 ****
--- 227,239 ----
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_output_symtab_nodes);

+ fprintf (stderr, "[%s] # of output tree pickle references: "
+ HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
+ lto_stats.num_pickle_refs_output);
+ fprintf (stderr, "[%s] # of output tree bodies: "
+ HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
+ lto_stats.num_tree_bodies_output);
+
fprintf (stderr, "[%s] # callgraph partitions: "
HOST_WIDE_INT_PRINT_UNSIGNED "\n", s,
lto_stats.num_cgraph_partitions);
Index: trunk/gcc/tree.c
===================================================================
*** trunk.orig/gcc/tree.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree.c 2013-06-13 14:35:53.261108120 +0200
*************** build_int_cst_wide (tree type, unsigned
*** 1266,1271 ****
--- 1266,1364 ----
return t;
}

+ void
+ cache_integer_cst (tree t)
+ {
+ tree type = TREE_TYPE (t);
+ HOST_WIDE_INT hi = TREE_INT_CST_HIGH (t);
+ unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (t);
+ int ix = -1;
+ int limit = 0;
+
+ gcc_assert (!TREE_OVERFLOW (t));
+
+ switch (TREE_CODE (type))
+ {
+ case NULLPTR_TYPE:
+ gcc_assert (hi == 0 && low == 0);
+ /* Fallthru. */
+
+ case POINTER_TYPE:
+ case REFERENCE_TYPE:
+ /* Cache NULL pointer. */
+ if (!hi && !low)
+ {
+ limit = 1;
+ ix = 0;
+ }
+ break;
+
+ case BOOLEAN_TYPE:
+ /* Cache false or true. */
+ limit = 2;
+ if (!hi && low < 2)
+ ix = low;
+ break;
+
+ case INTEGER_TYPE:
+ case OFFSET_TYPE:
+ if (TYPE_UNSIGNED (type))
+ {
+ /* Cache 0..N */
+ limit = INTEGER_SHARE_LIMIT;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low;
+ }
+ else
+ {
+ /* Cache -1..N */
+ limit = INTEGER_SHARE_LIMIT + 1;
+ if (!hi && low < (unsigned HOST_WIDE_INT)INTEGER_SHARE_LIMIT)
+ ix = low + 1;
+ else if (hi == -1 && low == -(unsigned HOST_WIDE_INT)1)
+ ix = 0;
+ }
+ break;
+
+ case ENUMERAL_TYPE:
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ if (ix >= 0)
+ {
+ /* Look for it in the type's vector of small shared ints. */
+ if (!TYPE_CACHED_VALUES_P (type))
+ {
+ TYPE_CACHED_VALUES_P (type) = 1;
+ TYPE_CACHED_VALUES (type) = make_tree_vec (limit);
+ }
+
+ gcc_assert (TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) == NULL_TREE);
+ TREE_VEC_ELT (TYPE_CACHED_VALUES (type), ix) = t;
+ }
+ else
+ {
+ /* Use the cache of larger shared ints. */
+ void **slot;
+
+ slot = htab_find_slot (int_cst_hash_table, t, INSERT);
+ /* If there is already an entry for the number verify it's the
+ same. */
+ if (*slot)
+ {
+ gcc_assert (TREE_INT_CST_LOW ((tree)*slot) == low
+ && TREE_INT_CST_HIGH ((tree)*slot) == hi);
+ return;
+ }
+ /* Otherwise insert this one into the hash table. */
+ *slot = t;
+ }
+ }
+
+
/* Builds an integer constant in TYPE such that lowest BITS bits are ones
and the rest are zeros. */

Index: trunk/gcc/tree.h
===================================================================
*** trunk.orig/gcc/tree.h 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/tree.h 2013-06-17 10:36:36.005108356 +0200
*************** extern const_tree strip_invariant_refs (
*** 5637,5642 ****
--- 5637,5643 ----
extern tree lhd_gcc_personality (void);
extern void assign_assembler_name_if_neeeded (tree);
extern void warn_deprecated_use (tree, tree);
+ extern void cache_integer_cst (tree);


/* In cgraph.c */
*************** builtin_decl_implicit_p (enum built_in_f
*** 6582,6585 ****
--- 6583,6609 ----
&& builtin_info.implicit_p[uns_fncode]);
}

+
+ /* For anonymous aggregate types, we need some sort of name to
+ hold on to. In practice, this should not appear, but it should
+ not be harmful if it does. */
+ #ifndef NO_DOT_IN_LABEL
+ #define ANON_AGGRNAME_FORMAT "._%d"
+ #define ANON_AGGRNAME_P(ID_NODE) (IDENTIFIER_POINTER (ID_NODE)[0] == '.' \
+ && IDENTIFIER_POINTER (ID_NODE)[1] == '_')
+ #else /* NO_DOT_IN_LABEL */
+ #ifndef NO_DOLLAR_IN_LABEL
+ #define ANON_AGGRNAME_FORMAT "$_%d"
+ #define ANON_AGGRNAME_P(ID_NODE) (IDENTIFIER_POINTER (ID_NODE)[0] == '$' \
+ && IDENTIFIER_POINTER (ID_NODE)[1] == '_')
+ #else /* NO_DOLLAR_IN_LABEL */
+ #define ANON_AGGRNAME_PREFIX "__anon_"
+ #define ANON_AGGRNAME_P(ID_NODE) \
+ (!strncmp (IDENTIFIER_POINTER (ID_NODE), ANON_AGGRNAME_PREFIX, \
+ sizeof (ANON_AGGRNAME_PREFIX) - 1))
+ #define ANON_AGGRNAME_FORMAT "__anon_%d"
+ #endif /* NO_DOLLAR_IN_LABEL */
+ #endif /* NO_DOT_IN_LABEL */
+
+
#endif /* GCC_TREE_H */
Index: trunk/gcc/lto-symtab.c
===================================================================
*** trunk.orig/gcc/lto-symtab.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/lto-symtab.c 2013-06-14 11:11:41.056800028 +0200
*************** lto_varpool_replace_node (struct varpool
*** 97,103 ****
ipa_clone_referring ((symtab_node)prevailing_node, &vnode->symbol.ref_list);

/* Be sure we can garbage collect the initializer. */
! if (DECL_INITIAL (vnode->symbol.decl))
DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
--- 97,104 ----
ipa_clone_referring ((symtab_node)prevailing_node, &vnode->symbol.ref_list);

/* Be sure we can garbage collect the initializer. */
! if (DECL_INITIAL (vnode->symbol.decl)
! && vnode->symbol.decl != prevailing_node->symbol.decl)
DECL_INITIAL (vnode->symbol.decl) = error_mark_node;
/* Finally remove the replaced node. */
varpool_remove_node (vnode);
Index: trunk/gcc/varpool.c
===================================================================
*** trunk.orig/gcc/varpool.c 2013-06-13 14:35:21.000000000 +0200
--- trunk/gcc/varpool.c 2013-06-14 11:12:32.873423669 +0200
*************** varpool_remove_initializer (struct varpo
*** 84,90 ****
/* Keep vtables for BINFO folding. */
&& !DECL_VIRTUAL_P (node->symbol.decl)
/* FIXME: http://gcc.gnu.org/PR55395 */
! && debug_info_level == DINFO_LEVEL_NONE)
DECL_INITIAL (node->symbol.decl) = error_mark_node;
}

--- 84,95 ----
/* Keep vtables for BINFO folding. */
&& !DECL_VIRTUAL_P (node->symbol.decl)
/* FIXME: http://gcc.gnu.org/PR55395 */
! && debug_info_level == DINFO_LEVEL_NONE
! /* When doing declaration merging we have duplicate
! entries for a given decl. Do not attempt to remove
! the bodies, or we will end up removing the
! wrong one. */
! && cgraph_state != CGRAPH_LTO_STREAMING)
DECL_INITIAL (node->symbol.decl) = error_mark_node;
}

Index: trunk/gcc/cp/cp-tree.h
===================================================================
*** trunk.orig/gcc/cp/cp-tree.h 2013-06-11 09:33:00.000000000 +0200
--- trunk/gcc/cp/cp-tree.h 2013-06-17 10:36:48.086254224 +0200
*************** extern GTY(()) vec<tree, va_gc> *local_c
*** 4345,4352 ****
#define VFIELD_NAME "_vptr."
#define VFIELD_NAME_FORMAT "_vptr.%s"

- #define ANON_AGGRNAME_FORMAT "._%d"
-
#else /* NO_DOT_IN_LABEL */

#ifndef NO_DOLLAR_IN_LABEL
--- 4345,4350 ----
*************** extern GTY(()) vec<tree, va_gc> *local_c
*** 4357,4363 ****
#define VFIELD_BASE "$vf"
#define VFIELD_NAME "_vptr$"
#define VFIELD_NAME_FORMAT "_vptr$%s"
- #define ANON_AGGRNAME_FORMAT "$_%d"

#else /* NO_DOLLAR_IN_LABEL */

--- 4355,4360 ----
*************** extern GTY(()) vec<tree, va_gc> *local_c
*** 4376,4387 ****
sizeof (VFIELD_NAME) - 1))
#define VFIELD_NAME_FORMAT "__vptr_%s"

- #define ANON_AGGRNAME_PREFIX "__anon_"
- #define ANON_AGGRNAME_P(ID_NODE) \
- (!strncmp (IDENTIFIER_POINTER (ID_NODE), ANON_AGGRNAME_PREFIX, \
- sizeof (ANON_AGGRNAME_PREFIX) - 1))
- #define ANON_AGGRNAME_FORMAT "__anon_%d"
-
#endif /* NO_DOLLAR_IN_LABEL */
#endif /* NO_DOT_IN_LABEL */

--- 4373,4378 ----
*************** extern GTY(()) vec<tree, va_gc> *local_c
*** 4418,4428 ****
#define VFIELD_NAME_P(ID_NODE) \
(!strncmp (IDENTIFIER_POINTER (ID_NODE), VFIELD_NAME, sizeof(VFIELD_NAME)-1))

- /* For anonymous aggregate types, we need some sort of name to
- hold on to. In practice, this should not appear, but it should
- not be harmful if it does. */
- #define ANON_AGGRNAME_P(ID_NODE) (IDENTIFIER_POINTER (ID_NODE)[0] == JOINER \
- && IDENTIFIER_POINTER (ID_NODE)[1] == '_')
#endif /* !defined(NO_DOLLAR_IN_LABEL) || !defined(NO_DOT_IN_LABEL) */


--- 4409,4414 ----
Andi Kleen
2013-06-17 17:51:57 UTC
Permalink
Current trunk cannot build LTO kernels now, with your patch,
as reported earlier.

Please fix ASAP. I'm surprised that you checked in a patchkit
with such serious reported problems.

-Andi


backup/lsrc/git/linux-lto-2.6/lib/decompress_unlzo.c: In function 'unlzo':
/backup/lsrc/git/linux-lto-2.6/lib/decompress_unlzo.c:79:8: internal compiler error: in expand_expr_real_1, at expr.c:9361
parse += 7;
^
0x5ea175 expand_expr_real_1(tree_node*, rtx_def*, machine_mode, expand_modifier, rtx_def**)
../../gcc/gcc/expr.c:9356
0x53fe49 expand_expr
../../gcc/gcc/expr.h:444
0x53fe49 expand_gimple_stmt_1
../../gcc/gcc/cfgexpand.c:2283
0x53fe49 expand_gimple_stmt
../../gcc/gcc/cfgexpand.c:2370
0x5408a8 expand_gimple_basic_block
../../gcc/gcc/cfgexpand.c:4204
0x542996 gimple_expand_cfg
../../gcc/gcc/cfgexpand.c:4723
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <http://gcc.gnu.org/bugs.html> for instructions.
make[3]: *** [/home/ak/lsrc/git/obj-lto/ccUSzSrf.ltrans18.ltrans.o] Error 1
--
***@linux.intel.com -- Speaking for myself only.
Richard Biener
2013-06-17 21:02:08 UTC
Permalink
Post by Andi Kleen
Current trunk cannot build LTO kernels now, with your patch,
as reported earlier.
Please fix ASAP. I'm surprised that you checked in a patchkit
with such serious reported problems.
Instructions for reproducing this issue appreciated. I've never built lto kernels.

Richard.
Post by Andi Kleen
-Andi
/backup/lsrc/git/linux-lto-2.6/lib/decompress_unlzo.c:79:8: internal
compiler error: in expand_expr_real_1, at expr.c:9361
parse += 7;
^
0x5ea175 expand_expr_real_1(tree_node*, rtx_def*, machine_mode,
expand_modifier, rtx_def**)
../../gcc/gcc/expr.c:9356
0x53fe49 expand_expr
../../gcc/gcc/expr.h:444
0x53fe49 expand_gimple_stmt_1
../../gcc/gcc/cfgexpand.c:2283
0x53fe49 expand_gimple_stmt
../../gcc/gcc/cfgexpand.c:2370
0x5408a8 expand_gimple_basic_block
../../gcc/gcc/cfgexpand.c:4204
0x542996 gimple_expand_cfg
../../gcc/gcc/cfgexpand.c:4723
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <http://gcc.gnu.org/bugs.html> for instructions.
make[3]: *** [/home/ak/lsrc/git/obj-lto/ccUSzSrf.ltrans18.ltrans.o] Error 1
Andi Kleen
2013-06-17 21:13:19 UTC
Permalink
Post by Richard Biener
Post by Andi Kleen
Current trunk cannot build LTO kernels now, with your patch,
as reported earlier.
Please fix ASAP. I'm surprised that you checked in a patchkit
with such serious reported problems.
Instructions for reproducing this issue appreciated. I've never built lto kernels.
Install recent HJ Lu's Linux binutils somewhere
(https://www.kernel.org/pub/linux/devel/binutils/)

Build a gcc that uses the linker from above as plugin ld

Check out git://github.com/andikleen/linux-misc -b lto-3.9

Put attached config as ".config" into build dir.

make oldconfig
make CC=gcc LD=ld-from-linux-binutils AR=gcc-ar -j ..

-Andi
Richard Biener
2013-06-18 10:20:36 UTC
Permalink
Post by Andi Kleen
Post by Richard Biener
Post by Andi Kleen
Current trunk cannot build LTO kernels now, with your patch,
as reported earlier.
Please fix ASAP. I'm surprised that you checked in a patchkit
with such serious reported problems.
Instructions for reproducing this issue appreciated. I've never built lto kernels.
Install recent HJ Lu's Linux binutils somewhere
(https://www.kernel.org/pub/linux/devel/binutils/)
Build a gcc that uses the linker from above as plugin ld
Check out git://github.com/andikleen/linux-misc -b lto-3.9
Put attached config as ".config" into build dir.
make oldconfig
make CC=gcc LD=ld-from-linux-binutils AR=gcc-ar -j ..
Ok, it doesn't use LTO for me, not even with adding CFLAGS="-O2 -flto"
here.

Richard.
Post by Andi Kleen
-Andi
--
Richard Biener <***@suse.de>
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer
Andi Kleen
2013-06-18 14:14:00 UTC
Permalink
Post by Richard Biener
Post by Andi Kleen
make oldconfig
make CC=gcc LD=ld-from-linux-binutils AR=gcc-ar -j ..
Ok, it doesn't use LTO for me, not even with adding CFLAGS="-O2 -flto"
here.
Can you send me a build log with V=1 ?

There are some checks for the environment at the beginning, maybe they fail.

-Andi
--
***@linux.intel.com -- Speaking for myself only.
Martin Liška
2013-06-17 22:28:19 UTC
Permalink
Hello,
I tried to compile an LTO kernel with the latest gcc, with Jan's
patch from http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57334#c6 applied:

lto1: internal compiler error: in lto_symtab_prevailing_decl, at
lto-symtab.c:644
0x783c63 lto_symtab_prevailing_decl(tree_node*)
../../gcc/lto-symtab.c:644
0x50afe4 lto_fixup_prevailing_decls
../../gcc/lto/lto.c:3220
0x50afe4 lto_fixup_decls
../../gcc/lto/lto.c:3284
0x50afe4 read_cgraph_and_symbols
../../gcc/lto/lto.c:3490
0x50afe4 lto_main()
../../gcc/lto/lto.c:3834

I commented out the gcc_checking_assert in Jan's patch and compilation
was 'almost' successful:
+ sortextable vmlinux
+ /home/marxin/Programming/linux-andikleen/scripts/sortextable vmlinux
Error:
no main_extable_sort_needed symbol in file: vmlinux

Both vmlinux1 and vmlinux2 were linked without any problem.

ld --version
GNU ld (Linux/GNU Binutils) 2.23.51.0.9.20130118

Build was done with Andi's kernel config and lto-3.9 branch of git repository.

Thanks,
Martin
Post by Andi Kleen
Current trunk cannot build LTO kernels now, with your patch,
as reported earlier.
Please fix ASAP. I'm surprised that you checked in a patchkit
with such serious reported problems.
-Andi
/backup/lsrc/git/linux-lto-2.6/lib/decompress_unlzo.c:79:8: internal compiler error: in expand_expr_real_1, at expr.c:9361
parse += 7;
^
0x5ea175 expand_expr_real_1(tree_node*, rtx_def*, machine_mode, expand_modifier, rtx_def**)
../../gcc/gcc/expr.c:9356
0x53fe49 expand_expr
../../gcc/gcc/expr.h:444
0x53fe49 expand_gimple_stmt_1
../../gcc/gcc/cfgexpand.c:2283
0x53fe49 expand_gimple_stmt
../../gcc/gcc/cfgexpand.c:2370
0x5408a8 expand_gimple_basic_block
../../gcc/gcc/cfgexpand.c:4204
0x542996 gimple_expand_cfg
../../gcc/gcc/cfgexpand.c:4723
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <http://gcc.gnu.org/bugs.html> for instructions.
make[3]: *** [/home/ak/lsrc/git/obj-lto/ccUSzSrf.ltrans18.ltrans.o] Error 1
--
Richard Biener
2013-06-18 09:59:52 UTC
Permalink
Post by Andi Kleen
Current trunk cannot build LTO kernels now, with your patch,
as reported earlier.
Please fix ASAP. I'm surprised that you checked in a patchkit
with such serious reported problems.
-Andi
/backup/lsrc/git/linux-lto-2.6/lib/decompress_unlzo.c:79:8: internal compiler error: in expand_expr_real_1, at expr.c:9361
parse += 7;
^
0x5ea175 expand_expr_real_1(tree_node*, rtx_def*, machine_mode, expand_modifier, rtx_def**)
../../gcc/gcc/expr.c:9356
That suggests

Index: gcc/expr.c
===================================================================
--- gcc/expr.c (revision 200164)
+++ gcc/expr.c (working copy)
@@ -9353,7 +9353,7 @@ expand_expr_real_1 (tree exp, rtx target
/* Variables inherited from containing functions should have
been lowered by this point. */
context = decl_function_context (exp);
- gcc_assert (!context
+ gcc_assert (SCOPE_FILE_SCOPE_P (context)
|| context == current_function_decl
|| TREE_STATIC (exp)
|| DECL_EXTERNAL (exp)

might fix it.

Richard.
Andi Kleen
2013-06-18 16:17:02 UTC
Permalink
Post by Richard Biener
That suggests
Index: gcc/expr.c
===================================================================
--- gcc/expr.c (revision 200164)
+++ gcc/expr.c (working copy)
@@ -9353,7 +9353,7 @@ expand_expr_real_1 (tree exp, rtx target
/* Variables inherited from containing functions should have
been lowered by this point. */
context = decl_function_context (exp);
- gcc_assert (!context
+ gcc_assert (SCOPE_FILE_SCOPE_P (context)
|| context == current_function_decl
|| TREE_STATIC (exp)
|| DECL_EXTERNAL (exp)
might fix it.
Just confirmed with the small build. It does. Running the large build
now.

Please check in.

-Andi
--
***@linux.intel.com -- Speaking for myself only.
Andi Kleen
2013-06-18 18:04:15 UTC
Permalink
Post by Andi Kleen
Just confirmed with the small build. It does. Running the large build
now.
Large build worked too.
Post by Andi Kleen
Please check in.
Andi Kleen
2013-06-18 20:37:35 UTC
Permalink
Post by Andi Kleen
Post by Andi Kleen
Just confirmed with the small build. It does. Running the large build
now.
Large build worked too.
Also it seems to be drastically faster. I haven't done a proper
measurement run, but the initial run was 58% faster than 4.8,
using 42% less peak RSS.

-Andi
--
***@linux.intel.com -- Speaking for myself only.
Richard Biener
2013-06-19 08:16:47 UTC
Permalink
Post by Andi Kleen
Post by Andi Kleen
Post by Andi Kleen
Just confirmed with the small build. It does. Running the large build
now.
Large build worked too.
Also it seems to be drastically faster. I haven't done a proper
measurement run, but the initial run was 58% faster than 4.8,
using 42% less peak RSS.
That was the intent. As a side-effect it should also behave
correctly and not have weird effects on dwarf2out.c expectations.

Richard.
